In mathematics, Dirichlet's unit theorem is a basic result in algebraic number theory due to Peter Gustav Lejeune Dirichlet. It determines the rank of the group of units in the ring O_K of algebraic integers of a number field K. The regulator is a positive real number that determines how "dense" the units are.

The statement is that the group of units is finitely generated and has rank (maximal number of multiplicatively independent elements) equal to

r = r1 + r2 − 1,

where r1 is the number of real embeddings and r2 the number of conjugate pairs of complex embeddings of K. This characterisation of r1 and r2 is based on the idea that there will be as many ways to embed K in the complex number field as the degree n = [K : Q]; these will either be into the real numbers, or pairs of embeddings related by complex conjugation, so that

n = r1 + 2r2.

Note that if K is Galois over Q then either r1 = 0 or r2 = 0.

Other ways of determining r1 and r2 are:

use the primitive element theorem to write K = Q(α); then r1 is the number of conjugates of α that are real, and 2r2 the number that are complex; in other words, if f is the minimal polynomial of α over Q, then r1 is the number of real roots and 2r2 is the number of non-real complex roots of f (which come in complex conjugate pairs);
write the tensor product of fields K ⊗_Q R as a product of fields, there being r1 copies of R and r2 copies of C.

As an example, if K is a quadratic field, the rank is 1 if it is a real quadratic field, and 0 if an imaginary quadratic field. The theory for real quadratic fields is essentially the theory of Pell's equation.

The rank is positive for all number fields besides Q and imaginary quadratic fields, which have rank 0. The 'size' of the units is measured in general by a determinant called the regulator. In principle a basis for the units can be effectively computed; in practice the calculations are quite involved when n is large.

The torsion in the group of units is the set of all roots of unity of K, which form a finite cyclic group. For a number field with at least one real embedding the torsion must therefore be only {1, −1}. There are number fields, for example most imaginary quadratic fields, that have no real embeddings and also have {1, −1} for the torsion of their unit groups.

Totally real fields are special with respect to units. If L/K is a finite extension of number fields with degree greater than 1 and the unit groups for the integers of L and K have the same rank, then K is totally real and L is a totally complex quadratic extension. The converse holds too. (An example is K equal to the rationals and L equal to an imaginary quadratic field; both have unit rank 0.)

The theorem not only applies to the maximal order O_K but to any order O ⊂ O_K.

There is a generalisation of the unit theorem by Helmut Hasse (and later Claude Chevalley) to describe the structure of the group of S-units, determining the rank of the unit group in localizations of rings of integers. Also, the Galois module structure of Q ⊕ O_{K,S} ⊗_Z Q has been determined.
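As a quick illustration of the second way of determining r1 and r2 above, here is a minimal Python sketch that reads them off the roots of a minimal polynomial; the use of numpy's numerical root finder and the tolerance for deciding which roots are real are illustrative assumptions, adequate only for small, well-conditioned polynomials.

```python
import numpy as np

def unit_rank(coeffs, tol=1e-9):
    """Given coefficients of the minimal polynomial of alpha (highest
    degree first), return (r1, r2, rank) for K = Q(alpha): r1 real
    roots, r2 conjugate pairs of non-real roots, and the unit rank
    r1 + r2 - 1 from Dirichlet's theorem."""
    roots = np.roots(coeffs)
    r1 = sum(1 for z in roots if abs(z.imag) < tol)  # real embeddings
    r2 = (len(roots) - r1) // 2                      # conjugate pairs
    return r1, r2, r1 + r2 - 1

# x^2 - 5: real quadratic field, rank 1 (Pell's equation)
print(unit_rank([1, 0, -5]))        # (2, 0, 1)
# x^2 + 1: imaginary quadratic field, rank 0
print(unit_rank([1, 0, 1]))         # (0, 1, 0)
# x^3 + x^2 - 2x - 1: totally real cubic, rank 2
print(unit_rank([1, 1, -2, -1]))    # (3, 0, 2)
```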
== The regulator ==

Suppose that K is a number field and u_1, …, u_r are a set of generators for the unit group of K modulo roots of unity. There will be r + 1 Archimedean places of K, either real or complex. For u ∈ K, write u^(1), …, u^(r+1) for the different embeddings into R or C and set N_j to 1 or 2 if the corresponding embedding is real or complex respectively. Then the r × (r + 1) matrix

( N_j log |u_i^(j)| )_{i=1,…,r; j=1,…,r+1}

has the property that the sum of any row is zero (because all units have norm 1, and the log of the norm is the sum of the entries in a row). This implies that the absolute value R of the determinant of the submatrix formed by deleting one column is independent of the column. The number R is called the regulator of the algebraic number field (it does not depend on the choice of generators u_i). It measures the "density" of the units: if the regulator is small, this means that there are "lots" of units.

The regulator has the following geometric interpretation. The map taking a unit u to the vector with entries N_j log |u^(j)| has an image in the r-dimensional subspace of R^(r+1) consisting of all vectors whose entries have sum 0, and by Dirichlet's unit theorem the image is a lattice in this subspace. The volume of a fundamental domain of this lattice is R√(r + 1).

The regulator of an algebraic number field of degree greater than 2 is usually quite cumbersome to calculate, though there are now computer algebra packages that can do it in many cases. It is usually much easier to calculate the product hR of the class number h and the regulator using the class number formula, and the main difficulty in calculating the class number of an algebraic number field is usually the calculation of the regulator.

=== Examples ===

The regulator of an imaginary quadratic field, or of the rational integers, is 1 (as the determinant of a 0 × 0 matrix is 1).

The regulator of a real quadratic field is the logarithm of its fundamental unit: for example, that of Q(√5) is log((√5 + 1)/2). This can be seen as follows. A fundamental unit is (√5 + 1)/2, and its images under the two embeddings into R are (√5 + 1)/2 and (−√5 + 1)/2. So the r × (r + 1) matrix is

[ 1 × log |(√5 + 1)/2|,  1 × log |(−√5 + 1)/2| ].

The regulator of the cyclic cubic field Q(α), where α is a root of x³ + x² − 2x − 1, is approximately 0.5255. A basis of the group of units modulo roots of unity is {ε_1, ε_2} where ε_1 = α² + α − 1 and ε_2 = 2 − α².
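A minimal numerical check of the quadratic example above in Python: the fundamental unit (1 + √5)/2 is taken from the text, the row sums to zero because the unit has norm ±1, and deleting either column of the 1 × 2 matrix gives the same absolute value, the regulator.

```python
import math

# Fundamental unit of Q(sqrt(5)) and its two real embeddings
embeddings = [(1 + math.sqrt(5)) / 2, (1 - math.sqrt(5)) / 2]

# Row of the 1 x 2 matrix (N_j = 1 for real embeddings)
row = [math.log(abs(e)) for e in embeddings]
print(sum(row))                   # ~0: the log of the norm
print(abs(row[0]), abs(row[1]))   # both ~0.4812: the regulator
```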
== Higher regulators ==

A 'higher' regulator refers to a construction for a function on an algebraic K-group with index n > 1 that plays the same role as the classical regulator does for the group of units, which is a group K_1. A theory of such regulators has been in development, with work of Armand Borel and others. Such higher regulators play a role, for example, in the Beilinson conjectures, and are expected to occur in evaluations of certain L-functions at integer values of the argument. See also Beilinson regulator.

== Stark regulator ==

The formulation of Stark's conjectures led Harold Stark to define what is now called the Stark regulator, similar to the classical regulator as a determinant of logarithms of units, attached to any Artin representation.

== p-adic regulator ==

Let K be a number field and for each prime P of K above some fixed rational prime p, let U_P denote the local units at P and let U_{1,P} denote the subgroup of principal units in U_P. Set

U_1 = ∏_{P | p} U_{1,P}.

Then let E_1 denote the set of global units ε that map to U_1 via the diagonal embedding of the group of global units. Since E_1 is a finite-index subgroup of the global units, it is an abelian group of rank r_1 + r_2 − 1. The p-adic regulator is the determinant of the matrix formed by the p-adic logarithms of the generators of this group. Leopoldt's conjecture states that this determinant is non-zero.

== See also ==

Elliptic unit
Cyclotomic unit
Shintani's unit theorem

== Notes ==

== References ==

Cohen, Henri (1993). A Course in Computational Algebraic Number Theory. Graduate Texts in Mathematics. Vol. 138. Berlin, New York: Springer-Verlag. ISBN 978-3-540-55640-4. MR 1228206. Zbl 0786.11071.
Elstrodt, Jürgen (2007). "The Life and Work of Gustav Lejeune Dirichlet (1805–1859)" (PDF). Clay Mathematics Proceedings. Archived from the original (PDF) on 2021-05-22. Retrieved 2010-06-13.
Lang, Serge (1994). Algebraic Number Theory. Graduate Texts in Mathematics. Vol. 110 (2nd ed.). New York: Springer-Verlag. ISBN 0-387-94225-4. Zbl 0811.11001.
Neukirch, Jürgen (1999). Algebraische Zahlentheorie. Grundlehren der mathematischen Wissenschaften. Vol. 322. Berlin: Springer-Verlag. ISBN 978-3-540-65399-8. MR 1697859. Zbl 0956.11021.
Neukirch, Jürgen; Schmidt, Alexander; Wingberg, Kay (2000). Cohomology of Number Fields. Grundlehren der Mathematischen Wissenschaften. Vol. 323. Berlin: Springer-Verlag. ISBN 978-3-540-66671-4. MR 1737196. Zbl 0948.11001.
Wikipedia/Regulator_of_an_algebraic_number_field
In mathematics, the discriminant of an algebraic number field is a numerical invariant that, loosely speaking, measures the size of the (ring of integers of the) algebraic number field. More specifically, it is proportional to the squared volume of the fundamental domain of the ring of integers, and it regulates which primes are ramified.

The discriminant is one of the most basic invariants of a number field, and occurs in several important analytic formulas such as the functional equation of the Dedekind zeta function of K, and the analytic class number formula for K. A theorem of Hermite states that there are only finitely many number fields of bounded discriminant; however, determining this quantity is still an open problem, and the subject of current research.

The discriminant of K can be referred to as the absolute discriminant of K to distinguish it from the relative discriminant of an extension K/L of number fields. The latter is an ideal in the ring of integers of L, and like the absolute discriminant it indicates which primes are ramified in K/L. It is a generalization of the absolute discriminant allowing for L to be bigger than Q; in fact, when L = Q, the relative discriminant of K/Q is the principal ideal of Z generated by the absolute discriminant of K.

== Definition ==

Let K be an algebraic number field, and let O_K be its ring of integers. Let b_1, …, b_n be an integral basis of O_K (i.e. a basis as a Z-module), and let {σ_1, …, σ_n} be the set of embeddings of K into the complex numbers (i.e. injective ring homomorphisms K → C). The discriminant of K is the square of the determinant of the n × n matrix B whose (i, j)-entry is σ_i(b_j). Symbolically,

Δ_K = det( (σ_i(b_j))_{1 ≤ i, j ≤ n} )².

Equivalently, the trace from K to Q can be used. Specifically, define the trace form to be the matrix whose (i, j)-entry is Tr_{K/Q}(b_i b_j). This matrix equals BᵀB, so the discriminant of K is the determinant of this matrix.

The discriminant of an order in K with integral basis b_1, …, b_n is defined in the same way.
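To make the definition concrete, here is a minimal Python sketch for a quadratic field Q(√d) with squarefree d, computing det(B)² over the standard integral basis (which uses (1 + √d)/2 when d ≡ 1 mod 4); floating point and the final rounding are adequate for the tiny 2 × 2 determinant involved.

```python
import math

def quadratic_discriminant(d):
    """Discriminant of Q(sqrt(d)) for squarefree d, computed as
    det(B)^2 where B[i][j] = sigma_i(b_j) over an integral basis."""
    s = math.sqrt(abs(d))
    sqrt_d = [complex(0, s), complex(0, -s)] if d < 0 else [s, -s]
    # Integral basis: {1, (1 + sqrt(d))/2} if d = 1 mod 4, else {1, sqrt(d)}
    if d % 4 == 1:
        basis = lambda r: [1, (1 + r) / 2]
    else:
        basis = lambda r: [1, r]
    b1, b2 = basis(sqrt_d[0]), basis(sqrt_d[1])
    det = b1[0] * b2[1] - b1[1] * b2[0]      # 2x2 determinant of B
    sq = det * det
    return round(sq.real if isinstance(sq, complex) else sq)

print(quadratic_discriminant(5))    # 5   (d = 1 mod 4)
print(quadratic_discriminant(-1))   # -4  (d = 3 mod 4)
print(quadratic_discriminant(2))    # 8
```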
== Examples ==

Quadratic number fields: let d be a square-free integer; then the discriminant of K = Q(√d) is

Δ_K = d if d ≡ 1 (mod 4), and 4d if d ≡ 2, 3 (mod 4).

An integer that occurs as the discriminant of a quadratic number field is called a fundamental discriminant.

Cyclotomic fields: let n > 2 be an integer, let ζ_n be a primitive n-th root of unity, and let K_n = Q(ζ_n) be the n-th cyclotomic field. The discriminant of K_n is given by

Δ_{K_n} = (−1)^{φ(n)/2} n^{φ(n)} / ∏_{p | n} p^{φ(n)/(p−1)},

where φ(n) is Euler's totient function, and the product in the denominator is over primes p dividing n.

Power bases: in the case where the ring of algebraic integers O_K has a power integral basis, that is, can be written as O_K = Z[α], the discriminant of K is equal to the discriminant of the minimal polynomial of α. To see this, one can choose the integral basis of O_K to be b_1 = 1, b_2 = α, b_3 = α², …, b_n = α^{n−1}. Then the matrix B in the definition is the Vandermonde matrix associated to α_i = σ_i(α), whose squared determinant is

∏_{1 ≤ i < j ≤ n} (α_i − α_j)²,

which is exactly the definition of the discriminant of the minimal polynomial.

Let K = Q(α) be the number field obtained by adjoining a root α of the polynomial x³ − x² − 2x − 8. This is Richard Dedekind's original example of a number field whose ring of integers does not possess a power basis. An integral basis is given by {1, α, α(α + 1)/2} and the discriminant of K is −503.

Repeated discriminants: the discriminant of a quadratic field uniquely identifies it, but this is not true, in general, for higher-degree number fields. For example, there are two non-isomorphic cubic fields of discriminant 3969. They are obtained by adjoining a root of the polynomial x³ − 21x + 28 or x³ − 21x − 35, respectively.
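A small Python sketch evaluating the cyclotomic discriminant formula above; the totient is computed by trial factorization, which is fine for small n. The printed values for n = 3, 4, 5 (−3, −4, 125) agree with the quadratic case (Q(ζ_3) = Q(√−3)) and with Q(ζ_4) = Q(i).

```python
def prime_factors(n):
    """Distinct prime factors of n by trial division."""
    ps, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            ps.add(p)
            n //= p
        p += 1
    if n > 1:
        ps.add(n)
    return ps

def totient(n):
    phi = n
    for p in prime_factors(n):
        phi = phi // p * (p - 1)
    return phi

def cyclotomic_discriminant(n):
    """Discriminant of the n-th cyclotomic field Q(zeta_n), n > 2:
    (-1)^(phi/2) * n^phi / prod_{p | n} p^(phi/(p-1))."""
    phi = totient(n)
    denom = 1
    for p in prime_factors(n):
        denom *= p ** (phi // (p - 1))
    return (-1) ** (phi // 2) * n ** phi // denom

for n in (3, 4, 5, 8, 12):
    print(n, cyclotomic_discriminant(n))  # -3, -4, 125, 256, 144
```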
== Basic results ==

Brill's theorem: the sign of the discriminant is (−1)^{r_2} where r_2 is the number of complex places of K.

A prime p ramifies in K if and only if p divides Δ_K.

Stickelberger's theorem:

Δ_K ≡ 0 or 1 (mod 4).

Minkowski's bound: let n denote the degree of the extension K/Q and r_2 the number of complex places of K; then

|Δ_K|^{1/2} ≥ (nⁿ/n!) (π/4)^{r_2} ≥ (nⁿ/n!) (π/4)^{n/2}.

Minkowski's theorem: if K is not Q, then |Δ_K| > 1 (this follows directly from the Minkowski bound).

Hermite–Minkowski theorem: let N be a positive integer. There are only finitely many (up to isomorphisms) algebraic number fields K with |Δ_K| < N. Again, this follows from the Minkowski bound together with Hermite's theorem (that there are only finitely many algebraic number fields with prescribed discriminant).

== History ==

The definition of the discriminant of a general algebraic number field, K, was given by Dedekind in 1871. At this point, he already knew the relationship between the discriminant and ramification. Hermite's theorem predates the general definition of the discriminant, with Charles Hermite publishing a proof of it in 1857. In 1877, Alexander von Brill determined the sign of the discriminant. Leopold Kronecker first stated Minkowski's theorem in 1882, though the first proof was given by Hermann Minkowski in 1891. In the same year, Minkowski published his bound on the discriminant. Near the end of the nineteenth century, Ludwig Stickelberger obtained his theorem on the residue of the discriminant modulo four.

== Relative discriminant ==

The discriminant defined above is sometimes referred to as the absolute discriminant of K to distinguish it from the relative discriminant Δ_{K/L} of an extension of number fields K/L, which is an ideal in O_L. The relative discriminant is defined in a fashion similar to the absolute discriminant, but must take into account that ideals in O_L may not be principal and that there may not be an O_L basis of O_K. Let {σ_1, …, σ_n} be the set of embeddings of K into C which are the identity on L. If b_1, …, b_n is any basis of K over L, let d(b_1, …, b_n) be the square of the determinant of the n by n matrix whose (i, j)-entry is σ_i(b_j). Then the relative discriminant of K/L is the ideal generated by the d(b_1, …, b_n) as {b_1, …, b_n} varies over all integral bases of K/L (i.e. bases with the property that b_i ∈ O_K for all i). Alternatively, the relative discriminant of K/L is the norm of the different of K/L. When L = Q, the relative discriminant Δ_{K/Q} is the principal ideal of Z generated by the absolute discriminant Δ_K. In a tower of fields K/L/F the relative discriminants are related by

Δ_{K/F} = N_{L/F}(Δ_{K/L}) Δ_{L/F}^{[K:L]},

where N denotes the relative norm.

=== Ramification ===

The relative discriminant regulates the ramification data of the field extension K/L. A prime ideal p of L ramifies in K if, and only if, it divides the relative discriminant Δ_{K/L}. An extension is unramified if, and only if, the discriminant is the unit ideal. The Minkowski bound above shows that there are no non-trivial unramified extensions of Q. Fields larger than Q may have unramified extensions: for example, for any field with class number greater than one, its Hilbert class field is a non-trivial unramified extension.
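A minimal Python check of the degree-only form of the Minkowski bound above; it prints the resulting lower bound on |Δ_K| for small degrees n. The n = 2 value already forces |Δ_K| > 2, illustrating Minkowski's theorem (and indeed the smallest quadratic |Δ_K| is 3, for Q(√−3)).

```python
import math

def minkowski_lower_bound(n):
    """Lower bound on |disc(K)| for a degree-n field, from
    |disc|^(1/2) >= (n^n / n!) * (pi/4)^(n/2)."""
    root = (n ** n / math.factorial(n)) * (math.pi / 4) ** (n / 2)
    return root ** 2

for n in range(2, 8):
    print(n, round(minkowski_lower_bound(n), 2))
# n=2 gives ~2.47, so any quadratic field has |disc| >= 3
```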
== Root discriminant ==

The root discriminant of a degree n number field K is defined by the formula

rd_K = |Δ_K|^{1/n}.

The relation between relative discriminants in a tower of fields shows that the root discriminant does not change in an unramified extension.

=== Asymptotic lower bounds ===

Given nonnegative rational numbers ρ and σ, not both 0, and a positive integer n such that the pair (r, 2s) = (ρn, σn) is in Z × 2Z, let α_n(ρ, σ) be the infimum of rd_K as K ranges over degree n number fields with r real embeddings and 2s complex embeddings, and let α(ρ, σ) = liminf_{n→∞} α_n(ρ, σ). Then

α(ρ, σ) ≥ 60.8^ρ 22.3^σ,

and the generalized Riemann hypothesis implies the stronger bound

α(ρ, σ) ≥ 215.3^ρ 44.7^σ.

There is also a lower bound that holds in all degrees, not just asymptotically: for totally real fields, the root discriminant is > 14, with 1229 exceptions.

=== Asymptotic upper bounds ===

On the other hand, the existence of an infinite class field tower can give upper bounds on the values of α(ρ, σ). For example, the infinite class field tower over Q(√−m) with m = 3·5·7·11·19 produces fields of arbitrarily large degree with root discriminant 2√m ≈ 296.276, so α(0, 1) < 296.276. Using tamely ramified towers, Hajir and Maire have shown that α(1, 0) < 954.3 and α(0, 1) < 82.2, improving upon earlier bounds of Martinet.

== Relation to other quantities ==

When embedded into K ⊗_Q R, the volume of the fundamental domain of O_K is √|Δ_K| (sometimes a different measure is used and the volume obtained is 2^{−r_2} √|Δ_K|, where r_2 is the number of complex places of K).

Due to its appearance in this volume, the discriminant also appears in the functional equation of the Dedekind zeta function of K, and hence in the analytic class number formula, and the Brauer–Siegel theorem.

The relative discriminant of K/L is the Artin conductor of the regular representation of the Galois group of K/L. This provides a relation to the Artin conductors of the characters of the Galois group of K/L, called the conductor-discriminant formula.

== Notes ==

== References ==

=== Primary sources ===

Brill, Alexander von (1877), "Ueber die Discriminante", Mathematische Annalen, 12 (1): 87–89, doi:10.1007/BF01442468, JFM 09.0059.02, MR 1509928, S2CID 120947279, retrieved 2009-08-22.
Dedekind, Richard (1871), Vorlesungen über Zahlentheorie von P.G. Lejeune Dirichlet (2 ed.), Vieweg, retrieved 2009-08-05.
Dedekind, Richard (1878), "Über den Zusammenhang zwischen der Theorie der Ideale und der Theorie der höheren Congruenzen", Abhandlungen der Königlichen Gesellschaft der Wissenschaften zu Göttingen, 23 (1), retrieved 2009-08-20.
Hermite, Charles (1857), "Extrait d'une lettre de M. C. Hermite à M. Borchardt sur le nombre limité d'irrationalités auxquelles se réduisent les racines des équations à coefficients entiers complexes d'un degré et d'un discriminant donnés", Crelle's Journal, 1857 (53): 182–192, doi:10.1515/crll.1857.53.182, S2CID 120694650, retrieved 2009-08-20.
Kronecker, Leopold (1882), "Grundzüge einer arithmetischen Theorie der algebraischen Grössen", Crelle's Journal, 92: 1–122, JFM 14.0038.02, retrieved 2009-08-20.
Minkowski, Hermann (1891a), "Ueber die positiven quadratischen Formen und über kettenbruchähnliche Algorithmen", Crelle's Journal, 1891 (107): 278–297, doi:10.1515/crll.1891.107.278, JFM 23.0212.01, retrieved 2009-08-20.
Minkowski, Hermann (1891b), "Théorèmes d'arithmétiques", Comptes rendus de l'Académie des sciences, 112: 209–212, JFM 23.0214.01.
Stickelberger, Ludwig (1897), "Über eine neue Eigenschaft der Diskriminanten algebraischer Zahlkörper", Proceedings of the First International Congress of Mathematicians, Zürich, pp. 182–193, JFM 29.0172.03.

=== Secondary sources ===

Bourbaki, Nicolas (1994). Elements of the History of Mathematics. Translated by Meldrum, John. Berlin: Springer-Verlag. ISBN 978-3-540-64767-6. MR 1290116.
Cohen, Henri (1993), A Course in Computational Algebraic Number Theory, Graduate Texts in Mathematics, vol. 138, Berlin, New York: Springer-Verlag, ISBN 978-3-540-55640-4, MR 1228206.
Cohen, Henri; Diaz y Diaz, Francisco; Olivier, Michel (2002), "A Survey of Discriminant Counting", in Fieker, Claus; Kohel, David R. (eds.), Algorithmic Number Theory, Proceedings, 5th International Symposium, ANTS-V, University of Sydney, July 2002, Lecture Notes in Computer Science, vol. 2369, Berlin: Springer-Verlag, pp. 80–94, doi:10.1007/3-540-45455-1_7, ISBN 978-3-540-43863-2, ISSN 0302-9743, MR 2041075.
Fröhlich, Albrecht; Taylor, Martin (1993), Algebraic Number Theory, Cambridge Studies in Advanced Mathematics, vol. 27, Cambridge University Press, ISBN 978-0-521-43834-6, MR 1215934.
Koch, Helmut (1997), Algebraic Number Theory, Encycl. Math. Sci., vol. 62 (2nd printing of 1st ed.), Springer-Verlag, ISBN 3-540-63003-1, Zbl 0819.11044.
Narkiewicz, Władysław (2004), Elementary and Analytic Theory of Algebraic Numbers, Springer Monographs in Mathematics (3 ed.), Berlin: Springer-Verlag, ISBN 978-3-540-21902-6, MR 2078267.
Neukirch, Jürgen (1999). Algebraische Zahlentheorie. Grundlehren der mathematischen Wissenschaften. Vol. 322. Berlin: Springer-Verlag. ISBN 978-3-540-65399-8. MR 1697859. Zbl 0956.11021.
Serre, Jean-Pierre (1967), "Local class field theory", in Cassels, J. W. S.; Fröhlich, Albrecht (eds.), Algebraic Number Theory, Proceedings of an instructional conference at the University of Sussex, Brighton, 1965, London: Academic Press, ISBN 0-12-163251-2, MR 0220701.
Voight, John (2008), "Enumeration of totally real number fields of bounded root discriminant", in van der Poorten, Alfred J.; Stein, Andreas (eds.), Algorithmic Number Theory. Proceedings, 8th International Symposium, ANTS-VIII, Banff, Canada, May 2008, Lecture Notes in Computer Science, vol. 5011, Berlin: Springer-Verlag, pp. 268–281, arXiv:0802.0194, doi:10.1007/978-3-540-79456-1_18, ISBN 978-3-540-79455-4, MR 2467853, S2CID 30036220, Zbl 1205.11125.
Washington, Lawrence (1997), Introduction to Cyclotomic Fields, Graduate Texts in Mathematics, vol. 83 (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-94762-4, MR 1421575, Zbl 0966.11047.

== Further reading ==

Milne, James S. (1998), Algebraic Number Theory, retrieved 2008-08-20.
Wikipedia/Discriminant_of_an_algebraic_number_field
In mathematics, especially group theory, two elements a and b of a group are conjugate if there is an element g in the group such that b = gag⁻¹. This is an equivalence relation whose equivalence classes are called conjugacy classes. In other words, each conjugacy class is closed under conjugation by any element g of the group.

Members of the same conjugacy class cannot be distinguished by using only the group structure, and therefore share many properties. The study of conjugacy classes of non-abelian groups is fundamental for the study of their structure. For an abelian group, each conjugacy class is a set containing one element (a singleton set).

Functions that are constant for members of the same conjugacy class are called class functions.

== Definition ==

Let G be a group. Two elements a, b ∈ G are conjugate if there exists an element g ∈ G such that gag⁻¹ = b, in which case b is called a conjugate of a and a is called a conjugate of b.

In the case of the general linear group GL(n) of invertible matrices, the conjugacy relation is called matrix similarity.

It can be easily shown that conjugacy is an equivalence relation and therefore partitions G into equivalence classes. (This means that every element of the group belongs to precisely one conjugacy class, and the classes Cl(a) and Cl(b) are equal if and only if a and b are conjugate, and disjoint otherwise.) The equivalence class that contains the element a ∈ G is

Cl(a) = { gag⁻¹ : g ∈ G }

and is called the conjugacy class of a. The class number of G is the number of distinct (nonequivalent) conjugacy classes. All elements belonging to the same conjugacy class have the same order.

Conjugacy classes may be referred to by describing them, or more briefly by abbreviations such as "6A", meaning "a certain conjugacy class with elements of order 6"; "6B" would be a different conjugacy class with elements of order 6. The conjugacy class 1A is the conjugacy class of the identity, which has order 1. In some cases, conjugacy classes can be described in a uniform way; for example, in the symmetric group they can be described by cycle type.

== Examples ==

The symmetric group S_3, consisting of the 6 permutations of three elements, has three conjugacy classes:

No change (abc → abc). The single member has order 1.
Transposing two (abc → acb, abc → bac, abc → cba). The 3 members all have order 2.
A cyclic permutation of all three (abc → bca, abc → cab). The 2 members both have order 3.

These three classes also correspond to the classification of the isometries of an equilateral triangle.
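A brute-force Python sketch of this example: permutations of (0, 1, 2) are composed directly and grouped into orbits under conjugation, and the classes above (sizes 1, 3, 2) fall out. The tuple encoding of permutations is an illustrative choice.

```python
from itertools import permutations

def compose(p, q):
    """Composition p∘q of permutations given as tuples: (p∘q)[i] = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = list(permutations(range(3)))          # the symmetric group S3
classes, seen = [], set()
for a in G:
    if a not in seen:
        cl = {compose(compose(g, a), inverse(g)) for g in G}
        classes.append(cl)
        seen |= cl

print([len(c) for c in classes])          # [1, 3, 2]
```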
The symmetric group S_4, consisting of the 24 permutations of four elements, has five conjugacy classes, listed with their description, cycle type, member order, and members:

No change. Cycle type = [1⁴]. Order = 1. Members = { (1, 2, 3, 4) }.
Interchanging two (other two remain unchanged). Cycle type = [1²2¹]. Order = 2. Members = { (1, 2, 4, 3), (1, 4, 3, 2), (1, 3, 2, 4), (4, 2, 3, 1), (3, 2, 1, 4), (2, 1, 3, 4) }.
A cyclic permutation of three (other one remains unchanged). Cycle type = [1¹3¹]. Order = 3. Members = { (1, 3, 4, 2), (1, 4, 2, 3), (3, 2, 4, 1), (4, 2, 1, 3), (4, 1, 3, 2), (2, 4, 3, 1), (3, 1, 2, 4), (2, 3, 1, 4) }.
A cyclic permutation of all four. Cycle type = [4¹]. Order = 4. Members = { (2, 3, 4, 1), (2, 4, 1, 3), (3, 1, 4, 2), (3, 4, 2, 1), (4, 1, 2, 3), (4, 3, 1, 2) }.
Interchanging two, and also the other two. Cycle type = [2²]. Order = 2. Members = { (2, 1, 4, 3), (4, 3, 2, 1), (3, 4, 1, 2) }.

The proper rotations of the cube, which can be characterized by permutations of the body diagonals, are also described by conjugation in S_4.

In general, the number of conjugacy classes in the symmetric group S_n is equal to the number of integer partitions of n. This is because each conjugacy class corresponds to exactly one partition of {1, 2, …, n} into cycles, up to permutation of the elements of {1, 2, …, n}.

In general, the Euclidean group can be studied by conjugation of isometries in Euclidean space.

Example: let G = S_3, a = (23), x = (123), x⁻¹ = (321). Then xax⁻¹ = (123)(23)(321) = (31), so (31) is a conjugate of (23).

== Properties ==

The identity element is always the only element in its class, that is Cl(e) = {e}.

If G is abelian then gag⁻¹ = a for all a, g ∈ G, i.e. Cl(a) = {a} for all a ∈ G (and the converse is also true: if all conjugacy classes are singletons then G is abelian).

If two elements a, b ∈ G belong to the same conjugacy class (that is, if they are conjugate), then they have the same order. More generally, every statement about a can be translated into a statement about b = gag⁻¹, because the map φ(x) = gxg⁻¹ is an automorphism of G called an inner automorphism. See the next property for an example.
If a and b are conjugate, then so are their powers aᵏ and bᵏ. (Proof: if a = gbg⁻¹ then aᵏ = (gbg⁻¹)(gbg⁻¹)⋯(gbg⁻¹) = gbᵏg⁻¹.) Thus taking kth powers gives a map on conjugacy classes, and one may consider which conjugacy classes are in its preimage. For example, in the symmetric group, the square of an element of type (3)(2) (a 3-cycle and a 2-cycle) is an element of type (3), therefore one of the power-up classes of (3) is the class (3)(2) (where a is a power-up class of aᵏ).

An element a ∈ G lies in the center Z(G) of G if and only if its conjugacy class has only one element, a itself. More generally, if C_G(a) denotes the centralizer of a ∈ G, i.e., the subgroup consisting of all elements g such that ga = ag, then the index [G : C_G(a)] is equal to the number of elements in the conjugacy class of a (by the orbit-stabilizer theorem).

Take σ ∈ S_n and let m_1, m_2, …, m_s be the distinct integers which appear as lengths of cycles in the cycle type of σ (including 1-cycles). Let k_i be the number of cycles of length m_i in σ for each i = 1, 2, …, s (so that ∑ k_i m_i = n). Then the number of conjugates of σ is

n! / ( (k_1! m_1^{k_1}) (k_2! m_2^{k_2}) ⋯ (k_s! m_s^{k_s}) ).

== Conjugacy as group action ==

For any two elements g, x ∈ G, let g · x := gxg⁻¹. This defines a group action of G on G. The orbits of this action are the conjugacy classes, and the stabilizer of a given element is the element's centralizer.

Similarly, we can define a group action of G on the set of all subsets of G, by writing g · S := gSg⁻¹, or on the set of the subgroups of G.
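A short Python sketch of the cycle-type counting formula just given, checked against the five S_4 classes listed earlier (sizes 1, 6, 8, 6, 3, which sum to 24).

```python
from math import factorial
from collections import Counter

def class_size(n, cycle_type):
    """Number of permutations in S_n with the given cycle type,
    e.g. cycle_type=(2, 1, 1) for a transposition in S_4:
    n! / prod(k_i! * m_i**k_i) over distinct cycle lengths m_i."""
    assert sum(cycle_type) == n
    size = factorial(n)
    for m, k in Counter(cycle_type).items():
        size //= factorial(k) * m ** k
    return size

# The five classes of S_4, one per partition of 4
for part in [(1, 1, 1, 1), (2, 1, 1), (3, 1), (4,), (2, 2)]:
    print(part, class_size(4, part))   # 1, 6, 8, 6, 3
```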
== Conjugacy class equation ==

If G is a finite group, then for any group element a, the elements in the conjugacy class of a are in one-to-one correspondence with cosets of the centralizer C_G(a). This can be seen by observing that any two elements b and c belonging to the same coset (and hence, b = cz for some z in the centralizer C_G(a)) give rise to the same element when conjugating a:

bab⁻¹ = cza(cz)⁻¹ = czaz⁻¹c⁻¹ = cazz⁻¹c⁻¹ = cac⁻¹.

That can also be seen from the orbit-stabilizer theorem, when considering the group as acting on itself through conjugation, so that orbits are conjugacy classes and stabilizer subgroups are centralizers. The converse holds as well.

Thus the number of elements in the conjugacy class of a is the index [G : C_G(a)] of the centralizer C_G(a) in G; hence the size of each conjugacy class divides the order of the group.

Furthermore, if we choose a single representative element x_i from every conjugacy class, we infer from the disjointness of the conjugacy classes that

|G| = ∑_i [G : C_G(x_i)],

where C_G(x_i) is the centralizer of the element x_i. Observing that each element of the center Z(G) forms a conjugacy class containing just itself gives rise to the class equation:

|G| = |Z(G)| + ∑_i [G : C_G(x_i)],

where the sum is over a representative element from each conjugacy class that is not in the center.

Knowledge of the divisors of the group order |G| can often be used to gain information about the order of the center or of the conjugacy classes.

=== Example ===

Consider a finite p-group G (that is, a group with order pⁿ, where p is a prime number and n > 0). We are going to prove that every finite p-group has a non-trivial center.

Since the order of any conjugacy class of G must divide the order of G, it follows that each conjugacy class H_i that is not in the center has order a power p^{k_i} of p, where 0 < k_i < n. But then the class equation requires that

|G| = pⁿ = |Z(G)| + ∑_i p^{k_i}.

From this we see that p must divide |Z(G)|, so |Z(G)| > 1.

In particular, when n = 2, G is an abelian group, since any non-trivial group element is of order p or p². If some element a of G is of order p², then G is isomorphic to the cyclic group of order p², hence abelian. On the other hand, if every non-trivial element in G is of order p, then by the conclusion above |Z(G)| = p or p². We only need to consider the case |Z(G)| = p; then there is an element b of G which is not in the center of G.
Note that C_G(b) contains b as well as the center, which does not contain b but has at least p elements. Hence the order of C_G(b) is strictly larger than p, therefore |C_G(b)| = p², therefore b is an element of the center of G, a contradiction. Hence G is abelian and in fact isomorphic to the direct product of two cyclic groups each of order p.

== Conjugacy of subgroups and general subsets ==

More generally, given any subset S ⊆ G (S not necessarily a subgroup), define a subset T ⊆ G to be conjugate to S if there exists some g ∈ G such that T = gSg⁻¹. Let Cl(S) be the set of all subsets T ⊆ G such that T is conjugate to S.

A frequently used theorem is that, given any subset S ⊆ G, the index of N(S) (the normalizer of S) in G equals the cardinality of Cl(S):

|Cl(S)| = [G : N(S)].

This follows since, if g, h ∈ G, then gSg⁻¹ = hSh⁻¹ if and only if g⁻¹h ∈ N(S), in other words, if and only if g and h are in the same coset of N(S).

By using S = {a}, this formula generalizes the one given earlier for the number of elements in a conjugacy class.

The above is particularly useful when talking about subgroups of G. The subgroups can thus be divided into conjugacy classes, with two subgroups belonging to the same class if and only if they are conjugate. Conjugate subgroups are isomorphic, but isomorphic subgroups need not be conjugate. For example, an abelian group may have two different subgroups which are isomorphic, but they are never conjugate.

== Geometric interpretation ==

Conjugacy classes in the fundamental group of a path-connected topological space can be thought of as equivalence classes of free loops under free homotopy.

== Conjugacy class and irreducible representations in finite groups ==

In any finite group, the number of nonisomorphic irreducible representations over the complex numbers is precisely the number of conjugacy classes.

== See also ==

Topological conjugacy – Concept in topology
FC-group – Group in group theory mathematics
Conjugacy-closed subgroup

== Notes ==

== References ==

Grillet, Pierre Antoine (2007). Abstract Algebra. Graduate Texts in Mathematics. Vol. 242 (2 ed.). Springer. ISBN 978-0-387-71567-4.

== External links ==

"Conjugate elements", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia/Class_equation
Solution of triangles (Latin: solutio triangulorum) is the main trigonometric problem of finding the characteristics of a triangle (angles and lengths of sides), when some of these are known. The triangle can be located on a plane or on a sphere. Applications requiring triangle solutions include geodesy, astronomy, construction, and navigation.

== Solving plane triangles ==

A general form triangle has six main characteristics (see picture): three linear (side lengths a, b, c) and three angular (α, β, γ). The classical plane trigonometry problem is to specify three of the six characteristics and determine the other three. A triangle can be uniquely determined in this sense when given any of the following:

Three sides (SSS)
Two sides and the included angle (SAS, side-angle-side)
Two sides and an angle not included between them (SSA), if the side length adjacent to the angle is shorter than the other side length
A side and the two angles adjacent to it (ASA)
A side, the angle opposite to it and an angle adjacent to it (AAS)

For all cases in the plane, at least one of the side lengths must be specified. If only the angles are given, the side lengths cannot be determined, because any similar triangle is a solution.

=== Trigonometric relations ===

The standard method of solving the problem is to use fundamental relations.

Law of cosines:

a² = b² + c² − 2bc cos α
b² = a² + c² − 2ac cos β
c² = a² + b² − 2ab cos γ

Law of sines:

a / sin α = b / sin β = c / sin γ

Sum of angles:

α + β + γ = 180°

Law of tangents:

(a − b) / (a + b) = tan ½(α − β) / tan ½(α + β).

There are other (sometimes practically useful) universal relations: the law of cotangents and Mollweide's formula.

==== Notes ====

To find an unknown angle, the law of cosines is safer than the law of sines. The reason is that the value of the sine of an angle of the triangle does not uniquely determine the angle. For example, if sin β = 0.5, the angle β can equal either 30° or 150°. Using the law of cosines avoids this problem: within the interval from 0° to 180° the cosine value unambiguously determines its angle. On the other hand, if the angle is small (or close to 180°), then it is more robust numerically to determine it from its sine than its cosine, because the arc-cosine function has a divergent derivative at 1 (or −1).

We assume that the relative position of specified characteristics is known. If not, the mirror reflection of the triangle will also be a solution. For example, three side lengths uniquely define either a triangle or its reflection.

=== Three sides given (SSS) ===

Let three side lengths a, b, c be specified. To find the angles α, β, the law of cosines can be used:

α = arccos( (b² + c² − a²) / (2bc) )
β = arccos( (a² + c² − b²) / (2ac) ).

Then angle γ = 180° − α − β.

Some sources recommend to find angle β from the law of sines but (as Note 1 above states) there is a risk of confusing an acute angle value with an obtuse one.
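A minimal Python sketch of the SSS procedure above, using arccos as the note recommends; side names follow the text, and nothing beyond the triangle inequality is validated.

```python
import math

def solve_sss(a, b, c):
    """Solve a plane triangle from three sides via the law of cosines.
    Returns the angles (alpha, beta, gamma) in degrees."""
    if a + b <= c or b + c <= a or a + c <= b:
        raise ValueError("violates the triangle inequality")
    alpha = math.degrees(math.acos((b*b + c*c - a*a) / (2*b*c)))
    beta  = math.degrees(math.acos((a*a + c*c - b*b) / (2*a*c)))
    gamma = 180.0 - alpha - beta
    return alpha, beta, gamma

print(solve_sss(3, 4, 5))   # (36.87..., 53.13..., 90.0)
```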
Another method of calculating the angles from known sides is to apply the law of cotangents.

Area using Heron's formula:

A = √( s(s − a)(s − b)(s − c) ),  where s = (a + b + c)/2.

Heron's formula without using the semiperimeter:

A = √( (a + b + c)(b + c − a)(a + c − b)(a + b − c) ) / 4.

=== Two sides and the included angle given (SAS) ===

Here the lengths of sides a, b and the angle γ between these sides are known. The third side can be determined from the law of cosines:

c = √(a² + b² − 2ab cos γ).

Now we use the law of cosines to find the second angle:

α = arccos( (b² + c² − a²) / (2bc) ).

Finally, β = 180° − α − γ.

=== Two sides and non-included angle given (SSA) ===

This case is not solvable in all cases; a solution is guaranteed to be unique only if the side length adjacent to the angle is shorter than the other side length. Assume that two sides b, c and the angle β are known. The equation for the angle γ can be implied from the law of sines:

sin γ = (c/b) sin β.

We denote further D = (c/b) sin β (the equation's right side). There are four possible cases:

If D > 1, no such triangle exists because the side b does not reach line BC. For the same reason a solution does not exist if the angle β ≥ 90° and b ≤ c.
If D = 1, a unique solution exists: γ = 90°, i.e., the triangle is right-angled.
If D < 1, two alternatives are possible:
If b ≥ c, then β ≥ γ (the larger side corresponds to a larger angle). Since no triangle can have two obtuse angles, γ is an acute angle and the solution γ = arcsin D is unique.
If b < c, the angle γ may be acute: γ = arcsin D, or obtuse: γ′ = 180° − γ. The figure on the right shows the point C, the side b and the angle γ as the first solution, and the point C′, side b′ and the angle γ′ as the second solution.

Once γ is obtained, the third angle α = 180° − β − γ. The third side can then be found from the law of sines:

a = b sin α / sin β

or from the law of cosines:

a = c cos β ± √(b² − c² sin² β).

(A code sketch of this case analysis is given at the end of this section, after the remaining plane cases.)

=== A side and two adjacent angles given (ASA) ===

The known characteristics are the side c and the angles α, β. The third angle γ = 180° − α − β. Two unknown sides can be calculated from the law of sines:

a = c sin α / sin γ = c sin α / sin(α + β)
b = c sin β / sin γ = c sin β / sin(α + β).

=== A side, one adjacent angle and the opposite angle given (AAS) ===

The procedure for solving an AAS triangle is the same as that for an ASA triangle: first, find the third angle by using the angle sum property of a triangle, then find the other two sides using the law of sines.

=== Other given lengths ===

In many cases, triangles can be solved given three pieces of information, some of which are the lengths of the triangle's medians, altitudes, or angle bisectors. Posamentier and Lehmann list the results for the question of solvability using no higher than square roots (i.e., constructibility) for each of the 95 distinct cases; 63 of these are constructible.
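As referenced above, here is a minimal Python sketch of the SSA case, returning zero, one, or two solutions (γ, α, a) for given b, c and β; names mirror the text, and degrees are used throughout.

```python
import math

def solve_ssa(b, c, beta_deg):
    """Two sides b, c and the non-included angle beta (opposite b).
    Returns a list of solutions (gamma, alpha, a) in degrees,
    following the D = (c/b) sin(beta) case analysis."""
    beta = math.radians(beta_deg)
    D = (c / b) * math.sin(beta)
    if D > 1 or (beta_deg >= 90 and b <= c):
        return []                         # no such triangle
    gammas = [math.degrees(math.asin(D))]
    if D < 1 and b < c:
        gammas.append(180.0 - gammas[0])  # ambiguous case: second solution
    solutions = []
    for gamma in gammas:
        alpha = 180.0 - beta_deg - gamma
        a = b * math.sin(math.radians(alpha)) / math.sin(beta)
        solutions.append((gamma, alpha, a))
    return solutions

print(solve_ssa(7, 9, 35))   # two solutions (ambiguous case, b < c)
print(solve_ssa(9, 7, 35))   # one solution (b >= c)
```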
== Solving spherical triangles ==

The general spherical triangle is fully determined by three of its six characteristics (3 sides and 3 angles). The lengths of the sides a, b, c of a spherical triangle are their central angles, measured in angular units rather than linear units. (On a unit sphere, the angle (in radians) and the length around the sphere are numerically the same. On other spheres, the angle (in radians) is equal to the length around the sphere divided by the radius.)

Spherical geometry differs from planar Euclidean geometry, so the solution of spherical triangles is built on different rules. For example, the sum of the three angles α + β + γ depends on the size of the triangle. In addition, similar triangles cannot be unequal, so the problem of constructing a triangle with specified three angles has a unique solution. The basic relations used to solve a problem are similar to those of the planar case: see Spherical law of cosines and Spherical law of sines.

Among other relationships that may be useful are the half-side formula and Napier's analogies:

tan ½c cos ½(α − β) = tan ½(a + b) cos ½(α + β)
tan ½c sin ½(α − β) = tan ½(a − b) sin ½(α + β)
cot ½γ cos ½(a − b) = tan ½(α + β) cos ½(a + b)
cot ½γ sin ½(a − b) = tan ½(α − β) sin ½(a + b).

=== Three sides given (spherical SSS) ===

Known: the sides a, b, c (in angular units). The triangle's angles are computed using the spherical law of cosines:

α = arccos( (cos a − cos b cos c) / (sin b sin c) )
β = arccos( (cos b − cos c cos a) / (sin c sin a) )
γ = arccos( (cos c − cos a cos b) / (sin a sin b) ).

=== Two sides and the included angle given (spherical SAS) ===

Known: the sides a, b and the angle γ between them. The side c can be found from the spherical law of cosines:

c = arccos( cos a cos b + sin a sin b cos γ ).

The angles α, β can be calculated as above, or by using Napier's analogies:

α = arctan( 2 sin a / (tan ½γ sin(b + a) + cot ½γ sin(b − a)) )
β = arctan( 2 sin b / (tan ½γ sin(a + b) + cot ½γ sin(a − b)) ).
This problem arises in the navigation problem of finding the great circle between two points on the earth specified by their latitude and longitude; in this application, it is important to use formulas which are not susceptible to round-off errors. For this purpose, the following formulas (which may be derived using vector algebra) can be used:

c = arctan( √( (sin a cos b − cos a sin b cos γ)² + (sin b sin γ)² ) / (cos a cos b + sin a sin b cos γ) )
α = arctan( sin a sin γ / (sin b cos a − cos b sin a cos γ) )
β = arctan( sin b sin γ / (sin a cos b − cos a sin b cos γ) ),

where the signs of the numerators and denominators in these expressions should be used to determine the quadrant of the arctangent.

=== Two sides and non-included angle given (spherical SSA) ===

This problem is not solvable in all cases; a solution is guaranteed to be unique only if the side length adjacent to the angle is shorter than the other side length. Known: the sides b, c and the angle β not between them. A solution exists if the following condition holds:

b > arcsin( sin c sin β ).

The angle γ can be found from the spherical law of sines:

γ = arcsin( sin c sin β / sin b ).

As for the plane case, if b < c then there are two solutions: γ and 180° − γ.

We can find other characteristics by using Napier's analogies:

a = 2 arctan[ tan ½(b − c) · sin ½(β + γ) / sin ½(β − γ) ]
α = 2 arccot[ tan ½(β − γ) · sin ½(b + c) / sin ½(b − c) ].
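A minimal Python sketch of the spherical SSA recipe above, combining the spherical law of sines with Napier's analogies; inputs are in radians, arccot is implemented as atan2(1, x), and degenerate configurations (b = c, or β = γ) are not handled.

```python
import math

def spherical_ssa(b, c, beta):
    """Spherical SSA: sides b, c and the angle beta (all in radians),
    beta not included between them. Returns a list of solutions
    (gamma, a, alpha)."""
    if not b > math.asin(math.sin(c) * math.sin(beta)):
        return []                                   # no such triangle
    gamma = math.asin(math.sin(c) * math.sin(beta) / math.sin(b))
    candidates = [gamma] + ([math.pi - gamma] if b < c else [])
    solutions = []
    for g in candidates:
        a = 2 * math.atan(math.tan((b - c) / 2)
                          * math.sin((beta + g) / 2) / math.sin((beta - g) / 2))
        alpha = 2 * math.atan2(1.0, math.tan((beta - g) / 2)
                               * math.sin((b + c) / 2) / math.sin((b - c) / 2))
        solutions.append((g, a, alpha))
    return solutions

# b = 60 deg, c = 50 deg, beta = 70 deg: one solution since b >= c
for g, a, al in spherical_ssa(math.radians(60), math.radians(50), math.radians(70)):
    print([round(math.degrees(v), 2) for v in (g, a, al)])  # [56.22, 66.07, 82.76]
```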
=== A side and two adjacent angles given (spherical ASA) ===

Known: the side c and the angles α, β. First we determine the angle γ using the spherical law of cosines:

γ = arccos( sin α sin β cos c − cos α cos β ).

We can find the two unknown sides from the spherical law of cosines (using the calculated angle γ):

a = arccos( (cos α + cos β cos γ) / (sin β sin γ) )
b = arccos( (cos β + cos α cos γ) / (sin α sin γ) ),

or by using Napier's analogies:

a = arctan( 2 sin α / (cot ½c sin(β + α) + tan ½c sin(β − α)) )
b = arctan( 2 sin β / (cot ½c sin(α + β) + tan ½c sin(α − β)) ).

=== A side, one adjacent angle and the opposite angle given (spherical AAS) ===

Known: the side a and the angles α, β. The side b can be found from the spherical law of sines:

b = arcsin( sin a sin β / sin α ).

If the angle for the side a is acute and α > β, another solution exists:

b = π − arcsin( sin a sin β / sin α ).

We can find other characteristics by using Napier's analogies:

c = 2 arctan[ tan ½(a − b) · sin ½(α + β) / sin ½(α − β) ]
γ = 2 arccot[ tan ½(α − β) · sin ½(a + b) / sin ½(a − b) ].

=== Three angles given (spherical AAA) ===

Known: the angles α, β, γ. From the spherical law of cosines we infer:

a = arccos( (cos α + cos β cos γ) / (sin β sin γ) )
b = arccos( (cos β + cos γ cos α) / (sin γ sin α) )
c = arccos( (cos γ + cos α cos β) / (sin α sin β) ).

=== Solving right-angled spherical triangles ===

The above algorithms become much simpler if one of the angles of a triangle (for example, the angle C) is the right angle. Such a spherical triangle is fully defined by its two elements, and the other three can be calculated using Napier's Pentagon or the following relations.
sin ⁡ a = sin ⁡ c ⋅ sin ⁡ A {\displaystyle \sin a=\sin c\cdot \sin A} (from the spherical law of sines) tan ⁡ a = sin ⁡ b ⋅ tan ⁡ A {\displaystyle \tan a=\sin b\cdot \tan A} cos ⁡ c = cos ⁡ a ⋅ cos ⁡ b {\displaystyle \cos c=\cos a\cdot \cos b} (from the spherical law of cosines) tan ⁡ b = tan ⁡ c ⋅ cos ⁡ A {\displaystyle \tan b=\tan c\cdot \cos A} cos ⁡ A = cos ⁡ a ⋅ sin ⁡ B {\displaystyle \cos A=\cos a\cdot \sin B} (also from the spherical law of cosines) cos ⁡ c = cot ⁡ A ⋅ cot ⁡ B {\displaystyle \cos c=\cot A\cdot \cot B} == Some applications == === Triangulation === If one wants to measure the distance d from shore to a remote ship via triangulation, one marks on the shore two points with known distance ℓ between them (the baseline). Let α, β be the angles between the baseline and the direction to the ship. From the formulae above (ASA case, assuming planar geometry) one can compute the distance as the triangle height: d = sin ⁡ α sin ⁡ β sin ⁡ ( α + β ) ℓ = tan ⁡ α tan ⁡ β tan ⁡ α + tan ⁡ β ℓ . {\displaystyle d={\frac {\sin \alpha \,\sin \beta }{\sin(\alpha +\beta )}}\ell ={\frac {\tan \alpha \,\tan \beta }{\tan \alpha +\tan \beta }}\ell .} For the spherical case, one can first compute the length of side from the point at α to the ship (i.e. the side opposite to β) via the ASA formula tan ⁡ b = 2 sin ⁡ β cot ⁡ 1 2 ℓ sin ⁡ ( α + β ) + tan ⁡ 1 2 ℓ sin ⁡ ( α − β ) , {\displaystyle \tan b={\frac {2\sin \beta }{\cot {\frac {1}{2}}\ell \,\sin(\alpha +\beta )+\tan {\frac {1}{2}}\ell \,\sin(\alpha -\beta )}},} and insert this into the AAS formula for the right subtriangle that contains the angle α and the sides b and d: sin ⁡ d = sin ⁡ b sin ⁡ α = tan ⁡ b 1 + tan 2 ⁡ b sin ⁡ α . {\displaystyle \sin d=\sin b\sin \alpha ={\frac {\tan b}{\sqrt {1+\tan ^{2}b}}}\sin \alpha .} (The planar formula is actually the first term of the Taylor expansion in powers of ℓ of the spherical solution for d.) This method is used in cabotage. The angles α, β are defined by observation of familiar landmarks from the ship. As another example, if one wants to measure the height h of a mountain or a high building, the angles α, β from two ground points to the top are specified. Let ℓ be the distance between these points. From the same ASA case formulas we obtain: h = sin ⁡ α sin ⁡ β sin ⁡ ( β − α ) ℓ = tan ⁡ α tan ⁡ β tan ⁡ β − tan ⁡ α ℓ . {\displaystyle h={\frac {\sin \alpha \,\sin \beta }{\sin(\beta -\alpha )}}\ell ={\frac {\tan \alpha \,\tan \beta }{\tan \beta -\tan \alpha }}\ell .} === The distance between two points on the globe === To calculate the distance between two points on the globe, Point A: latitude λA, longitude LA, and Point B: latitude λB, longitude LB, we consider the spherical triangle ABC, where C is the North Pole. Some characteristics are: a = 90 ∘ − λ B , b = 90 ∘ − λ A , γ = L A − L B . {\displaystyle {\begin{aligned}a&=90^{\circ }-\lambda _{B},\\b&=90^{\circ }-\lambda _{A},\\\gamma &=L_{A}-L_{B}.\end{aligned}}} Since two sides and the included angle are given, we obtain from the SAS formulas above A B ¯ = R arccos [ sin ⁡ λ A sin ⁡ λ B + cos ⁡ λ A cos ⁡ λ B cos ⁡ ( L A − L B ) ] . {\displaystyle {\overline {AB}}=R\arccos \!{\Bigl [}\sin \lambda _{A}\sin \lambda _{B}+\cos \lambda _{A}\cos \lambda _{B}\cos(L_{A}-L_{B}){\Bigr ]}.} Here R is the Earth's radius.
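To make the distance formula concrete, here is a minimal Python sketch (the test coordinates and the value R = 6371 km are illustrative assumptions, not part of the derivation above). It evaluates both the arccos form just given and an equivalent round-off-resistant arctangent form of the kind recommended earlier:

import math

def great_circle_km(lat_a, lon_a, lat_b, lon_b, R=6371.0):
    # Convert to radians; gamma is the longitude difference L_A - L_B.
    pa, pb = math.radians(lat_a), math.radians(lat_b)
    gamma = math.radians(lon_a - lon_b)
    cos_c = (math.sin(pa) * math.sin(pb)
             + math.cos(pa) * math.cos(pb) * math.cos(gamma))
    simple = math.acos(cos_c)      # arccos form; loses precision for nearby points
    # Numerically stable arctangent form:
    y = math.hypot(math.cos(pb) * math.sin(gamma),
                   math.cos(pa) * math.sin(pb)
                   - math.sin(pa) * math.cos(pb) * math.cos(gamma))
    stable = math.atan2(y, cos_c)
    return R * simple, R * stable

# Hypothetical test points: Paris (48.86 N, 2.35 E) and New York (40.71 N, -74.01 E).
print(great_circle_km(48.86, 2.35, 40.71, -74.01))   # both forms agree, roughly 5.8e3 km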
== See also == Congruence Hansen's problem Hinge theorem Lénárt sphere Snellius–Pothenot problem == References == Euclid (1956) [1925]. Sir Thomas Heath (ed.). The Thirteen Books of the Elements. Volume I. Translated with introduction and commentary. Dover. ISBN 0-486-60088-2. == External links == Trigonometric Delights, by Eli Maor, Princeton University Press, 1998. Ebook version, in PDF format, full text presented. Trigonometry by Alfred Monroe Kenyon and Louis Ingold, The Macmillan Company, 1914. In images, full text presented. Google book. Spherical trigonometry on MathWorld. Intro to Spherical Trig. Includes discussion of the Napier circle and Napier's rules Spherical Trigonometry — for the use of colleges and schools by I. Todhunter, M.A., F.R.S. Historical Math Monograph posted by Cornell University Library. Triangulator – Triangle solver. Solve any plane triangle problem with the minimum of input data. Drawing of the solved triangle. TriSph – Free software to solve the spherical triangles, configurable to different practical applications and configured for gnomonic. Spherical Triangle Calculator – Solves spherical triangles. TrianCal – Triangles solver by Jesus S.
Wikipedia/Solution_of_triangles
Gauss's lemma in number theory gives a condition for an integer to be a quadratic residue. Although it is not useful computationally, it has theoretical significance, being involved in some proofs of quadratic reciprocity. It made its first appearance in Carl Friedrich Gauss's third proof (1808): 458–462  of quadratic reciprocity and he proved it again in his fifth proof (1818).: 496–501  == Statement of the lemma == For any odd prime p let a be an integer that is coprime to p. Consider the integers a , 2 a , 3 a , … , p − 1 2 a {\displaystyle a,2a,3a,\dots ,{\frac {p-1}{2}}a} and their least positive residues modulo p. These residues are all distinct, so there are (p − 1)/2 of them. Let n be the number of these residues that are greater than p/2. Then ( a p ) = ( − 1 ) n , {\displaystyle \left({\frac {a}{p}}\right)=(-1)^{n},} where ( a p ) {\displaystyle \left({\frac {a}{p}}\right)} is the Legendre symbol. === Example === Taking p = 11 and a = 7, the relevant sequence of integers is 7, 14, 21, 28, 35. After reduction modulo 11, this sequence becomes 7, 3, 10, 6, 2. Three of these integers are larger than 11/2 (namely 6, 7 and 10), so n = 3. Correspondingly Gauss's lemma predicts that ( 7 11 ) = ( − 1 ) 3 = − 1. {\displaystyle \left({\frac {7}{11}}\right)=(-1)^{3}=-1.} This is indeed correct, because 7 is not a quadratic residue modulo 11. The above sequence of residues 7, 3, 10, 6, 2 may also be written −4, 3, −1, −5, 2. In this form, the integers larger than 11/2 appear as negative numbers. It is also apparent that the absolute values of the residues are a permutation of the residues 1, 2, 3, 4, 5. == Proof == A fairly simple proof,: 458–462  reminiscent of one of the simplest proofs of Fermat's little theorem, can be obtained by evaluating the product Z = a ⋅ 2 a ⋅ 3 a ⋅ ⋯ ⋅ p − 1 2 a {\displaystyle Z=a\cdot 2a\cdot 3a\cdot \cdots \cdot {\frac {p-1}{2}}a} modulo p in two different ways. On one hand it is equal to Z = a ( p − 1 ) / 2 ( 1 ⋅ 2 ⋅ 3 ⋅ ⋯ ⋅ p − 1 2 ) {\displaystyle Z=a^{(p-1)/2}\left(1\cdot 2\cdot 3\cdot \cdots \cdot {\frac {p-1}{2}}\right)} The second evaluation takes more work. If x is a nonzero residue modulo p, let us define the "absolute value" of x to be | x | = { x if 1 ≤ x ≤ p − 1 2 , p − x if p + 1 2 ≤ x ≤ p − 1. {\displaystyle |x|={\begin{cases}x&{\mbox{if }}1\leq x\leq {\frac {p-1}{2}},\\p-x&{\mbox{if }}{\frac {p+1}{2}}\leq x\leq p-1.\end{cases}}} Since n counts those multiples ka which are in the latter range, and since for those multiples, −ka is in the first range, we have Z = ( − 1 ) n ( | a | ⋅ | 2 a | ⋅ | 3 a | ⋅ ⋯ ⋯ | p − 1 2 a | ) . {\displaystyle Z=(-1)^{n}\left(|a|\cdot |2a|\cdot |3a|\cdot \cdots \cdots \left|{\frac {p-1}{2}}a\right|\right).} Now observe that the values |ra| are distinct for r = 1, 2, …, (p − 1)/2. Indeed, we have | r a | ≡ | s a | ( mod p ) r a ≡ ± s a ( mod p ) r ≡ ± s ( mod p ) {\displaystyle {\begin{aligned}|ra|&\equiv |sa|&{\pmod {p}}\\ra&\equiv \pm sa&{\pmod {p}}\\r&\equiv \pm s&{\pmod {p}}\end{aligned}}} because a is coprime to p. This gives r = s, since r and s are positive least residues. But there are exactly (p − 1)/2 of them, so their values are a rearrangement of the integers 1, 2, …, (p − 1)/2. Therefore, Z = ( − 1 ) n ( 1 ⋅ 2 ⋅ 3 ⋅ ⋯ ⋅ p − 1 2 ) . 
{\displaystyle Z=(-1)^{n}\left(1\cdot 2\cdot 3\cdot \cdots \cdot {\frac {p-1}{2}}\right).} Comparing with our first evaluation, we may cancel out the nonzero factor 1 ⋅ 2 ⋅ 3 ⋅ ⋯ ⋅ p − 1 2 {\displaystyle 1\cdot 2\cdot 3\cdot \cdots \cdot {\frac {p-1}{2}}} and we are left with a ( p − 1 ) / 2 = ( − 1 ) n . {\displaystyle a^{(p-1)/2}=(-1)^{n}.} This is the desired result, because by Euler's criterion the left hand side is just an alternative expression for the Legendre symbol ( a p ) {\displaystyle \left({\frac {a}{p}}\right)} . == Generalization == For any odd prime p let a be an integer that is coprime to p. Let I ⊂ ( Z / p Z ) × {\displaystyle I\subset (\mathbb {Z} /p\mathbb {Z} )^{\times }} be a set such that ( Z / p Z ) × {\displaystyle (\mathbb {Z} /p\mathbb {Z} )^{\times }} is the disjoint union of I {\displaystyle I} and − I = { − i : i ∈ I } {\displaystyle -I=\{-i:i\in I\}} . Then ( a p ) = ( − 1 ) t {\displaystyle \left({\frac {a}{p}}\right)=(-1)^{t}} , where t = # { j ∈ I : a j ∈ − I } {\displaystyle t=\#\{j\in I:aj\in -I\}} . In the original statement, I = { 1 , 2 , … , p − 1 2 } {\displaystyle I=\{1,2,\dots ,{\frac {p-1}{2}}\}} . The proof is almost the same. == Applications == Gauss's lemma is used in many,: Ch. 1 : 9  but by no means all, of the known proofs of quadratic reciprocity. For example, Gotthold Eisenstein: 236  used Gauss's lemma to prove that if p is an odd prime then ( a p ) = ∏ n = 1 ( p − 1 ) / 2 sin ⁡ ( 2 π a n / p ) sin ⁡ ( 2 π n / p ) , {\displaystyle \left({\frac {a}{p}}\right)=\prod _{n=1}^{(p-1)/2}{\frac {\sin {(2\pi an/p)}}{\sin {(2\pi n/p)}}},} and used this formula to prove quadratic reciprocity. By using elliptic rather than circular functions, he proved the cubic and quartic reciprocity laws.: Ch. 8  Leopold Kronecker: Ex. 1.34  used the lemma to show that ( p q ) = sgn ⁡ ∏ i = 1 q − 1 2 ∏ k = 1 p − 1 2 ( k p − i q ) . {\displaystyle \left({\frac {p}{q}}\right)=\operatorname {sgn} \prod _{i=1}^{\frac {q-1}{2}}\prod _{k=1}^{\frac {p-1}{2}}\left({\frac {k}{p}}-{\frac {i}{q}}\right).} Switching p and q immediately gives quadratic reciprocity. It is also used in what are probably the simplest proofs of the "second supplementary law" ( 2 p ) = ( − 1 ) ( p 2 − 1 ) / 8 = { + 1 if p ≡ ± 1 ( mod 8 ) − 1 if p ≡ ± 3 ( mod 8 ) {\displaystyle \left({\frac {2}{p}}\right)=(-1)^{(p^{2}-1)/8}={\begin{cases}+1{\text{ if }}p\equiv \pm 1{\pmod {8}}\\-1{\text{ if }}p\equiv \pm 3{\pmod {8}}\end{cases}}} == Higher powers == Generalizations of Gauss's lemma can be used to compute higher power residue symbols. In his second monograph on biquadratic reciprocity,: §§69–71  Gauss used a fourth-power lemma to derive the formula for the biquadratic character of 1 + i in Z[i], the ring of Gaussian integers. Subsequently, Eisenstein used third- and fourth-power versions to prove cubic and quartic reciprocity.: Ch. 8  === nth power residue symbol === Let k be an algebraic number field with ring of integers O k , {\displaystyle {\mathcal {O}}_{k},} and let p ⊂ O k {\displaystyle {\mathfrak {p}}\subset {\mathcal {O}}_{k}} be a prime ideal. The ideal norm N p {\displaystyle \mathrm {N} {\mathfrak {p}}} of p {\displaystyle {\mathfrak {p}}} is defined as the cardinality of the residue class ring. Since p {\displaystyle {\mathfrak {p}}} is prime this is a finite field O k / p {\displaystyle {\mathcal {O}}_{k}/{\mathfrak {p}}} , so the ideal norm is N p = | O k / p | {\displaystyle \mathrm {N} {\mathfrak {p}}=|{\mathcal {O}}_{k}/{\mathfrak {p}}|} . 
Assume that a primitive nth root of unity ζ n ∈ O k , {\displaystyle \zeta _{n}\in {\mathcal {O}}_{k},} and that n and p {\displaystyle {\mathfrak {p}}} are coprime (i.e. n ∉ p {\displaystyle n\not \in {\mathfrak {p}}} ). Then no two distinct nth roots of unity can be congruent modulo p {\displaystyle {\mathfrak {p}}} . This can be proved by contradiction, beginning by assuming that ζ n r ≡ ζ n s {\displaystyle \zeta _{n}^{r}\equiv \zeta _{n}^{s}} mod p {\displaystyle {\mathfrak {p}}} , 0 < r < s ≤ n. Let t = s − r such that ζ n t ≡ 1 {\displaystyle \zeta _{n}^{t}\equiv 1} mod p {\displaystyle {\mathfrak {p}}} , and 0 < t < n. From the definition of roots of unity, x n − 1 = ( x − 1 ) ( x − ζ n ) ( x − ζ n 2 ) … ( x − ζ n n − 1 ) , {\displaystyle x^{n}-1=(x-1)(x-\zeta _{n})(x-\zeta _{n}^{2})\dots (x-\zeta _{n}^{n-1}),} and dividing by x − 1 gives x n − 1 + x n − 2 + ⋯ + x + 1 = ( x − ζ n ) ( x − ζ n 2 ) … ( x − ζ n n − 1 ) . {\displaystyle x^{n-1}+x^{n-2}+\dots +x+1=(x-\zeta _{n})(x-\zeta _{n}^{2})\dots (x-\zeta _{n}^{n-1}).} Letting x = 1 and taking residues mod p {\displaystyle {\mathfrak {p}}} , n ≡ ( 1 − ζ n ) ( 1 − ζ n 2 ) … ( 1 − ζ n n − 1 ) ( mod p ) . {\displaystyle n\equiv (1-\zeta _{n})(1-\zeta _{n}^{2})\dots (1-\zeta _{n}^{n-1}){\pmod {\mathfrak {p}}}.} Since n and p {\displaystyle {\mathfrak {p}}} are coprime, n ≢ 0 {\displaystyle n\not \equiv 0} mod p , {\displaystyle {\mathfrak {p}},} but under the assumption, one of the factors on the right must be zero. Therefore, the assumption that two distinct roots are congruent is false. Thus the residue classes of O k / p {\displaystyle {\mathcal {O}}_{k}/{\mathfrak {p}}} containing the powers of ζn are a subgroup of order n of its (multiplicative) group of units, ( O k / p ) × = O k / p − { 0 } . {\displaystyle ({\mathcal {O}}_{k}/{\mathfrak {p}})^{\times }={\mathcal {O}}_{k}/{\mathfrak {p}}-\{0\}.} Therefore, the order of ( O k / p ) × {\displaystyle ({\mathcal {O}}_{k}/{\mathfrak {p}})^{\times }} is a multiple of n, and N p = | O k / p | = | ( O k / p ) × | + 1 ≡ 1 ( mod n ) . {\displaystyle {\begin{aligned}\mathrm {N} {\mathfrak {p}}&=|{\mathcal {O}}_{k}/{\mathfrak {p}}|\\&=\left|({\mathcal {O}}_{k}/{\mathfrak {p}})^{\times }\right|+1\\&\equiv 1{\pmod {n}}.\end{aligned}}} There is an analogue of Fermat's theorem in O k {\displaystyle {\mathcal {O}}_{k}} . If α ∈ O k {\displaystyle \alpha \in {\mathcal {O}}_{k}} for α ∉ p {\displaystyle \alpha \not \in {\mathfrak {p}}} , then: Ch. 4.1  α N p − 1 ≡ 1 ( mod p ) , {\displaystyle \alpha ^{\mathrm {N} {\mathfrak {p}}-1}\equiv 1{\pmod {\mathfrak {p}}},} and since N p ≡ 1 {\displaystyle \mathrm {N} {\mathfrak {p}}\equiv 1} mod n, α N p − 1 n ≡ ζ n s ( mod p ) {\displaystyle \alpha ^{\frac {\mathrm {N} {\mathfrak {p}}-1}{n}}\equiv \zeta _{n}^{s}{\pmod {\mathfrak {p}}}} is well-defined and congruent to a unique nth root of unity ζns. This root of unity is called the nth-power residue symbol for O k , {\displaystyle {\mathcal {O}}_{k},} and is denoted by ( α p ) n = ζ n s ≡ α N p − 1 n ( mod p ) . {\displaystyle {\begin{aligned}\left({\frac {\alpha }{\mathfrak {p}}}\right)_{n}&=\zeta _{n}^{s}\\&\equiv \alpha ^{\frac {\mathrm {N} {\mathfrak {p}}-1}{n}}{\pmod {\mathfrak {p}}}.\end{aligned}}} It can be proven that: Prop. 4.1  ( α p ) n = 1 {\displaystyle \left({\frac {\alpha }{\mathfrak {p}}}\right)_{n}=1} if and only if there is an η ∈ O k {\displaystyle \eta \in {\mathcal {O}}_{k}} such that α ≡ ηn mod p {\displaystyle {\mathfrak {p}}} . 
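For the rational field (k = Q, 𝔭 = (p), N𝔭 = p) the symbol is easy to experiment with. The following Python sketch — an illustration only, not part of the formal development — computes α^((N𝔭−1)/n) mod p, and for n = 2 checks the result against the count of large residues from the classical lemma:

def power_residue_symbol(alpha, p, n):
    # Requires n | p - 1, so that the nth roots of unity lie in Z/pZ.
    assert (p - 1) % n == 0
    return pow(alpha, (p - 1) // n, p)   # an nth root of unity mod p

def gauss_lemma_sign(a, p):
    # Count least positive residues of a, 2a, ..., ((p-1)/2)a exceeding p/2.
    big = sum(1 for k in range(1, (p - 1) // 2 + 1) if (k * a) % p > p / 2)
    return (-1) ** big

# Classical case n = 2, with the example p = 11, a = 7 from earlier:
print(power_residue_symbol(7, 11, 2))   # 10, i.e. -1 mod 11: a non-residue
print(gauss_lemma_sign(7, 11))          # -1, matching Gauss's lemma (n = 3 there)

# Cubic case n = 3 with p = 13 (3 divides 12): the output 3 satisfies
# 3**3 = 27 = 1 (mod 13), so it is a nontrivial cube root of unity,
# and hence 2 is not a cubic residue mod 13.
print(power_residue_symbol(2, 13, 3))   # 3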
=== 1/n systems === Let μ n = { 1 , ζ n , ζ n 2 , … , ζ n n − 1 } {\displaystyle \mu _{n}=\{1,\zeta _{n},\zeta _{n}^{2},\dots ,\zeta _{n}^{n-1}\}} be the multiplicative group of the nth roots of unity, and let A = { a 1 , a 2 , … , a m } {\displaystyle A=\{a_{1},a_{2},\dots ,a_{m}\}} be representatives of the cosets of ( O k / p ) × / μ n . {\displaystyle ({\mathcal {O}}_{k}/{\mathfrak {p}})^{\times }/\mu _{n}.} Then A is called a 1/n system mod p . {\displaystyle {\mathfrak {p}}.} : Ch. 4.2  In other words, there are m n = N p − 1 {\displaystyle mn=\mathrm {N} {\mathfrak {p}}-1} numbers in the set A μ = { a i ζ n j : 1 ≤ i ≤ m , 0 ≤ j ≤ n − 1 } , {\displaystyle A\mu =\{a_{i}\zeta _{n}^{j}\;:\;1\leq i\leq m,\;\;\;0\leq j\leq n-1\},} and this set constitutes a representative set for ( O k / p ) × . {\displaystyle ({\mathcal {O}}_{k}/{\mathfrak {p}})^{\times }.} The numbers 1, 2, … (p − 1)/2, used in the original version of the lemma, are a 1/2 system (mod p). Constructing a 1/n system is straightforward: let M be a representative set for ( O k / p ) × . {\displaystyle ({\mathcal {O}}_{k}/{\mathfrak {p}})^{\times }.} Pick any a 1 ∈ M {\displaystyle a_{1}\in M} and remove the numbers congruent to a 1 , a 1 ζ n , a 1 ζ n 2 , … , a 1 ζ n n − 1 {\displaystyle a_{1},a_{1}\zeta _{n},a_{1}\zeta _{n}^{2},\dots ,a_{1}\zeta _{n}^{n-1}} from M. Pick a2 from M and remove the numbers congruent to a 2 , a 2 ζ n , a 2 ζ n 2 , … , a 2 ζ n n − 1 {\displaystyle a_{2},a_{2}\zeta _{n},a_{2}\zeta _{n}^{2},\dots ,a_{2}\zeta _{n}^{n-1}} Repeat until M is exhausted. Then {a1, a2, … am} is a 1/n system mod p . {\displaystyle {\mathfrak {p}}.} === The lemma for nth powers === Gauss's lemma may be extended to the nth power residue symbol as follows.: Prop. 4.3  Let ζ n ∈ O k {\displaystyle \zeta _{n}\in {\mathcal {O}}_{k}} be a primitive nth root of unity, p ⊂ O k {\displaystyle {\mathfrak {p}}\subset {\mathcal {O}}_{k}} a prime ideal, γ ∈ O k , n γ ∉ p , {\displaystyle \gamma \in {\mathcal {O}}_{k},\;\;n\gamma \not \in {\mathfrak {p}},} (i.e. p {\displaystyle {\mathfrak {p}}} is coprime to both γ and n) and let A = {a1, a2, …, am} be a 1/n system mod p . {\displaystyle {\mathfrak {p}}.} Then for each i, 1 ≤ i ≤ m, there are integers π(i), unique (mod m), and b(i), unique (mod n), such that γ a i ≡ ζ n b ( i ) a π ( i ) ( mod p ) , {\displaystyle \gamma a_{i}\equiv \zeta _{n}^{b(i)}a_{\pi (i)}{\pmod {\mathfrak {p}}},} and the nth-power residue symbol is given by the formula ( γ p ) n = ζ n b ( 1 ) + b ( 2 ) + ⋯ + b ( m ) . {\displaystyle \left({\frac {\gamma }{\mathfrak {p}}}\right)_{n}=\zeta _{n}^{b(1)+b(2)+\dots +b(m)}.} The classical lemma for the quadratic Legendre symbol is the special case n = 2, ζ2 = −1, A = {1, 2, …, (p − 1)/2}, b(k) = 1 if ak > p/2, b(k) = 0 if ak < p/2. ==== Proof ==== The proof of the nth-power lemma uses the same ideas that were used in the proof of the quadratic lemma. The existence of the integers π(i) and b(i), and their uniqueness (mod m) and (mod n), respectively, come from the fact that Aμ is a representative set. Assume that π(i) = π(j) = p, i.e. γ a i ≡ ζ n r a p ( mod p ) {\displaystyle \gamma a_{i}\equiv \zeta _{n}^{r}a_{p}{\pmod {\mathfrak {p}}}} and γ a j ≡ ζ n s a p ( mod p ) . 
{\displaystyle \gamma a_{j}\equiv \zeta _{n}^{s}a_{p}{\pmod {\mathfrak {p}}}.} Then ζ n s − r γ a i ≡ ζ n s a p ≡ γ a j ( mod p ) {\displaystyle \zeta _{n}^{s-r}\gamma a_{i}\equiv \zeta _{n}^{s}a_{p}\equiv \gamma a_{j}{\pmod {\mathfrak {p}}}} Because γ and p {\displaystyle {\mathfrak {p}}} are coprime both sides can be divided by γ, giving ζ n s − r a i ≡ a j ( mod p ) , {\displaystyle \zeta _{n}^{s-r}a_{i}\equiv a_{j}{\pmod {\mathfrak {p}}},} which, since A is a 1/n system, implies s = r and i = j, showing that π is a permutation of the set {1, 2, …, m}. Then on the one hand, by the definition of the power residue symbol, ( γ a 1 ) ( γ a 2 ) … ( γ a m ) = γ N p − 1 n a 1 a 2 … a m ≡ ( γ p ) n a 1 a 2 … a m ( mod p ) , {\displaystyle {\begin{aligned}(\gamma a_{1})(\gamma a_{2})\dots (\gamma a_{m})&=\gamma ^{\frac {\mathrm {N} {\mathfrak {p}}-1}{n}}a_{1}a_{2}\dots a_{m}\\&\equiv \left({\frac {\gamma }{\mathfrak {p}}}\right)_{n}a_{1}a_{2}\dots a_{m}{\pmod {\mathfrak {p}}},\end{aligned}}} and on the other hand, since π is a permutation, ( γ a 1 ) ( γ a 2 ) … ( γ a m ) ≡ ζ n b ( 1 ) a π ( 1 ) ζ n b ( 2 ) a π ( 2 ) … ζ n b ( m ) a π ( m ) ( mod p ) ≡ ζ n b ( 1 ) + b ( 2 ) + ⋯ + b ( m ) a π ( 1 ) a π ( 2 ) … a π ( m ) ( mod p ) ≡ ζ n b ( 1 ) + b ( 2 ) + ⋯ + b ( m ) a 1 a 2 … a m ( mod p ) , {\displaystyle {\begin{aligned}(\gamma a_{1})(\gamma a_{2})\dots (\gamma a_{m})&\equiv {\zeta _{n}^{b(1)}a_{\pi (1)}}{\zeta _{n}^{b(2)}a_{\pi (2)}}\dots {\zeta _{n}^{b(m)}a_{\pi (m)}}&{\pmod {\mathfrak {p}}}\\&\equiv \zeta _{n}^{b(1)+b(2)+\dots +b(m)}a_{\pi (1)}a_{\pi (2)}\dots a_{\pi (m)}&{\pmod {\mathfrak {p}}}\\&\equiv \zeta _{n}^{b(1)+b(2)+\dots +b(m)}a_{1}a_{2}\dots a_{m}&{\pmod {\mathfrak {p}}},\end{aligned}}} so ( γ p ) n a 1 a 2 … a m ≡ ζ n b ( 1 ) + b ( 2 ) + ⋯ + b ( m ) a 1 a 2 … a m ( mod p ) , {\displaystyle \left({\frac {\gamma }{\mathfrak {p}}}\right)_{n}a_{1}a_{2}\dots a_{m}\equiv \zeta _{n}^{b(1)+b(2)+\dots +b(m)}a_{1}a_{2}\dots a_{m}{\pmod {\mathfrak {p}}},} and since for all 1 ≤ i ≤ m, ai and p {\displaystyle {\mathfrak {p}}} are coprime, a1a2…am can be cancelled from both sides of the congruence, ( γ p ) n ≡ ζ n b ( 1 ) + b ( 2 ) + ⋯ + b ( m ) ( mod p ) , {\displaystyle \left({\frac {\gamma }{\mathfrak {p}}}\right)_{n}\equiv \zeta _{n}^{b(1)+b(2)+\dots +b(m)}{\pmod {\mathfrak {p}}},} and the theorem follows from the fact that no two distinct nth roots of unity can be congruent (mod p {\displaystyle {\mathfrak {p}}} ). == Relation to the transfer in group theory == Let G be the multiplicative group of nonzero residue classes in Z/pZ, and let H be the subgroup {+1, −1}. Consider the following coset representatives of H in G, 1 , 2 , 3 , … , p − 1 2 . {\displaystyle 1,2,3,\dots ,{\frac {p-1}{2}}.} Applying the machinery of the transfer to this collection of coset representatives, we obtain the transfer homomorphism ϕ : G → H , {\displaystyle \phi :G\to H,} which turns out to be the map that sends a to (−1)n, where a and n are as in the statement of the lemma. Gauss's lemma may then be viewed as a computation that explicitly identifies this homomorphism as being the quadratic residue character. == See also == Zolotarev's lemma == References ==
Wikipedia/Gauss's_lemma_(number_theory)
In data communications, flow control is the process of managing the rate of data transmission between two nodes to prevent a fast sender from overwhelming a slow receiver. Flow control should be distinguished from congestion control, which is used for controlling the flow of data when congestion has actually occurred. Flow control mechanisms can be classified by whether or not the receiving node sends feedback to the sending node. Flow control is important because it is possible for a sending computer to transmit information at a faster rate than the destination computer can receive and process it. This can happen if the receiving computer has a heavy traffic load in comparison to the sending computer, or if the receiving computer has less processing power than the sending computer. == Stop-and-wait == Stop-and-wait flow control is the simplest form of flow control. In this method the message is broken into multiple frames, and the receiver indicates its readiness to receive a frame of data. The sender waits for a receipt acknowledgement (ACK) after every frame for a specified time (called a time out). The receiver sends the ACK to let the sender know that the frame of data was received correctly. The sender will then send the next frame only after the ACK. === Operations === Sender: Transmits a single frame at a time. Sender waits to receive ACK within time out. Receiver: Transmits acknowledgement (ACK) as it receives a frame. Go to step 1 when ACK is received, or time out is hit. If a frame or ACK is lost during transmission then the frame is re-transmitted. This re-transmission process is known as ARQ (automatic repeat request). The problem with stop-and-wait is that only one frame can be transmitted at a time, and that often leads to inefficient transmission, because until the sender receives the ACK it cannot transmit any new packet. During this time both the sender and the channel sit idle. === Pros and cons of stop and wait === Pros The only advantage of this method of flow control is its simplicity. Cons The sender needs to wait for the ACK after every frame it transmits. This is a source of inefficiency, and is particularly bad when the propagation delay is much longer than the transmission delay. Stop-and-wait also becomes inefficient for longer transmissions: the longer a transmission, the greater the chance that it contains an error, whereas short messages allow errors to be detected earlier. Breaking single messages into separate frames adds further inefficiency, because it makes the overall transmission longer.
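The cost of stop-and-wait is easy to see in a toy simulation. The sketch below (the loss rate, seed, and frame count are invented for the example) re-sends each frame until both it and its ACK get through, mimicking the timeout-and-retransmit ARQ behaviour just described:

import random

def stop_and_wait(n_frames, loss_rate=0.2, seed=42):
    rng = random.Random(seed)
    transmissions = 0
    for _ in range(n_frames):
        while True:                        # keep re-sending this one frame
            transmissions += 1
            frame_lost = rng.random() < loss_rate
            ack_lost = rng.random() < loss_rate
            if not (frame_lost or ack_lost):
                break                      # ACK received: move to the next frame
            # otherwise the timeout expires and the frame is re-sent
    return transmissions

print(stop_and_wait(10))   # exceeds 10 whenever a frame or an ACK was lost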
== Sliding window == A method of flow control in which a receiver gives a transmitter permission to transmit data until a window is full. When the window is full, the transmitter must stop transmitting until the receiver advertises a larger window. Sliding-window flow control is best utilized when the buffer size is limited and pre-established. During a typical communication between a sender and a receiver the receiver allocates buffer space for n frames (n is the buffer size in frames). The sender can send and the receiver can accept n frames without having to wait for an acknowledgement. A sequence number is assigned to each frame to help keep track of which frames have received an acknowledgement. The receiver acknowledges a frame by sending an acknowledgement that includes the sequence number of the next frame expected. This acknowledgement announces that the receiver is ready to receive n frames, beginning with the number specified. Both the sender and receiver maintain what is called a window. The size of the window is less than or equal to the buffer size. Sliding window flow control has far better performance than stop-and-wait flow control. For example, in a wireless environment if data rates are low and noise level is very high, waiting for an acknowledgement for every packet that is transferred is not very feasible. Therefore, transferring data in bulk yields better performance in terms of higher throughput. Sliding window flow control is a point-to-point protocol assuming that no other entity tries to communicate until the current data transfer is complete. The window maintained by the sender indicates which frames it can send. The sender sends all the frames in the window and waits for an acknowledgement (as opposed to acknowledging after every frame). The sender then shifts the window to the corresponding sequence number, thus indicating that frames within the window starting from the current sequence number can be sent. === Go back N === An automatic repeat request (ARQ) algorithm, used for error correction, in which a negative acknowledgement (NACK) causes retransmission of the word in error as well as the next N–1 words. The value of N is usually chosen such that the time taken to transmit the N words is less than the round trip delay from transmitter to receiver and back again. Therefore, a buffer is not needed at the receiver. The normalized propagation delay (a) = propagation time (Tp)⁄transmission time (Tt), where Tp = link length (L) over propagation velocity (V) and Tt = frame size (F) over bit rate (r), so that a = Lr⁄VF. To get the utilization you must define a window size (N). If N is greater than or equal to 2a + 1 then the utilization is 1 (full utilization) for the transmission channel. If it is less than 2a + 1 then the equation N⁄(1 + 2a) must be used to compute utilization. === Selective repeat === Selective repeat is a connection oriented protocol in which both transmitter and receiver have a window of sequence numbers. The protocol has a maximum number of messages that can be sent without acknowledgement. If this window becomes full, the protocol is blocked until an acknowledgement is received for the earliest outstanding message. At this point the transmitter is clear to send more messages. == Comparison == This section compares stop-and-wait with sliding window and its two variants, go-back-N and selective repeat. === Stop-and-wait === Error free: 1 2 a + 1 {\displaystyle {\frac {1}{2a+1}}} . With errors: 1 − P 2 a + 1 {\displaystyle {\frac {1-P}{2a+1}}} . === Selective repeat === We define throughput T as the average number of blocks communicated per transmitted block. It is more convenient to calculate the average number of transmissions necessary to communicate a block, a quantity we denote by b, and then to determine T from the equation T = 1 b {\displaystyle T={\frac {1}{b}}} .
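The utilization formulas above are simple enough to tabulate. In this Python sketch (the parameter values are illustrative assumptions), a is the normalized propagation delay and N the go-back-N window size:

def stop_and_wait_utilization(a, P=0.0):
    return (1 - P) / (2 * a + 1)       # error-free case when P = 0

def go_back_n_utilization(N, a):
    return 1.0 if N >= 2 * a + 1 else N / (1 + 2 * a)

a = 10                                  # long link: propagation >> transmission time
print(stop_and_wait_utilization(a))     # about 0.048: the channel is mostly idle
print(go_back_n_utilization(7, a))      # 7/21, about 0.33: window too small
print(go_back_n_utilization(25, a))     # 1.0: the window covers the round trip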
== Transmit flow control == Transmit flow control may occur: between data terminal equipment (DTE) and a switching center, via data circuit-terminating equipment (DCE), the opposite types interconnected straightforwardly, or between two devices of the same type (two DTEs, or two DCEs), interconnected by a crossover cable. The transmission rate may be controlled because of network or DTE requirements. Transmit flow control can occur independently in the two directions of data transfer, thus permitting the transfer rates in one direction to be different from the transfer rates in the other direction. Transmit flow control can be either stop-and-wait, or use a sliding window. Flow control can be performed either by control signal lines in a data communication interface (see serial port and RS-232), or by reserving in-band control characters to signal flow start and stop (such as the ASCII codes for XON/XOFF). === Hardware flow control === In common RS-232 there are pairs of control lines which are usually referred to as hardware flow control: RTS (request to send) and CTS (clear to send), used in RTS flow control DTR (data terminal ready) and DSR (data set ready), used in DTR flow control Hardware flow control is typically handled by the DTE or "master end", as it is the side that first raises or asserts its line to command the other side: In the case of RTS flow control, the DTE sets its RTS, which signals the opposite end (the slave end such as a DCE) to begin monitoring its data input line. When ready for data, the slave end will raise its complementary line, CTS in this example, which signals the master to start sending data, and for the master to begin monitoring the slave's data output line. If either end needs to stop the data, it lowers its respective "data readiness" line. For PC-to-modem and similar links, in the case of DTR flow control, DTR/DSR are raised for the entire modem session (say a dialup internet call where DTR is raised to signal the modem to dial, and DSR is raised by the modem when the connection is complete), and RTS/CTS are raised for each block of data. An example of hardware flow control is a half-duplex radio modem to computer interface. In this case, the controlling software in the modem and computer may be written to give priority to incoming radio signals such that outgoing data from the computer is paused by lowering CTS if the modem detects a reception. Polarity: RS-232 level signals are inverted by the driver ICs, so line polarity is TxD-, RxD-, CTS+, RTS+ (clear to send when HI, data 1 is a LO); for microprocessor pins the signals are TxD+, RxD+, CTS-, RTS- (clear to send when LO, data 1 is a HI). === Software flow control === In contrast, the use of the in-band XON/XOFF characters is usually referred to as software flow control.
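As a rough illustration of such in-band signalling, the following toy Python sketch (the buffer size and thresholds are invented for the example) has a receiver emit XOFF when its buffer is nearly full and XON once it has drained:

XON, XOFF = 0x11, 0x13    # ASCII DC1 and DC3

class ToyReceiver:
    def __init__(self, capacity=8):
        self.buf, self.capacity = [], capacity
    def accept(self, byte):
        self.buf.append(byte)
        if len(self.buf) >= self.capacity - 1:
            return XOFF                # ask the sender to pause
    def drain(self, n=4):
        del self.buf[:n]               # the slow consumer catches up a little
        if len(self.buf) <= self.capacity // 2:
            return XON                 # ask the sender to resume

def send_all(data, rx):
    paused, i = False, 0
    while i < len(data):
        if paused:
            paused = (rx.drain() != XON)     # wait for XON before resuming
        else:
            paused = (rx.accept(data[i]) == XOFF)
            i += 1

send_all(list(range(100)), ToyReceiver())    # terminates; flow pauses as needed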
== Open-loop flow control == The open-loop flow control mechanism is characterized by having no feedback between the receiver and the transmitter. This simple means of control is widely used. The allocation of resources must be a "prior reservation" or "hop-to-hop" type. Open-loop flow control has inherent problems with maximizing the utilization of network resources. Resource allocation is made at connection setup using a CAC (connection admission control) and this allocation is made using information that is already "old news" during the lifetime of the connection. Often there is an over-allocation of resources and reserved but unused capacities are wasted. Open-loop flow control is used by ATM in its CBR, VBR and UBR services (see traffic contract and congestion control). Open-loop flow control incorporates two elements: the controller and the regulator. The regulator is able to alter the input variable in response to the signal from the controller. An open-loop system has no feedback or feed-forward mechanism, so the input and output signals are not directly related and there is increased traffic variability. There is also a lower arrival rate in such a system and a higher loss rate. In an open-loop control system, the controllers can operate the regulators at regular intervals, but there is no assurance that the output variable can be maintained at the desired level. While it may be cheaper to use this model, the open-loop model can be unstable. == Closed-loop flow control == The closed-loop flow control mechanism is characterized by the ability of the network to report pending network congestion back to the transmitter. This information is then used by the transmitter in various ways to adapt its activity to existing network conditions. Closed-loop flow control is used by ABR (see traffic contract and congestion control). Transmit flow control described above is a form of closed-loop flow control. This system incorporates all the basic control elements, such as the sensor, transmitter, controller and regulator. The sensor is used to capture a process variable. The process variable is sent to a transmitter which translates the variable to the controller. The controller examines the information with respect to a desired value and initiates a correction action if required. The controller then communicates to the regulator what action is needed to ensure that the output variable value is matching the desired value. Therefore, there is a high degree of assurance that the output variable can be maintained at the desired level. The closed-loop control system can be a feedback or a feed-forward system: A feedback closed-loop system has a feedback mechanism that directly relates the input and output signals. The feedback mechanism monitors the output variable and determines if additional correction is required. The output variable value that is fed back is used to initiate that corrective action on a regulator. Most control loops in the industry are of the feedback type. In a feed-forward closed-loop system, the measured process variable is an input variable. The measured signal is then used in the same fashion as in a feedback system. The closed-loop model produces lower loss rates and queuing delays and results in congestion-responsive traffic. The closed-loop model is always stable, as the number of active flows is bounded. == See also == Software flow control Computer networking Traffic contract Congestion control Teletraffic engineering in broadband networks Teletraffic engineering Ethernet flow control Handshaking == References == Sliding window: [1] last accessed 27 November 2012. == External links == RS-232 flow control and handshaking
Wikipedia/Flow_control_(data)
In computer science, a control-flow graph (CFG) is a representation, using graph notation, of all paths that might be traversed through a program during its execution. The control-flow graph was conceived by Frances E. Allen, who noted that Reese T. Prosser had earlier used boolean connectivity matrices for flow analysis. The CFG is essential to many compiler optimizations and static-analysis tools. == Definition == In a control-flow graph each node in the graph represents a basic block, i.e. a straight-line sequence of code with a single entry point and a single exit point, where no branches or jumps occur within the block. Basic blocks start with jump targets and end with jumps or branch instructions. Directed edges are used to represent jumps in the control flow. There are, in most presentations, two specially designated blocks: the entry block, through which control enters into the flow graph, and the exit block, through which all control flow leaves. Because of its construction procedure, in a CFG, every edge A→B has the property that: outdegree(A) > 1 or indegree(B) > 1 (or both). The CFG can thus be obtained, at least conceptually, by starting from the program's (full) flow graph—i.e. the graph in which every node represents an individual instruction—and performing an edge contraction for every edge that falsifies the predicate above, i.e. contracting every edge whose source has a single exit and whose destination has a single entry. This contraction-based algorithm is of no practical importance, except as a visualization aid for understanding the CFG construction, because the CFG can be more efficiently constructed directly from the program by scanning it for basic blocks. == Example == Consider the following fragment of code:

0: (A) t0 = read_num
1: (A) if t0 mod 2 == 0
2: (B) print t0 + " is even."
3: (B) goto 5
4: (C) print t0 + " is odd."
5: (D) end program

In the above, we have 4 basic blocks: A from 0 to 1, B from 2 to 3, C at 4 and D at 5. In particular, in this case, A is the "entry block", D the "exit block", and lines 4 and 5 are jump targets. A graph for this fragment has edges from A to B, A to C, B to D and C to D.
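A tiny sketch of how a tool might represent this CFG — just a Python adjacency mapping, with a reachability check that anticipates the next section (the representation is illustrative, not taken from any particular compiler):

cfg = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}   # the example above

def reachable(graph, entry="A"):
    seen, stack = set(), [entry]
    while stack:
        block = stack.pop()
        if block not in seen:
            seen.add(block)
            stack.extend(graph[block])
    return seen

print(reachable(cfg))              # {'A', 'B', 'C', 'D'}
print(set(cfg) - reachable(cfg))   # empty set: no unreachable code here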
== Reachability == Reachability is a graph property useful in optimization. If a subgraph is not connected from the subgraph containing the entry block, that subgraph is unreachable during any execution, and so is unreachable code; under normal conditions it can be safely removed. If the exit block is unreachable from the entry block, an infinite loop may exist. Not all infinite loops are detectable, see Halting problem. A halting order may also exist there. Unreachable code and infinite loops are possible even if the programmer does not explicitly code them: optimizations like constant propagation and constant folding followed by jump threading can collapse multiple basic blocks into one, cause edges to be removed from a CFG, etc., thus possibly disconnecting parts of the graph. == Domination relationship == A block M dominates a block N if every path from the entry that reaches block N has to pass through block M. The entry block dominates all blocks. In the reverse direction, block M postdominates block N if every path from N to the exit has to pass through block M. The exit block postdominates all blocks. It is said that a block M immediately dominates block N if M dominates N, and there is no intervening block P such that M dominates P and P dominates N. In other words, M is the last dominator on all paths from entry to N. Each block has a unique immediate dominator. Similarly, there is a notion of immediate postdominator, analogous to immediate dominator. The dominator tree is an ancillary data structure depicting the dominator relationships. There is an arc from block M to block N if M is an immediate dominator of N. This graph is a tree, since each block has a unique immediate dominator. This tree is rooted at the entry block. The dominator tree can be calculated efficiently using Lengauer–Tarjan's algorithm. A postdominator tree is analogous to the dominator tree. This tree is rooted at the exit block. == Special edges == A back edge is an edge that points to a block that has already been met during a depth-first (DFS) traversal of the graph. Back edges are typical of loops. A critical edge is an edge which is neither the only edge leaving its source block, nor the only edge entering its destination block. These edges must be split: a new block must be created in the middle of the edge, in order to insert computations on the edge without affecting any other edges. An abnormal edge is an edge whose destination is unknown. Exception handling constructs can produce them. These edges tend to inhibit optimization. An impossible edge (also known as a fake edge) is an edge which has been added to the graph solely to preserve the property that the exit block postdominates all blocks. It cannot ever be traversed. == Loop management == A loop header (sometimes called the entry point of the loop) is a dominator that is the target of a loop-forming back edge. The loop header dominates all blocks in the loop body. A block may be a loop header for more than one loop. A loop may have multiple entry points, in which case it has no "loop header". Suppose block M is a dominator with several incoming edges, some of them being back edges (so M is a loop header). It is advantageous to several optimization passes to break M up into two blocks Mpre and Mloop. The contents of M and back edges are moved to Mloop, the rest of the edges are moved to point into Mpre, and a new edge from Mpre to Mloop is inserted (so that Mpre is the immediate dominator of Mloop). In the beginning, Mpre would be empty, but passes like loop-invariant code motion could populate it. Mpre is called the loop pre-header, and Mloop would be the loop header. == Reducibility == A reducible CFG is one with edges that can be partitioned into two disjoint sets: forward edges and back edges, such that: Forward edges form a directed acyclic graph with all nodes reachable from the entry node. For all back edges (A, B), node B dominates node A. Structured programming languages are often designed such that all CFGs they produce are reducible, and common structured programming statements such as IF, FOR, WHILE, BREAK, and CONTINUE produce reducible graphs. To produce irreducible graphs, statements such as GOTO are needed. Irreducible graphs may also be produced by some compiler optimizations. == Loop connectedness == The loop connectedness of a CFG is defined with respect to a given depth-first search tree (DFST) of the CFG. This DFST should be rooted at the start node and cover every node of the CFG. Edges in the CFG which run from a node to one of its DFST ancestors (including itself) are called back edges. The loop connectedness is the largest number of back edges found in any cycle-free path of the CFG. In a reducible CFG, the loop connectedness is independent of the DFST chosen.
Loop connectedness has been used to reason about the time complexity of data-flow analysis. == Inter-procedural control-flow graph == While control-flow graphs represent the control flow of a single procedure, inter-procedural control-flow graphs represent the control flow of whole programs. == See also == Abstract syntax tree Flowchart Control-flow diagram Control-flow analysis Data-flow analysis Interval (graph theory) Program dependence graph Cyclomatic complexity Static single assignment Compiler construction Intermediate representation == References == == External links == The Machine-SUIF Control Flow Graph Library GNU Compiler Collection Internals Paper "Infrastructure for Profile Driven Optimizations in GCC Compiler" by Zdeněk Dvořák et al. Examples Avrora – Control-Flow Graph Tool Archived 2011-08-25 at the Wayback Machine
Wikipedia/Control-flow_graph
In computer programming, a virtual method table (VMT), virtual function table, virtual call table, dispatch table, vtable, or vftable is a mechanism used in a programming language to support dynamic dispatch (or run-time method binding). Whenever a class defines a virtual function (or method), most compilers add a hidden member variable to the class that points to an array of pointers to (virtual) functions called the virtual method table. These pointers are used at runtime to invoke the appropriate function implementations, because at compile time it may not yet be known if the base function is to be called or a derived one implemented by a class that inherits from the base class. There are many different ways to implement such dynamic dispatch, but use of virtual method tables is especially common among C++ and related languages (such as D and C#). Languages that separate the programmatic interface of objects from the implementation, like Visual Basic and Delphi, also tend to use this approach, because it allows objects to use a different implementation simply by using a different set of method pointers. The method allows creation of external libraries, where other techniques perhaps may not. Suppose a program contains three classes in an inheritance hierarchy: a superclass, Cat, and two subclasses, HouseCat and Lion. Class Cat defines a virtual function named speak, so its subclasses may provide an appropriate implementation (e.g. either meow or roar). When the program calls the speak function on a Cat reference (which can refer to an instance of Cat, or an instance of HouseCat or Lion), the code must be able to determine which implementation of the function the call should be dispatched to. This depends on the actual class of the object, not the class of the reference to it (Cat). The class cannot generally be determined statically (that is, at compile time), so neither can the compiler decide which function to call at that time. The call must be dispatched to the right function dynamically (that is, at run time) instead. == Implementation == An object's virtual method table will contain the addresses of the object's dynamically bound methods. Method calls are performed by fetching the method's address from the object's virtual method table. The virtual method table is the same for all objects belonging to the same class, and is therefore typically shared between them. Objects belonging to type-compatible classes (for example siblings in an inheritance hierarchy) will have virtual method tables with the same layout: the address of a given method will appear at the same offset for all type-compatible classes. Thus, fetching the method's address from a given offset into a virtual method table will get the method corresponding to the object's actual class. The C++ standards do not mandate exactly how dynamic dispatch must be implemented, but compilers generally use minor variations on the same basic model. Typically, the compiler creates a separate virtual method table for each class. When an object is created, a pointer to this table, called the virtual table pointer, vpointer or VPTR, is added as a hidden member of this object. As such, the compiler must also generate "hidden" code in the constructors of each class to initialize a new object's virtual table pointer to the address of its class's virtual method table. Many compilers place the virtual table pointer as the last member of the object; other compilers place it as the first; portable source code works either way. 
For example, g++ previously placed the pointer at the end of the object. == Example == Consider two base classes: B1, declaring a virtual destructor, a virtual function f1() and an integer member int_in_b1; and B2, declaring a virtual destructor, a virtual function f2() and an integer member int_in_b2. A class D derives from both B1 and B2, overrides f2(), and adds an integer member int_in_d; the classes also declare ordinary non-virtual member functions, fnonvirtual() and d(). Suppose the program creates an object b2 of class B2 and an object d of class D. g++ 3.4.6 from GCC produces the following 32-bit memory layout for the object b2:

b2:
  +0: pointer to virtual method table of B2
  +4: value of int_in_b2

virtual method table of B2:
  +0: B2::f2()

and the following memory layout for the object d:

d:
  +0: pointer to virtual method table of D (for B1)
  +4: value of int_in_b1
  +8: pointer to virtual method table of D (for B2)
  +12: value of int_in_b2
  +16: value of int_in_d
Total size: 20 Bytes.

virtual method table of D (for B1):
  +0: B1::f1()  // B1::f1() is not overridden

virtual method table of D (for B2):
  +0: D::f2()   // B2::f2() is overridden by D::f2()
                // The location of B2::f2 is not in the virtual method table for D

Note that those functions not carrying the keyword virtual in their declaration (such as fnonvirtual() and d()) do not generally appear in the virtual method table. There are exceptions for special cases as posed by the default constructor. Also note the virtual destructors in the base classes, B1 and B2. They are necessary to ensure delete d can free up memory not just for D, but also for B1 and B2, if d is a pointer or reference to the types B1 or B2. They were excluded from the memory layouts to keep the example simple. == Multiple inheritance and thunks == The g++ compiler implements the multiple inheritance of the classes B1 and B2 in class D using two virtual method tables, one for each base class. (There are other ways to implement multiple inheritance, but this is the most common.) This leads to the necessity for "pointer fixups", also called thunks, when casting. Overriding of the method f2() in class D is implemented by duplicating the virtual method table of B2 and replacing the pointer to B2::f2() with a pointer to D::f2(). Consider C++ code that assigns the address of a D object to pointers of all three types, for instance D *d = new D(); B1 *b1 = d; B2 *b2 = d;. While d and b1 will point to the same memory location after execution of this code, b2 will point to the location d+8 (eight bytes beyond the memory location of d). Thus, b2 points to the region within d that "looks like" an instance of B2, i.e., has the same memory layout as an instance of B2. == Invocation == A call to d->f1() is handled by dereferencing d's D::B1 vpointer, looking up the f1 entry in the virtual method table, and then dereferencing that pointer to call the code. Single inheritance In the case of single inheritance (or in a language with only single inheritance), if the vpointer is always the first element in d (as it is with many compilers), this reduces to the following pseudo-C++: (*((*d)[0]))(d), where *d refers to the virtual method table of D and [0] refers to the first method in the virtual method table. The parameter d becomes the "this" pointer to the object. Multiple inheritance In the more general case, calling B1::f1() or D::f2() is more complicated: The call to d->f1() passes a B1 pointer as a parameter. The call to d->f2() passes a B2 pointer as a parameter. This second call requires a fixup to produce the correct pointer. The location of B2::f2 is not in the virtual method table for D. By comparison, a call to d->fnonvirtual() is much simpler: it compiles to a direct, statically bound call with d passed as the "this" pointer, and requires no table lookup at all.
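The mechanism can be mimicked in a few lines of Python — purely a toy model of the dispatch scheme just described, not how Python itself dispatches methods. Each "class" owns one shared table, each "object" carries a vpointer, and a virtual call is an indexed fetch followed by an indirect call:

def cat_speak(this):       return "..."
def house_cat_speak(this): return "meow"
def lion_speak(this):      return "roar"

SPEAK = 0                                   # slot index fixed at "compile time"
CAT_VTABLE       = [cat_speak]
HOUSE_CAT_VTABLE = [house_cat_speak]        # overriding = replacing slot 0
LION_VTABLE      = [lion_speak]

class Obj:                                  # an object: a vpointer plus fields
    def __init__(self, vtable):
        self.vptr = vtable

def call_virtual(obj, slot):
    return obj.vptr[slot](obj)              # fetch from the table, then call

for animal in (Obj(CAT_VTABLE), Obj(HOUSE_CAT_VTABLE), Obj(LION_VTABLE)):
    print(call_virtual(animal, SPEAK))      # ..., meow, roar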
== Efficiency == A virtual call requires at least an extra indexed dereference and sometimes a "fixup" addition, compared to a non-virtual call, which is simply a jump to a compiled-in pointer. Therefore, calling virtual functions is inherently slower than calling non-virtual functions. An experiment done in 1996 indicates that approximately 6–13% of execution time is spent simply dispatching to the correct function, though the overhead can be as high as 50%. The cost of virtual functions may not be so high on modern CPU architectures due to much larger caches and better branch prediction. Furthermore, in environments where JIT compilation is not in use, virtual function calls usually cannot be inlined. In certain cases it may be possible for the compiler to perform a process known as devirtualization in which, for instance, the lookup and indirect call are replaced with a conditional execution of each inlined body, but such optimizations are not common. To avoid this overhead, compilers usually avoid using virtual method tables whenever the call can be resolved at compile time. Thus, the call to f1 above may not require a table lookup because the compiler may be able to tell that d can only hold a D at this point, and D does not override f1. Or the compiler (or optimizer) may be able to detect that there are no subclasses of B1 anywhere in the program that override f1. The call to B1::f1 or B2::f2 will probably not require a table lookup because the implementation is specified explicitly (although it does still require the 'this'-pointer fixup). == Comparison with alternatives == The virtual method table is generally a good performance trade-off to achieve dynamic dispatch, but there are alternatives, such as binary tree dispatch, with higher performance in some typical cases, but different trade-offs. However, virtual method tables only allow for single dispatch on the special "this" parameter, in contrast to multiple dispatch (as in CLOS, Dylan, or Julia), where the types of all parameters can be taken into account in dispatching. Virtual method tables also only work if dispatching is constrained to a known set of methods, so they can be placed in a simple array built at compile time, in contrast to duck typing languages (such as Smalltalk, Python or JavaScript). Languages that provide either or both of these features often dispatch by looking up a string in a hash table, or some other equivalent method. There are a variety of techniques to make this faster (e.g., interning/tokenizing method names, caching lookups, just-in-time compilation). == See also == Virtual function Virtual inheritance Branch table == Notes == == References == Margaret A. Ellis and Bjarne Stroustrup (1990) The Annotated C++ Reference Manual. Reading, MA: Addison-Wesley. (ISBN 0-201-51459-1)
Wikipedia/Virtual_method_table
The primary focus of this article is asynchronous control in digital electronic systems. In a synchronous system, operations (instructions, calculations, logic, etc.) are coordinated by one, or more, centralized clock signals. An asynchronous system, in contrast, has no global clock. Asynchronous systems do not depend on strict arrival times of signals or messages for reliable operation. Coordination is achieved using event-driven architecture triggered by network packet arrival, changes (transitions) of signals, handshake protocols, and other methods. == Modularity == Asynchronous systems – much like object-oriented software – are typically constructed out of modular 'hardware objects', each with well-defined communication interfaces. These modules may operate at variable speeds, whether due to data-dependent processing, dynamic voltage scaling, or process variation. The modules can then be combined to form a correct working system, without reference to a global clock signal. Typically, low power consumption is obtained since components are activated only on demand. Furthermore, several asynchronous styles have been shown to accommodate clocked interfaces, and thereby support mixed-timing design. Hence, asynchronous systems match well the need for correct-by-construction methodologies in assembling large-scale heterogeneous and scalable systems. == Design styles == There is a large spectrum of asynchronous design styles, with tradeoffs between robustness and performance (and other parameters such as power). The choice of design style depends on the application target: reliability/ease-of-design vs. speed. The most robust designs use 'delay-insensitive circuits', whose operation is correct regardless of gate and wire delays; however, only limited useful systems can be designed with this style. Slightly less robust, but much more useful, are quasi-delay-insensitive circuits (also known as speed-independent circuits), such as delay-insensitive minterm synthesis, which operate correctly regardless of gate delays; however, wires at each fanout point must be tuned for roughly equal delays. Less robust but faster circuits, requiring simple localized one-sided timing constraints, include controllers using fundamental-mode operation (i.e. with setup/hold requirements on when new inputs can be received), and bundled datapaths using matched delays (see below). At the extreme, high-performance "timed circuits" have been proposed, which use tight two-sided timing constraints, where the clock can still be avoided but careful physical delay tuning is required, such as for some high-speed pipeline applications. == Asynchronous communication == Asynchronous communication is typically performed on communication channels. Communication is used both to synchronize operations of the concurrent system as well as to pass data. A simple channel typically consists of two wires: a request and an acknowledge. In a '4-phase handshaking protocol' (or return-to-zero), the request is asserted by the sender component, and the receiver responds by asserting the acknowledge; then both signals are de-asserted in turn. In a '2-phase handshaking protocol' (or transition-signalling), the requester simply toggles the value on the request wire (once), and the receiver responds by toggling the value on the acknowledge wire. Channels can also be extended to communicate data.
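The two protocols differ only in their event sequences, which a small Python sketch can make explicit (an abstract software model of the signalling order — real implementations are circuits, not code):

def four_phase(values):
    # Return-to-zero: req and ack each rise and then fall once per transfer.
    events = []
    for v in values:
        events += [("req=1", v), ("ack=1", v), ("req=0", v), ("ack=0", v)]
    return events

def two_phase(values):
    # Transition signalling: each wire toggles exactly once per transfer.
    events, req, ack = [], 0, 0
    for v in values:
        req ^= 1; events.append((f"req->{req}", v))
        ack ^= 1; events.append((f"ack->{ack}", v))
    return events

print(four_phase(["x"]))   # four signal events per data item
print(two_phase(["x"]))    # two signal events per data item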
== Asynchronous datapaths == Asynchronous datapaths are typically encoded using several schemes. Robust schemes use two wires or 'rails' for each bit, called 'dual-rail encoding'. In this case, the first rail is asserted to transmit a 0 value, or the second rail is asserted to transmit a 1 value. The asserted rail is then reset to zero before the next data value is transmitted, thereby indicating 'no data' or a 'spacer' state. A less robust, but widely used and practical scheme, is called 'single-rail bundled data'. Here, a single-rail (i.e. synchronous-style) function block can be used, with an accompanying worst-case matched delay. After valid data inputs arrive, a request signal is asserted as the input to the matched delay. When the matched delay produces a 'done' output, the block is guaranteed to have completed computation. While this scheme has timing constraints, they are simple, localized (unlike in synchronous systems), and one-sided, hence are usually easy to validate. == Literature == The literature in this field exists in a variety of conference and journal proceedings. The leading symposium is the IEEE Async Symposium (International Symposium on Asynchronous Circuits and Systems), founded in 1994. A variety of asynchronous papers have also been published since the mid-1980s in such conferences as IEEE/ACM Design Automation Conference, IEEE International Conference on Computer Design, IEEE/ACM International Conference on Computer-Aided Design, International Solid-State Circuits Conference Archived 2010-03-16 at the Wayback Machine, and Advanced Research in VLSI, as well as in leading journals such as IEEE Transactions on VLSI Systems, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, and Transactions on Distributed Computing. == See also == Design flow (EDA) Electronic design automation Integrated circuit design Isochronous timing Mesochronous network Perfect clock gating Plesiochronous system == References == Adapted from Steve Nowick's column in the ACM SIGDA e-newsletter by Igor Markov Original text is available at https://web.archive.org/web/20060624073502/http://www.sigda.org/newsletter/2006/eNews_060115.html == External links == ARM ARM996HS clockless processor Archived 2016-03-03 at the Wayback Machine Navarre AsyncArt. N-Protocol: Asynchronous Design Methodology for FPGAs Asynchronous design page at Newcastle University Workcraft: toolset for asynchronous circuit synthesis and verification https://workcraft.org/
Wikipedia/Asynchronous_systems
In computer science, control-flow analysis (CFA) is a static-code-analysis technique for determining the control flow of a program. The control flow is expressed as a control-flow graph (CFG). For both functional programming languages and object-oriented programming languages, the term CFA, and elaborations such as k-CFA, refer to specific algorithms that compute control flow. For many imperative programming languages, the control flow of a program is explicit in the program's source code. As a result, interprocedural control-flow analysis usually refers, implicitly, to a static analysis technique for determining the receivers of function or method calls in computer programs written in a higher-order programming language. For example, in a programming language with higher-order functions like Scheme, the target of a function call may not be explicit: in the isolated expression (lambda (f) (f x)) it is unclear to which procedure f may refer. A control-flow analysis must consider where this expression could be invoked and what argument it may receive in order to determine the possible targets. Techniques such as abstract interpretation, constraint solving, and type systems may be used for control-flow analysis. == See also == Control-flow diagram (CFD) Data-flow analysis Cartesian product algorithm Pointer analysis == References == == External links == for textbook intraprocedural CFA in imperative languages CFA in functional programs (survey) for the relationship between CFA analysis in functional languages and points-to analysis in imperative/OOP languages
Wikipedia/Control_flow_analysis
In computer science, a generator is a routine that can be used to control the iteration behaviour of a loop. All generators are also iterators. A generator is very similar to a function that returns an array, in that a generator has parameters, can be called, and generates a sequence of values. However, instead of building an array containing all the values and returning them all at once, a generator yields the values one at a time, which requires less memory and allows the caller to begin processing the first few values immediately. In short, a generator looks like a function but behaves like an iterator. Generators can be implemented in terms of more expressive control flow constructs, such as coroutines or first-class continuations. Generators, also known as semicoroutines, are a special case of (and weaker than) coroutines, in that they always yield control back to the caller (when passing a value back), rather than specifying a coroutine to jump to; see comparison of coroutines with generators. == Uses == Generators are usually invoked inside loops. The first time that a generator invocation is reached in a loop, an iterator object is created that encapsulates the state of the generator routine at its beginning, with arguments bound to the corresponding parameters. The generator's body is then executed in the context of that iterator until a special yield action is encountered; at that time, the value provided with the yield action is used as the value of the invocation expression. The next time the same generator invocation is reached in a subsequent iteration, the execution of the generator's body is resumed after the yield action, until yet another yield action is encountered. In addition to the yield action, execution of the generator body can also be terminated by a finish action, at which time the innermost loop enclosing the generator invocation is terminated. In more complicated situations, a generator may be used manually outside of a loop to create an iterator, which can then be used in various ways. Because generators compute their yielded values only on demand, they are useful for representing streams, such as sequences that would be expensive or impossible to compute at once. These include e.g. infinite sequences and live data streams. When eager evaluation is desirable (primarily when the sequence is finite, as otherwise evaluation will never terminate), one can either convert to a list, or use a parallel construction that creates a list instead of a generator. For example, in Python a generator g can be evaluated to a list l via l = list(g), while in F# the sequence expression seq { ... } evaluates lazily (a generator or sequence) but [ ... ] evaluates eagerly (a list). In the presence of generators, loop constructs of a language – such as for and while – can be reduced to a single loop ... end loop construct; all the usual loop constructs can then be comfortably simulated by using suitable generators in the right way. For example, a ranged loop like for x = 1 to 10 can be implemented as iteration through a generator, as in Python's for x in range(1, 10). Further, break can be implemented as sending finish to the generator and then using continue in the loop.
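As a minimal sketch in Python of the mechanics just described (Python's generators are treated in more detail below), the generator countfrom yields an unbounded stream that the enclosing loop drives one value at a time:

def countfrom(n):
    # Yield successive integers starting at n (an infinite stream).
    while True:
        yield n      # execution freezes here; n is handed to the caller
        n += 1       # execution resumes here on the next request

for i in countfrom(10):
    if i > 13:
        break        # plays the role of the 'finish' action described above
    print(i)         # prints 10, 11, 12, 13

squares = (x * x for x in countfrom(0))   # a lazy generator expression
print(next(squares), next(squares), next(squares))   # 0 1 4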
== Languages providing generators == Generators first appeared in CLU (1975), were a prominent feature in the string manipulation language Icon (1977) and are now available in Python (2001), C#, Ruby, PHP, ECMAScript (as of ES6/ES2015), and other languages. In CLU and C#, generators are called iterators, and in Ruby, enumerators. === Lisp === The final Common Lisp standard does not natively provide generators, yet various library implementations exist, such as SERIES documented in CLtL2 or pygen. === CLU === A yield statement is used to implement iterators over user-defined data abstractions. === Icon === Every expression (including loops) is a generator. The language has many generators built-in and even implements some of the logic semantics using the generator mechanism (logical disjunction or "OR" is done this way). Printing squares from 0 to 20 can be achieved using a co-routine by writing: However, most of the time custom generators are implemented with the "suspend" keyword, which functions exactly like the "yield" keyword in CLU. === C === C does not have generator functions as a language construct, but, as they are a subset of coroutines, it is simple to implement them using any framework that implements stackful coroutines, such as libdill. On POSIX platforms, when the cost of context switching per iteration is not a concern, or full parallelism rather than merely concurrency is desired, a very simple generator function framework can be implemented using pthreads and pipes. === C++ === It is possible to introduce generators into C++ using pre-processor macros. The resulting code might have aspects that are very different from native C++, but the generator syntax can be very uncluttered. The set of pre-processor macros defined in this source allow generators to be defined with syntax as in the following example: This can then be iterated using: Moreover, C++11 allows foreach loops to be applied to any class that provides the begin and end functions. It is then possible to write generator-like classes by defining both the iterable methods (begin and end) and the iterator methods (operator!=, operator++ and operator*) in the same class. For example, it is possible to write the following program: A basic range implementation would look like this: === Perl === Perl does not natively provide generators, but support is provided by the Coro::Generator module, which uses the Coro co-routine framework. Example usage: === Raku === An example parallel to the Icon one uses the Raku (formerly Perl 6) Range class, one of several ways to achieve generators in the language. Printing squares from 0 to 20 can be achieved by writing: However, most of the time custom generators are implemented with the "gather" and "take" keywords in a lazy context. === Tcl === In Tcl 8.6, the generator mechanism is founded on named coroutines. === Haskell === In Haskell, with its lazy evaluation model, every datum created with a non-strict data constructor is generated on demand. For example, where (:) is a non-strict list constructor, cons, and $ is just a "called-with" operator, used for parenthesization. This uses the standard adaptor function takeWhile, which walks down the list and stops on the first element that doesn't satisfy the predicate. If the list has already been walked up to that point, it is just a strict data structure, but any part that hadn't been walked through before will be generated on demand. List comprehensions can be freely used: === Racket === Racket provides several related facilities for generators. First, its for-loop forms work with sequences, which are a kind of producer: and these sequences are also first-class values: Some sequences are implemented imperatively (with private state variables) and some are implemented as (possibly infinite) lazy lists.
Also, new struct definitions can have a property that specifies how they can be used as sequences. But more directly, Racket comes with a generator library for a more traditional generator specification. For example, Note that the Racket core implements powerful continuation features, providing general (re-entrant) continuations that are composable, and also delimited continuations. Using this, the generator library is implemented in Racket. === PHP === The PHP community implemented generators in PHP 5.5. Details can be found in the original Request for Comments: Generators. Infinite Fibonacci sequence: Fibonacci sequence with limit: Any function that contains a yield statement is automatically a generator function. === Ruby === Ruby supports generators (starting from version 1.9) in the form of the built-in Enumerator class. === Java === Java has had a standard interface for implementing iterators since its early days, and since Java 5, the "foreach" construction makes it easy to loop over objects that provide the java.lang.Iterable interface. (The Java collections framework and other collections frameworks typically provide iterators for all collections.) An Iterator can also be obtained from Stream's Java 8 super-interface BaseStream. Output: === C# === An example C# 2.0 generator (the yield keyword is available since C# 2.0): Both of these examples utilize generics, but this is not required. The yield keyword also helps in implementing custom stateful iteration over a collection. It is possible to use multiple yield return statements; they are applied in sequence on each iteration: === XL === In XL, iterators are the basis of 'for' loops: === F# === F# provides generators via sequence expressions, since version 1.9.1. These can define a sequence (lazily evaluated, sequential access) via seq { ... }, a list (eagerly evaluated, sequential access) via [ ... ] or an array (eagerly evaluated, indexed access) via [| ... |] that contain code that generates values. For example, forms a sequence of squares of numbers from 0 to 14 by filtering out numbers from the range of numbers from 0 to 25. === Python === Generators were added to Python in version 2.2 in 2001. An example generator: In Python, a generator can be thought of as an iterator that contains a frozen stack frame. Whenever next() is called on the iterator, Python resumes the frozen frame, which executes normally until the next yield statement is reached. The generator's frame is then frozen again, and the yielded value is returned to the caller. PEP 380 (implemented in Python 3.3) adds the yield from expression, allowing a generator to delegate part of its operations to another generator or iterable. ==== Generator expressions ==== Python has a syntax modeled on that of list comprehensions, called a generator expression, that aids in the creation of generators. The following extends the first example above by using a generator expression to compute squares from the countfrom generator function: === ECMAScript === ECMAScript 6 (a.k.a. Harmony) introduced generator functions. An infinite Fibonacci sequence can be written using a function generator: === R === The iterators package can be used for this purpose. === Smalltalk === Example in Pharo Smalltalk: The golden ratio generator below returns, on each invocation of 'goldenRatio next', a better approximation to the golden ratio. The expression below returns the next 10 approximations. See more in A hidden gem in Pharo: Generator.
== See also == List comprehension for another construct that generates a sequence of values Iterator for the concept of producing a list one element at a time Iteratee for an alternative Lazy evaluation for producing values when needed Corecursion for potentially infinite data by recursion instead of yield Coroutine for even more generalization from subroutine Continuation for generalization of control flow == Notes == == References == Stephan Murer, Stephen Omohundro, David Stoutamire and Clemens Szyperski: Iteration abstraction in Sather. ACM Transactions on Programming Languages and Systems, 18(1):1-15 (1996)
Wikipedia/Generator_(computer_science)
In computer programming, a statement is a syntactic unit of an imperative programming language that expresses some action to be carried out. A program written in such a language is formed by a sequence of one or more statements. A statement may have internal components (e.g. expressions). Many programming languages (e.g. Ada, Algol 60, C, Java, Pascal) make a distinction between statements and definitions/declarations. A definition or declaration specifies the data on which a program is to operate, while a statement specifies the actions to be taken with that data. Statements which cannot contain other statements are simple; those which can contain other statements are compound. The appearance of a statement (and indeed a program) is determined by its syntax or grammar. The meaning of a statement is determined by its semantics. == Simple statements == Simple statements are complete in themselves; these include assignments, subroutine calls, and a few statements which may significantly affect the program flow of control (e.g. goto, return, stop/halt). In some languages, input and output, assertions, and exits are handled by special statements, while other languages use calls to predefined subroutines.
assignment
Fortran: variable = expression
Pascal, Algol 60, Ada: variable := expression;
C, C#, C++, PHP, Java: variable = expression;
call
Fortran: CALL subroutine name(parameters)
C, C++, Java, PHP, Pascal, Ada: subroutine name(parameters);
assertion
C, C++, PHP: assert(relational expression);
Java: assert relational expression;
goto
Fortran: GOTO numbered-label
Algol 60: goto label;
C, C++, PHP, Pascal: goto label;
return
Fortran: RETURN value
C, C++, Java, PHP: return value;
stop/halt/exit
Fortran: STOP number
C, C++: exit(expression)
PHP: exit number;
== Compound statements == Compound statements may contain (sequences of) statements, nestable to any reasonable depth, and generally involve tests to decide whether or not to obey or repeat these contained statements. Notation for the following examples:
<statement> is any single statement (could be simple or compound).
<sequence> is any sequence of zero or more <statements>.
Some programming languages provide a general way of grouping statements together, so that any single <statement> can be replaced by a group:
Algol 60: begin <sequence> end
Pascal: begin <sequence> end
C, PHP, Java: { <sequence> }
Other programming languages have a different special terminator on each kind of compound statement, so that one or more statements are automatically treated as a group:
Ada: if test then <sequence> end if;
Many compound statements are loop commands or choice commands. In theory only one of each of these types of commands is required. In practice there are various special cases which occur quite often; these may make a program easier to understand, may make programming easier, and can often be implemented much more efficiently. There are many subtleties not mentioned here; see the linked articles for details.
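For orientation before the multi-language catalogue that follows, here is a short Python sketch of the same loop shapes in a single language (Python is not among the languages catalogued below, so this is an illustrative aside):

limit = 5

# count-controlled loop
for index in range(1, limit + 1):
    print(index)

# condition-controlled loop, test at the start
n = 0
while n < limit:
    n += 1

# condition-controlled loop, test at the end
# (Python has no repeat/until; the idiom uses an unconditional loop plus break)
n = 0
while True:
    n += 1
    if n >= limit:
        break

# condition-controlled loop, test in the middle
n = 0
while True:
    n += 1              # first part of the body
    if n >= limit:
        break
    print(n)            # second part of the body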
count-controlled loop:
Algol 60: for index := 1 step 1 until limit do <statement> ;
Pascal: for index := 1 to limit do <statement> ;
C, Java: for ( index = 1; index <= limit; index += 1) <statement> ;
Ada: for index in 1..limit loop <sequence> end loop
Fortran 90:
condition-controlled loop with test at start of loop:
Algol 60: for index := expression while test do <statement> ;
Pascal: while test do <statement> ;
C, Java: while (test) <statement> ;
Ada: while test loop <sequence> end loop
Fortran 90:
condition-controlled loop with test at end of loop:
Pascal: repeat <sequence> until test; { note reversed test }
C, Java: do { <sequence> } while (test) ;
Ada: loop <sequence> exit when test; end loop;
condition-controlled loop with test in the middle of the loop:
C: do { <sequence> if (test) break; <sequence> } while (true) ;
Ada: loop <sequence> exit when test; <sequence> end loop;
if-statement simple situation:
Algol 60: if test then <unconditional statement> ;
Pascal: if test then <statement> ;
C, Java: if (test) <statement> ;
Ada: if test then <sequence> end if;
Fortran 77+:
if-statement two-way choice:
Algol 60: if test then <unconditional statement> else <statement> ;
Pascal: if test then <statement> else <statement> ;
C, Java: if (test) <statement> else <statement> ;
Ada: if test then <sequence> else <sequence> end if;
Fortran 77+:
case/switch statement multi-way choice:
Pascal: case c of 'a': alert(); 'q': quit(); end;
Ada: case c is when 'a' => alert(); when 'q' => quit(); end case;
C, Java: switch (c) { case 'a': alert(); break; case 'q': quit(); break; }
Exception handling:
Ada: begin protected code exception when exception specification => exception handler
Java: try { protected code } catch (exception specification) { exception handler } finally { cleanup }
Python: try: protected code except exception specification: exception handler else: no exceptions finally: cleanup
== Syntax == Apart from assignments and subroutine calls, most languages start each statement with a special word (e.g. goto, if, while, etc.) as shown in the above examples. Various methods have been used to describe the form of statements in different languages; the more formal methods tend to be more precise:
Algol 60 used Backus–Naur form (BNF), which set a new level for language grammar specification.
Up until Fortran 77, the language was described in English prose with examples; from Fortran 90 onwards, the language was described using a variant of BNF.
Cobol used a two-dimensional metalanguage.
Pascal used both syntax diagrams and equivalent BNF.
BNF uses recursion to express repetition, so various extensions have been proposed to allow direct indication of repetition. === Statements and keywords === Some programming language grammars reserve keywords or mark them specially, and do not allow them to be used as identifiers. This often leads to grammars which are easier to parse, requiring less lookahead. ==== No distinguished keywords ==== Fortran and PL/1 do not have reserved keywords, allowing statements like: in PL/1: IF IF = THEN THEN ... (the second IF and the first THEN are variables).
in Fortran: IF (A) X = 10 is a conditional statement (with other variants), while IF (A) = 2 is an assignment to a subscripted variable named IF. As spaces were optional up to Fortran 95, a typo could completely change the meaning of a statement:
DO 10 I = 1,5 starts a loop with I running from 1 to 5
DO 10 I = 1.5 assigns the value 1.5 to the variable DO10I
==== Flagged words ==== In Algol 60 and Algol 68, special tokens were distinguished explicitly: for publication, in boldface, e.g. begin; for programming, with some special marking, e.g., a flag ('begin), quotation marks ('begin'), or underlining (begin on the Elliott 503). This is called "stropping". Tokens that are part of the language syntax thus do not conflict with programmer-defined names. ==== Reserved keywords ==== Certain names are reserved as part of the programming language and cannot be used as programmer-defined names. The majority of the most popular programming languages use reserved keywords. Early examples include FLOW-MATIC (1953) and COBOL (1959). Since 1970 other examples include Ada, C, C++, Java, and Pascal. The number of reserved words depends on the language: C has about 30 while COBOL has about 400. == Semantics == Semantics is concerned with the meaning of a program. The standards documents for many programming languages use BNF or some equivalent to express the syntax/grammar in a fairly formal and precise way, but the semantics/meaning of the program is generally described using examples and English prose. This can result in ambiguity. In some language descriptions the meaning of compound statements is defined by the use of 'simpler' constructions, e.g. a while loop can be defined by a combination of tests, jumps, and labels, using if and goto. The semantics article describes several mathematical/logical formalisms which have been used to specify semantics in a precise way; these are generally more complicated than BNF, and no single approach is generally accepted as the way to go. Some approaches effectively define an interpreter for the language, some use formal logic to reason about a program, some attach affixes to syntactic entities to ensure consistency, etc. == Expressions == A distinction is often made between statements, which are executed, and expressions, which are evaluated. Expressions always evaluate to a value, whereas statements do not. However, expressions are often used as part of a larger statement. In most programming languages, a statement can consist of little more than an expression, usually by following the expression with a statement terminator (semicolon). In such a case, while the expression evaluates to a value, the complete statement does not (the expression's value is discarded). For instance, in C, C++, C#, and many similar languages, x = y + 1 is an expression that will set x to the value of y plus one, and the whole expression itself will evaluate to the same value that x is set to. However, x = y + 1; (note the semicolon at the end) is a statement that will still set x to the value of y plus one, because the expression within the statement is still evaluated, but the result of the expression is discarded; the statement itself does not evaluate to any value. Expressions can also be contained within other expressions. For instance, the expression x = y + 1 contains the expression y + 1, which in turn contains the values y and 1, which are also technically expressions. Although the previous examples show assignment expressions, some languages do not implement assignment as an expression, but rather as a statement.
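Python, discussed next, is one such language; as a minimal sketch of the difference (the walrus operator :=, added in Python 3.8, is the expression form):

y = 4
x = y + 1            # an assignment statement: it acts, but yields no value
# if x = y + 1: ...  # rejected: '=' cannot appear where a value is required
if (x := y + 1) > 4: # ':=' makes assignment usable as an expression
    print(x)         # prints 5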
A notable example of this is Python, where = is not an operator, but rather just a separator in the assignment statement. Although Python allows multiple assignments as if each assignment were an expression, this is simply a special case of the assignment statement built into the language grammar rather than a true expression. == Extensibility == Most languages have a fixed set of statements defined by the language, but there have been experiments with extensible languages that allow the programmer to define new statements. == See also == Comparison of programming languages (syntax) § Statements Control flow == References == == External links == PC ENCYCLOPEDIA: Definition of: program statement
Wikipedia/Statement_(computer_science)
In mathematics, a unary function is a function that takes one argument. A unary operator belongs to a subset of unary functions, in that its codomain coincides with its domain. In contrast, a general unary function's domain need not coincide with its codomain. == Examples == The successor function, denoted succ {\displaystyle \operatorname {succ} } , is a unary operator. Its domain and codomain are the natural numbers; its definition is as follows: succ : N → N n ↦ ( n + 1 ) {\displaystyle {\begin{aligned}\operatorname {succ} :\quad &\mathbb {N} \rightarrow \mathbb {N} \\&n\mapsto (n+1)\end{aligned}}} In some programming languages such as C, executing this operation is denoted by postfixing ++ to the operand, i.e. the use of n++ is equivalent to executing the assignment n := succ ⁡ ( n ) {\displaystyle n:=\operatorname {succ} (n)} . Many of the elementary functions are unary functions, including the trigonometric functions, logarithm with a specified base, exponentiation to a particular power or base, and hyperbolic functions. == See also == Arity Binary function Binary operation Iterated binary operation Ternary operation Unary operation == Bibliography == Foundations of Genetic Programming
Wikipedia/Unary_function
Control-flow integrity (CFI) is a general term for computer security techniques that prevent a wide variety of malware attacks from redirecting the flow of execution (the control flow) of a program. == Background == A computer program commonly changes its control flow to make decisions and use different parts of the code. Such transfers may be direct, in that the target address is written in the code itself, or indirect, in that the target address itself is a variable in memory or a CPU register. In a typical function call, the program performs a direct call, but returns to the caller function using the stack – an indirect backward-edge transfer. When a function pointer is called, such as from a virtual table, we say there is an indirect forward-edge transfer. Attackers seek to inject code into a program to make use of its privileges or to extract data from its memory space. Before executable code was commonly made read-only, an attacker could arbitrarily change the code as it is run, targeting direct transfers or even dispensing with transfers altogether. After W^X became widespread, an attacker instead seeks to redirect execution to a separate, unprotected area containing the code to be run, making use of indirect transfers: one could overwrite the virtual table for a forward-edge attack or change the call stack for a backward-edge attack (return-oriented programming). CFI is designed to protect indirect transfers from going to unintended locations. == Techniques == Associated techniques include code-pointer separation (CPS), code-pointer integrity (CPI), stack canaries, shadow stacks, and vtable pointer verification. These protections can be classified as either coarse-grained or fine-grained based on the number of targets restricted. A coarse-grained forward-edge CFI implementation could, for example, restrict the set of indirect call targets to any function that may be indirectly called in the program, while a fine-grained one would restrict each indirect call site to functions that have the same type as the function to be called. Similarly, for a backward-edge scheme protecting returns, a coarse-grained implementation would only allow the procedure to return to a function of the same type (of which there could be many, especially for common prototypes), while a fine-grained one would enforce precise return matching (so it can return only to the function that called it). == Implementations == Related implementations are available in Clang (LLVM in general), Microsoft's Control Flow Guard and Return Flow Guard, Google's Indirect Function-Call Checks and Reuse Attack Protector (RAP). === LLVM/Clang === LLVM/Clang provides a "CFI" option that works on the forward edge by checking for errors in virtual tables and type casts. It depends on link-time optimization (LTO) to know what functions are supposed to be called in normal cases. There is a separate "shadow call stack" scheme that defends on the backward edge by checking for call stack modifications, available only for aarch64. Google has shipped Android with the Linux kernel compiled by Clang with link-time optimization (LTO) and CFI since 2018. SCS is available for the Linux kernel as an option, including on Android. === Intel Control-flow Enforcement Technology === Intel Control-flow Enforcement Technology (CET) detects compromises to control flow integrity with a shadow stack (SS) and indirect branch tracking (IBT). The kernel must map a region of memory for the shadow stack that is not writable by user-space programs except by special instructions.
The shadow stack stores a copy of the return address of each CALL. On a RET, the processor checks whether the return addresses stored in the normal stack and the shadow stack are equal. If the addresses are not equal, the processor generates an INT #21 (Control Flow Protection Fault). Indirect branch tracking detects indirect JMP or CALL instructions to unauthorized targets. It is implemented by adding a new internal state machine in the processor. The behavior of indirect JMP and CALL instructions is changed so that they switch the state machine from IDLE to WAIT_FOR_ENDBRANCH. In the WAIT_FOR_ENDBRANCH state, the next instruction to be executed is required to be the new ENDBRANCH instruction (ENDBR32 in 32-bit mode or ENDBR64 in 64-bit mode), which changes the internal state machine from WAIT_FOR_ENDBRANCH back to IDLE. Thus every authorized target of an indirect JMP or CALL must begin with ENDBRANCH. If the processor is in the WAIT_FOR_ENDBRANCH state (meaning the previous instruction was an indirect JMP or CALL), and the next instruction is not an ENDBRANCH instruction, the processor generates an INT #21 (Control Flow Protection Fault). On processors not supporting CET indirect branch tracking, ENDBRANCH instructions are interpreted as NOPs and have no effect. === Microsoft Control Flow Guard === Control Flow Guard (CFG) was first released for Windows 8.1 Update 3 (KB3000850) in November 2014. Developers can add CFG to their programs by adding the /guard:cf linker flag before program linking in Visual Studio 2015 or newer. As of Windows 10 Creators Update (Windows 10 version 1703), the Windows kernel is compiled with CFG. The Windows kernel uses Hyper-V to prevent malicious kernel code from overwriting the CFG bitmap. CFG operates by creating a per-process bitmap, where a set bit indicates that the address is a valid destination. Before performing each indirect function call, the application checks if the destination address is in the bitmap. If the destination address is not in the bitmap, the program terminates. This makes it more difficult for an attacker to exploit a use-after-free by replacing an object's contents and then using an indirect function call to execute a payload. ==== Implementation details ==== For all protected indirect function calls, the _guard_check_icall function is called, which performs the following steps:
Convert the target address to an offset and bit number in the bitmap: the highest 3 bytes are the byte offset in the bitmap; the bit offset is a 5-bit value whose first four bits are the 4th through 8th low-order bits of the address, and whose 5th bit is set to 0 if the destination address is aligned with 0x10 (last four bits are 0) and 1 if it is not.
Examine the target address's value in the bitmap: if the target address is in the bitmap, return without an error; if it is not, terminate the program.
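A rough Python sketch of that lookup follows; the function names and the treatment of bitmap entries as 32-bit words are illustrative assumptions, not Microsoft's actual code:

def cfg_bitmap_bit(address):
    # Map a 32-bit target address to (word offset, bit number) in the bitmap.
    word_offset = address >> 8      # highest 3 bytes select the bitmap entry
    bit = (address >> 4) & 0xF      # bits 4..7 give the low 4 bits of the bit number
    if address & 0xF:               # not 0x10-aligned: set the 5th bit
        bit |= 0x10
    return word_offset, bit

def is_valid_indirect_target(bitmap, address):
    # True if the bit for this address is set, i.e. a valid call destination.
    word_offset, bit = cfg_bitmap_bit(address)
    return bool(bitmap[word_offset] & (1 << bit))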
==== Bypass techniques ==== There are several generic techniques for bypassing CFG:
Set the destination to code located in a non-CFG module loaded in the same process.
Find an indirect call that was not protected by CFG (either CALL or JMP).
Use a function call with a different number of arguments than the call is designed for, causing a stack misalignment and code execution after the function returns (patched in Windows 10).
Use a function call with the same number of arguments, but where one of the pointers passed is treated as an object and written to at a pointer-based offset, allowing a return address to be overwritten.
Overwrite the function call used by CFG to validate the address (patched in March 2015).
Set the CFG bitmap to all 1's, allowing all indirect function calls.
Use a controlled-write primitive to overwrite an address on the stack (since the stack is not protected by CFG).
=== Microsoft eXtended Flow Guard === eXtended Flow Guard (XFG) has not been officially released yet, but is available in the Windows Insider preview and was publicly presented at Bluehat Shanghai in 2019. XFG extends CFG by validating function call signatures to ensure that indirect function calls are only to the subset of functions with the same signature. Function call signature validation is implemented by adding instructions to store the target function's hash in register r10 immediately prior to the indirect call and storing the calculated function hash in the memory immediately preceding the target address's code. When the indirect call is made, the XFG validation function compares the value in r10 to the target function's stored hash. == See also == Buffer overflow protection == References ==
Wikipedia/Control-flow_integrity
A branch, jump or transfer is an instruction in a computer program that can cause a computer to begin executing a different instruction sequence and thus deviate from its default behavior of executing instructions in order. Branch (or branching, branched) may also refer to the act of switching execution to a different instruction sequence as a result of executing a branch instruction. Branch instructions are used to implement control flow in program loops and conditionals (i.e., executing a particular sequence of instructions only if certain conditions are satisfied). A branch instruction can be either an unconditional branch, which always results in branching, or a conditional branch, which may or may not cause branching depending on some condition. Also, depending on how it specifies the address of the new instruction sequence (the "target" address), a branch instruction is generally classified as direct, indirect or relative, meaning that the instruction contains the target address, or it specifies where the target address is to be found (e.g., a register or memory location), or it specifies the difference between the current and target addresses. == Implementation == Branch instructions can alter the contents of the CPU's program counter (PC) (or instruction pointer on Intel microprocessors). The program counter maintains the memory address of the next machine instruction to be fetched and executed. Therefore, a branch, if executed, causes the CPU to execute code from a new memory address, changing the program logic according to the algorithm planned by the programmer. One type of machine level branch is the jump instruction. These may or may not result in the PC being loaded with some new value other than what it ordinarily would have been (namely, incremented past the current instruction to point to the following instruction). Jumps typically have unconditional and conditional forms, where the latter may be taken or not taken (the PC is modified or not) depending on some condition. The second type of machine level branch is the call instruction, which is used to implement subroutines. Like jump instructions, calls may or may not modify the PC according to condition codes; additionally, however, a return address is saved in a secure place in memory (usually in a memory-resident data structure called a stack). Upon completion of the subroutine, this return address is restored to the PC, and so program execution resumes with the instruction following the call instruction. The third type of machine level branch is the return instruction. This "pops" a return address off the stack and loads it into the PC register, thus returning control to the calling routine. Return instructions may also be conditionally executed. This description pertains to ordinary practice; however, the machine programmer has considerable powers to manipulate the return address on the stack, and so redirect program execution in any number of different ways. Depending on the processor, jump and call instructions may alter the contents of the PC register in different ways. An absolute address may be loaded, or the current contents of the PC may have some value (or displacement) added or subtracted from its current value, making the destination address relative to the current place in the program. The source of the displacement value may vary, such as an immediate value embedded within the instruction, or the contents of a processor register or memory location, or the contents of some location added to an index value.
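As a sketch of how a branch redirects the program counter, consider this toy instruction interpreter in Python; the two-instruction-plus-branches machine is hypothetical, not a real ISA:

def run(program, acc=0):
    pc = 0
    while pc < len(program):
        op, arg = program[pc]
        pc += 1                  # default: fall through to the next instruction
        if op == "ADD":
            acc += arg
        elif op == "JMP":        # unconditional branch to an absolute target
            pc = arg
        elif op == "JNZ":        # conditional branch: taken only if acc != 0
            if acc != 0:
                pc = arg
    return acc

# Count down from 3: the body is at address 0; branch back while acc != 0.
print(run([("ADD", -1), ("JNZ", 0)], acc=3))   # 0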
The term branch can also be used when referring to programs in high-level programming languages. In these, branches usually take the form of conditional statements of various forms that encapsulate the instruction sequence that will be executed if the conditions are satisfied. Unconditional branch instructions such as GOTO are used to unconditionally jump to a different instruction sequence. If the algorithm requires a conditional branch, the GOTO (or GOSUB subroutine call) is preceded by an IF-THEN statement specifying the condition(s). All high level languages support algorithms that can re-use code as a loop, a control structure that repeats a sequence of instructions until some condition is satisfied that causes the loop to terminate. Loops, too, rely on branch instructions: at the machine level, loops are implemented as ordinary conditional jumps that redirect execution to the repeating code. In CPUs with flag registers, an earlier instruction sets a condition in the flag register. The earlier instruction may be arithmetic, or a logic instruction. It is often close to the branch, though not necessarily the instruction immediately before the branch. The stored condition is then used in a branch such as jump if overflow-flag set. This temporary information is often stored in a flag register but may also be located elsewhere. A flag register design is simple in slower, simpler computers. In fast computers a flag register can place a bottleneck on speed, because instructions that could otherwise operate in parallel (in several execution units) need to set the flag bits in a particular sequence. There are also machines (or particular instructions) where the condition may be checked by the jump instruction itself, such as branch <label> if register X negative. In simple computer designs, comparison branches execute more arithmetic and can use more power than flag register branches. In fast computer designs comparison branches can run faster than flag register branches, because comparison branches can access the registers with more parallelism, using the same CPU mechanisms as a calculation. Some early and simple CPU architectures, still found in microcontrollers, may not implement a conditional jump, but rather only a conditional "skip the next instruction" operation. A conditional jump or call is thus implemented as a conditional skip of an unconditional jump or call instruction. === Examples === Depending on the computer architecture, the assembly language mnemonic for a jump instruction is typically some shortened form of the word jump or the word branch, often along with other informative letters (or an extra parameter) representing the condition. Sometimes other details are included as well, such as the range of the jump (the offset size) or a special addressing mode that should be used to locate the actual effective offset. This table lists the machine level branch or jump instructions found in several well-known architectures: * x86, the PDP-11, VAX, and some others, set the carry-flag to signal borrow and clear the carry-flag to signal no borrow. ARM, 6502, the PIC, and some others, do the opposite for subtractive operations.
This inverted function of the carry flag for certain instructions is marked by (*); that is, borrow = not carry in some parts of the table, but if not otherwise noted, borrow ≡ carry. However, carry on additive operations is handled the same way by most architectures. == Performance problems with branch instructions == To achieve high performance, modern processors are pipelined. They consist of multiple parts that each partially process an instruction, feed their results to the next stage in the pipeline, and start working on the next instruction in the program. This design expects instructions to execute in a particular unchanging sequence. Conditional branch instructions make it impossible to know this sequence. So conditional branches can cause "stalls" in which the pipeline has to be restarted on a different part of the program. == Improving performance by reducing stalls from branches == Several techniques improve speed by reducing stalls from conditional branches. === Branch prediction hints === Historically, branch prediction took statistics and used the results to optimize code. A programmer would compile a test version of a program and run it with test data. The test code counted how the branches were actually taken. The statistics from the test code were then used by the compiler to optimize the branches of released code. The optimization would arrange that the fastest branch direction (taken or not) would always be the most frequently taken control flow path. To permit this, CPUs must be designed with (or at least have) predictable branch timing. Some CPUs have instruction sets (such as the Power ISA) that were designed with "branch hints" so that a compiler can tell a CPU how each branch is to be taken. The problem with software branch prediction is that it requires a complex software development process. === Hardware branch predictors === Hardware branch predictors move the statistics into the electronics, so that any software benefits without a profiling step. Branch predictors are parts of a processor that guess the outcome of a conditional branch. Then the processor's logic gambles on the guess by beginning to execute the expected instruction flow. An example of a simple hardware branch prediction scheme is to assume that all backward branches (i.e. to a smaller program counter) are taken (because they are part of a loop), and all forward branches (to a larger program counter) are not taken (because they leave a loop). Better branch predictors are developed and validated statistically by running them in simulation on a variety of test programs. Good predictors usually count the outcomes of previous executions of a branch. Faster, more expensive computers can then run faster by investing in better branch prediction electronics. In a CPU with hardware branch prediction, branch hints let the compiler's presumably superior branch prediction override the hardware's more simplistic branch prediction. === Branch-free code === Some logic can be written without branches or with fewer branches. It is often possible to use bitwise operations, conditional moves or other predication instead of branches. In fact, branch-free code is a must for cryptography due to timing attacks.
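A small Python sketch of the idea (real constant-time code would be written in C or assembly; Python merely illustrates the mask trick):

def branchless_min(a, b):
    # mask is -1 (all one-bits) when a < b, and 0 otherwise
    mask = -(a < b)
    # exactly one of the two masked terms survives; no if-statement is used
    return (a & mask) | (b & ~mask)

print(branchless_min(3, 5), branchless_min(5, 3))   # 3 3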
=== Delay slot === Another technique is a branch delay slot. In this approach, at least one instruction following a branch is always executed, with some exceptions such as the likely/unlikely branch instructions of the legacy MIPS architecture. Therefore, the computer can use this instruction to do useful work whether or not its pipeline stalls. This approach was historically popular in RISC computers. In a family of compatible CPUs, it complicates multicycle CPUs (with no pipeline), faster CPUs with longer-than-expected pipelines, and superscalar CPUs (which can execute instructions out of order). == See also == Branch delay slot Branch predication Branch table Conditional (programming) Indirect branch Subroutine Spaghetti code == Notes == == References == == External links == Free IA-32 and x86-64 documentation, provided by Intel The PDP-11 FAQ The ARM instruction set
Wikipedia/Branch_(computer_science)
A default, in computer science, refers to the preexisting value of a user-configurable setting that is assigned to a software application, computer program or device. Such settings are also called factory settings, or factory presets, especially for electronic devices. Default values are standard values that are universal to all instances of the device or model and are intended to make the device as accessible as possible "out of the box", without necessitating a lengthy configuration process prior to use. The user only has to modify the default settings according to their personal preferences. In many devices, the user has the option to restore these default settings for one or all options. Such an assignment makes the choice of that setting or value more likely; this is called the default effect. == Examples == === Application software preferences === One use of default parameters is for initial settings for application software. For example, the first time a user runs an application it may suggest that the user's delivery address is in the United States. This default might be appropriate if more users of that application were in the US than any other country. If the user selected a new country, it would override the default, and perhaps become the default for the next time the application is used on that computer or by that user. Changing the default for the next run would involve storing user information in some place, such as in cookies on the user's computer for an Internet application. In Microsoft Windows, default file associations associate applications with file types. === Television or computer monitor === A TV or computer monitor typically comes with a button to "restore factory presets". This allows the settings for brightness, contrast, color, etc., to be returned to the defaults recommended by the manufacturer. This button may be used when the settings get badly adjusted (say by a toddler playing with the controls). Some "fine-tuning" of the settings may still be needed from the factory settings, but they will likely be closer to the desired settings than random settings. == In application software == Using a default involves two goals which sometimes conflict: Minimal user interaction should be required. Setting defaults to the most commonly selected options serves this purpose. Panel entry errors should be minimized. Using defaults will tend to increase errors, as users may leave incorrect default settings selected. In cases where the value can be verified, this is not a severe problem. For example, the delivery country can be checked against the street address or postal codes and any mismatch can generate an error panel displayed to the user, who will then presumably make the correction. In cases where there is no clear majority and the results cannot easily be verified by other available information, such as the gender of the individual, no default should be offered. Some software applications, however, require that default values be supplied. A 1982 Apple Computer manual for developers warned: "Please do not ever use the word default in a program designed for humans. Default is something the mortgage went into right before the evil banker stole the Widow Parson's house. There is an exhaustive list of substitutes (previous, automatic, standard, etc.)".
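The override-and-restore behaviour described above can be sketched in a few lines of Python; the setting names are invented for illustration:

FACTORY_DEFAULTS = {"country": "US", "brightness": 50}

def load_settings(saved):
    settings = dict(FACTORY_DEFAULTS)  # start from the factory presets
    settings.update(saved)             # user-saved values override the defaults
    return settings

def restore_factory_presets():
    return dict(FACTORY_DEFAULTS)      # discard all user overrides

print(load_settings({"country": "DE"}))   # {'country': 'DE', 'brightness': 50}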
== In computer languages == Many languages in the C family (but not C itself, as of C11) allow a function to have default parameters or default arguments, which are used if the function is called with the corresponding parameter specifications omitted. In C and programming languages based on its syntax, the switch statement (which dispatches among a number of alternatives) can make use of the default keyword to provide a case for when no other case matches. In Fortran, the INIT parameter on a declaration defines an initial default value for that variable. In Rust, types that implement the Default trait can produce a default value. For example, the primitive integer types in Rust implement the Default trait by returning 0. == In operating systems == In operating systems using a command line interface, the user types short commands, often followed by various parameters and options. == See also == Principle of least astonishment Convention over configuration == References ==
Wikipedia/Default_(computer_science)
In programming languages, a label is a sequence of characters that identifies a location within source code. In most languages, labels take the form of an identifier, often followed by a punctuation character (e.g., a colon). In many high-level languages, the purpose of a label is to act as the destination of a GOTO statement; this is also the case in Pascal and its derived variations. In assembly language, labels can be used anywhere an address can (for example, as the operand of a JMP or MOV instruction). Some languages, such as Fortran and BASIC, support numeric labels. Labels are also used to identify an entry point into a compiled sequence of statements (e.g., during debugging). == C == In C a label identifies a statement in the code. A single statement can have multiple labels. Labels just indicate locations in the code; reaching a label has no effect on the actual execution. === Function labels === Function labels consist of an identifier, followed by a colon. Each such label points to a statement in a function and its identifier must be unique within that function. Other functions may use the same name for a label. Label identifiers occupy their own namespace – one can have variables and functions with the same name as a label. Here error is the label. The statement goto can be used to jump to a labeled statement in the code. After a goto, program execution continues with the statement after the label. === Switch labels === Two types of labels can be put in a switch statement. A case label consists of the keyword case, followed by an expression that evaluates to an integer constant. A default label consists of the keyword default. Case labels are used to associate an integer value with a statement in the code. When a switch statement is reached, program execution continues with the statement after the case label whose value matches the value in the parentheses of the switch. If there is no such case label, but there is a default label, program execution continues with the statement after the default label. If there is no default label, program execution continues after the switch. Within a single switch statement, the integer constant associated with each case label must be unique. There may or may not be a default label. There is no restriction on the order of the labels within a switch. The requirement that case label values evaluate to integer constants gives the compiler more room for optimizations. == Examples == === Javascript === In JavaScript syntax, statements may be preceded by a label: It is also possible to use a break statement to break out of labeled code blocks: == Common Lisp == In Common Lisp two ways of defining labels exist. The first one involves the tagbody special operator. Distinguishing its usage from many other programming languages that permit global navigation, such as C, the labels are only accessible in the context of this operator. Inside of a tagbody, labels are defined as forms starting with a symbol; the go special form permits a transfer of control between these labels. A second method utilizes the reader macros #n= and #n#, the former of which labels the object immediately following it, while the latter refers to its evaluated value. Labels in this sense constitute an alternative to variables, with #n= declaring and initializing a "variable" and #n# accessing it. The placeholder n designates a chosen unsigned decimal integer identifying the label.
Apart from that, some forms permit or mandate the declaration of a label for later referral, including the special form block, which prescribes a name, and the loop macro, which can be identified by a named clause. Immediate departure from a named form is possible by using the return-from special operator. In a fashion similar to C, the macros case, ccase, ecase, typecase, ctypecase and etypecase define switch statements. == See also == Line number == References ==
Wikipedia/Label_(computer_science)
In computer programming, a control break is a change in the value of one of the keys on which a file is sorted, which requires some extra processing. For example, with an input file sorted by post code, the number of items found in each postal district might need to be printed on a report, and a heading shown for the next district. Quite often there is a hierarchy of nested control breaks in a program, e.g. streets within districts within areas, with the need for a grand total at the end. Structured programming techniques have been developed to ensure correct processing of control breaks in languages such as COBOL and to ensure that conditions such as empty input files and sequence errors are handled properly. With fourth generation languages such as SQL, the programming language should handle most of the details of control breaks automatically.
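The pattern is easy to sketch in Python, where itertools.groupby detects the key changes, provided the input is already sorted on the control key, just as classical control-break processing requires; the record layout and names here are invented for illustration:

from itertools import groupby

# Records sorted by the control key (the postal district).
deliveries = [
    ("N1", "1 High St"), ("N1", "9 Low Rd"),
    ("N2", "3 Mill Ln"),
]

grand_total = 0
for district, items in groupby(deliveries, key=lambda rec: rec[0]):
    count = len(list(items))                   # extra processing at each break
    print(f"District {district}: {count} item(s)")
    grand_total += count
print(f"Grand total: {grand_total}")           # grand total at the end

== References ==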
Wikipedia/Control_break
A control-flow diagram (CFD) is a diagram to describe the control flow of a business process, process or review. Control-flow diagrams were developed in the 1950s, and are widely used in multiple engineering disciplines. They are one of the classic business process modeling methodologies, along with flow charts, drakon-charts, data flow diagrams, functional flow block diagrams, Gantt charts, PERT diagrams, and IDEF. == Overview == A control-flow diagram can consist of a subdivision to show sequential steps, with if-then-else conditions, repetition, and/or case conditions. Suitably annotated geometrical figures are used to represent operations, data, or equipment, and arrows are used to indicate the sequential flow from one to another. There are several types of control-flow diagrams, for example:
Change-control-flow diagram, used in project management
Configuration-decision control-flow diagram, used in configuration management
Process-control-flow diagram, used in process management
Quality-control-flow diagram, used in quality control.
In software and systems development, control-flow diagrams can be used in control-flow analysis, data-flow analysis, algorithm analysis, and simulation. Control and data are most applicable for real time and data-driven systems. These flow analyses transform logic and data requirements text into graphic flows which are easier to analyze than the text. PERT, state transition, and transaction diagrams are examples of control-flow diagrams. == Types of control-flow diagrams == === Process-control-flow diagram === A flow diagram can be developed for the process control system of each critical activity. Process control is normally a closed cycle in which a sensor provides information to a process-control application. The application determines whether the sensor information is within the predetermined (or calculated) parameters and constraints; the results of this comparison are used to control the critical component. This feedback may control the component electronically or may indicate the need for a manual action. This closed-cycle process has many checks and balances to ensure that it stays safe. It may be fully computer controlled and automated, or it may be a hybrid in which only the sensor is automated and the action requires manual intervention. Further, some process control systems may use prior generations of hardware and software, while others are state of the art. === Performance-seeking control-flow diagram === An example is the performance-seeking control-flow diagram of a control algorithm. The control law consists of estimation, modeling, and optimization processes. In the Kalman filter estimator, the inputs, outputs, and residuals were recorded. At the compact propulsion-system-modeling stage, all the estimated inlet and engine parameters were recorded. In addition to temperatures, pressures, and control positions, such estimated parameters as stall margins, thrust, and drag components were recorded. In the optimization phase, the operating-condition constraints, optimal solution, and linear-programming health-status condition codes were recorded. Finally, the actual commands that were sent to the engine through the DEEC were recorded. == See also == Data-flow diagram Data and information visualization Control-flow graph DRAKON Flow process chart == References == This article incorporates public domain material from the National Institute of Standards and Technology
Wikipedia/Control-flow_diagram
The Java Modeling Language (JML) is a specification language for Java programs, using Hoare-style pre- and postconditions and invariants, that follows the design by contract paradigm. Specifications are written as Java annotation comments to the source files, which hence can be compiled with any Java compiler. Various verification tools, such as a runtime assertion checker and the Extended Static Checker (ESC/Java), aid development. == Overview == JML is a behavioural interface specification language for Java modules. JML provides semantics to formally describe the behavior of a Java module, preventing ambiguity with regard to the module designers' intentions. JML inherits ideas from Eiffel, Larch and the Refinement Calculus, with the goal of providing rigorous formal semantics while still being accessible to any Java programmer. Various tools are available that make use of JML's behavioral specifications. Because specifications can be written as annotations in Java program files, or stored in separate specification files, Java modules with JML specifications can be compiled unchanged with any Java compiler. == Syntax == JML specifications are added to Java code in the form of annotations in comments. Java comments are interpreted as JML annotations when they begin with an @ sign; that is, comments of the form //@ <JML specification> or /*@ <JML specification> @*/. Basic JML syntax provides the following keywords:
requires: Defines a precondition on the method that follows.
ensures: Defines a postcondition on the method that follows.
signals: Defines a postcondition for when a given Exception is thrown by the method that follows.
signals_only: Defines what exceptions may be thrown when the given precondition holds.
assignable: Defines which fields are allowed to be assigned to by the method that follows.
pure: Declares a method to be side-effect free (like assignable \nothing, but it can also throw exceptions). Furthermore, a pure method is supposed to always either terminate normally or throw an exception.
invariant: Defines an invariant property of the class.
loop_invariant: Defines a loop invariant for a loop.
also: Combines specification cases, and can also declare that a method is inheriting specifications from its supertypes.
assert: Defines a JML assertion.
spec_public: Declares a protected or private variable public for specification purposes.
Basic JML also provides the following expressions:
\result: An identifier for the return value of the method that follows.
\old(<expression>): A modifier referring to the value of <expression> at the time of entry into a method.
(\forall <decl>; <range-exp>; <body-exp>): The universal quantifier.
(\exists <decl>; <range-exp>; <body-exp>): The existential quantifier.
a ==> b: a implies b.
a <== b: a is implied by b.
a <==> b: a if and only if b.
as well as standard Java syntax for logical and, or, and not. JML annotations also have access to Java objects, object methods and operators that are within the scope of the method being annotated and that have appropriate visibility. These are combined to provide formal specifications of the properties of classes, fields and methods. For example, an annotated example of a simple banking class may look like: Full documentation of JML syntax is available in the JML Reference Manual.
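JML itself is specific to Java, but the requires/ensures semantics described above can be sketched in any language; the following Python decorators are a hypothetical analogue for illustration only, and are neither JML syntax nor part of any JML tool:

def requires(pre):
    # Toy analogue of JML's 'requires': check a precondition before the call.
    def wrap(f):
        def inner(*args, **kwargs):
            assert pre(*args, **kwargs), "precondition violated"
            return f(*args, **kwargs)
        return inner
    return wrap

def ensures(post):
    # Toy analogue of 'ensures': relate the result to the arguments.
    def wrap(f):
        def inner(*args, **kwargs):
            result = f(*args, **kwargs)
            assert post(result, *args, **kwargs), "postcondition violated"
            return result
        return inner
    return wrap

@requires(lambda balance, amount: amount >= 0)
@ensures(lambda result, balance, amount: result == balance + amount)
def deposit(balance, amount):
    return balance + amount

print(deposit(100, 25))   # 125; deposit(100, -5) would fail the precondition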
The Iowa State JML tools provide an assertion checking compiler jmlc which converts JML annotations into runtime assertions, a documentation generator jmldoc which produces Javadoc documentation augmented with extra information from JML annotations, and a unit test generator jmlunit which generates JUnit test code from JML annotations. Independent groups are working on tools that make use of JML annotations. These include: ESC/Java2, an extended static checker which uses JML annotations to perform more rigorous static checking than is otherwise possible. OpenJML declares itself the successor of ESC/Java2. Daikon, a dynamic invariant generator. KeY, which provides an open source theorem prover with a JML front-end and an Eclipse plug-in (JML Editing) with support for syntax highlighting of JML. Krakatoa, a static verification tool based on the Why verification platform and using the Coq proof assistant. JMLEclipse, a plugin for the Eclipse integrated development environment with support for JML syntax and interfaces to various tools that make use of JML annotations. Sireum/Kiasan, a symbolic execution based static analyzer which supports JML as a contract language. JMLUnit, a tool to generate files for running JUnit tests on JML annotated Java files. TACO, an open source program analysis tool that statically checks the compliance of a Java program against its Java Modeling Language specification. == References == Gary T. Leavens and Yoonsik Cheon. Design by Contract with JML; Draft tutorial. Gary T. Leavens, Albert L. Baker, and Clyde Ruby. JML: A Notation for Detailed Design; in Haim Kilov, Bernhard Rumpe, and Ian Simmonds (editors), Behavioral Specifications of Businesses and Systems, Kluwer, 1999, chapter 12, pages 175-188. Gary T. Leavens, Erik Poll, Curtis Clifton, Yoonsik Cheon, Clyde Ruby, David Cok, Peter Müller, Joseph Kiniry, Patrice Chalin, and Daniel M. Zimmerman. JML Reference Manual (DRAFT), September 2009. Marieke Huisman, Wolfgang Ahrendt, Daniel Bruns, and Martin Hentschel. Formal specification with JML. 2014. (CC-BY-NC-ND) == External links == JML website
Wikipedia/Java_Modeling_Language
In mathematics, a profinite group is a topological group that is in a certain sense assembled from a system of finite groups. The idea of using a profinite group is to provide a "uniform", or "synoptic", view of an entire system of finite groups. Properties of the profinite group are generally speaking uniform properties of the system. For example, the profinite group is finitely generated (as a topological group) if and only if there exists d ∈ N {\displaystyle d\in \mathbb {N} } such that every group in the system can be generated by d {\displaystyle d} elements. Many theorems about finite groups can be readily generalised to profinite groups; examples are Lagrange's theorem and the Sylow theorems. To construct a profinite group one needs a system of finite groups and group homomorphisms between them. Without loss of generality, these homomorphisms can be assumed to be surjective, in which case the finite groups will appear as quotient groups of the resulting profinite group; in a sense, these quotients approximate the profinite group. Important examples of profinite groups are the additive groups of p {\displaystyle p} -adic integers and the Galois groups of infinite-degree field extensions. Every profinite group is compact and totally disconnected. A non-compact generalization of the concept is that of locally profinite groups. Even more general are the totally disconnected groups. == Definition == Profinite groups can be defined in either of two equivalent ways. === First definition (constructive) === A profinite group is a topological group that is isomorphic to the inverse limit of an inverse system of discrete finite groups. In this context, an inverse system consists of a directed set ( I , ≤ ) , {\displaystyle (I,\leq ),} an indexed family of finite groups { G i : i ∈ I } , {\displaystyle \{G_{i}:i\in I\},} each having the discrete topology, and a family of homomorphisms { f i j : G j → G i ∣ i , j ∈ I , i ≤ j } {\displaystyle \{f_{i}^{j}:G_{j}\to G_{i}\mid i,j\in I,i\leq j\}} such that f i i {\displaystyle f_{i}^{i}} is the identity map on G i {\displaystyle G_{i}} and the collection satisfies the composition property f i j ∘ f j k = f i k {\displaystyle f_{i}^{j}\circ f_{j}^{k}=f_{i}^{k}} whenever i ≤ j ≤ k . {\displaystyle i\leq j\leq k.} The inverse limit is the set: lim ← ⁡ G i = { ( g i ) i ∈ I ∈ ∏ i ∈ I G i : f i j ( g j ) = g i for all i ≤ j } {\displaystyle \varprojlim G_{i}=\left\{(g_{i})_{i\in I}\in {\textstyle \prod \limits _{i\in I}}G_{i}:f_{i}^{j}(g_{j})=g_{i}{\text{ for all }}i\leq j\right\}} equipped with the relative product topology. One can also define the inverse limit in terms of a universal property. In categorical terms, this is a special case of a cofiltered limit construction. === Second definition (axiomatic) === A profinite group is a compact and totally disconnected topological group: that is, a topological group that is also a Stone space. === Profinite completion === Given an arbitrary group G {\displaystyle G} , there is a related profinite group G ^ , {\displaystyle {\widehat {G}},} the profinite completion of G {\displaystyle G} . It is defined as the inverse limit of the groups G / N {\displaystyle G/N} , where N {\displaystyle N} runs through the normal subgroups in G {\displaystyle G} of finite index (these normal subgroups are partially ordered by inclusion, which translates into an inverse system of natural homomorphisms between the quotients). 
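To make the inverse-limit construction of the first definition concrete, the following minimal sketch (the class name and representation are illustrative assumptions) models a truncated element of the inverse limit of the groups Z/p^nZ by its coherent sequence of residues, enforcing the compatibility condition that each residue is the reduction of the next, with componentwise addition:

final class PAdicTruncation {
    final int p;
    final long[] residues; // residues[n] is the class modulo p^(n+1)

    PAdicTruncation(int p, long[] residues) {
        this.p = p;
        this.residues = residues;
        long mod = p; // equals p^(n+1) during iteration n
        for (int n = 0; n < residues.length - 1; n++) {
            // compatibility: reducing residues[n+1] modulo p^(n+1) must give residues[n]
            if (Math.floorMod(residues[n + 1], mod) != residues[n])
                throw new IllegalArgumentException("not a coherent sequence");
            mod *= p;
        }
    }

    // Addition is performed componentwise in each quotient Z/p^(n+1)Z;
    // coherence is preserved because each transition map is a homomorphism.
    PAdicTruncation add(PAdicTruncation other) {
        long[] sum = new long[residues.length];
        long mod = p;
        for (int n = 0; n < residues.length; n++) {
            sum[n] = Math.floorMod(residues[n] + other.residues[n], mod);
            mod *= p;
        }
        return new PAdicTruncation(p, sum);
    }
}

With p = 2, the array {1, 3, 7, 15} is a coherent truncation of −1 (its residues modulo 2, 4, 8, 16); adding the truncation {1, 1, 1, 1} of 1 yields {0, 0, 0, 0}, reflecting −1 + 1 = 0 in the 2-adic integers.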
There is a natural homomorphism η : G → G ^ {\displaystyle \eta :G\to {\widehat {G}}} , and the image of G {\displaystyle G} under this homomorphism is dense in G ^ {\displaystyle {\widehat {G}}} . The homomorphism η {\displaystyle \eta } is injective if and only if the group G {\displaystyle G} is residually finite (i.e., ⋂ N = 1 {\displaystyle \bigcap N=1} , where the intersection runs through all normal subgroups N {\displaystyle N} of finite index). The homomorphism η {\displaystyle \eta } is characterized by the following universal property: given any profinite group H {\displaystyle H} and any continuous group homomorphism f : G → H {\displaystyle f:G\rightarrow H} where G {\displaystyle G} is given the smallest topology compatible with group operations in which its normal subgroups of finite index are open, there exists a unique continuous group homomorphism g : G ^ → H {\displaystyle g:{\widehat {G}}\rightarrow H} with f = g η {\displaystyle f=g\eta } . === Equivalence === Any group constructed by the first definition satisfies the axioms in the second definition. Conversely, any group G {\displaystyle G} satisfying the axioms in the second definition can be constructed as an inverse limit according to the first definition using the inverse limit lim ← ⁡ G / N {\displaystyle \varprojlim G/N} where N {\displaystyle N} ranges through the open normal subgroups of G {\displaystyle G} ordered by (reverse) inclusion. If G {\displaystyle G} is topologically finitely generated then it is in addition equal to its own profinite completion. === Surjective systems === In practice, the inverse system of finite groups is almost always surjective, meaning that all its maps are surjective. Without loss of generality, it suffices to consider only surjective systems since given any inverse system, it is possible to first construct its profinite group G , {\displaystyle G,} and then reconstruct it as its own profinite completion. == Examples == Finite groups are profinite, if given the discrete topology. The group of p {\displaystyle p} -adic integers Z p {\displaystyle \mathbb {Z} _{p}} under addition is profinite (in fact procyclic). It is the inverse limit of the finite groups Z / p n Z {\displaystyle \mathbb {Z} /p^{n}\mathbb {Z} } where n {\displaystyle n} ranges over all natural numbers and the natural maps Z / p n Z → Z / p m Z {\displaystyle \mathbb {Z} /p^{n}\mathbb {Z} \to \mathbb {Z} /p^{m}\mathbb {Z} } for n ≥ m . {\displaystyle n\geq m.} The topology on this profinite group is the same as the topology arising from the p {\displaystyle p} -adic valuation on Z p . {\displaystyle \mathbb {Z} _{p}.} The group of profinite integers Z ^ {\displaystyle {\widehat {\mathbb {Z} }}} is the profinite completion of Z . {\displaystyle \mathbb {Z} .} In detail, it is the inverse limit of the finite groups Z / n Z {\displaystyle \mathbb {Z} /n\mathbb {Z} } where n = 1 , 2 , 3 , … {\displaystyle n=1,2,3,\dots } with the modulo maps Z / n Z → Z / m Z {\displaystyle \mathbb {Z} /n\mathbb {Z} \to \mathbb {Z} /m\mathbb {Z} } for m | n . {\displaystyle m\,|\,n.} This group is the product of all the groups Z p , {\displaystyle \mathbb {Z} _{p},} and it is the absolute Galois group of any finite field. The Galois theory of field extensions of infinite degree gives rise naturally to Galois groups that are profinite. 
Specifically, if L / K {\displaystyle L/K} is a Galois extension, consider the group G = Gal ⁡ ( L / K ) {\displaystyle G=\operatorname {Gal} (L/K)} consisting of all field automorphisms of L {\displaystyle L} that keep all elements of K {\displaystyle K} fixed. This group is the inverse limit of the finite groups Gal ⁡ ( F / K ) , {\displaystyle \operatorname {Gal} (F/K),} where F {\displaystyle F} ranges over all intermediate fields such that F / K {\displaystyle F/K} is a finite Galois extension. For the limit process, the restriction homomorphisms Gal ⁡ ( F 1 / K ) → Gal ⁡ ( F 2 / K ) {\displaystyle \operatorname {Gal} (F_{1}/K)\to \operatorname {Gal} (F_{2}/K)} are used, where F 2 ⊆ F 1 . {\displaystyle F_{2}\subseteq F_{1}.} The topology obtained on Gal ⁡ ( L / K ) {\displaystyle \operatorname {Gal} (L/K)} is known as the Krull topology after Wolfgang Krull. Waterhouse (1974) showed that every profinite group is isomorphic to one arising from the Galois theory of some field K , {\displaystyle K,} but one cannot (yet) control which field K {\displaystyle K} will be in this case. In fact, for many fields K {\displaystyle K} one does not know in general precisely which finite groups occur as Galois groups over K . {\displaystyle K.} This is the inverse Galois problem for a field K . {\displaystyle K.} (For some fields K {\displaystyle K} the inverse Galois problem is settled, such as the field of rational functions in one variable over the complex numbers.) Not every profinite group occurs as an absolute Galois group of a field. The étale fundamental groups considered in algebraic geometry are also profinite groups, roughly speaking because the algebra can only 'see' finite coverings of an algebraic variety. The fundamental groups of algebraic topology, however, are in general not profinite: for any prescribed group, there is a 2-dimensional CW complex whose fundamental group equals it. The automorphism group of a locally finite rooted tree is profinite. == Properties and facts == Every product of (arbitrarily many) profinite groups is profinite; the topology arising from the profiniteness agrees with the product topology. The inverse limit of an inverse system of profinite groups with continuous transition maps is profinite and the inverse limit functor is exact on the category of profinite groups. Further, being profinite is an extension property. Every closed subgroup of a profinite group is itself profinite; the topology arising from the profiniteness agrees with the subspace topology. If N {\displaystyle N} is a closed normal subgroup of a profinite group G , {\displaystyle G,} then the factor group G / N {\displaystyle G/N} is profinite; the topology arising from the profiniteness agrees with the quotient topology. Since every profinite group G {\displaystyle G} is compact Hausdorff, there exists a Haar measure on G , {\displaystyle G,} which allows us to measure the "size" of subsets of G , {\displaystyle G,} compute certain probabilities, and integrate functions on G . {\displaystyle G.} A subgroup of a profinite group is open if and only if it is closed and has finite index. According to a theorem of Nikolay Nikolov and Dan Segal, in any topologically finitely generated profinite group (that is, a profinite group that has a dense finitely generated subgroup) the subgroups of finite index are open. This generalizes an earlier analogous result of Jean-Pierre Serre for topologically finitely generated pro- p {\displaystyle p} groups. 
The proof uses the classification of finite simple groups. As an easy corollary of the Nikolov–Segal result above, any surjective discrete group homomorphism φ : G → H {\displaystyle \varphi :G\to H} between profinite groups G {\displaystyle G} and H {\displaystyle H} is continuous as long as G {\displaystyle G} is topologically finitely generated. Indeed, any open subgroup of H {\displaystyle H} is of finite index, so its preimage in G {\displaystyle G} is also of finite index, and hence it must be open. Suppose G {\displaystyle G} and H {\displaystyle H} are topologically finitely generated profinite groups that are isomorphic as discrete groups by an isomorphism ι . {\displaystyle \iota .} Then ι {\displaystyle \iota } is bijective and continuous by the above result. Furthermore, ι − 1 {\displaystyle \iota ^{-1}} is also continuous, so ι {\displaystyle \iota } is a homeomorphism. Therefore the topology on a topologically finitely generated profinite group is uniquely determined by its algebraic structure. == Ind-finite groups == There is a notion of ind-finite group, which is the conceptual dual to profinite groups; i.e. a group G {\displaystyle G} is ind-finite if it is the direct limit of an inductive system of finite groups. (In particular, it is an ind-group.) The usual terminology is different: a group G {\displaystyle G} is called locally finite if every finitely generated subgroup is finite. This is equivalent, in fact, to being 'ind-finite'. By applying Pontryagin duality, one can see that abelian profinite groups are in duality with locally finite discrete abelian groups. The latter are just the abelian torsion groups. == Projective profinite groups == A profinite group is projective if it has the lifting property for every extension. This is equivalent to saying that G {\displaystyle G} is projective if for every surjective morphism from a profinite H → G {\displaystyle H\to G} there is a section G → H . {\displaystyle G\to H.} Projectivity for a profinite group G {\displaystyle G} is equivalent to either of the two properties: the cohomological dimension cd ⁡ ( G ) ≤ 1 ; {\displaystyle \operatorname {cd} (G)\leq 1;} for every prime p {\displaystyle p} the Sylow p {\displaystyle p} -subgroups of G {\displaystyle G} are free pro- p {\displaystyle p} -groups. Every projective profinite group can be realized as an absolute Galois group of a pseudo algebraically closed field. This result is due to Alexander Lubotzky and Lou van den Dries. == Procyclic group == A profinite group G {\displaystyle G} is procyclic if it is topologically generated by a single element σ ; {\displaystyle \sigma ;} that is, if G = ⟨ σ ⟩ ¯ , {\displaystyle G={\overline {\langle \sigma \rangle }},} the closure of the subgroup ⟨ σ ⟩ = { σ n : n ∈ Z } . {\displaystyle \langle \sigma \rangle =\left\{\sigma ^{n}:n\in \mathbb {Z} \right\}.} A topological group G {\displaystyle G} is procyclic if and only if G ≅ ∏ p ∈ S G p {\displaystyle G\cong {\textstyle \prod \limits _{p\in S}}G_{p}} where p {\displaystyle p} ranges over some set of prime numbers S {\displaystyle S} and G p {\displaystyle G_{p}} is isomorphic to either Z p {\displaystyle \mathbb {Z} _{p}} or Z / p n Z , n ∈ N . 
{\displaystyle \mathbb {Z} /p^{n}\mathbb {Z} ,n\in \mathbb {N} .} == See also == Hausdorff completion Locally cyclic group Pro-p group – type of profinite group Profinite integer – Number theory concept Residual property (mathematics) – concept in group theory Residually finite group – type of mathematical group == References == Fried, Michael D.; Jarden, Moshe (2008). Field arithmetic. Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. Vol. 11 (3rd revised ed.). Springer-Verlag. ISBN 978-3-540-77269-9. Zbl 1145.12001. Nikolov, Nikolay; Segal, Dan (2007), "On finitely generated profinite groups, I: strong completeness and uniform bounds", Annals of Mathematics, 2nd series, 165 (1): 171–238, arXiv:math.GR/0604399, doi:10.4007/annals.2007.165.171. Nikolov, Nikolay; Segal, Dan (2007), "On finitely generated profinite groups, II: products in quasisimple groups", Annals of Mathematics, 2nd series, 165 (1): 239–273, arXiv:math.GR/0604400, doi:10.4007/annals.2007.165.239. Lenstra, Hendrik (2003), Profinite Groups (PDF), talk given at Oberwolfach. Lubotzky, Alexander (2001), "Book Review", Bulletin of the American Mathematical Society, 38 (4): 475–479, doi:10.1090/S0273-0979-01-00914-4. Review of several books about profinite groups. Serre, Jean-Pierre (1994), Cohomologie galoisienne, Lecture Notes in Mathematics (in French), vol. 5 (5 ed.), Springer-Verlag, ISBN 978-3-540-58002-7, MR 1324577, Zbl 0812.12002. Serre, Jean-Pierre (1997), Galois cohomology, Translated by Patrick Ion, Springer-Verlag, ISBN 3-540-61990-9, Zbl 0902.12004 Waterhouse, William C. (1974), "Profinite groups are Galois groups", Proceedings of the American Mathematical Society, 42 (2), American Mathematical Society: 639–640, doi:10.1090/S0002-9939-1974-0325587-3, JSTOR 2039560, Zbl 0281.20031.
Wikipedia/Krull_topology
In mathematics, particularly in algebra, a field extension is a pair of fields K ⊆ L {\displaystyle K\subseteq L} , such that the operations of K are those of L restricted to K. In this case, L is an extension field of K and K is a subfield of L. For example, under the usual notions of addition and multiplication, the complex numbers are an extension field of the real numbers; the real numbers are a subfield of the complex numbers. Field extensions are fundamental in algebraic number theory, and in the study of polynomial roots through Galois theory, and are widely used in algebraic geometry. == Subfield == A subfield K {\displaystyle K} of a field L {\displaystyle L} is a subset K ⊆ L {\displaystyle K\subseteq L} that is a field with respect to the field operations inherited from L {\displaystyle L} . Equivalently, a subfield is a subset that contains the multiplicative identity 1 {\displaystyle 1} , and is closed under the operations of addition, subtraction, multiplication, and taking the inverse of a nonzero element of K {\displaystyle K} . As 1 – 1 = 0, the latter definition implies K {\displaystyle K} and L {\displaystyle L} have the same zero element. For example, the field of rational numbers is a subfield of the real numbers, which is itself a subfield of the complex numbers. More generally, the field of rational numbers is (or is isomorphic to) a subfield of any field of characteristic 0 {\displaystyle 0} . The characteristic of a subfield is the same as the characteristic of the larger field. == Extension field == If K {\displaystyle K} is a subfield of L {\displaystyle L} , then L {\displaystyle L} is an extension field or simply extension of K {\displaystyle K} , and this pair of fields is a field extension. Such a field extension is denoted L / K {\displaystyle L/K} (read as " L {\displaystyle L} over K {\displaystyle K} "). If L {\displaystyle L} is an extension of F {\displaystyle F} , which is in turn an extension of K {\displaystyle K} , then F {\displaystyle F} is said to be an intermediate field (or intermediate extension or subextension) of L / K {\displaystyle L/K} . Given a field extension L / K {\displaystyle L/K} , the larger field L {\displaystyle L} is a K {\displaystyle K} -vector space. The dimension of this vector space is called the degree of the extension and is denoted by [ L : K ] {\displaystyle [L:K]} . The degree of an extension is 1 if and only if the two fields are equal. In this case, the extension is a trivial extension. Extensions of degree 2 and 3 are called quadratic extensions and cubic extensions, respectively. A finite extension is an extension that has a finite degree. Given two extensions L / K {\displaystyle L/K} and M / L {\displaystyle M/L} , the extension M / K {\displaystyle M/K} is finite if and only if both L / K {\displaystyle L/K} and M / L {\displaystyle M/L} are finite. In this case, one has [ M : K ] = [ M : L ] ⋅ [ L : K ] . {\displaystyle [M:K]=[M:L]\cdot [L:K].} Given a field extension L / K {\displaystyle L/K} and a subset S {\displaystyle S} of L {\displaystyle L} , there is a smallest subfield of L {\displaystyle L} that contains K {\displaystyle K} and S {\displaystyle S} . It is the intersection of all subfields of L {\displaystyle L} that contain K {\displaystyle K} and S {\displaystyle S} , and is denoted by K ( S ) {\displaystyle K(S)} (read as " K {\displaystyle K} adjoin S {\displaystyle S} "). 
One says that K ( S ) {\displaystyle K(S)} is the field generated by S {\displaystyle S} over K {\displaystyle K} , and that S {\displaystyle S} is a generating set of K ( S ) {\displaystyle K(S)} over K {\displaystyle K} . When S = { x 1 , … , x n } {\displaystyle S=\{x_{1},\ldots ,x_{n}\}} is finite, one writes K ( x 1 , … , x n ) {\displaystyle K(x_{1},\ldots ,x_{n})} instead of K ( { x 1 , … , x n } ) , {\displaystyle K(\{x_{1},\ldots ,x_{n}\}),} and one says that K ( S ) {\displaystyle K(S)} is finitely generated over K {\displaystyle K} . If S {\displaystyle S} consists of a single element s {\displaystyle s} , the extension K ( s ) / K {\displaystyle K(s)/K} is called a simple extension and s {\displaystyle s} is called a primitive element of the extension. An extension field of the form K ( S ) {\displaystyle K(S)} is often said to result from the adjunction of S {\displaystyle S} to K {\displaystyle K} . In characteristic 0, every finite extension is a simple extension. This is the primitive element theorem, which does not hold true for fields of non-zero characteristic. If a simple extension K ( s ) / K {\displaystyle K(s)/K} is not finite, the field K ( s ) {\displaystyle K(s)} is isomorphic to the field of rational fractions in s {\displaystyle s} over K {\displaystyle K} . == Caveats == The notation L / K is purely formal and does not imply the formation of a quotient ring or quotient group or any other kind of division. Instead the slash expresses the word "over". In some literature the notation L:K is used. It is often desirable to talk about field extensions in situations where the small field is not actually contained in the larger one, but is naturally embedded. For this purpose, one abstractly defines a field extension as an injective ring homomorphism between two fields. Every ring homomorphism between fields is injective because fields do not possess nontrivial proper ideals, so field extensions are precisely the morphisms in the category of fields. Henceforth, we will suppress the injective homomorphism and assume that we are dealing with actual subfields. == Examples == The field of complex numbers C {\displaystyle \mathbb {C} } is an extension field of the field of real numbers R {\displaystyle \mathbb {R} } , and R {\displaystyle \mathbb {R} } in turn is an extension field of the field of rational numbers Q {\displaystyle \mathbb {Q} } . Clearly then, C / Q {\displaystyle \mathbb {C} /\mathbb {Q} } is also a field extension. We have [ C : R ] = 2 {\displaystyle [\mathbb {C} :\mathbb {R} ]=2} because { 1 , i } {\displaystyle \{1,i\}} is a basis, so the extension C / R {\displaystyle \mathbb {C} /\mathbb {R} } is finite. This is a simple extension because C = R ( i ) . {\displaystyle \mathbb {C} =\mathbb {R} (i).} [ R : Q ] = c {\displaystyle [\mathbb {R} :\mathbb {Q} ]={\mathfrak {c}}} (the cardinality of the continuum), so this extension is infinite. The field Q ( 2 ) = { a + b 2 ∣ a , b ∈ Q } , {\displaystyle \mathbb {Q} ({\sqrt {2}})=\left\{a+b{\sqrt {2}}\mid a,b\in \mathbb {Q} \right\},} is an extension field of Q , {\displaystyle \mathbb {Q} ,} also clearly a simple extension. The degree is 2 because { 1 , 2 } {\displaystyle \left\{1,{\sqrt {2}}\right\}} can serve as a basis. 
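As a quick verification that Q(√2) is indeed closed under inversion (a routine computation filling in the claim that it is a field): for rational a, b not both zero, rationalizing the denominator gives
\[
\frac{1}{a+b\sqrt{2}} \;=\; \frac{a-b\sqrt{2}}{(a+b\sqrt{2})(a-b\sqrt{2})} \;=\; \frac{a}{a^{2}-2b^{2}} - \frac{b}{a^{2}-2b^{2}}\,\sqrt{2} \;\in\; \mathbb{Q}(\sqrt{2}),
\]
where a² − 2b² ≠ 0, since otherwise √2 = |a/b| would be rational.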
The field Q ( 2 , 3 ) = Q ( 2 ) ( 3 ) = { a + b 3 ∣ a , b ∈ Q ( 2 ) } = { a + b 2 + c 3 + d 6 ∣ a , b , c , d ∈ Q } , {\displaystyle {\begin{aligned}\mathbb {Q} \left({\sqrt {2}},{\sqrt {3}}\right)&=\mathbb {Q} \left({\sqrt {2}}\right)\left({\sqrt {3}}\right)\\&=\left\{a+b{\sqrt {3}}\mid a,b\in \mathbb {Q} \left({\sqrt {2}}\right)\right\}\\&=\left\{a+b{\sqrt {2}}+c{\sqrt {3}}+d{\sqrt {6}}\mid a,b,c,d\in \mathbb {Q} \right\},\end{aligned}}} is an extension field of both Q ( 2 ) {\displaystyle \mathbb {Q} ({\sqrt {2}})} and Q , {\displaystyle \mathbb {Q} ,} of degree 2 and 4 respectively. It is also a simple extension, as one can show that Q ( 2 , 3 ) = Q ( 2 + 3 ) = { a + b ( 2 + 3 ) + c ( 2 + 3 ) 2 + d ( 2 + 3 ) 3 ∣ a , b , c , d ∈ Q } . {\displaystyle {\begin{aligned}\mathbb {Q} ({\sqrt {2}},{\sqrt {3}})&=\mathbb {Q} ({\sqrt {2}}+{\sqrt {3}})\\&=\left\{a+b({\sqrt {2}}+{\sqrt {3}})+c({\sqrt {2}}+{\sqrt {3}})^{2}+d({\sqrt {2}}+{\sqrt {3}})^{3}\mid a,b,c,d\in \mathbb {Q} \right\}.\end{aligned}}} Finite extensions of Q {\displaystyle \mathbb {Q} } are also called algebraic number fields and are important in number theory. Another extension field of the rationals, which is also important in number theory, although not a finite extension, is the field of p-adic numbers Q p {\displaystyle \mathbb {Q} _{p}} for a prime number p. It is common to construct an extension field of a given field K as a quotient ring of the polynomial ring K[X] in order to "create" a root for a given polynomial f(X). Suppose for instance that K does not contain any element x with x2 = −1. Then the polynomial X 2 + 1 {\displaystyle X^{2}+1} is irreducible in K[X], consequently the ideal generated by this polynomial is maximal, and L = K [ X ] / ( X 2 + 1 ) {\displaystyle L=K[X]/(X^{2}+1)} is an extension field of K which does contain an element whose square is −1 (namely the residue class of X). By iterating the above construction, one can construct a splitting field of any polynomial from K[X]. This is an extension field L of K in which the given polynomial splits into a product of linear factors. If p is any prime number and n is a positive integer, there is a unique (up to isomorphism) finite field G F ( p n ) = F p n {\displaystyle GF(p^{n})=\mathbb {F} _{p^{n}}} with pn elements; this is an extension field of the prime field GF ⁡ ( p ) = F p = Z / p Z {\displaystyle \operatorname {GF} (p)=\mathbb {F} _{p}=\mathbb {Z} /p\mathbb {Z} } with p elements. Given a field K, we can consider the field K(X) of all rational functions in the variable X with coefficients in K; the elements of K(X) are fractions of two polynomials over K, and indeed K(X) is the field of fractions of the polynomial ring K[X]. This field of rational functions is an extension field of K. This extension is infinite. Given a Riemann surface M, the set of all meromorphic functions defined on M is a field, denoted by C ( M ) . {\displaystyle \mathbb {C} (M).} It is a transcendental extension field of C {\displaystyle \mathbb {C} } if we identify every complex number with the corresponding constant function defined on M. More generally, given an algebraic variety V over some field K, the function field K(V), consisting of the rational functions defined on V, is an extension field of K. == Algebraic extension == An element x of a field extension L / K {\displaystyle L/K} is algebraic over K if it is a root of a nonzero polynomial with coefficients in K. 
For example, 2 {\displaystyle {\sqrt {2}}} is algebraic over the rational numbers, because it is a root of x 2 − 2. {\displaystyle x^{2}-2.} If an element x of L is algebraic over K, the monic polynomial with coefficients in K of lowest degree that has x as a root is called the minimal polynomial of x. This minimal polynomial is irreducible over K. An element s of L is algebraic over K if and only if the simple extension K(s) /K is a finite extension. In this case the degree of the extension equals the degree of the minimal polynomial, and a basis of the K-vector space K(s) consists of 1 , s , s 2 , … , s d − 1 , {\displaystyle 1,s,s^{2},\ldots ,s^{d-1},} where d is the degree of the minimal polynomial. The set of the elements of L that are algebraic over K forms a subextension, which is called the algebraic closure of K in L. This results from the preceding characterization: if s and t are algebraic, the extensions K(s) /K and K(s)(t) /K(s) are finite. Thus K(s, t) /K is also finite, as well as the subextensions K(s ± t) /K, K(st) /K and K(1/s) /K (if s ≠ 0). It follows that s ± t, st and 1/s are all algebraic. An algebraic extension L / K {\displaystyle L/K} is an extension such that every element of L is algebraic over K. Equivalently, an algebraic extension is an extension that is generated by algebraic elements. For example, Q ( 2 , 3 ) {\displaystyle \mathbb {Q} ({\sqrt {2}},{\sqrt {3}})} is an algebraic extension of Q {\displaystyle \mathbb {Q} } , because 2 {\displaystyle {\sqrt {2}}} and 3 {\displaystyle {\sqrt {3}}} are algebraic over Q . {\displaystyle \mathbb {Q} .} A simple extension is algebraic if and only if it is finite. This implies that an extension is algebraic if and only if it is the union of its finite subextensions, and that every finite extension is algebraic. Every field K has an algebraic closure, which is up to an isomorphism the largest extension field of K which is algebraic over K, and also the smallest extension field such that every polynomial with coefficients in K has a root in it. For example, C {\displaystyle \mathbb {C} } is an algebraic closure of R {\displaystyle \mathbb {R} } , but not an algebraic closure of Q {\displaystyle \mathbb {Q} } , as it is not algebraic over Q {\displaystyle \mathbb {Q} } (for example π is not algebraic over Q {\displaystyle \mathbb {Q} } ). == Transcendental extension == Given a field extension L / K {\displaystyle L/K} , a subset S of L is called algebraically independent over K if no non-trivial polynomial relation with coefficients in K exists among the elements of S. The largest cardinality of an algebraically independent set is called the transcendence degree of L/K. It is always possible to find a set S, algebraically independent over K, such that L/K(S) is algebraic. Such a set S is called a transcendence basis of L/K. All transcendence bases have the same cardinality, equal to the transcendence degree of the extension. An extension L / K {\displaystyle L/K} is said to be purely transcendental if and only if there exists a transcendence basis S of L / K {\displaystyle L/K} such that L = K(S). Such an extension has the property that all elements of L except those of K are transcendental over K. However, there are extensions with this property which are not purely transcendental; a class of such extensions takes the form L/K where both L and K are algebraically closed. If L/K is purely transcendental and S is a transcendence basis of the extension, it does not necessarily follow that L = K(S).
Conversely, even when one knows a transcendence basis, it may be difficult to decide whether the extension is purely transcendental, and if it is so, it may be difficult to find a transcendence basis S such that L = K(S). For example, consider the extension Q ( x , y ) / Q , {\displaystyle \mathbb {Q} (x,y)/\mathbb {Q} ,} where x {\displaystyle x} is transcendental over Q , {\displaystyle \mathbb {Q} ,} and y {\displaystyle y} is a root of the equation y 2 − x 3 = 0. {\displaystyle y^{2}-x^{3}=0.} Such an extension can be defined as Q ( X ) [ Y ] / ⟨ Y 2 − X 3 ⟩ , {\displaystyle \mathbb {Q} (X)[Y]/\langle Y^{2}-X^{3}\rangle ,} in which x {\displaystyle x} and y {\displaystyle y} are the equivalence classes of X {\displaystyle X} and Y . {\displaystyle Y.} Clearly, the singleton set { x } {\displaystyle \{x\}} is algebraically independent over Q {\displaystyle \mathbb {Q} } and the extension Q ( x , y ) / Q ( x ) {\displaystyle \mathbb {Q} (x,y)/\mathbb {Q} (x)} is algebraic; hence { x } {\displaystyle \{x\}} is a transcendence basis that does not generate the extension Q ( x , y ) / Q {\displaystyle \mathbb {Q} (x,y)/\mathbb {Q} } . Similarly, { y } {\displaystyle \{y\}} is a transcendence basis that does not generate the whole extension. However, the extension is purely transcendental since, if one sets t = y / x , {\displaystyle t=y/x,} one has x = t 2 {\displaystyle x=t^{2}} and y = t 3 , {\displaystyle y=t^{3},} and thus t {\displaystyle t} generates the whole extension. Purely transcendental extensions of an algebraically closed field occur as function fields of rational varieties. The problem of finding a rational parametrization of a rational variety is equivalent to the problem of finding a transcendence basis that generates the whole extension. == Normal, separable and Galois extensions == An algebraic extension L / K {\displaystyle L/K} is called normal if every irreducible polynomial in K[X] that has a root in L completely factors into linear factors over L. Every algebraic extension F/K admits a normal closure L, which is an extension field of F such that L / K {\displaystyle L/K} is normal and which is minimal with this property. An algebraic extension L / K {\displaystyle L/K} is called separable if the minimal polynomial of every element of L over K is separable, i.e., has no repeated roots in an algebraic closure of K. A Galois extension is a field extension that is both normal and separable. The primitive element theorem states that every finite separable extension has a primitive element (i.e. is simple). Given any field extension L / K {\displaystyle L/K} , we can consider its automorphism group Aut ( L / K ) {\displaystyle {\text{Aut}}(L/K)} , consisting of all field automorphisms α: L → L with α(x) = x for all x in K. When the extension is Galois this automorphism group is called the Galois group of the extension. Extensions whose Galois group is abelian are called abelian extensions. For a given field extension L / K {\displaystyle L/K} , one is often interested in the intermediate fields F (subfields of L that contain K). The significance of Galois extensions and Galois groups is that they allow a complete description of the intermediate fields: there is a bijection between the intermediate fields and the subgroups of the Galois group, described by the fundamental theorem of Galois theory. == Generalizations == Field extensions can be generalized to ring extensions which consist of a ring and one of its subrings.
A closer non-commutative analog is given by central simple algebras (CSAs) – ring extensions over a field, which are simple algebras (no non-trivial two-sided ideals, just as for a field) and where the center of the ring is exactly the field. For example, the only proper finite field extension of the real numbers is the complex numbers, while the quaternions are a central simple algebra over the reals, and all CSAs over the reals are Brauer equivalent to the reals or the quaternions. CSAs can be further generalized to Azumaya algebras, where the base field is replaced by a commutative local ring. == Extension of scalars == Given a field extension, one can "extend scalars" on associated algebraic objects. For example, given a real vector space, one can produce a complex vector space via complexification. In addition to vector spaces, one can perform extension of scalars for associative algebras defined over the field, such as polynomials or group algebras and the associated group representations. Extension of scalars of polynomials is often used implicitly, by just considering the coefficients as being elements of a larger field, but may also be considered more formally. Extension of scalars has numerous applications, as discussed in extension of scalars: applications. == See also == Field theory Glossary of field theory Tower of fields Primary extension Regular extension == References == Fraleigh, John B. (1976), A First Course In Abstract Algebra (2nd ed.), Reading: Addison-Wesley, ISBN 0-201-01984-1 Herstein, I. N. (1964), Topics In Algebra, Waltham: Blaisdell Publishing Company, ISBN 978-1114541016 Lang, Serge (2004), Algebra, Graduate Texts in Mathematics, vol. 211 (Corrected fourth printing, revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4 McCoy, Neal H. (1968), Introduction To Modern Algebra, Revised Edition, Boston: Allyn and Bacon, LCCN 68015225 == External links == "Extension of a field", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Wikipedia/Adjunction_(field_theory)
In group theory, a branch of mathematics, a torsion group or a periodic group is a group in which every element has finite order. The exponent of such a group, if it exists, is the least common multiple of the orders of the elements. For example, it follows from Lagrange's theorem that every finite group is periodic and it has an exponent that divides its order. == Infinite examples == Examples of infinite periodic groups include the additive group of the ring of polynomials over a finite field, and the quotient group of the rationals by the integers, as well as their direct summands, the Prüfer groups. Another example is the direct sum of all dihedral groups. None of these examples has a finite generating set. Explicit examples of finitely generated infinite periodic groups were constructed by Golod, based on joint work with Shafarevich (see Golod–Shafarevich theorem), and by Aleshin and Grigorchuk using automata. These groups have infinite exponent; examples with finite exponent are given for instance by Tarski monster groups constructed by Olshanskii. == Burnside's problem == Burnside's problem is a classical question that deals with the relationship between periodic groups and finite groups, when only finitely generated groups are considered: Does specifying an exponent force finiteness? The existence of infinite, finitely generated periodic groups as in the previous paragraph shows that the answer is "no" for an arbitrary exponent. Though much more is known about which exponents can occur for infinite finitely generated groups there are still some for which the problem is open. For some classes of groups, for instance linear groups, the answer to Burnside's problem restricted to the class is positive. == Mathematical logic == An interesting property of periodic groups is that the definition cannot be formalized in terms of first-order logic. This is because doing so would require an axiom of the form ∀ x , ( ( x = e ) ∨ ( x ∘ x = e ) ∨ ( ( x ∘ x ) ∘ x = e ) ∨ ⋯ ) , {\displaystyle \forall x,{\big (}(x=e)\lor (x\circ x=e)\lor ((x\circ x)\circ x=e)\lor \cdots {\big )},} which contains an infinite disjunction and is therefore inadmissible: every first-order formula is finite, and first-order logic permits quantification only over elements of the group, not over properties or arbitrary subsets. It is also not possible to get around this infinite disjunction by using an infinite set of axioms: the compactness theorem implies that no set of first-order formulae can characterize the periodic groups (a sketch of the argument is given below). == Related notions == The torsion subgroup of an abelian group A is the subgroup of A that consists of all elements that have finite order. A torsion abelian group is an abelian group in which every element has finite order. A torsion-free abelian group is an abelian group in which the identity element is the only element with finite order. == See also == Torsion (algebra) Jordan–Schur theorem == References == R. I. Grigorchuk, Degrees of growth of finitely generated groups and the theory of invariant means, Izv. Akad. Nauk SSSR Ser. Mat. 48:5 (1984), 939–985 (Russian).
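The compactness argument invoked above can be sketched explicitly (a standard model-theoretic argument, included here for illustration). Suppose a set T of first-order sentences had exactly the periodic groups as models, and add a fresh constant symbol c to the language:
\[
T' \;=\; T \,\cup\, \{\, c \neq e,\;\; c \circ c \neq e,\;\; (c \circ c) \circ c \neq e,\;\; \ldots \,\}.
\]
Any finite subset of T′ rules out the equations cⁿ = e only for n up to some bound N, and is therefore satisfied by the cyclic group Z/pZ with a prime p > N and c interpreted as a generator. By the compactness theorem, T′ has a model: a group satisfying T in which c has infinite order, contradicting periodicity.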
Wikipedia/Exponent_(group_theory)
In mathematics, the Hardy–Ramanujan–Littlewood circle method is a technique of analytic number theory. It is named for G. H. Hardy, S. Ramanujan, and J. E. Littlewood, who developed it in a series of papers on Waring's problem. == History == The initial idea is usually attributed to the work of Hardy with Srinivasa Ramanujan a few years earlier, in 1916 and 1917, on the asymptotics of the partition function. It was taken up by many other researchers, including Harold Davenport and I. M. Vinogradov, who modified the formulation slightly (moving from complex analysis to exponential sums), without changing the broad lines. Hundreds of papers followed, and as of 2022 the method still yields results. The method is the subject of the monograph Vaughan (1997) by R. C. Vaughan. == Outline == The goal is to prove asymptotic behavior of a series: to show that an ~ F(n) for some function F. This is done by taking the generating function of the series, then computing the residues about zero (essentially the Fourier coefficients). Technically, the generating function is scaled to have radius of convergence 1, so it has singularities on the unit circle – thus one cannot take the contour integral over the unit circle. The circle method is specifically how to compute these residues, by partitioning the circle into minor arcs (the bulk of the circle) and major arcs (small arcs containing the most significant singularities), and then bounding the behavior on the minor arcs. The key insight is that, in many cases of interest (such as theta functions), the singularities occur at the roots of unity, and the significance of the singularities is in the order of the Farey sequence. Thus one can investigate the most significant singularities, and, if fortunate, compute the integrals. === Setup === The circle in question was initially the unit circle in the complex plane. Assume that, for a sequence of complex numbers an, n = 0, 1, 2, 3, ..., we want some asymptotic information of the type an ~ F(n), where we have some heuristic reason to guess the form taken by F (an ansatz). We then write f ( z ) = ∑ a n z n {\displaystyle f(z)=\sum a_{n}z^{n}} , a power series generating function. The interesting cases are where f is then of radius of convergence equal to 1, and we suppose that the problem as posed has been modified to present this situation. === Residues === From that formulation, it follows directly from the residue theorem that I n = ∮ C f ( z ) z − ( n + 1 ) d z = 2 π i a n {\displaystyle I_{n}=\oint _{C}f(z)z^{-(n+1)}\,dz=2\pi ia_{n}} for integers n ≥ 0, where C is a circle of radius r and centred at 0, for any r with 0 < r < 1; in other words, I n {\displaystyle I_{n}} is a contour integral, integrated over the circle described traversed once anticlockwise. We would like to take r = 1 directly, that is, to use the unit circle contour. In the complex analysis formulation this is problematic, since the values of f may not be defined there. === Singularities on unit circle === The problem addressed by the circle method is to force the issue of taking r = 1, by a good understanding of the nature of the singularities f exhibits on the unit circle. The fundamental insight is the role played by the Farey sequence of rational numbers, or equivalently by the roots of unity: ζ = exp ⁡ ( 2 π i r s ) .
{\displaystyle \zeta \ =\exp \left({\frac {2\pi ir}{s}}\right).} Here the denominator s, assuming that ⁠r/s⁠ is in lowest terms, turns out to determine the relative importance of the singular behaviour of typical f near ζ. === Method === The Hardy–Littlewood circle method, for the complex-analytic formulation, can then be thus expressed. The contributions to the evaluation of In, as r → 1, should be treated in two ways, traditionally called major arcs and minor arcs. We divide the roots of unity ζ into two classes, according to whether s ≤ N or s > N, where N is a function of n that is ours to choose conveniently. The integral In is divided up into integrals each on some arc of the circle that is adjacent to ζ, of length a function of s (again, at our discretion). The arcs make up the whole circle; the sum of the integrals over the major arcs is to make up 2πiF(n) (realistically, this will happen up to a manageable remainder term). The sum of the integrals over the minor arcs is to be replaced by an upper bound, smaller in order than F(n). == Discussion == Stated boldly like this, it is not at all clear that this can be made to work. The insights involved are quite deep. One clear source is the theory of theta functions. === Waring's problem === In the context of Waring's problem, powers of theta functions are the generating functions for the sum of squares function. Their analytic behaviour is known in much more accurate detail than for the cubes, for example. It is the case, as the false-colour diagram indicates, that for a theta function the 'most important' point on the boundary circle is at z = 1; followed by z = −1, and then the two complex cube roots of unity at 7 o'clock and 11 o'clock. After that it is the fourth roots of unity i and −i that matter most. While nothing in this guarantees that the analytical method will work, it does explain the rationale of using a Farey series-type criterion on roots of unity. In the case of Waring's problem, one takes a sufficiently high power of the generating function to force the situation in which the singularities, organised into the so-called singular series, predominate. The less wasteful the estimates used on the rest, the finer the results. As Bryan Birch has put it, the method is inherently wasteful. That does not apply to the case of the partition function, which signalled the possibility that in a favourable situation the losses from estimates could be controlled. === Vinogradov trigonometric sums === Later, I. M. Vinogradov extended the technique, replacing the exponential sum formulation f(z) with a finite Fourier series, so that the relevant integral In is a Fourier coefficient. Vinogradov applied finite sums to Waring's problem in 1926, and the general trigonometric sum method became known as "the circle method of Hardy, Littlewood and Ramanujan, in the form of Vinogradov's trigonometric sums". Essentially all this does is to discard the whole 'tail' of the generating function, allowing the business of r in the limiting operation to be set directly to the value 1. == Applications == Refinements of the method have allowed results to be proved about the solutions of homogeneous Diophantine equations, as long as the number of variables k is large relative to the degree d (see Birch's theorem for example). This turns out to be a contribution to the Hasse principle, capable of yielding quantitative information. If d is fixed and k is small, other methods are required, and indeed the Hasse principle tends to fail. 
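To make the setup concrete, here is the shape the method takes for Waring's problem in the exponential-sum formulation sketched above (standard textbook notation, with e(x) = e^{2πix}): by orthogonality of the characters α ↦ e(αm),
\[
r_{k,s}(n) \;=\; \int_{0}^{1} f(\alpha)^{s}\, e(-n\alpha)\, d\alpha,
\qquad
f(\alpha) \;=\; \sum_{0 \le m \le n^{1/k}} e(\alpha m^{k}),
\]
where r_{k,s}(n) counts the representations of n as a sum of s k-th powers. The major arcs are small neighbourhoods of the rationals a/q with q ≤ N, on which f(α) can be approximated accurately; the minor arcs are the rest of [0, 1], where one only seeks an upper bound.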
== Rademacher's contour == In the special case when the circle method is applied to find the coefficients of a modular form of negative weight, Hans Rademacher found a modification of the contour that makes the series arising from the circle method converge to the exact result. To describe his contour, it is convenient to replace the unit circle by the upper half plane, by making the substitution z = exp(2πiτ), so that the contour integral becomes an integral from τ = i to τ = 1 + i. (The number i could be replaced by any number on the upper half-plane, but i is the most convenient choice.) Rademacher's contour is (more or less) given by the boundaries of all the Ford circles from 0 to 1, as shown in the diagram. The replacement of the line from i to 1 + i by the boundaries of these circles is a non-trivial limiting process, which can be justified for modular forms that have negative weight, and with more care can also be justified for non-constant terms for the case of weight 0 (in other words modular functions). == References == Apostol, Tom M. (1990), Modular functions and Dirichlet series in number theory (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-97127-8 Mardzhanishvili, K. K. (1985), "Ivan Matveevich Vinogradov: a brief outline of his life and works", I. M. Vinogradov, Selected Works, Berlin. Rademacher, Hans (1943), "On the expansion of the partition function in a series", Annals of Mathematics, Second Series, 44 (3): 416–422, doi:10.2307/1968973, JSTOR 1968973, MR 0008618 Vaughan, R. C. (1997), The Hardy–Littlewood Method, Cambridge Tracts in Mathematics, vol. 125 (2nd ed.), Cambridge University Press, ISBN 978-0-521-57347-4 == Further reading == Wang, Yuan (1991). Diophantine equations and inequalities in algebraic number fields. Berlin: Springer-Verlag. doi:10.1007/978-3-642-58171-7. ISBN 9783642634895. OCLC 851809136. == External links == Terence Tao, Heuristic limitations of the circle method, a blog post in 2012
Wikipedia/Hardy–Ramanujan–Littlewood_circle_method
In mathematics, the Weierstrass elliptic functions are elliptic functions that take a particularly simple form. They are named for Karl Weierstrass. This class of functions is also referred to as ℘-functions and they are usually denoted by the symbol ℘, a uniquely fancy script p. They play an important role in the theory of elliptic functions, i.e., meromorphic functions that are doubly periodic. A ℘-function together with its derivative can be used to parameterize elliptic curves and they generate the field of elliptic functions with respect to a given period lattice. == Motivation == A cubic of the form C g 2 , g 3 C = { ( x , y ) ∈ C 2 : y 2 = 4 x 3 − g 2 x − g 3 } {\displaystyle C_{g_{2},g_{3}}^{\mathbb {C} }=\{(x,y)\in \mathbb {C} ^{2}:y^{2}=4x^{3}-g_{2}x-g_{3}\}} , where g 2 , g 3 ∈ C {\displaystyle g_{2},g_{3}\in \mathbb {C} } are complex numbers with g 2 3 − 27 g 3 2 ≠ 0 {\displaystyle g_{2}^{3}-27g_{3}^{2}\neq 0} , cannot be rationally parameterized. Yet one still wants to find a way to parameterize it. For the quadric K = { ( x , y ) ∈ R 2 : x 2 + y 2 = 1 } {\displaystyle K=\left\{(x,y)\in \mathbb {R} ^{2}:x^{2}+y^{2}=1\right\}} ; the unit circle, there exists a (non-rational) parameterization using the sine function and its derivative the cosine function: ψ : R / 2 π Z → K , t ↦ ( sin ⁡ t , cos ⁡ t ) . {\displaystyle \psi :\mathbb {R} /2\pi \mathbb {Z} \to K,\quad t\mapsto (\sin t,\cos t).} Because of the periodicity of the sine and cosine R / 2 π Z {\displaystyle \mathbb {R} /2\pi \mathbb {Z} } is chosen to be the domain, so the function is bijective. In a similar way one can get a parameterization of C g 2 , g 3 C {\displaystyle C_{g_{2},g_{3}}^{\mathbb {C} }} by means of the doubly periodic ℘ {\displaystyle \wp } -function (see in the section "Relation to elliptic curves"). This parameterization has the domain C / Λ {\displaystyle \mathbb {C} /\Lambda } , which is topologically equivalent to a torus. There is another analogy to the trigonometric functions. Consider the integral function a ( x ) = ∫ 0 x d y 1 − y 2 . {\displaystyle a(x)=\int _{0}^{x}{\frac {dy}{\sqrt {1-y^{2}}}}.} It can be simplified by substituting y = sin ⁡ t {\displaystyle y=\sin t} and s = arcsin ⁡ x {\displaystyle s=\arcsin x} : a ( x ) = ∫ 0 s d t = s = arcsin ⁡ x . {\displaystyle a(x)=\int _{0}^{s}dt=s=\arcsin x.} That means a − 1 ( x ) = sin ⁡ x {\displaystyle a^{-1}(x)=\sin x} . So the sine function is an inverse function of an integral function. Elliptic functions are the inverse functions of elliptic integrals. In particular, let: u ( z ) = ∫ z ∞ d s 4 s 3 − g 2 s − g 3 . {\displaystyle u(z)=\int _{z}^{\infty }{\frac {ds}{\sqrt {4s^{3}-g_{2}s-g_{3}}}}.} Then the extension of u − 1 {\displaystyle u^{-1}} to the complex plane equals the ℘ {\displaystyle \wp } -function. This invertibility is used in complex analysis to provide a solution to certain nonlinear differential equations satisfying the Painlevé property, i.e., those equations that admit poles as their only movable singularities. == Definition == Let ω 1 , ω 2 ∈ C {\displaystyle \omega _{1},\omega _{2}\in \mathbb {C} } be two complex numbers that are linearly independent over R {\displaystyle \mathbb {R} } and let Λ := Z ω 1 + Z ω 2 := { m ω 1 + n ω 2 : m , n ∈ Z } {\displaystyle \Lambda :=\mathbb {Z} \omega _{1}+\mathbb {Z} \omega _{2}:=\{m\omega _{1}+n\omega _{2}:m,n\in \mathbb {Z} \}} be the period lattice generated by those numbers. 
Then the ℘ {\displaystyle \wp } -function is defined as follows: ℘ ( z , ω 1 , ω 2 ) := ℘ ( z ) = 1 z 2 + ∑ λ ∈ Λ ∖ { 0 } ( 1 ( z − λ ) 2 − 1 λ 2 ) . {\displaystyle \wp (z,\omega _{1},\omega _{2}):=\wp (z)={\frac {1}{z^{2}}}+\sum _{\lambda \in \Lambda \setminus \{0\}}\left({\frac {1}{(z-\lambda )^{2}}}-{\frac {1}{\lambda ^{2}}}\right).} This series converges locally uniformly absolutely in C ∖ Λ {\displaystyle \mathbb {C} \setminus \Lambda } . It is common to use 1 {\displaystyle 1} and τ {\displaystyle \tau } in the upper half-plane H := { z ∈ C : Im ⁡ ( z ) > 0 } {\displaystyle \mathbb {H} :=\{z\in \mathbb {C} :\operatorname {Im} (z)>0\}} as generators of the lattice. Dividing by ω 1 {\textstyle \omega _{1}} maps the lattice Z ω 1 + Z ω 2 {\displaystyle \mathbb {Z} \omega _{1}+\mathbb {Z} \omega _{2}} isomorphically onto the lattice Z + Z τ {\displaystyle \mathbb {Z} +\mathbb {Z} \tau } with τ = ω 2 ω 1 {\textstyle \tau ={\tfrac {\omega _{2}}{\omega _{1}}}} . Because − τ {\displaystyle -\tau } can be substituted for τ {\displaystyle \tau } , without loss of generality we can assume τ ∈ H {\displaystyle \tau \in \mathbb {H} } , and then define ℘ ( z , τ ) := ℘ ( z , 1 , τ ) {\displaystyle \wp (z,\tau ):=\wp (z,1,\tau )} . With that definition, we have ℘ ( z , ω 1 , ω 2 ) = ω 1 − 2 ℘ ( z / ω 1 , ω 2 / ω 1 ) {\displaystyle \wp (z,\omega _{1},\omega _{2})=\omega _{1}^{-2}\wp (z/\omega _{1},\omega _{2}/\omega _{1})} . == Properties == ℘ {\displaystyle \wp } is a meromorphic function with a pole of order 2 at each period λ {\displaystyle \lambda } in Λ {\displaystyle \Lambda } . ℘ {\displaystyle \wp } is a homogeneous function in that: ℘ ( λ z , λ ω 1 , λ ω 2 ) = λ − 2 ℘ ( z , ω 1 , ω 2 ) . {\displaystyle \wp (\lambda z,\lambda \omega _{1},\lambda \omega _{2})=\lambda ^{-2}\wp (z,\omega _{1},\omega _{2}).} ℘ {\displaystyle \wp } is an even function. That means ℘ ( z ) = ℘ ( − z ) {\displaystyle \wp (z)=\wp (-z)} for all z ∈ C ∖ Λ {\displaystyle z\in \mathbb {C} \setminus \Lambda } , which can be seen in the following way: ℘ ( − z ) = 1 ( − z ) 2 + ∑ λ ∈ Λ ∖ { 0 } ( 1 ( − z − λ ) 2 − 1 λ 2 ) = 1 z 2 + ∑ λ ∈ Λ ∖ { 0 } ( 1 ( z + λ ) 2 − 1 λ 2 ) = 1 z 2 + ∑ λ ∈ Λ ∖ { 0 } ( 1 ( z − λ ) 2 − 1 λ 2 ) = ℘ ( z ) . {\displaystyle {\begin{aligned}\wp (-z)&={\frac {1}{(-z)^{2}}}+\sum _{\lambda \in \Lambda \setminus \{0\}}\left({\frac {1}{(-z-\lambda )^{2}}}-{\frac {1}{\lambda ^{2}}}\right)\\&={\frac {1}{z^{2}}}+\sum _{\lambda \in \Lambda \setminus \{0\}}\left({\frac {1}{(z+\lambda )^{2}}}-{\frac {1}{\lambda ^{2}}}\right)\\&={\frac {1}{z^{2}}}+\sum _{\lambda \in \Lambda \setminus \{0\}}\left({\frac {1}{(z-\lambda )^{2}}}-{\frac {1}{\lambda ^{2}}}\right)=\wp (z).\end{aligned}}} The second last equality holds because { − λ : λ ∈ Λ } = Λ {\displaystyle \{-\lambda :\lambda \in \Lambda \}=\Lambda } . Since the sum converges absolutely this rearrangement does not change the limit. The derivative of ℘ {\displaystyle \wp } is given by: ℘ ′ ( z ) = − 2 ∑ λ ∈ Λ 1 ( z − λ ) 3 . {\displaystyle \wp '(z)=-2\sum _{\lambda \in \Lambda }{\frac {1}{(z-\lambda )^{3}}}.} ℘ {\displaystyle \wp } and ℘ ′ {\displaystyle \wp '} are doubly periodic with the periods ω 1 {\displaystyle \omega _{1}} and ω 2 {\displaystyle \omega _{2}} . This means: ℘ ( z + ω 1 ) = ℘ ( z ) = ℘ ( z + ω 2 ) , and ℘ ′ ( z + ω 1 ) = ℘ ′ ( z ) = ℘ ′ ( z + ω 2 ) .
{\displaystyle {\begin{aligned}\wp (z+\omega _{1})&=\wp (z)=\wp (z+\omega _{2}),\ {\textrm {and}}\\[3mu]\wp '(z+\omega _{1})&=\wp '(z)=\wp '(z+\omega _{2}).\end{aligned}}} It follows that ℘ ( z + λ ) = ℘ ( z ) {\displaystyle \wp (z+\lambda )=\wp (z)} and ℘ ′ ( z + λ ) = ℘ ′ ( z ) {\displaystyle \wp '(z+\lambda )=\wp '(z)} for all λ ∈ Λ {\displaystyle \lambda \in \Lambda } . == Laurent expansion == Let r := min { | λ | : 0 ≠ λ ∈ Λ } {\displaystyle r:=\min\{{|\lambda }|:0\neq \lambda \in \Lambda \}} . Then for 0 < | z | < r {\displaystyle 0<|z|<r} the ℘ {\displaystyle \wp } -function has the following Laurent expansion ℘ ( z ) = 1 z 2 + ∑ n = 1 ∞ ( 2 n + 1 ) G 2 n + 2 z 2 n {\displaystyle \wp (z)={\frac {1}{z^{2}}}+\sum _{n=1}^{\infty }(2n+1)G_{2n+2}z^{2n}} where G n = ∑ 0 ≠ λ ∈ Λ λ − n {\displaystyle G_{n}=\sum _{0\neq \lambda \in \Lambda }\lambda ^{-n}} for n ≥ 3 {\displaystyle n\geq 3} are so called Eisenstein series. == Differential equation == Set g 2 = 60 G 4 {\displaystyle g_{2}=60G_{4}} and g 3 = 140 G 6 {\displaystyle g_{3}=140G_{6}} . Then the ℘ {\displaystyle \wp } -function satisfies the differential equation ℘ ′ 2 ( z ) = 4 ℘ 3 ( z ) − g 2 ℘ ( z ) − g 3 . {\displaystyle \wp '^{2}(z)=4\wp ^{3}(z)-g_{2}\wp (z)-g_{3}.} This relation can be verified by forming a linear combination of powers of ℘ {\displaystyle \wp } and ℘ ′ {\displaystyle \wp '} to eliminate the pole at z = 0 {\displaystyle z=0} . This yields an entire elliptic function that has to be constant by Liouville's theorem. === Invariants === The coefficients of the above differential equation g 2 {\displaystyle g_{2}} and g 3 {\displaystyle g_{3}} are known as the invariants. Because they depend on the lattice Λ {\displaystyle \Lambda } they can be viewed as functions in ω 1 {\displaystyle \omega _{1}} and ω 2 {\displaystyle \omega _{2}} . The series expansion suggests that g 2 {\displaystyle g_{2}} and g 3 {\displaystyle g_{3}} are homogeneous functions of degree − 4 {\displaystyle -4} and − 6 {\displaystyle -6} . That is g 2 ( λ ω 1 , λ ω 2 ) = λ − 4 g 2 ( ω 1 , ω 2 ) {\displaystyle g_{2}(\lambda \omega _{1},\lambda \omega _{2})=\lambda ^{-4}g_{2}(\omega _{1},\omega _{2})} g 3 ( λ ω 1 , λ ω 2 ) = λ − 6 g 3 ( ω 1 , ω 2 ) {\displaystyle g_{3}(\lambda \omega _{1},\lambda \omega _{2})=\lambda ^{-6}g_{3}(\omega _{1},\omega _{2})} for λ ≠ 0 {\displaystyle \lambda \neq 0} . If ω 1 {\displaystyle \omega _{1}} and ω 2 {\displaystyle \omega _{2}} are chosen in such a way that Im ⁡ ( ω 2 ω 1 ) > 0 {\displaystyle \operatorname {Im} \left({\tfrac {\omega _{2}}{\omega _{1}}}\right)>0} , g 2 {\displaystyle g_{2}} and g 3 {\displaystyle g_{3}} can be interpreted as functions on the upper half-plane H := { z ∈ C : Im ⁡ ( z ) > 0 } {\displaystyle \mathbb {H} :=\{z\in \mathbb {C} :\operatorname {Im} (z)>0\}} . Let τ = ω 2 ω 1 {\displaystyle \tau ={\tfrac {\omega _{2}}{\omega _{1}}}} . One has: g 2 ( 1 , τ ) = ω 1 4 g 2 ( ω 1 , ω 2 ) , {\displaystyle g_{2}(1,\tau )=\omega _{1}^{4}g_{2}(\omega _{1},\omega _{2}),} g 3 ( 1 , τ ) = ω 1 6 g 3 ( ω 1 , ω 2 ) . {\displaystyle g_{3}(1,\tau )=\omega _{1}^{6}g_{3}(\omega _{1},\omega _{2}).} That means g2 and g3 are only scaled by doing this. Set g 2 ( τ ) := g 2 ( 1 , τ ) {\displaystyle g_{2}(\tau ):=g_{2}(1,\tau )} and g 3 ( τ ) := g 3 ( 1 , τ ) . {\displaystyle g_{3}(\tau ):=g_{3}(1,\tau ).} As functions of τ ∈ H {\displaystyle \tau \in \mathbb {H} } , g 2 {\displaystyle g_{2}} and g 3 {\displaystyle g_{3}} are so called modular forms. 
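The elimination of the pole mentioned in the differential-equation section above can be carried out explicitly from the Laurent expansion (a routine computation): from
\[
\wp(z) = z^{-2} + 3G_{4}z^{2} + 5G_{6}z^{4} + O(z^{6}), \qquad
\wp'(z) = -2z^{-3} + 6G_{4}z + 20G_{6}z^{3} + O(z^{5}),
\]
one computes
\[
\wp'(z)^{2} = 4z^{-6} - 24G_{4}z^{-2} - 80G_{6} + O(z^{2}), \qquad
4\wp(z)^{3} = 4z^{-6} + 36G_{4}z^{-2} + 60G_{6} + O(z^{2}),
\]
so that ℘′(z)² − 4℘(z)³ + 60G₄℘(z) + 140G₆ = O(z²). This difference is an elliptic function with no poles which vanishes at z = 0, hence it is identically zero by Liouville's theorem, giving the stated equation with g₂ = 60G₄ and g₃ = 140G₆.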
The Fourier series for g 2 {\displaystyle g_{2}} and g 3 {\displaystyle g_{3}} are given as follows: g 2 ( τ ) = 4 3 π 4 [ 1 + 240 ∑ k = 1 ∞ σ 3 ( k ) q 2 k ] {\displaystyle g_{2}(\tau )={\frac {4}{3}}\pi ^{4}\left[1+240\sum _{k=1}^{\infty }\sigma _{3}(k)q^{2k}\right]} g 3 ( τ ) = 8 27 π 6 [ 1 − 504 ∑ k = 1 ∞ σ 5 ( k ) q 2 k ] {\displaystyle g_{3}(\tau )={\frac {8}{27}}\pi ^{6}\left[1-504\sum _{k=1}^{\infty }\sigma _{5}(k)q^{2k}\right]} where σ m ( k ) := ∑ d ∣ k d m {\displaystyle \sigma _{m}(k):=\sum _{d\mid {k}}d^{m}} is the divisor function and q = e π i τ {\displaystyle q=e^{\pi i\tau }} is the nome. === Modular discriminant === The modular discriminant Δ {\displaystyle \Delta } is defined by Δ = g 2 3 − 27 g 3 2 . {\displaystyle \Delta =g_{2}^{3}-27g_{3}^{2}.} Up to the constant factor 16, this is the discriminant of the cubic polynomial 4 t 3 − g 2 t − g 3 {\displaystyle 4t^{3}-g_{2}t-g_{3}} on the right-hand side of the differential equation ℘ ′ 2 ( z ) = 4 ℘ 3 ( z ) − g 2 ℘ ( z ) − g 3 {\displaystyle \wp '^{2}(z)=4\wp ^{3}(z)-g_{2}\wp (z)-g_{3}} . The discriminant is a modular form of weight 12 {\displaystyle 12} . That is, under the action of the modular group, it transforms as Δ ( a τ + b c τ + d ) = ( c τ + d ) 12 Δ ( τ ) {\displaystyle \Delta \left({\frac {a\tau +b}{c\tau +d}}\right)=\left(c\tau +d\right)^{12}\Delta (\tau )} where a , b , c , d ∈ Z {\displaystyle a,b,c,d\in \mathbb {Z} } with a d − b c = 1 {\displaystyle ad-bc=1} . Note that Δ = ( 2 π ) 12 η 24 {\displaystyle \Delta =(2\pi )^{12}\eta ^{24}} where η {\displaystyle \eta } is the Dedekind eta function. For the Fourier coefficients of Δ {\displaystyle \Delta } , see Ramanujan tau function. === The constants e1, e2 and e3 === e 1 {\displaystyle e_{1}} , e 2 {\displaystyle e_{2}} and e 3 {\displaystyle e_{3}} are usually used to denote the values of the ℘ {\displaystyle \wp } -function at the half-periods. e 1 ≡ ℘ ( ω 1 2 ) {\displaystyle e_{1}\equiv \wp \left({\frac {\omega _{1}}{2}}\right)} e 2 ≡ ℘ ( ω 2 2 ) {\displaystyle e_{2}\equiv \wp \left({\frac {\omega _{2}}{2}}\right)} e 3 ≡ ℘ ( ω 1 + ω 2 2 ) {\displaystyle e_{3}\equiv \wp \left({\frac {\omega _{1}+\omega _{2}}{2}}\right)} They are pairwise distinct and, as an unordered set, depend only on the lattice Λ {\displaystyle \Lambda } and not on its generators (the individual labels depend on the chosen basis). e 1 {\displaystyle e_{1}} , e 2 {\displaystyle e_{2}} and e 3 {\displaystyle e_{3}} are the roots of the cubic polynomial 4 t 3 − g 2 t − g 3 {\displaystyle 4t^{3}-g_{2}t-g_{3}} and are related by the equation: e 1 + e 2 + e 3 = 0. {\displaystyle e_{1}+e_{2}+e_{3}=0.} Because those roots are distinct the discriminant Δ {\displaystyle \Delta } does not vanish on the upper half-plane. Now we can rewrite the differential equation: ℘ ′ 2 ( z ) = 4 ( ℘ ( z ) − e 1 ) ( ℘ ( z ) − e 2 ) ( ℘ ( z ) − e 3 ) . {\displaystyle \wp '^{2}(z)=4(\wp (z)-e_{1})(\wp (z)-e_{2})(\wp (z)-e_{3}).} That means the half-periods are zeros of ℘ ′ {\displaystyle \wp '} . The invariants g 2 {\displaystyle g_{2}} and g 3 {\displaystyle g_{3}} can be expressed in terms of these constants in the following way: g 2 = − 4 ( e 1 e 2 + e 1 e 3 + e 2 e 3 ) {\displaystyle g_{2}=-4(e_{1}e_{2}+e_{1}e_{3}+e_{2}e_{3})} g 3 = 4 e 1 e 2 e 3 {\displaystyle g_{3}=4e_{1}e_{2}e_{3}} e 1 {\displaystyle e_{1}} , e 2 {\displaystyle e_{2}} and e 3 {\displaystyle e_{3}} are related to the modular lambda function: λ ( τ ) = e 3 − e 2 e 1 − e 2 , τ = ω 2 ω 1 . 
{\displaystyle \lambda (\tau )={\frac {e_{3}-e_{2}}{e_{1}-e_{2}}},\quad \tau ={\frac {\omega _{2}}{\omega _{1}}}.} == Relation to Jacobi's elliptic functions == For numerical work, it is often convenient to calculate the Weierstrass elliptic function in terms of Jacobi's elliptic functions. The basic relations are: ℘ ( z ) = e 3 + e 1 − e 3 sn 2 ⁡ w = e 2 + ( e 1 − e 3 ) dn 2 ⁡ w sn 2 ⁡ w = e 1 + ( e 1 − e 3 ) cn 2 ⁡ w sn 2 ⁡ w {\displaystyle \wp (z)=e_{3}+{\frac {e_{1}-e_{3}}{\operatorname {sn} ^{2}w}}=e_{2}+(e_{1}-e_{3}){\frac {\operatorname {dn} ^{2}w}{\operatorname {sn} ^{2}w}}=e_{1}+(e_{1}-e_{3}){\frac {\operatorname {cn} ^{2}w}{\operatorname {sn} ^{2}w}}} where e 1 , e 2 {\displaystyle e_{1},e_{2}} and e 3 {\displaystyle e_{3}} are the three roots described above and where the modulus k of the Jacobi functions equals k = e 2 − e 3 e 1 − e 3 {\displaystyle k={\sqrt {\frac {e_{2}-e_{3}}{e_{1}-e_{3}}}}} and their argument w equals w = z e 1 − e 3 . {\displaystyle w=z{\sqrt {e_{1}-e_{3}}}.} == Relation to Jacobi's theta functions == The function ℘ ( z , τ ) = ℘ ( z , 1 , τ ) {\displaystyle \wp (z,\tau )=\wp (z,1,\tau )} , with τ = ω 2 / ω 1 {\displaystyle \tau =\omega _{2}/\omega _{1}} , can be represented by Jacobi's theta functions: ℘ ( z , τ ) = ( π θ 2 ( 0 , q ) θ 3 ( 0 , q ) θ 4 ( π z , q ) θ 1 ( π z , q ) ) 2 − π 2 3 ( θ 2 4 ( 0 , q ) + θ 3 4 ( 0 , q ) ) {\displaystyle \wp (z,\tau )=\left(\pi \theta _{2}(0,q)\theta _{3}(0,q){\frac {\theta _{4}(\pi z,q)}{\theta _{1}(\pi z,q)}}\right)^{2}-{\frac {\pi ^{2}}{3}}\left(\theta _{2}^{4}(0,q)+\theta _{3}^{4}(0,q)\right)} where q = e π i τ {\displaystyle q=e^{\pi i\tau }} is the nome and τ {\displaystyle \tau } is the period ratio ( τ ∈ H ) {\displaystyle (\tau \in \mathbb {H} )} . This also provides a very rapid algorithm for computing ℘ ( z , τ ) {\displaystyle \wp (z,\tau )} . == Relation to elliptic curves == Consider the embedding of the cubic curve in the complex projective plane C ¯ g 2 , g 3 C = { ( x , y ) ∈ C 2 : y 2 = 4 x 3 − g 2 x − g 3 } ∪ { O } ⊂ C 2 ∪ P 1 ( C ) = P 2 ( C ) . {\displaystyle {\bar {C}}_{g_{2},g_{3}}^{\mathbb {C} }=\{(x,y)\in \mathbb {C} ^{2}:y^{2}=4x^{3}-g_{2}x-g_{3}\}\cup \{O\}\subset \mathbb {C} ^{2}\cup \mathbb {P} _{1}(\mathbb {C} )=\mathbb {P} _{2}(\mathbb {C} ).} where O {\displaystyle O} is a point lying on the line at infinity P 1 ( C ) {\displaystyle \mathbb {P} _{1}(\mathbb {C} )} . If Δ ≠ 0 {\displaystyle \Delta \neq 0} , this cubic has no rational parameterization; in this case it is also called an elliptic curve. Nevertheless, there is a parameterization in homogeneous coordinates that uses the ℘ {\displaystyle \wp } -function and its derivative ℘ ′ {\displaystyle \wp '} : φ ( ℘ , ℘ ′ ) : C / Λ → C ¯ g 2 , g 3 C , z ↦ { [ ℘ ( z ) : ℘ ′ ( z ) : 1 ] z ∉ Λ [ 0 : 1 : 0 ] z ∈ Λ {\displaystyle \varphi (\wp ,\wp '):\mathbb {C} /\Lambda \to {\bar {C}}_{g_{2},g_{3}}^{\mathbb {C} },\quad z\mapsto {\begin{cases}\left[\wp (z):\wp '(z):1\right]&z\notin \Lambda \\\left[0:1:0\right]\quad &z\in \Lambda \end{cases}}} Now the map φ {\displaystyle \varphi } is bijective and parameterizes the elliptic curve C ¯ g 2 , g 3 C {\displaystyle {\bar {C}}_{g_{2},g_{3}}^{\mathbb {C} }} . C / Λ {\displaystyle \mathbb {C} /\Lambda } is an abelian group and a topological space, equipped with the quotient topology. 
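The theta representation above is much better suited to computation than the lattice sums, since the theta series converge extremely fast. The following Python sketch (illustrative helper names, not a library API) evaluates ℘(z, τ) from the theta quotient, takes g2(τ) and g3(τ) from the Fourier expansions given earlier, and checks the differential equation, approximating ℘′ by a central difference.

```python
import cmath
from math import pi

def thetas(v, q, terms=30):
    """Jacobi theta functions theta_1..theta_4(v, q) via their defining
    series (standard series conventions assumed)."""
    t1 = 2 * sum((-1)**n * q**((n + 0.5)**2) * cmath.sin((2*n + 1) * v) for n in range(terms))
    t2 = 2 * sum(q**((n + 0.5)**2) * cmath.cos((2*n + 1) * v) for n in range(terms))
    t3 = 1 + 2 * sum(q**(n*n) * cmath.cos(2*n*v) for n in range(1, terms))
    t4 = 1 + 2 * sum((-1)**n * q**(n*n) * cmath.cos(2*n*v) for n in range(1, terms))
    return t1, t2, t3, t4

def wp_theta(z, tau):
    """p(z, tau) via the theta-quotient formula from the text."""
    q = cmath.exp(1j * pi * tau)              # nome
    _, t2, t3, _ = thetas(0, q)
    s1, _, _, s4 = thetas(pi * z, q)
    return (pi * t2 * t3 * s4 / s1)**2 - (pi**2 / 3) * (t2**4 + t3**4)

def invariants(tau, terms=40):
    """g2(tau), g3(tau) from their Fourier expansions, q = exp(pi*i*tau)."""
    sigma = lambda m, k: sum(d**m for d in range(1, k + 1) if k % d == 0)
    q2 = cmath.exp(2j * pi * tau)             # this is q^2
    g2 = (4/3) * pi**4 * (1 + 240 * sum(sigma(3, k) * q2**k for k in range(1, terms)))
    g3 = (8/27) * pi**6 * (1 - 504 * sum(sigma(5, k) * q2**k for k in range(1, terms)))
    return g2, g3

tau, z, h = 0.21 + 1.1j, 0.17 + 0.32j, 1e-5
g2, g3 = invariants(tau)
p = wp_theta(z, tau)
dp = (wp_theta(z + h, tau) - wp_theta(z - h, tau)) / (2 * h)  # central difference
print(abs(dp**2 - (4 * p**3 - g2 * p - g3)))  # tiny residual (finite-difference limited)
```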
It can be shown that every Weierstrass cubic arises in this way. That is to say that for every pair g 2 , g 3 ∈ C {\displaystyle g_{2},g_{3}\in \mathbb {C} } with Δ = g 2 3 − 27 g 3 2 ≠ 0 {\displaystyle \Delta =g_{2}^{3}-27g_{3}^{2}\neq 0} there exists a lattice Z ω 1 + Z ω 2 {\displaystyle \mathbb {Z} \omega _{1}+\mathbb {Z} \omega _{2}} , such that g 2 = g 2 ( ω 1 , ω 2 ) {\displaystyle g_{2}=g_{2}(\omega _{1},\omega _{2})} and g 3 = g 3 ( ω 1 , ω 2 ) {\displaystyle g_{3}=g_{3}(\omega _{1},\omega _{2})} . The statement that every elliptic curve over Q {\displaystyle \mathbb {Q} } can be parameterized by modular functions is known as the modularity theorem. This is an important theorem in number theory. It was a key ingredient of Andrew Wiles' proof (1995) of Fermat's Last Theorem. == Addition theorems == Let z , w ∈ C {\displaystyle z,w\in \mathbb {C} } , so that z , w , z + w , z − w ∉ Λ {\displaystyle z,w,z+w,z-w\notin \Lambda } . Then one has: ℘ ( z + w ) = 1 4 [ ℘ ′ ( z ) − ℘ ′ ( w ) ℘ ( z ) − ℘ ( w ) ] 2 − ℘ ( z ) − ℘ ( w ) . {\displaystyle \wp (z+w)={\frac {1}{4}}\left[{\frac {\wp '(z)-\wp '(w)}{\wp (z)-\wp (w)}}\right]^{2}-\wp (z)-\wp (w).} As well as the duplication formula: ℘ ( 2 z ) = 1 4 [ ℘ ″ ( z ) ℘ ′ ( z ) ] 2 − 2 ℘ ( z ) . {\displaystyle \wp (2z)={\frac {1}{4}}\left[{\frac {\wp ''(z)}{\wp '(z)}}\right]^{2}-2\wp (z).} ==== Proofs ==== 1. These formulas have a geometric interpretation. If one looks at the elliptic curve C g 2 , g 3 C {\displaystyle C_{g_{2},g_{3}}^{\mathbb {C} }} a line λ = { ( x , y ) ∈ C 2 : y = m x + q } {\displaystyle \lambda =\{(x,y)\in \mathbb {C} ^{2}:y=mx+q\}} intersects it in three points: C g 2 , g 3 C ∩ λ = { P , Q , R } {\displaystyle C_{g_{2},g_{3}}^{\mathbb {C} }\cap \lambda =\{P,Q,R\}} . Since these points belong to the elliptic curve, and since the three intersection points of the curve with a line correspond to arguments summing to 0 {\displaystyle 0} modulo Λ {\displaystyle \Lambda } , they can be labeled as P = ( ℘ ( u ) , ℘ ′ ( u ) ) Q = ( ℘ ( v ) , ℘ ′ ( v ) ) {\displaystyle P=(\wp (u),\wp '(u))\quad Q=(\wp (v),\wp '(v))\quad } R = ( ℘ ( u + v ) , − ℘ ′ ( u + v ) ) {\displaystyle R=(\wp (u+v),-\wp '(u+v))} (using that ℘ {\displaystyle \wp } is even and ℘ ′ {\displaystyle \wp '} is odd), with u , v , u + v ∉ Λ {\displaystyle u,v,u+v\notin \Lambda } . From the formula of a secant line we have m = y P − y Q x P − x Q = ℘ ′ ( u ) − ℘ ′ ( v ) ℘ ( u ) − ℘ ( v ) {\displaystyle m={\frac {y_{P}-y_{Q}}{x_{P}-x_{Q}}}={\frac {\wp '(u)-\wp '(v)}{\wp (u)-\wp (v)}}} substituting y = m x + q {\displaystyle y=mx+q} into the equation of the curve we have the equation ( m x + q ) 2 = 4 x 3 − g 2 x − g 3 {\displaystyle (mx+q)^{2}=4x^{3}-g_{2}x-g_{3}} which becomes 4 x 3 − m 2 x 2 − ( 2 m q + g 2 ) x − g 3 − q 2 = 0 {\displaystyle 4x^{3}-m^{2}x^{2}-(2mq+g_{2})x-g_{3}-q^{2}=0} using Vieta's formulas one obtains: x P + x Q + x R = m 2 4 {\displaystyle x_{P}+x_{Q}+x_{R}={\frac {m^{2}}{4}}} which provides the wanted formula ℘ ( u + v ) + ℘ ( u ) + ℘ ( v ) = 1 4 [ ℘ ′ ( u ) − ℘ ′ ( v ) ℘ ( u ) − ℘ ( v ) ] 2 {\displaystyle \wp (u+v)+\wp (u)+\wp (v)={\frac {1}{4}}\left[{\frac {\wp '(u)-\wp '(v)}{\wp (u)-\wp (v)}}\right]^{2}} 2. A second proof, from Akhiezer's book, is the following: if f {\displaystyle f} is an arbitrary elliptic function then: f ( u ) = c ∏ i = 1 n σ ( u − a i ) σ ( u − b i ) c ∈ C {\displaystyle f(u)=c\prod _{i=1}^{n}{\frac {\sigma (u-a_{i})}{\sigma (u-b_{i})}}\quad c\in \mathbb {C} } where σ {\displaystyle \sigma } is the Weierstrass sigma function and a i , b i {\displaystyle a_{i},b_{i}} are the respective zeros and poles of f {\displaystyle f} in the period parallelogram, with representatives chosen so that a 1 + ⋯ + a n = b 1 + ⋯ + b n {\displaystyle a_{1}+\cdots +a_{n}=b_{1}+\cdots +b_{n}} . 
Now consider the function k ( u , v ) = ℘ ( u ) − ℘ ( v ) {\displaystyle k(u,v)=\wp (u)-\wp (v)} as an elliptic function of u {\displaystyle u} : its zeros are at u = ± v {\displaystyle u=\pm v} and it has a double pole at u = 0 {\displaystyle u=0} . From the previous lemma we have: k ( u , v ) = ℘ ( u ) − ℘ ( v ) = c σ ( u + v ) σ ( u − v ) σ ( u ) 2 {\displaystyle k(u,v)=\wp (u)-\wp (v)=c{\frac {\sigma (u+v)\sigma (u-v)}{\sigma (u)^{2}}}} Comparing the coefficient of u − 2 {\displaystyle u^{-2}} as u → 0 {\displaystyle u\to 0} , and using that σ {\displaystyle \sigma } is odd with σ ( u ) / u → 1 {\displaystyle \sigma (u)/u\to 1} , one can find that c = − 1 σ ( v ) 2 ⟹ ℘ ( u ) − ℘ ( v ) = − σ ( u + v ) σ ( u − v ) σ ( u ) 2 σ ( v ) 2 {\displaystyle c=-{\frac {1}{\sigma (v)^{2}}}\implies \wp (u)-\wp (v)=-{\frac {\sigma (u+v)\sigma (u-v)}{\sigma (u)^{2}\sigma (v)^{2}}}} By definition of the Weierstrass zeta function: d d z ln ⁡ σ ( z ) = ζ ( z ) {\displaystyle {\frac {d}{dz}}\ln \sigma (z)=\zeta (z)} therefore we logarithmically differentiate both sides with respect to u {\displaystyle u} , obtaining: ℘ ′ ( u ) ℘ ( u ) − ℘ ( v ) = ζ ( u + v ) + ζ ( u − v ) − 2 ζ ( u ) {\displaystyle {\frac {\wp '(u)}{\wp (u)-\wp (v)}}=\zeta (u+v)+\zeta (u-v)-2\zeta (u)} Once again by definition ζ ′ ( z ) = − ℘ ( z ) {\displaystyle \zeta '(z)=-\wp (z)} thus, differentiating this identity once with respect to u {\displaystyle u} and once with respect to v {\displaystyle v} and adding the two results, we obtain − ℘ ( u + v ) = − ℘ ( u ) + 1 2 ℘ ″ ( u ) [ ℘ ( u ) − ℘ ( v ) ] − ℘ ′ ( u ) [ ℘ ′ ( u ) − ℘ ′ ( v ) ] [ ℘ ( u ) − ℘ ( v ) ] 2 {\displaystyle -\wp (u+v)=-\wp (u)+{\frac {1}{2}}{\frac {\wp ''(u)[\wp (u)-\wp (v)]-\wp '(u)[\wp '(u)-\wp '(v)]}{[\wp (u)-\wp (v)]^{2}}}} Knowing that ℘ ″ {\displaystyle \wp ''} satisfies the differential equation 2 ℘ ″ = 12 ℘ 2 − g 2 {\displaystyle 2\wp ''=12\wp ^{2}-g_{2}} and using the differential equation for ℘ ′ {\displaystyle \wp '} , rearranging the terms one gets the wanted formula ℘ ( u + v ) = 1 4 [ ℘ ′ ( u ) − ℘ ′ ( v ) ℘ ( u ) − ℘ ( v ) ] 2 − ℘ ( u ) − ℘ ( v ) . {\displaystyle \wp (u+v)={\frac {1}{4}}\left[{\frac {\wp '(u)-\wp '(v)}{\wp (u)-\wp (v)}}\right]^{2}-\wp (u)-\wp (v).} == Typography == Weierstrass's elliptic function is usually written with a rather special, lower case script letter ℘, which was Weierstrass's own notation introduced in his lectures of 1862–1863. It should not be confused with the normal mathematical script letters P: 𝒫 and 𝓅. In computing, the letter ℘ is available as \wp in TeX. In Unicode the code point is U+2118 ℘ SCRIPT CAPITAL P (&weierp;, &wp;), with the more correct alias weierstrass elliptic function. In HTML, it can be escaped as &weierp;. == See also == Weierstrass functions Jacobi elliptic functions Lemniscate elliptic functions == Footnotes == == References == Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 18". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 627. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253. N. I. Akhiezer, Elements of the Theory of Elliptic Functions, (1970) Moscow, translated into English as AMS Translations of Mathematical Monographs Volume 79 (1990) AMS, Rhode Island ISBN 0-8218-4532-2 Tom M. Apostol, Modular Functions and Dirichlet Series in Number Theory, Second Edition (1990), Springer, New York ISBN 0-387-97127-0 (See chapter 1.) K. Chandrasekharan, Elliptic functions (1980), Springer-Verlag ISBN 0-387-15295-4 Konrad Knopp, Funktionentheorie II (1947), Dover Publications; Republished in English translation as Theory of Functions (1996), Dover Publications ISBN 0-486-69219-1 Serge Lang, Elliptic Functions (1973), Addison-Wesley, ISBN 0-201-04162-6 E. T. 
Whittaker and G. N. Watson, A Course of Modern Analysis, Cambridge University Press, 1952, chapters 20 and 21 == External links == "Weierstrass elliptic functions", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Weierstrass's elliptic functions on Mathworld. Chapter 23, Weierstrass Elliptic and Modular Functions in DLMF (Digital Library of Mathematical Functions) by W. P. Reinhardt and P. L. Walker. Weierstrass P function and its derivative implemented in C by David Dumas
Wikipedia/Weierstrass_elliptic_functions
In number theory, the prime omega functions ω ( n ) {\displaystyle \omega (n)} and Ω ( n ) {\displaystyle \Omega (n)} count the number of prime factors of a natural number n . {\displaystyle n.} The number of distinct prime factors is assigned to ω ( n ) {\displaystyle \omega (n)} (little omega), while Ω ( n ) {\displaystyle \Omega (n)} (big omega) counts the total number of prime factors with multiplicity (see arithmetic function). That is, if we have a prime factorization of n {\displaystyle n} of the form n = p 1 α 1 p 2 α 2 ⋯ p k α k {\displaystyle n=p_{1}^{\alpha _{1}}p_{2}^{\alpha _{2}}\cdots p_{k}^{\alpha _{k}}} for distinct primes p i {\displaystyle p_{i}} ( 1 ≤ i ≤ k {\displaystyle 1\leq i\leq k} ), then the prime omega functions are given by ω ( n ) = k {\displaystyle \omega (n)=k} and Ω ( n ) = α 1 + α 2 + ⋯ + α k {\displaystyle \Omega (n)=\alpha _{1}+\alpha _{2}+\cdots +\alpha _{k}} . These prime-factor-counting functions have many important number theoretic relations. == Properties and relations == The function ω ( n ) {\displaystyle \omega (n)} is additive and Ω ( n ) {\displaystyle \Omega (n)} is completely additive. Little omega has the formula ω ( n ) = ∑ p ∣ n 1 , {\displaystyle \omega (n)=\sum _{p\mid n}1,} where notation p|n indicates that the sum is taken over all primes p that divide n, without multiplicity. For example, ω ( 12 ) = ω ( 2 2 3 ) = 2 {\displaystyle \omega (12)=\omega (2^{2}3)=2} . Big omega has the formulas Ω ( n ) = ∑ p α ∣ n 1 = ∑ p α ∥ n α . {\displaystyle \Omega (n)=\sum _{p^{\alpha }\mid n}1=\sum _{p^{\alpha }\parallel n}\alpha .} The notation pα|n indicates that the sum is taken over all prime powers pα that divide n, while pα||n indicates that the sum is taken over all prime powers pα that divide n and such that n / pα is coprime to pα. For example, Ω ( 12 ) = Ω ( 2 2 3 1 ) = 3 {\displaystyle \Omega (12)=\Omega (2^{2}3^{1})=3} . The omegas are related by the inequalities ω(n) ≤ Ω(n) and 2ω(n) ≤ d(n) ≤ 2Ω(n), where d(n) is the divisor-counting function. If Ω(n) = ω(n), then n is squarefree and related to the Möbius function by μ ( n ) = ( − 1 ) ω ( n ) = ( − 1 ) Ω ( n ) . {\displaystyle \mu (n)=(-1)^{\omega (n)}=(-1)^{\Omega (n)}.} If ω ( n ) = 1 {\displaystyle \omega (n)=1} then n {\displaystyle n} is a prime power, and if Ω ( n ) = 1 {\displaystyle \Omega (n)=1} then n {\displaystyle n} is prime. An asymptotic series for the average order of ω ( n ) {\displaystyle \omega (n)} is 1 n ∑ k = 1 n ω ( k ) ∼ log ⁡ log ⁡ n + B 1 + ∑ k ≥ 1 ( ∑ j = 0 k − 1 γ j j ! − 1 ) ( k − 1 ) ! ( log ⁡ n ) k , {\displaystyle {\frac {1}{n}}\sum \limits _{k=1}^{n}\omega (k)\sim \log \log n+B_{1}+\sum _{k\geq 1}\left(\sum _{j=0}^{k-1}{\frac {\gamma _{j}}{j!}}-1\right){\frac {(k-1)!}{(\log n)^{k}}},} where B 1 ≈ 0.26149721 {\displaystyle B_{1}\approx 0.26149721} is the Mertens constant and γ j {\displaystyle \gamma _{j}} are the Stieltjes constants. The function ω ( n ) {\displaystyle \omega (n)} is related to divisor sums over the Möbius function and the divisor function, including: ∑ d ∣ n | μ ( d ) | = 2 ω ( n ) {\displaystyle \sum _{d\mid n}|\mu (d)|=2^{\omega (n)}} is the number of unitary divisors. 
OEIS: A034444 ∑ d ∣ n | μ ( d ) | k ω ( d ) = ( k + 1 ) ω ( n ) {\displaystyle \sum _{d\mid n}|\mu (d)|k^{\omega (d)}=(k+1)^{\omega (n)}} ∑ r ∣ n 2 ω ( r ) = d ( n 2 ) {\displaystyle \sum _{r\mid n}2^{\omega (r)}=d(n^{2})} ∑ r ∣ n 2 ω ( r ) d ( n r ) = d 2 ( n ) {\displaystyle \sum _{r\mid n}2^{\omega (r)}d\left({\frac {n}{r}}\right)=d^{2}(n)} ∑ d ∣ n ( − 1 ) ω ( d ) = ∏ p α | | n ( 1 − α ) {\displaystyle \sum _{d\mid n}(-1)^{\omega (d)}=\prod \limits _{p^{\alpha }||n}(1-\alpha )} ∑ ( k , m ) = 1 1 ≤ k ≤ m gcd ( k 2 − 1 , m 1 ) gcd ( k 2 − 1 , m 2 ) = φ ( n ) ∑ d 2 ∣ m 2 d 1 ∣ m 1 φ ( gcd ( d 1 , d 2 ) ) 2 ω ( lcm ⁡ ( d 1 , d 2 ) ) , m 1 , m 2 odd , m = lcm ⁡ ( m 1 , m 2 ) {\displaystyle \sum _{\stackrel {1\leq k\leq m}{(k,m)=1}}\gcd(k^{2}-1,m_{1})\gcd(k^{2}-1,m_{2})=\varphi (n)\sum _{\stackrel {d_{1}\mid m_{1}}{d_{2}\mid m_{2}}}\varphi (\gcd(d_{1},d_{2}))2^{\omega (\operatorname {lcm} (d_{1},d_{2}))},\ m_{1},m_{2}{\text{ odd}},m=\operatorname {lcm} (m_{1},m_{2})} ∑ gcd ⁡ ( k , m ) = 1 1 ≤ k ≤ n 1 = n φ ( m ) m + O ( 2 ω ( m ) ) {\displaystyle \sum _{\stackrel {1\leq k\leq n}{\operatorname {gcd} (k,m)=1}}\!\!\!\!1=n{\frac {\varphi (m)}{m}}+O\left(2^{\omega (m)}\right)} The characteristic function of the primes can be expressed by a convolution with the Möbius function: χ P ( n ) = ( μ ∗ ω ) ( n ) = ∑ d | n ω ( d ) μ ( n / d ) . {\displaystyle \chi _{\mathbb {P} }(n)=(\mu \ast \omega )(n)=\sum _{d|n}\omega (d)\mu (n/d).} A partition-related exact identity for ω ( n ) {\displaystyle \omega (n)} is given by ω ( n ) = log 2 ⁡ [ ∑ k = 1 n ∑ j = 1 k ( ∑ d ∣ k ∑ i = 1 d p ( d − j i ) ) s n , k ⋅ | μ ( j ) | ] , {\displaystyle \omega (n)=\log _{2}\left[\sum _{k=1}^{n}\sum _{j=1}^{k}\left(\sum _{d\mid k}\sum _{i=1}^{d}p(d-ji)\right)s_{n,k}\cdot |\mu (j)|\right],} where p ( n ) {\displaystyle p(n)} is the partition function, μ ( n ) {\displaystyle \mu (n)} is the Möbius function, and the triangular sequence s n , k {\displaystyle s_{n,k}} is expanded by s n , k = [ q n ] ( q ; q ) ∞ q k 1 − q k = s o ( n , k ) − s e ( n , k ) , {\displaystyle s_{n,k}=[q^{n}](q;q)_{\infty }{\frac {q^{k}}{1-q^{k}}}=s_{o}(n,k)-s_{e}(n,k),} in terms of the infinite q-Pochhammer symbol and the restricted partition functions s o / e ( n , k ) {\displaystyle s_{o/e}(n,k)} which respectively denote the number of k {\displaystyle k} 's in all partitions of n {\displaystyle n} into an odd (even) number of distinct parts. == Continuation to the complex plane == A continuation of ω ( n ) {\displaystyle \omega (n)} has been found, though it is not analytic everywhere. Note that the normalized sinc {\displaystyle \operatorname {sinc} } function sinc ⁡ ( x ) = sin ⁡ ( π x ) π x {\displaystyle \operatorname {sinc} (x)={\frac {\sin(\pi x)}{\pi x}}} is used. ω ( z ) = log 2 ⁡ ( ∑ n = 1 ⌈ R e ( z ) ⌉ sinc ⁡ ( ∏ m = 1 ⌈ R e ( z ) ⌉ + 1 ( n 2 + n − m z ) ) ) {\displaystyle \omega (z)=\log _{2}\left(\sum _{n=1}^{\lceil Re(z)\rceil }\operatorname {sinc} \left(\prod _{m=1}^{\lceil Re(z)\rceil +1}\left(n^{2}+n-mz\right)\right)\right)} This is closely related to the following partition identity. Consider partitions of the form a = 2 c + 4 c + … + 2 ( b − 1 ) c + 2 b c {\displaystyle a={\frac {2}{c}}+{\frac {4}{c}}+\ldots +{\frac {2(b-1)}{c}}+{\frac {2b}{c}}} where a {\displaystyle a} , b {\displaystyle b} , and c {\displaystyle c} are positive integers, and a > b > c {\displaystyle a>b>c} . The number of partitions is then given by 2 ω ( a ) − 2 {\displaystyle 2^{\omega (a)}-2} . 
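The definitions and elementary relations above are easy to verify by direct factorization. A minimal Python sketch (naive trial division, adequate only for small n) computing ω(n), Ω(n) and the divisor count d(n), and checking additivity and the inequality 2^ω(n) ≤ d(n) ≤ 2^Ω(n):

```python
def prime_factorization(n):
    """Trial division: return {p: alpha} with n = product of p**alpha."""
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def little_omega(n):      # number of distinct prime factors
    return len(prime_factorization(n))

def big_omega(n):         # prime factors counted with multiplicity
    return sum(prime_factorization(n).values())

def d(n):                 # divisor-counting function: product of (alpha_i + 1)
    result = 1
    for alpha in prime_factorization(n).values():
        result *= alpha + 1
    return result

assert little_omega(12) == 2 and big_omega(12) == 3     # 12 = 2^2 * 3
a, b = 12, 35                                           # gcd(a, b) = 1
assert little_omega(a * b) == little_omega(a) + little_omega(b)   # additive
assert big_omega(7 * 14) == big_omega(7) + big_omega(14)  # completely additive
for n in range(2, 2000):
    assert 2**little_omega(n) <= d(n) <= 2**big_omega(n)
```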
== Average order and summatory functions == An average order of both ω ( n ) {\displaystyle \omega (n)} and Ω ( n ) {\displaystyle \Omega (n)} is log ⁡ log ⁡ n {\displaystyle \log \log n} . When n {\displaystyle n} is prime, ω ( n ) {\displaystyle \omega (n)} attains its minimal value, ω ( n ) = 1 {\displaystyle \omega (n)=1} . Similarly, when n {\displaystyle n} is a primorial, ω ( n ) {\displaystyle \omega (n)} attains its maximal order, ω ( n ) ∼ log ⁡ n log ⁡ log ⁡ n {\displaystyle \omega (n)\sim {\frac {\log n}{\log \log n}}} . When n {\displaystyle n} is a power of 2, then Ω ( n ) = log 2 ⁡ ( n ) . {\displaystyle \Omega (n)=\log _{2}(n).} Asymptotics for the summatory functions over ω ( n ) {\displaystyle \omega (n)} , Ω ( n ) {\displaystyle \Omega (n)} , and powers of ω ( n ) {\displaystyle \omega (n)} are respectively ∑ n ≤ x ω ( n ) = x log ⁡ log ⁡ x + B 1 x + o ( x ) ∑ n ≤ x Ω ( n ) = x log ⁡ log ⁡ x + B 2 x + o ( x ) ∑ n ≤ x ω ( n ) 2 = x ( log ⁡ log ⁡ x ) 2 + O ( x log ⁡ log ⁡ x ) ∑ n ≤ x ω ( n ) k = x ( log ⁡ log ⁡ x ) k + O ( x ( log ⁡ log ⁡ x ) k − 1 ) , k ∈ Z + , {\displaystyle {\begin{aligned}\sum _{n\leq x}\omega (n)&=x\log \log x+B_{1}x+o(x)\\\sum _{n\leq x}\Omega (n)&=x\log \log x+B_{2}x+o(x)\\\sum _{n\leq x}\omega (n)^{2}&=x(\log \log x)^{2}+O(x\log \log x)\\\sum _{n\leq x}\omega (n)^{k}&=x(\log \log x)^{k}+O(x(\log \log x)^{k-1}),k\in \mathbb {Z} ^{+},\end{aligned}}} where B 1 ≈ 0.2614972128 {\displaystyle B_{1}\approx 0.2614972128} is the Mertens constant and the constant B 2 {\displaystyle B_{2}} is defined by B 2 = B 1 + ∑ p prime 1 p ( p − 1 ) ≈ 1.0345061758. {\displaystyle B_{2}=B_{1}+\sum _{p{\text{ prime}}}{\frac {1}{p(p-1)}}\approx 1.0345061758.} The sum of the number of unitary divisors is ∑ n ≤ x 2 ω ( n ) = ( x log ⁡ x ) / ζ ( 2 ) + O ( x ) {\displaystyle \sum _{n\leq x}2^{\omega (n)}=(x\log x)/\zeta (2)+O(x)} (sequence A064608 in the OEIS) Other sums relating the two variants of the prime omega functions include ∑ n ≤ x { Ω ( n ) − ω ( n ) } = O ( x ) , {\displaystyle \sum _{n\leq x}\left\{\Omega (n)-\omega (n)\right\}=O(x),} and # { n ≤ x : Ω ( n ) − ω ( n ) > log ⁡ log ⁡ x } = O ( x ( log ⁡ log ⁡ x ) 1 / 2 ) . {\displaystyle \#\left\{n\leq x:\Omega (n)-\omega (n)>{\sqrt {\log \log x}}\right\}=O\left({\frac {x}{(\log \log x)^{1/2}}}\right).} === Example I: A modified summatory function === In this example we consider a variant of the summatory function S ω ( x ) := ∑ n ≤ x ω ( n ) {\displaystyle S_{\omega }(x):=\sum _{n\leq x}\omega (n)} estimated in the above results for sufficiently large x {\displaystyle x} . We then prove an asymptotic formula for the growth of this modified summatory function derived from the asymptotic estimate of S ω ( x ) {\displaystyle S_{\omega }(x)} provided in the formulas in the main subsection of this article above. To be completely precise, let the odd-indexed summatory function be defined as S odd ( x ) := ∑ n ≤ x ω ( n ) [ n odd ] , {\displaystyle S_{\operatorname {odd} }(x):=\sum _{n\leq x}\omega (n)[n{\text{ odd}}],} where [ ⋅ ] {\displaystyle [\cdot ]} denotes the Iverson bracket. Then we have that S odd ( x ) = x 2 log ⁡ log ⁡ x + ( 2 B 1 − 1 ) x 4 + { x 4 } − [ x ≡ 2 , 3 mod 4 ] + O ( x log ⁡ x ) . 
{\displaystyle S_{\operatorname {odd} }(x)={\frac {x}{2}}\log \log x+{\frac {(2B_{1}-1)x}{4}}+\left\{{\frac {x}{4}}\right\}-\left[x\equiv 2,3{\bmod {4}}\right]+O\left({\frac {x}{\log x}}\right).} The proof of this result follows by first observing that ω ( 2 n ) = { ω ( n ) + 1 , if n is odd; ω ( n ) , if n is even, {\displaystyle \omega (2n)={\begin{cases}\omega (n)+1,&{\text{if }}n{\text{ is odd; }}\\\omega (n),&{\text{if }}n{\text{ is even,}}\end{cases}}} and then applying the asymptotic result from Hardy and Wright for the summatory function over ω ( n ) {\displaystyle \omega (n)} , denoted by S ω ( x ) := ∑ n ≤ x ω ( n ) {\displaystyle S_{\omega }(x):=\sum _{n\leq x}\omega (n)} , in the following form: S ω ( x ) = S odd ( x ) + ∑ n ≤ ⌊ x 2 ⌋ ω ( 2 n ) = S odd ( x ) + ∑ n ≤ ⌊ x 4 ⌋ ( ω ( 4 n ) + ω ( 4 n + 2 ) ) = S odd ( x ) + ∑ n ≤ ⌊ x 4 ⌋ ( ω ( 2 n ) + ω ( 2 n + 1 ) + 1 ) = S odd ( x ) + S ω ( ⌊ x 2 ⌋ ) + ⌊ x 4 ⌋ . {\displaystyle {\begin{aligned}S_{\omega }(x)&=S_{\operatorname {odd} }(x)+\sum _{n\leq \left\lfloor {\frac {x}{2}}\right\rfloor }\omega (2n)\\&=S_{\operatorname {odd} }(x)+\sum _{n\leq \left\lfloor {\frac {x}{4}}\right\rfloor }\left(\omega (4n)+\omega (4n+2)\right)\\&=S_{\operatorname {odd} }(x)+\sum _{n\leq \left\lfloor {\frac {x}{4}}\right\rfloor }\left(\omega (2n)+\omega (2n+1)+1\right)\\&=S_{\operatorname {odd} }(x)+S_{\omega }\left(\left\lfloor {\frac {x}{2}}\right\rfloor \right)+\left\lfloor {\frac {x}{4}}\right\rfloor .\end{aligned}}} === Example II: Summatory functions for so-termed factorial moments of ω(n) === The computations expanded in Chapter 22.11 of Hardy and Wright provide asymptotic estimates for the summatory function ω ( n ) { ω ( n ) − 1 } , {\displaystyle \omega (n)\left\{\omega (n)-1\right\},} by estimating the product of these two component omega functions as ω ( n ) { ω ( n ) − 1 } = ∑ p , q prime p ≠ q p q ∣ n 1 = ∑ p , q prime p q ∣ n 1 − ∑ p prime p 2 ∣ n 1. {\displaystyle \omega (n)\left\{\omega (n)-1\right\}=\sum _{\stackrel {pq\mid n}{\stackrel {p\neq q}{p,q{\text{ prime}}}}}1=\sum _{\stackrel {pq\mid n}{p,q{\text{ prime}}}}1-\sum _{\stackrel {p^{2}\mid n}{p{\text{ prime}}}}1.} We can similarly calculate asymptotic formulas more generally for the related summatory functions over so-termed factorial moments of the function ω ( n ) {\displaystyle \omega (n)} . == Dirichlet series == A known Dirichlet series involving ω ( n ) {\displaystyle \omega (n)} and the Riemann zeta function is given by ∑ n ≥ 1 2 ω ( n ) n s = ζ 2 ( s ) ζ ( 2 s ) , ℜ ( s ) > 1. {\displaystyle \sum _{n\geq 1}{\frac {2^{\omega (n)}}{n^{s}}}={\frac {\zeta ^{2}(s)}{\zeta (2s)}},\ \Re (s)>1.} We can also see that ∑ n ≥ 1 z ω ( n ) n s = ∏ p ( 1 + z p s − 1 ) , | z | < 2 , ℜ ( s ) > 1 , {\displaystyle \sum _{n\geq 1}{\frac {z^{\omega (n)}}{n^{s}}}=\prod _{p}\left(1+{\frac {z}{p^{s}-1}}\right),|z|<2,\Re (s)>1,} ∑ n ≥ 1 z Ω ( n ) n s = ∏ p ( 1 − z p s ) − 1 , | z | < 2 , ℜ ( s ) > 1 , {\displaystyle \sum _{n\geq 1}{\frac {z^{\Omega (n)}}{n^{s}}}=\prod _{p}\left(1-{\frac {z}{p^{s}}}\right)^{-1},|z|<2,\Re (s)>1,} The function Ω ( n ) {\displaystyle \Omega (n)} is completely additive, where ω ( n ) {\displaystyle \omega (n)} is strongly additive (additive). Now we can prove a short lemma in the following form which implies exact formulas for the expansions of the Dirichlet series over both ω ( n ) {\displaystyle \omega (n)} and Ω ( n ) {\displaystyle \Omega (n)} : Lemma. 
Suppose that f {\displaystyle f} is a strongly additive arithmetic function defined such that its values at prime powers is given by f ( p α ) := f 0 ( p , α ) {\displaystyle f(p^{\alpha }):=f_{0}(p,\alpha )} , i.e., f ( p 1 α 1 ⋯ p k α k ) = f 0 ( p 1 , α 1 ) + ⋯ + f 0 ( p k , α k ) {\displaystyle f(p_{1}^{\alpha _{1}}\cdots p_{k}^{\alpha _{k}})=f_{0}(p_{1},\alpha _{1})+\cdots +f_{0}(p_{k},\alpha _{k})} for distinct primes p i {\displaystyle p_{i}} and exponents α i ≥ 1 {\displaystyle \alpha _{i}\geq 1} . The Dirichlet series of f {\displaystyle f} is expanded by ∑ n ≥ 1 f ( n ) n s = ζ ( s ) × ∑ p p r i m e ( 1 − p − s ) ⋅ ∑ n ≥ 1 f 0 ( p , n ) p − n s , ℜ ( s ) > min ( 1 , σ f ) . {\displaystyle \sum _{n\geq 1}{\frac {f(n)}{n^{s}}}=\zeta (s)\times \sum _{p\mathrm {\ prime} }(1-p^{-s})\cdot \sum _{n\geq 1}f_{0}(p,n)p^{-ns},\Re (s)>\min(1,\sigma _{f}).} Proof. We can see that ∑ n ≥ 1 u f ( n ) n s = ∏ p p r i m e ( 1 + ∑ n ≥ 1 u f 0 ( p , n ) p − n s ) . {\displaystyle \sum _{n\geq 1}{\frac {u^{f(n)}}{n^{s}}}=\prod _{p\mathrm {\ prime} }\left(1+\sum _{n\geq 1}u^{f_{0}(p,n)}p^{-ns}\right).} This implies that ∑ n ≥ 1 f ( n ) n s = d d u [ ∏ p p r i m e ( 1 + ∑ n ≥ 1 u f 0 ( p , n ) p − n s ) ] | u = 1 = ∏ p ( 1 + ∑ n ≥ 1 p − n s ) × ∑ p ∑ n ≥ 1 f 0 ( p , n ) p − n s 1 + ∑ n ≥ 1 p − n s = ζ ( s ) × ∑ p p r i m e ( 1 − p − s ) ⋅ ∑ n ≥ 1 f 0 ( p , n ) p − n s , {\displaystyle {\begin{aligned}\sum _{n\geq 1}{\frac {f(n)}{n^{s}}}&={\frac {d}{du}}\left[\prod _{p\mathrm {\ prime} }\left(1+\sum _{n\geq 1}u^{f_{0}(p,n)}p^{-ns}\right)\right]{\Biggr |}_{u=1}=\prod _{p}\left(1+\sum _{n\geq 1}p^{-ns}\right)\times \sum _{p}{\frac {\sum _{n\geq 1}f_{0}(p,n)p^{-ns}}{1+\sum _{n\geq 1}p^{-ns}}}\\&=\zeta (s)\times \sum _{p\mathrm {\ prime} }(1-p^{-s})\cdot \sum _{n\geq 1}f_{0}(p,n)p^{-ns},\end{aligned}}} wherever the corresponding series and products are convergent. In the last equation, we have used the Euler product representation of the Riemann zeta function. The lemma implies that for ℜ ( s ) > 1 {\displaystyle \Re (s)>1} , D ω ( s ) := ∑ n ≥ 1 ω ( n ) n s = ζ ( s ) P ( s ) = ζ ( s ) × ∑ n ≥ 1 μ ( n ) n log ⁡ ζ ( n s ) D Ω ( s ) := ∑ n ≥ 1 Ω ( n ) n s = ζ ( s ) × ∑ n ≥ 1 P ( n s ) = ζ ( s ) × ∑ n ≥ 1 ϕ ( n ) n log ⁡ ζ ( n s ) D h ( s ) := ∑ n ≥ 1 h ( n ) n s = ζ ( s ) log ⁡ ζ ( s ) = ζ ( s ) × ∑ n ≥ 1 ε ( n ) n log ⁡ ζ ( n s ) , {\displaystyle {\begin{aligned}D_{\omega }(s)&:=\sum _{n\geq 1}{\frac {\omega (n)}{n^{s}}}=\zeta (s)P(s)\\&\ =\zeta (s)\times \sum _{n\geq 1}{\frac {\mu (n)}{n}}\log \zeta (ns)\\D_{\Omega }(s)&:=\sum _{n\geq 1}{\frac {\Omega (n)}{n^{s}}}=\zeta (s)\times \sum _{n\geq 1}P(ns)\\&\ =\zeta (s)\times \sum _{n\geq 1}{\frac {\phi (n)}{n}}\log \zeta (ns)\\D_{h}(s)&:=\sum _{n\geq 1}{\frac {h(n)}{n^{s}}}=\zeta (s)\log \zeta (s)\\&\ =\zeta (s)\times \sum _{n\geq 1}{\frac {\varepsilon (n)}{n}}\log \zeta (ns),\end{aligned}}} where P ( s ) {\displaystyle P(s)} is the prime zeta function, h ( n ) = ∑ p k | n 1 k = ∑ p k | | n H k {\displaystyle h(n)=\sum _{p^{k}|n}{\frac {1}{k}}=\sum _{p^{k}||n}{H_{k}}} where H k {\displaystyle H_{k}} is the k {\displaystyle k} -th harmonic number and ε {\displaystyle \varepsilon } is the identity for the Dirichlet convolution, ε ( n ) = ⌊ 1 n ⌋ {\displaystyle \varepsilon (n)=\lfloor {\frac {1}{n}}\rfloor } . 
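As a sanity check on the first Dirichlet series of this section, at s = 2 it asserts that ∑ 2^ω(n)/n² = ζ(2)²/ζ(4) = 5/2. A minimal Python sketch (truncation point chosen ad hoc; convergence is slow):

```python
from math import pi

def little_omega(n):
    """Number of distinct prime factors, by trial division."""
    count, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            count += 1
            while n % p == 0:
                n //= p
        p += 1
    return count + (1 if n > 1 else 0)

N = 20000
partial = sum(2**little_omega(n) / n**2 for n in range(1, N + 1))
target = (pi**2 / 6)**2 / (pi**4 / 90)   # zeta(2)^2 / zeta(4) = 5/2
print(partial, target)                   # partial sum creeps up toward 2.5
```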
== The distribution of the difference of prime omega functions == The distribution of the distinct integer values of the differences Ω ( n ) − ω ( n ) {\displaystyle \Omega (n)-\omega (n)} is regular in comparison with the semi-random properties of the component functions. For k ≥ 0 {\displaystyle k\geq 0} , define N k ( x ) := # ( { n ∈ Z + : Ω ( n ) − ω ( n ) = k } ∩ [ 1 , x ] ) . {\displaystyle N_{k}(x):=\#(\{n\in \mathbb {Z} ^{+}:\Omega (n)-\omega (n)=k\}\cap [1,x]).} These cardinalities have a corresponding sequence of limiting densities d k {\displaystyle d_{k}} such that for x ≥ 2 {\displaystyle x\geq 2} N k ( x ) = d k ⋅ x + O ( ( 3 4 ) k x ( log ⁡ x ) 4 3 ) . {\displaystyle N_{k}(x)=d_{k}\cdot x+O\left(\left({\frac {3}{4}}\right)^{k}{\sqrt {x}}(\log x)^{\frac {4}{3}}\right).} These densities are generated by the prime products ∑ k ≥ 0 d k ⋅ z k = ∏ p ( 1 − 1 p ) ( 1 + 1 p − z ) . {\displaystyle \sum _{k\geq 0}d_{k}\cdot z^{k}=\prod _{p}\left(1-{\frac {1}{p}}\right)\left(1+{\frac {1}{p-z}}\right).} With the absolute constant c ^ := 1 4 × ∏ p > 2 ( 1 − 1 ( p − 1 ) 2 ) − 1 {\displaystyle {\hat {c}}:={\frac {1}{4}}\times \prod _{p>2}\left(1-{\frac {1}{(p-1)^{2}}}\right)^{-1}} , the densities d k {\displaystyle d_{k}} satisfy d k = c ^ ⋅ 2 − k + O ( 5 − k ) . {\displaystyle d_{k}={\hat {c}}\cdot 2^{-k}+O(5^{-k}).} Compare these prime products with those that arise in connection with the Erdős–Kac theorem. == See also == Additive function Arithmetic function Erdős–Kac theorem Omega function (disambiguation) Prime number Square-free integer == Notes == == References == G. H. Hardy and E. M. Wright (2006). An Introduction to the Theory of Numbers (6th ed.). Oxford University Press. H. L. Montgomery and R. C. Vaughan (2007). Multiplicative number theory I. Classical theory (1st ed.). Cambridge University Press. Schmidt, Maxie (2017). "Factorization Theorems for Hadamard Products and Higher-Order Derivatives of Lambert Series Generating Functions". arXiv:1712.00608 [math.NT]. Weisstein, Eric. "Distinct Prime Factors". MathWorld. Retrieved 22 April 2018. == External links == OEIS Wiki for related sequence numbers and tables OEIS Wiki on Prime Factors
Wikipedia/Prime_omega_function
In number theory, an average order of an arithmetic function is some simpler or better-understood function which takes the same values "on average". Let f {\displaystyle f} be an arithmetic function. We say that an average order of f {\displaystyle f} is g {\displaystyle g} if ∑ n ≤ x f ( n ) ∼ ∑ n ≤ x g ( n ) {\displaystyle \sum _{n\leq x}f(n)\sim \sum _{n\leq x}g(n)} as x {\displaystyle x} tends to infinity. It is conventional to choose an approximating function g {\displaystyle g} that is continuous and monotone. But even so an average order is of course not unique. In cases where the limit lim N → ∞ 1 N ∑ n ≤ N f ( n ) = c {\displaystyle \lim _{N\to \infty }{\frac {1}{N}}\sum _{n\leq N}f(n)=c} exists, it is said that f {\displaystyle f} has a mean value (average value) c {\displaystyle c} . If in addition the constant c {\displaystyle c} is not zero, then the constant function g ( x ) = c {\displaystyle g(x)=c} is an average order of f {\displaystyle f} . == Examples == An average order of d(n), the number of divisors of n, is log n; An average order of σ(n), the sum of divisors of n, is nπ²/6; An average order of φ(n), Euler's totient function of n, is 6n/π²; An average order of r(n), the number of ways of expressing n as a sum of two squares, is π; An average order of the number of representations of a natural number as a sum of three squares is 2π√n; The average number of decompositions of a natural number into a sum of one or more consecutive prime numbers is log 2; An average order of ω(n), the number of distinct prime factors of n, is loglog n; An average order of Ω(n), the number of prime factors of n, is loglog n; The prime number theorem is equivalent to the statement that the von Mangoldt function Λ(n) has average order 1; An average value of μ(n), the Möbius function, is zero; this is again equivalent to the prime number theorem. == Calculating mean values using Dirichlet series == In case F {\displaystyle F} is of the form F ( n ) = ∑ d ∣ n f ( d ) , {\displaystyle F(n)=\sum _{d\mid n}f(d),} for some arithmetic function f ( n ) {\displaystyle f(n)} , one has ∑ n ≤ x F ( n ) = ∑ d ≤ x f ( d ) ⌊ x d ⌋ . {\displaystyle \sum _{n\leq x}F(n)=\sum _{d\leq x}f(d)\left\lfloor {\frac {x}{d}}\right\rfloor .} (1) Generalized identities of this form are collected at divisor sum identities. This identity often provides a practical way to calculate the mean value in terms of the Riemann zeta function. This is illustrated in the following example. === The density of the kth-power-free integers in ℕ === For an integer k ≥ 1 {\displaystyle k\geq 1} the set Q k {\displaystyle Q_{k}} of kth-power-free integers is Q k := { n ∈ Z ∣ n is not divisible by d k for any integer d ≥ 2 } . {\displaystyle Q_{k}:=\{n\in \mathbb {Z} \mid n{\text{ is not divisible by }}d^{k}{\text{ for any integer }}d\geq 2\}.} We calculate the natural density of these numbers in ℕ, that is, the average value of 1 Q k {\displaystyle 1_{Q_{k}}} , denoted by δ ( n ) {\displaystyle \delta (n)} , in terms of the zeta function. The function δ {\displaystyle \delta } is multiplicative, and since it is bounded by 1, its Dirichlet series converges absolutely in the half-plane Re ⁡ ( s ) > 1 {\displaystyle \operatorname {Re} (s)>1} , where it has the Euler product ∑ Q k n − s = ∑ n δ ( n ) n − s = ∏ p ( 1 + p − s + ⋯ + p − s ( k − 1 ) ) = ∏ p ( 1 − p − s k 1 − p − s ) = ζ ( s ) ζ ( s k ) . 
{\displaystyle \sum _{Q_{k}}n^{-s}=\sum _{n}\delta (n)n^{-s}=\prod _{p}\left(1+p^{-s}+\cdots +p^{-s(k-1)}\right)=\prod _{p}\left({\frac {1-p^{-sk}}{1-p^{-s}}}\right)={\frac {\zeta (s)}{\zeta (sk)}}.} By the Möbius inversion formula, we get 1 ζ ( k s ) = ∑ n μ ( n ) n − k s , {\displaystyle {\frac {1}{\zeta (ks)}}=\sum _{n}\mu (n)n^{-ks},} where μ {\displaystyle \mu } stands for the Möbius function. Equivalently, 1 ζ ( k s ) = ∑ n f ( n ) n − s , {\displaystyle {\frac {1}{\zeta (ks)}}=\sum _{n}f(n)n^{-s},} where f ( n ) = { μ ( d ) n = d k 0 otherwise , {\displaystyle f(n)={\begin{cases}\mu (d)&n=d^{k}\\0&{\text{otherwise}},\end{cases}}} and hence, ζ ( s ) ζ ( s k ) = ∑ n ( ∑ d ∣ n f ( d ) ) n − s . {\displaystyle {\frac {\zeta (s)}{\zeta (sk)}}=\sum _{n}\left(\sum _{d\mid n}f(d)\right)n^{-s}.} By comparing the coefficients, we get δ ( n ) = ∑ d ∣ n f ( d ) . {\displaystyle \delta (n)=\sum _{d\mid n}f(d).} Using (1), we get ∑ d ≤ x δ ( d ) = x ∑ d ≤ x ( f ( d ) / d ) + O ( x 1 / k ) . {\displaystyle \sum _{d\leq x}\delta (d)=x\sum _{d\leq x}(f(d)/d)+O(x^{1/k}).} We conclude that, ∑ n ≤ x n ∈ Q k 1 = x ζ ( k ) + O ( x 1 / k ) , {\displaystyle \sum _{\stackrel {n\in Q_{k}}{n\leq x}}1={\frac {x}{\zeta (k)}}+O(x^{1/k}),} where for this we used the relation ∑ n ( f ( n ) / n ) = ∑ n f ( n k ) n − k = ∑ n μ ( n ) n − k = 1 ζ ( k ) , {\displaystyle \sum _{n}(f(n)/n)=\sum _{n}f(n^{k})n^{-k}=\sum _{n}\mu (n)n^{-k}={\frac {1}{\zeta (k)}},} which follows from the Möbius inversion formula. In particular, the density of the square-free integers is ζ ( 2 ) − 1 = 6 π 2 {\textstyle \zeta (2)^{-1}={\frac {6}{\pi ^{2}}}} . === Visibility of lattice points === We say that two lattice points are visible from one another if there is no lattice point on the open line segment joining them. Now, if gcd(a, b) = d > 1, then writing a = da2, b = db2 one observes that the point (a2, b2) is on the line segment which joins (0,0) to (a, b) and hence (a, b) is not visible from the origin. Thus (a, b) is visible from the origin implies that (a, b) = 1. Conversely, it is also easy to see that gcd(a, b) = 1 implies that there is no other integer lattice point in the segment joining (0,0) to (a,b). Thus, (a, b) is visible from (0,0) if and only if gcd(a, b) = 1. Notice that φ ( n ) n {\displaystyle {\frac {\varphi (n)}{n}}} is the probability of a random point on the square { ( r , s ) ∈ Z 2 : max ( | r | , | s | ) = n } {\displaystyle \{(r,s)\in \mathbb {Z} ^{2}:\max(|r|,|s|)=n\}} being visible from the origin. Thus, one can show that the natural density of the points which are visible from the origin is given by the average, lim N → ∞ 1 N ∑ n ≤ N φ ( n ) n = 6 π 2 = 1 ζ ( 2 ) . {\displaystyle \lim _{N\to \infty }{\frac {1}{N}}\sum _{n\leq N}{\frac {\varphi (n)}{n}}={\frac {6}{\pi ^{2}}}={\frac {1}{\zeta (2)}}.} 1 ζ ( 2 ) {\textstyle {\frac {1}{\zeta (2)}}} is also the natural density of the square-free numbers in ℕ. In fact, this is not a coincidence. Consider the k-dimensional lattice, Z k {\displaystyle \mathbb {Z} ^{k}} . The natural density of the points which are visible from the origin is 1 ζ ( k ) {\textstyle {\frac {1}{\zeta (k)}}} , which is also the natural density of the k-th power-free integers in ℕ. 
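Both densities above, of the squarefree integers and of the lattice points visible from the origin, are easy to observe numerically; both tend to 1/ζ(2) = 6/π² ≈ 0.6079. A short Python sketch (cutoffs chosen ad hoc):

```python
from math import gcd, pi

N = 10**5
is_squarefree = [True] * (N + 1)
d = 2
while d * d <= N:                 # sieve out every multiple of a square d^2
    for m in range(d * d, N + 1, d * d):
        is_squarefree[m] = False
    d += 1
print(sum(is_squarefree[1:]) / N)             # ~ 0.6079

M = 800                           # visible points (a, b) with 1 <= a, b <= M
visible = sum(1 for a in range(1, M + 1)
                for b in range(1, M + 1) if gcd(a, b) == 1)
print(visible / M**2, 6 / pi**2)              # both ~ 0.6079
```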
=== Divisor functions === Consider the generalization of d ( n ) {\displaystyle d(n)} : σ α ( n ) = ∑ d ∣ n d α . {\displaystyle \sigma _{\alpha }(n)=\sum _{d\mid n}d^{\alpha }.} The following are true: ∑ n ≤ x σ α ( n ) = { ∑ n ≤ x σ α ( n ) = ζ ( α + 1 ) α + 1 x α + 1 + O ( x β ) if α > 0 , α ≠ 1 , ∑ n ≤ x σ 1 ( n ) = ζ ( 2 ) 2 x 2 + O ( x log ⁡ x ) if α = 1 , ∑ n ≤ x σ − 1 ( n ) = ζ ( 2 ) x + O ( log ⁡ x ) if α = − 1 , ∑ n ≤ x σ α ( n ) = ζ ( − α + 1 ) x + O ( x max ( 0 , 1 + α ) ) otherwise. {\displaystyle \sum _{n\leq x}\sigma _{\alpha }(n)={\begin{cases}\;\;\sum _{n\leq x}\sigma _{\alpha }(n)={\frac {\zeta (\alpha +1)}{\alpha +1}}x^{\alpha +1}+O(x^{\beta })&{\text{if }}\alpha >0,\alpha \neq 1,\\\;\;\sum _{n\leq x}\sigma _{1}(n)={\frac {\zeta (2)}{2}}x^{2}+O(x\log x)&{\text{if }}\alpha =1,\\\;\;\sum _{n\leq x}\sigma _{-1}(n)=\zeta (2)x+O(\log x)&{\text{if }}\alpha =-1,\\\;\;\sum _{n\leq x}\sigma _{\alpha }(n)=\zeta (-\alpha +1)x+O(x^{\max(0,1+\alpha )})&{\text{otherwise.}}\end{cases}}} where β = max ( 1 , α ) {\displaystyle \beta =\max(1,\alpha )} . == Better average order == This notion is best discussed through an example. From ∑ n ≤ x d ( n ) = x log ⁡ x + ( 2 γ − 1 ) x + o ( x ) {\displaystyle \sum _{n\leq x}d(n)=x\log x+(2\gamma -1)x+o(x)} ( γ {\displaystyle \gamma } is the Euler–Mascheroni constant) and ∑ n ≤ x log ⁡ n = x log ⁡ x − x + O ( log ⁡ x ) , {\displaystyle \sum _{n\leq x}\log n=x\log x-x+O(\log x),} we have the asymptotic relation ∑ n ≤ x ( d ( n ) − ( log ⁡ n + 2 γ ) ) = o ( x ) ( x → ∞ ) , {\displaystyle \sum _{n\leq x}(d(n)-(\log n+2\gamma ))=o(x)\quad (x\to \infty ),} which suggests that the function log ⁡ n + 2 γ {\displaystyle \log n+2\gamma } is a better choice of average order for d ( n ) {\displaystyle d(n)} than simply log ⁡ n {\displaystyle \log n} . == Mean values over Fq[x] == === Definition === Let h(x) be a function on the set of monic polynomials over Fq. For n ≥ 1 {\displaystyle n\geq 1} we define Ave n ( h ) = 1 q n ∑ f monic , deg ⁡ ( f ) = n h ( f ) . {\displaystyle {\text{Ave}}_{n}(h)={\frac {1}{q^{n}}}\sum _{f{\text{ monic}},\deg(f)=n}h(f).} This is the mean value (average value) of h on the set of monic polynomials of degree n. We say that g(n) is an average order of h if Ave n ( h ) ∼ g ( n ) {\displaystyle {\text{Ave}}_{n}(h)\sim g(n)} as n tends to infinity. In cases where the limit, lim n → ∞ Ave n ( h ) = c {\displaystyle \lim _{n\to \infty }{\text{Ave}}_{n}(h)=c} exists, it is said that h has a mean value (average value) c. === Zeta function and Dirichlet series in Fq[X] === Let Fq[X] = A be the ring of polynomials over the finite field Fq. Let h be a polynomial arithmetic function (i.e. a function on the set of monic polynomials over A). Its corresponding Dirichlet series is defined to be D h ( s ) = ∑ f monic h ( f ) | f | − s , {\displaystyle D_{h}(s)=\sum _{f{\text{ monic}}}h(f)|f|^{-s},} where for g ∈ A {\displaystyle g\in A} , set | g | = q deg ⁡ ( g ) {\displaystyle |g|=q^{\deg(g)}} if g ≠ 0 {\displaystyle g\neq 0} , and | g | = 0 {\displaystyle |g|=0} otherwise. The polynomial zeta function is then ζ A ( s ) = ∑ f monic | f | − s . {\displaystyle \zeta _{A}(s)=\sum _{f{\text{ monic}}}|f|^{-s}.} Similar to the situation in N, every Dirichlet series of a multiplicative function h has a product representation (Euler product): D h ( s ) = ∏ P ( ∑ n = 0 ∞ h ( P n ) | P | − s n ) , {\displaystyle D_{h}(s)=\prod _{P}\left(\sum _{n=0}^{\infty }h(P^{n})\left|P\right|^{-sn}\right),} where the product runs over all monic irreducible polynomials P. 
For example, the product representation of the zeta function is as for the integers: ζ A ( s ) = ∏ P ( 1 − | P | − s ) − 1 {\textstyle \zeta _{A}(s)=\prod _{P}\left(1-\left|P\right|^{-s}\right)^{-1}} . Unlike the classical zeta function, ζ A ( s ) {\displaystyle \zeta _{A}(s)} is a simple rational function: ζ A ( s ) = ∑ f ( | f | − s ) = ∑ n ∑ deg ⁡ ( f ) = n q − s n = ∑ n ( q n − s n ) = ( 1 − q 1 − s ) − 1 . {\displaystyle \zeta _{A}(s)=\sum _{f}(|f|^{-s})=\sum _{n}\sum _{\deg(f)=n}q^{-sn}=\sum _{n}(q^{n-sn})=(1-q^{1-s})^{-1}.} In a similar way, if f and g are two polynomial arithmetic functions, one defines f * g, the Dirichlet convolution of f and g, by ( f ∗ g ) ( m ) = ∑ d ∣ m f ( d ) g ( m d ) = ∑ a b = m f ( a ) g ( b ) {\displaystyle {\begin{aligned}(f*g)(m)&=\sum _{d\mid m}f(d)g\left({\frac {m}{d}}\right)\\&=\sum _{ab=m}f(a)g(b)\end{aligned}}} where the sum extends over all monic divisors d of m, or equivalently over all pairs (a, b) of monic polynomials whose product is m. The identity D h D g = D h ∗ g {\displaystyle D_{h}D_{g}=D_{h*g}} still holds. Thus, like in the elementary theory, the polynomial Dirichlet series and the zeta function have a connection with the notion of mean values in the context of polynomials. The following examples illustrate it. === Examples === ==== The density of the k-th power free polynomials in Fq[X] ==== Define δ ( f ) {\displaystyle \delta (f)} to be 1 if f {\displaystyle f} is k-th power free and 0 otherwise. We calculate the average value of δ {\displaystyle \delta } , which is the density of the k-th power free polynomials in Fq[X], in the same fashion as in the integers. By multiplicativity of δ {\displaystyle \delta } : ∑ f δ ( f ) | f | s = ∏ P ( ∑ j = 0 k − 1 ( | P | − j s ) ) = ∏ P 1 − | P | − s k 1 − | P | − s = ζ A ( s ) ζ A ( s k ) = 1 − q 1 − k s 1 − q 1 − s {\displaystyle \sum _{f}{\frac {\delta (f)}{|f|^{s}}}=\prod _{P}\left(\sum _{j=0}^{k-1}(|P|^{-js})\right)=\prod _{P}{\frac {1-|P|^{-sk}}{1-|P|^{-s}}}={\frac {\zeta _{A}(s)}{\zeta _{A}(sk)}}={\frac {1-q^{1-ks}}{1-q^{1-s}}}} Denote by b n {\displaystyle b_{n}} the number of k-th power-free monic polynomials of degree n; we get ∑ f δ ( f ) | f | s = ∑ n ∑ deg ⁡ f = n δ ( f ) | f | − s = ∑ n b n q − s n . {\displaystyle \sum _{f}{\frac {\delta (f)}{|f|^{s}}}=\sum _{n}\sum _{\deg f=n}\delta (f)|f|^{-s}=\sum _{n}b_{n}q^{-sn}.} Making the substitution u = q − s {\displaystyle u=q^{-s}} we get: 1 − q u k 1 − q u = ∑ n = 0 ∞ b n u n . {\displaystyle {\frac {1-qu^{k}}{1-qu}}=\sum _{n=0}^{\infty }b_{n}u^{n}.} Finally, expand the left-hand side in a geometric series and compare the coefficients of u n {\displaystyle u^{n}} on both sides, to conclude that b n = { q n n ≤ k − 1 q n ( 1 − q 1 − k ) otherwise {\displaystyle b_{n}={\begin{cases}q^{n}&n\leq k-1\\q^{n}(1-q^{1-k})&{\text{otherwise}}\end{cases}}} Hence, Ave n ( δ ) = 1 − q 1 − k = 1 ζ A ( k ) {\displaystyle {\text{Ave}}_{n}(\delta )=1-q^{1-k}={\frac {1}{\zeta _{A}(k)}}} And since it doesn't depend on n this is also the mean value of δ ( f ) {\displaystyle \delta (f)} . ==== Polynomial Divisor functions ==== In Fq[X], we define σ k ( m ) = ∑ f | m , monic | f | k . {\displaystyle \sigma _{k}(m)=\sum _{f|m,{\text{ monic}}}|f|^{k}.} We will compute Ave n ( σ k ) {\displaystyle {\text{Ave}}_{n}(\sigma _{k})} for k ≥ 1 {\displaystyle k\geq 1} . 
First, notice that σ k ( m ) = ( h ∗ I ) ( m ) {\displaystyle \sigma _{k}(m)=(h*\mathbb {I} )(m)} where h ( f ) = | f | k {\displaystyle h(f)=|f|^{k}} and I ( f ) = 1 ∀ f {\displaystyle \mathbb {I} (f)=1\;\;\forall {f}} . Therefore, ∑ m σ k ( m ) | m | − s = ζ A ( s ) ∑ m h ( m ) | m | − s . {\displaystyle \sum _{m}\sigma _{k}(m)|m|^{-s}=\zeta _{A}(s)\sum _{m}h(m)|m|^{-s}.} Substituting q − s = u {\displaystyle q^{-s}=u} we get, LHS = ∑ n ( ∑ deg ⁡ ( m ) = n σ k ( m ) ) u n , {\displaystyle {\text{LHS}}=\sum _{n}\left(\sum _{\deg(m)=n}\sigma _{k}(m)\right)u^{n},} and by the Cauchy product we get, RHS = ∑ n q n ( 1 − s ) ∑ l ( ∑ deg ⁡ ( m ) = l h ( m ) ) u l = ∑ n q n u n ∑ l q l q l k u l = ∑ n ( ∑ j = 0 n q n − j q j k + j ) u n = ∑ n ( q n ( 1 − q k ( n + 1 ) 1 − q k ) ) u n . {\displaystyle {\begin{aligned}{\text{RHS}}&=\sum _{n}q^{n(1-s)}\sum _{l}\left(\sum _{\deg(m)=l}h(m)\right)u^{l}\\&=\sum _{n}q^{n}u^{n}\sum _{l}q^{l}q^{lk}u^{l}\\&=\sum _{n}\left(\sum _{j=0}^{n}q^{n-j}q^{jk+j}\right)u^{n}\\&=\sum _{n}\left(q^{n}\left({\frac {1-q^{k(n+1)}}{1-q^{k}}}\right)\right)u^{n}.\end{aligned}}} Finally we get that, Ave n σ k = 1 − q k ( n + 1 ) 1 − q k . {\displaystyle {\text{Ave}}_{n}\sigma _{k}={\frac {1-q^{k(n+1)}}{1-q^{k}}}.} Notice that q n Ave n σ k = q n ( k + 1 ) ( 1 − q − k ( n + 1 ) 1 − q − k ) = q n ( k + 1 ) ( ζ A ( k + 1 ) ζ A ( k n + k + 1 ) ) {\displaystyle q^{n}{\text{Ave}}_{n}\sigma _{k}=q^{n(k+1)}\left({\frac {1-q^{-k(n+1)}}{1-q^{-k}}}\right)=q^{n(k+1)}\left({\frac {\zeta _{A}(k+1)}{\zeta _{A}(kn+k+1)}}\right)} Thus, if we set x = q n {\displaystyle x=q^{n}} then the above result reads ∑ deg ⁡ ( m ) = n , m monic σ k ( m ) = x k + 1 ( ζ A ( k + 1 ) ζ A ( k n + k + 1 ) ) {\displaystyle \sum _{\deg(m)=n,m{\text{ monic}}}\sigma _{k}(m)=x^{k+1}\left({\frac {\zeta _{A}(k+1)}{\zeta _{A}(kn+k+1)}}\right)} which resembles the analogous result for the integers: ∑ n ≤ x σ k ( n ) = ζ ( k + 1 ) k + 1 x k + 1 + O ( x k ) {\displaystyle \sum _{n\leq x}\sigma _{k}(n)={\frac {\zeta (k+1)}{k+1}}x^{k+1}+O(x^{k})} ==== Number of divisors ==== Let d ( f ) {\displaystyle d(f)} be the number of monic divisors of f and let D ( n ) {\displaystyle D(n)} be the sum of d ( f ) {\displaystyle d(f)} over all monics of degree n. ζ A ( s ) 2 = ( ∑ h | h | − s ) ( ∑ g | g | − s ) = ∑ f ( ∑ h g = f 1 ) | f | − s = ∑ f d ( f ) | f | − s = D d ( s ) = ∑ n = 0 ∞ D ( n ) u n {\displaystyle \zeta _{A}(s)^{2}=\left(\sum _{h}|h|^{-s}\right)\left(\sum _{g}|g|^{-s}\right)=\sum _{f}\left(\sum _{hg=f}1\right)|f|^{-s}=\sum _{f}d(f)|f|^{-s}=D_{d}(s)=\sum _{n=0}^{\infty }D(n)u^{n}} where u = q − s {\displaystyle u=q^{-s}} . Expanding the right-hand side into a power series we get, D ( n ) = ( n + 1 ) q n . {\displaystyle D(n)=(n+1)q^{n}.} Substituting x = q n {\displaystyle x=q^{n}} , the above equation becomes: D ( n ) = x log q ⁡ ( x ) + x {\displaystyle D(n)=x\log _{q}(x)+x} which resembles closely the analogous result for integers ∑ k ≤ x d ( k ) = x log ⁡ x + ( 2 γ − 1 ) x + O ( x ) {\textstyle \sum _{k\leq x}d(k)=x\log x+(2\gamma -1)x+O({\sqrt {x}})} , where γ {\displaystyle \gamma } is the Euler–Mascheroni constant. Not much is known about the error term for the integers, while in the polynomials case, there is no error term. This is because of the very simple nature of the zeta function ζ A ( s ) {\displaystyle \zeta _{A}(s)} , and that it has no zeros. ==== Polynomial von Mangoldt function ==== The polynomial von Mangoldt function is defined by: Λ A ( f ) = { log ⁡ | P | if f = P k for some monic irreducible P and integer k ≥ 1 , 0 otherwise. 
{\displaystyle \Lambda _{A}(f)={\begin{cases}\log |P|&{\text{if }}f=P^{k}{\text{ for some monic irreducible }}P{\text{ and integer }}k\geq 1,\\0&{\text{otherwise.}}\end{cases}}} where the logarithm is taken to the base q. Proposition. The mean value of Λ A {\displaystyle \Lambda _{A}} is exactly 1. Proof. Let m be a monic polynomial, and let m = ∏ i = 1 l P i e i {\textstyle m=\prod _{i=1}^{l}P_{i}^{e_{i}}} be the prime decomposition of m. We have, ∑ f | m Λ A ( f ) = ∑ ( i 1 , … , i l ) | 0 ≤ i j ≤ e j Λ A ( ∏ j = 1 l P j i j ) = ∑ j = 1 l ∑ i = 1 e j Λ A ( P j i ) = ∑ j = 1 l ∑ i = 1 e j log ⁡ | P j | = ∑ j = 1 l e j log ⁡ | P j | = ∑ j = 1 l log ⁡ | P j | e j = log ⁡ | ( ∏ i = 1 l P i e i ) | = log ⁡ | m | {\displaystyle {\begin{aligned}\sum _{f|m}\Lambda _{A}(f)&=\sum _{(i_{1},\ldots ,i_{l})|0\leq i_{j}\leq e_{j}}\Lambda _{A}\left(\prod _{j=1}^{l}P_{j}^{i_{j}}\right)=\sum _{j=1}^{l}\sum _{i=1}^{e_{j}}\Lambda _{A}(P_{j}^{i})\\&=\sum _{j=1}^{l}\sum _{i=1}^{e_{j}}\log |P_{j}|\\&=\sum _{j=1}^{l}e_{j}\log |P_{j}|=\sum _{j=1}^{l}\log |P_{j}|^{e_{j}}\\&=\log \left|\left(\prod _{i=1}^{l}P_{i}^{e_{i}}\right)\right|\\&=\log |m|\end{aligned}}} Hence, ( I ∗ Λ A ) ( m ) = log ⁡ | m | {\displaystyle (\mathbb {I} *\Lambda _{A})(m)=\log |m|} and we get that, ζ A ( s ) D Λ A ( s ) = ∑ m log ⁡ | m | | m | − s . {\displaystyle \zeta _{A}(s)D_{\Lambda _{A}}(s)=\sum _{m}\log \left|m\right|\left|m\right|^{-s}.} Now, ∑ m | m | − s = ∑ n ∑ deg ⁡ m = n u n = ∑ n q n u n = ∑ n q n ( 1 − s ) . {\displaystyle \sum _{m}|m|^{-s}=\sum _{n}\sum _{\deg m=n}u^{n}=\sum _{n}q^{n}u^{n}=\sum _{n}q^{n(1-s)}.} Thus, d d s ∑ m | m | − s = − ∑ n log ⁡ ( q n ) q n ( 1 − s ) = − ∑ n ∑ deg ⁡ ( f ) = n log ⁡ ( q n ) q − n s = − ∑ f log ⁡ | f | | f | − s . {\displaystyle {\frac {d}{ds}}\sum _{m}|m|^{-s}=-\sum _{n}\log(q^{n})q^{n(1-s)}=-\sum _{n}\sum _{\deg(f)=n}\log(q^{n})q^{-ns}=-\sum _{f}\log \left|f\right|\left|f\right|^{-s}.} We obtain: D Λ A ( s ) = − ζ A ′ ( s ) ζ A ( s ) {\displaystyle D_{\Lambda _{A}}(s)={\frac {-\zeta '_{A}(s)}{\zeta _{A}(s)}}} Now, ∑ m Λ A ( m ) | m | − s = ∑ n ( ∑ deg ⁡ ( m ) = n Λ A ( m ) q − s n ) = ∑ n ( ∑ deg ⁡ ( m ) = n Λ A ( m ) ) u n = − ζ A ′ ( s ) ζ A ( s ) = q 1 − s log ⁡ ( q ) 1 − q 1 − s = log ⁡ ( q ) ∑ n = 1 ∞ q n u n {\displaystyle {\begin{aligned}\sum _{m}\Lambda _{A}(m)|m|^{-s}&=\sum _{n}\left(\sum _{\deg(m)=n}\Lambda _{A}(m)q^{-sn}\right)=\sum _{n}\left(\sum _{\deg(m)=n}\Lambda _{A}(m)\right)u^{n}\\&={\frac {-\zeta '_{A}(s)}{\zeta _{A}(s)}}={\frac {q^{1-s}\log(q)}{1-q^{1-s}}}\\&=\log(q)\sum _{n=1}^{\infty }q^{n}u^{n}\end{aligned}}} Hence, ∑ deg ⁡ ( m ) = n Λ A ( m ) = q n log ⁡ ( q ) , {\displaystyle \sum _{\deg(m)=n}\Lambda _{A}(m)=q^{n}\log(q),} and by dividing by q n {\displaystyle q^{n}} we get that, Ave n Λ A ( m ) = log ⁡ ( q ) = 1. {\displaystyle {\text{Ave}}_{n}\Lambda _{A}(m)=\log(q)=1.} ==== Polynomial Euler totient function ==== Define the polynomial analogue of the Euler totient function, Φ {\displaystyle \Phi } , to be the number of elements in the group ( A / f A ) ∗ {\displaystyle (A/fA)^{*}} . We have, ∑ deg ⁡ f = n , f monic Φ ( f ) = q 2 n ( 1 − q − 1 ) . {\displaystyle \sum _{\deg f=n,f{\text{ monic}}}\Phi (f)=q^{2n}(1-q^{-1}).} == See also == Divisor summatory function Normal order of an arithmetic function Extremal orders of an arithmetic function Divisor sum identities == References == Hardy, G. H.; Wright, E. M. (2008) [1938]. An Introduction to the Theory of Numbers. Revised by D. R. Heath-Brown and J. H. Silverman. Foreword by Andrew Wiles. (6th ed.). 
Oxford: Oxford University Press. ISBN 978-0-19-921986-5. MR 2445243. Zbl 1159.11001. pp. 347–360 Gérald Tenenbaum (1995). Introduction to Analytic and Probabilistic Number Theory. Cambridge studies in advanced mathematics. Vol. 46. Cambridge University Press. pp. 36–55. ISBN 0-521-41261-7. Zbl 0831.11001. Tom M. Apostol (1976), Introduction to Analytic Number Theory, Springer Undergraduate Texts in Mathematics, ISBN 0-387-90163-9 Michael Rosen (2000), Number Theory in Function Fields, Springer Graduate Texts in Mathematics, ISBN 0-387-95335-3 Hugh L. Montgomery; Robert C. Vaughan (2006), Multiplicative Number Theory, Cambridge University Press, ISBN 978-0521849036 Michael Baake; Robert V. Moody; Peter A. B. Pleasants (2000), Diffraction from visible lattice points and kth power free integers, Discrete Mathematics
Wikipedia/Average_order_of_an_arithmetic_function
In mathematics, the Rankin–Selberg method, introduced by Rankin (1939) and Selberg (1940), also known as the theory of integral representations of L-functions, is a technique for directly constructing and analytically continuing several important examples of automorphic L-functions. Some authors reserve the term for a special type of integral representation, namely those that involve an Eisenstein series. It has been one of the most powerful techniques for studying the Langlands program. == History == The theory in some sense dates back to Bernhard Riemann, who constructed his zeta function as the Mellin transform of Jacobi's theta function. Riemann used asymptotics of the theta function to obtain the analytic continuation, and the automorphy of the theta function to prove the functional equation. Erich Hecke, and later Hans Maass, applied the same Mellin transform method to modular forms on the upper half-plane, after which Riemann's example can be seen as a special case. Robert Alexander Rankin and Atle Selberg independently constructed their convolution L-functions, now thought of as the Langlands L-function associated to the tensor product of standard representation of GL(2) with itself. Like Riemann, they used an integral of modular forms, but one of a different type: they integrated the product of two weight k modular forms f, g with a real analytic Eisenstein series E(τ,s) over a fundamental domain D of the modular group SL2(Z) acting on the upper half plane ∫ D f ( τ ) g ( τ ) ¯ E ( τ , s ) y k − 2 d x d y {\displaystyle \displaystyle \int _{D}f(\tau ){\overline {g(\tau )}}E(\tau ,s)y^{k-2}dxdy} . The integral converges absolutely if one of the two forms is cuspidal; otherwise the asymptotics must be used to get a meromorphic continuation like Riemann did. The analytic continuation and functional equation then boil down to those of the Eisenstein series. The integral was identified with the convolution L-function by a technique called "unfolding", in which the definition of the Eisenstein series and the range of integration are converted into a simpler expression that more readily exhibits the L-function as a Dirichlet series. The simultaneous combination of an unfolding together with global control over the analytic properties, is special and what makes the technique successful. == Modern adelic theory == Hervé Jacquet and Robert Langlands later gave adelic integral representations for the standard, and tensor product L-functions that had been earlier obtained by Riemann, Hecke, Maass, Rankin, and Selberg. They gave a very complete theory, in that they elucidated formulas for all local factors, stated the functional equation in a precise form, and gave sharp analytic continuations. == Generalizations and limitations == Nowadays one has integral representations for a large constellation of automorphic L-functions, however with two frustrating caveats. The first is that it is not at all clear which L-functions possibly have integral representations, or how they may be found; it is feared that the method is near exhaustion, though time and again new examples are found via clever arguments. The second is that in general it is difficult or perhaps even impossible to compute the local integrals after the unfolding stage. This means that the integrals may have the desired analytic properties, only that they may not represent an L-function (but instead something close to it). 
Thus, having an integral representation for an L-function by no means indicates its analytic properties are resolved: there may be serious analytic issues remaining. At minimum, though, it ensures the L-function has an algebraic construction through formal manipulations of an integral of automorphic forms, and that at all but a finite number of places it has the conjectured Euler product of a particular L-function. In many situations the Langlands–Shahidi method gives complementary information. == Notable examples == Standard L-function on GL(n) (Godement–Jacquet). The theory was completely resolved in the original manuscript. Standard L-function on classical groups (Piatetski-Shapiro-Rallis). This construction was known as the doubling method and works for non-generic representations as well. Tensor product L-function on GL(n) × G with G a classical group (Cai-Friedberg-Ginzburg-Kaplan). This construction was a vast generalization of the doubling method, now known as the generalized doubling method. Tensor product L-function on GL(n) × GL(m) (includes the standard L-function if m = 1), due to Jacquet, Piatetski-Shapiro, and Shalika. The theory was completely resolved by Moeglin–Waldspurger, and was reverse-engineered to establish the "converse theorem". Symmetric square on GL(n) due to Shimura, and Gelbart–Jacquet (n = 2), Piatetski-Shapiro and Patterson (n = 3), and Bump–Ginzburg (n > 3). Exterior square on GL(n), due to Jacquet–Shalika and Bump–Ginzburg. Triple Product on GL(2) × GL(2) × GL(2) (Garrett, as well as Harris, Ikeda, Piatetski-Shapiro, Rallis, Ramakrishnan, and Orloff). Symmetric cube on GL(2) (Bump–Ginzburg–Hoffstein). Symmetric fourth power on GL(2) (Ginzburg–Rallis). Standard L-function of E6 and E7 (Ginzburg). Standard L-function of G2 (Ginzburg-Hundley, Gurevich-Segal). == References == Bump, Daniel (1989), "The Rankin-Selberg method: a survey", Number theory, trace formulas and discrete groups (Oslo, 1987), Boston, MA: Academic Press, pp. 49–109, MR 0993311 Bump, Daniel (2005), "The Rankin-Selberg method: an introduction and survey", in Cogdell, James W.; Jiang, Dihua; Kudla, Stephen S.; Soudry, David; Stanton, Robert (eds.), Automorphic representations, L-functions and applications: progress and prospects, Ohio State Univ. Math. Res. Inst. Publ., vol. 11, Berlin: de Gruyter, pp. 41–73, ISBN 978-3-11-017939-2, MR 2192819 Rankin, Robert A. (1939), "Contributions to the theory of Ramanujan's function τ(n) and similar arithmetical functions. I. The zeros of the function Σn=1∞τ(n)/ns on the line R s=13/2. II. The order of the Fourier coefficients of integral modular forms", Proc. Cambridge Philos. Soc., 35: 351–372, doi:10.1017/S0305004100021095, MR 0000411 Selberg, Atle (1940), "Bemerkungen über eine Dirichletsche Reihe, die mit der Theorie der Modulformen nahe verbunden ist", Arch. Math. Naturvid., 43: 47–50, MR 0002626
Wikipedia/Rankin–Selberg_method
In number theory, functions of positive integers which respect products are important and are called completely multiplicative functions or totally multiplicative functions. A weaker condition is also important, respecting only products of coprime numbers, and such functions are called multiplicative functions. Outside of number theory, the term "multiplicative function" is often taken to be synonymous with "completely multiplicative function" as defined in this article. == Definition == A completely multiplicative function (or totally multiplicative function) is an arithmetic function (that is, a function whose domain is the natural numbers), such that f(1) = 1 and f(ab) = f(a)f(b) holds for all positive integers a and b. In logic notation: f ( 1 ) = 1 {\displaystyle f(1)=1} and ∀ a , b ∈ domain ( f ) , f ( a b ) = f ( a ) f ( b ) {\displaystyle \forall a,b\in {\text{domain}}(f),f(ab)=f(a)f(b)} . Without the requirement that f(1) = 1, one could still have f(1) = 0, but then f(a) = 0 for all positive integers a, so this is not a very strong restriction. If one does not fix f ( 1 ) = 1 {\displaystyle f(1)=1} , one can see that both 0 {\displaystyle 0} and 1 {\displaystyle 1} are possibilities for the value of f ( 1 ) {\displaystyle f(1)} in the following way: f ( 1 ) = f ( 1 ⋅ 1 ) ⟺ f ( 1 ) = f ( 1 ) f ( 1 ) ⟺ f ( 1 ) = f ( 1 ) 2 ⟺ f ( 1 ) 2 − f ( 1 ) = 0 ⟺ f ( 1 ) ( f ( 1 ) − 1 ) = 0 ⟺ f ( 1 ) = 0 ∨ f ( 1 ) = 1. {\displaystyle {\begin{aligned}f(1)=f(1\cdot 1)&\iff f(1)=f(1)f(1)\\&\iff f(1)=f(1)^{2}\\&\iff f(1)^{2}-f(1)=0\\&\iff f(1)\left(f(1)-1\right)=0\\&\iff f(1)=0\lor f(1)=1.\end{aligned}}} The definition above can be rephrased using the language of algebra: A completely multiplicative function is a homomorphism from the monoid ( Z + , ⋅ ) {\displaystyle (\mathbb {Z} ^{+},\cdot )} (that is, the positive integers under multiplication) to some other monoid. == Examples == The easiest example of a completely multiplicative function is a monomial with leading coefficient 1: for any particular positive integer n, define f(a) = an. Then f(bc) = (bc)n = bncn = f(b)f(c), and f(1) = 1n = 1. The Liouville function is a non-trivial example of a completely multiplicative function, as are Dirichlet characters, the Jacobi symbol and the Legendre symbol. == Properties == A completely multiplicative function is completely determined by its values at the prime numbers, a consequence of the fundamental theorem of arithmetic. Thus, if n is a product of powers of distinct primes, say n = pa qb ..., then f(n) = f(p)a f(q)b ... While the Dirichlet convolution of two multiplicative functions is multiplicative, the Dirichlet convolution of two completely multiplicative functions need not be completely multiplicative. Arithmetic functions which can be written as the Dirichlet convolution of two completely multiplicative functions are said to be quadratic or specially multiplicative functions. They are rational arithmetic functions of order (2, 0) and obey the Busche–Ramanujan identity. There are a variety of statements about a function which are equivalent to it being completely multiplicative. For example, if a function f is multiplicative then it is completely multiplicative if and only if its Dirichlet inverse is μ ⋅ f {\displaystyle \mu \cdot f} where μ {\displaystyle \mu } is the Möbius function. Completely multiplicative functions also satisfy a distributive law. 
If f is completely multiplicative then f ⋅ ( g ∗ h ) = ( f ⋅ g ) ∗ ( f ⋅ h ) {\displaystyle f\cdot (g*h)=(f\cdot g)*(f\cdot h)} where * represents the Dirichlet product and ⋅ {\displaystyle \cdot } represents pointwise multiplication. One consequence of this is that for any completely multiplicative function f one has f ∗ f = τ ⋅ f {\displaystyle f*f=\tau \cdot f} which can be deduced from the above by putting both g = h = 1 {\displaystyle g=h=1} , where 1 ( n ) = 1 {\displaystyle 1(n)=1} is the constant function. Here τ {\displaystyle \tau } is the divisor function. === Proof of distributive property === f ⋅ ( g ∗ h ) ( n ) = f ( n ) ⋅ ∑ d | n g ( d ) h ( n d ) = = ∑ d | n f ( n ) ⋅ ( g ( d ) h ( n d ) ) = = ∑ d | n ( f ( d ) f ( n d ) ) ⋅ ( g ( d ) h ( n d ) ) (since f is completely multiplicative) = = ∑ d | n ( f ( d ) g ( d ) ) ⋅ ( f ( n d ) h ( n d ) ) = ( f ⋅ g ) ∗ ( f ⋅ h ) . {\displaystyle {\begin{aligned}f\cdot \left(g*h\right)(n)&=f(n)\cdot \sum _{d|n}g(d)h\left({\frac {n}{d}}\right)=\\&=\sum _{d|n}f(n)\cdot (g(d)h\left({\frac {n}{d}}\right))=\\&=\sum _{d|n}(f(d)f\left({\frac {n}{d}}\right))\cdot (g(d)h\left({\frac {n}{d}}\right)){\text{ (since }}f{\text{ is completely multiplicative) }}=\\&=\sum _{d|n}(f(d)g(d))\cdot (f\left({\frac {n}{d}}\right)h\left({\frac {n}{d}}\right))\\&=(f\cdot g)*(f\cdot h).\end{aligned}}} === Dirichlet series === The L-function of completely (or totally) multiplicative Dirichlet series a ( n ) {\displaystyle a(n)} satisfies L ( s , a ) = ∑ n = 1 ∞ a ( n ) n s = ∏ p ( 1 − a ( p ) p s ) − 1 , {\displaystyle L(s,a)=\sum _{n=1}^{\infty }{\frac {a(n)}{n^{s}}}=\prod _{p}{\biggl (}1-{\frac {a(p)}{p^{s}}}{\biggr )}^{-1},} which means that the sum all over the natural numbers is equal to the product all over the prime numbers. == See also == Arithmetic function Dirichlet L-function Dirichlet series Multiplicative function == References == T. M. Apostol, Some properties of completely multiplicative arithmetical functions, Amer. Math. Monthly 78 (1971) 266-271. P. Haukkanen, On characterizations of completely multiplicative arithmetical functions, in Number theory, Turku, de Gruyter, 2001, pp. 115–123. E. Langford, Distributivity over the Dirichlet product and completely multiplicative arithmetical functions, Amer. Math. Monthly 80 (1973) 411–414. V. Laohakosol, Logarithmic operators and characterizations of completely multiplicative functions, Southeast Asian Bull. Math. 25 (2001) no. 2, 273–281. K. L. Yocom, Totally multiplicative functions in regular convolution rings, Canad. Math. Bull. 16 (1973) 119–128.
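To make the distributive law and the identity f ∗ f = τ ⋅ f concrete, here is a minimal Python sketch (our illustration, not part of the article; the helper names conv, liouville and the use of SymPy for factoring are our own choices) that verifies both numerically for the Liouville function, a completely multiplicative function mentioned above:

from sympy import divisors, factorint

def liouville(n):
    # Liouville function: (-1)**Omega(n); completely multiplicative
    return (-1) ** sum(factorint(n).values())

def one(n):
    # the constant function 1(n) = 1
    return 1

def conv(f, g, n):
    # Dirichlet convolution: (f * g)(n) = sum over d | n of f(d) g(n/d)
    return sum(f(d) * g(n // d) for d in divisors(n))

for n in range(1, 101):
    tau = conv(one, one, n)  # divisor function tau = 1 * 1
    # distributive law with g = h = 1: f.(1*1) = (f.1)*(f.1) = f*f = tau.f
    assert liouville(n) * tau == conv(liouville, liouville, n)

print("f*f = tau.f verified for n = 1..100")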
Wikipedia/Completely_multiplicative_function
In number theory, the parity problem refers to a limitation in sieve theory that prevents sieves from giving good estimates in many kinds of prime-counting problems. The problem was identified and named by Atle Selberg in 1949. Beginning around 1996, John Friedlander and Henryk Iwaniec developed some parity-sensitive sieves that make the parity problem less of an obstacle. == Statement == Terence Tao gave this "rough" statement of the problem: Parity problem. If A is a set whose elements are all products of an odd number of primes (or are all products of an even number of primes), then (without injecting additional ingredients), sieve theory is unable to provide non-trivial lower bounds on the size of A. Also, any upper bounds must be off from the truth by a factor of 2 or more. This problem is significant because it may explain why it is difficult for sieves to "detect primes," in other words to give a non-trivial lower bound for the number of primes with some property. For example, in a sense Chen's theorem is very close to a solution of the twin prime conjecture, since it says that there are infinitely many primes p such that p + 2 is either prime or the product of two primes (semiprime). The parity problem suggests that, because the case of interest has an odd number of prime factors (namely 1), it won't be possible to separate out the two cases using sieves. == Example == This example is due to Selberg and is given as an exercise with hints by Cojocaru & Murty.: 133–134  The problem is to estimate separately the number of numbers ≤ x with no prime divisors ≤ x1/2, that have an even (or an odd) number of prime factors. It can be shown that, no matter what the choice of weights in a Brun- or Selberg-type sieve, the upper bound obtained will be at least (2 + o(1)) x / ln x for both problems. But in fact the set with an even number of factors is empty and so has size 0. The set with an odd number of factors is just the primes between x1/2 and x, so by the prime number theorem its size is (1 + o(1)) x / ln x. Thus these sieve methods are unable to give a useful upper bound for the first set, and overestimate the upper bound on the second set by a factor of 2. == Parity-sensitive sieves == Beginning around 1996 John Friedlander and Henryk Iwaniec developed some new sieve techniques to "break" the parity problem. One of the triumphs of these new methods is the Friedlander–Iwaniec theorem, which states that there are infinitely many primes of the form a2 + b4. Glyn Harman relates the parity problem to the distinction between Type I and Type II information in a sieve. == Karatsuba phenomenon == In 2007 Anatolii Alexeevitch Karatsuba discovered an imbalance between the numbers in an arithmetic progression with given parities of the number of prime factors. His papers were published after his death. Let N {\displaystyle \mathbb {N} } be a set of natural numbers (positive integers) that is, the numbers 1 , 2 , 3 , … {\displaystyle 1,2,3,\dots } . The set of primes, that is, such integers n ∈ N {\displaystyle n\in \mathbb {N} } , n > 1 {\displaystyle n>1} , that have just two distinct divisors (namely, n {\displaystyle n} and 1 {\displaystyle 1} ), is denoted by P {\displaystyle \mathbb {P} } , P = { 2 , 3 , 5 , 7 , 11 , … } ⊂ N {\displaystyle \mathbb {P} =\{2,3,5,7,11,\dots \}\subset \mathbb {N} } . 
Every natural number n ∈ N {\displaystyle n\in \mathbb {N} } , n > 1 {\displaystyle n>1} , can be represented as a product of primes (not necessarily distinct), that is n = p 1 p 2 … p k , {\displaystyle n=p_{1}p_{2}\dots p_{k},} where p 1 ∈ P , p 2 ∈ P , … , p k ∈ P {\displaystyle p_{1}\in \mathbb {P} ,\ p_{2}\in \mathbb {P} ,\ \dots ,\ p_{k}\in \mathbb {P} } , and such a representation is unique up to the order of the factors. If we form two sets, the first consisting of positive integers having an even number of prime factors in their canonical representation, the second consisting of positive integers having an odd number, then the two sets are approximately the same size. If, however, we limit our two sets to those positive integers whose canonical representation contains no primes in a given arithmetic progression, for example 6 m + 1 {\displaystyle 6m+1} , m = 1 , 2 , … {\displaystyle m=1,2,\dots } , or the progression k m + l {\displaystyle km+l} , 1 ≤ l < k {\displaystyle 1\leq l<k} , ( l , k ) = 1 {\displaystyle (l,k)=1} , m = 0 , 1 , 2 , … {\displaystyle m=0,1,2,\dots } , then among these positive integers those with an even number of prime factors will tend to be fewer than those with an odd number of prime factors. Karatsuba discovered this property. He also found a formula for this phenomenon: a formula for the difference in cardinalities of the sets of natural numbers with an odd and with an even number of prime factors, when those factors are subject to certain restrictions. In all cases, since the sets involved are infinite, by "larger" and "smaller" we mean the limit of the ratio of the sets as an upper bound on the primes goes to infinity. In the case of primes restricted to an arithmetic progression, Karatsuba proved that this limit is infinite. We restate the Karatsuba phenomenon using mathematical terminology. Let N 0 {\displaystyle \mathbb {N} _{0}} and N 1 {\displaystyle \mathbb {N} _{1}} be subsets of N {\displaystyle \mathbb {N} } , such that n ∈ N 0 {\displaystyle n\in \mathbb {N} _{0}} if n {\displaystyle n} contains an even number of prime factors, and n ∈ N 1 {\displaystyle n\in \mathbb {N} _{1}} if n {\displaystyle n} contains an odd number of prime factors. Intuitively, the sizes of the two sets N 0 {\displaystyle \mathbb {N} _{0}} and N 1 {\displaystyle \mathbb {N} _{1}} are approximately the same. More precisely, for all x ≥ 1 {\displaystyle x\geq 1} , we define n 0 ( x ) {\displaystyle n_{0}(x)} and n 1 ( x ) {\displaystyle n_{1}(x)} , where n 0 ( x ) {\displaystyle n_{0}(x)} is the cardinality of the set of all numbers n {\displaystyle n} from N 0 {\displaystyle \mathbb {N} _{0}} such that n ≤ x {\displaystyle n\leq x} , and n 1 ( x ) {\displaystyle n_{1}(x)} is the cardinality of the set of all numbers n {\displaystyle n} from N 1 {\displaystyle \mathbb {N} _{1}} such that n ≤ x {\displaystyle n\leq x} . The asymptotic behavior of n 0 ( x ) {\displaystyle n_{0}(x)} and n 1 ( x ) {\displaystyle n_{1}(x)} was derived by E. Landau: n 0 ( x ) = 1 2 x + O ( x e − c ln ⁡ x ) , n 1 ( x ) = 1 2 x + O ( x e − c ln ⁡ x ) ; c > 0. {\displaystyle n_{0}(x)={\frac {1}{2}}x+O\left(xe^{-c{\sqrt {\ln x}}}\right),n_{1}(x)={\frac {1}{2}}x+O\left(xe^{-c{\sqrt {\ln x}}}\right);c>0.} This shows that n 0 ( x ) ∼ n 1 ( x ) ∼ 1 2 x , {\displaystyle n_{0}(x)\sim n_{1}(x)\sim {\frac {1}{2}}x,} that is, n 0 ( x ) {\displaystyle n_{0}(x)} and n 1 ( x ) {\displaystyle n_{1}(x)} are asymptotically equal. 
Further, n 1 ( x ) − n 0 ( x ) = O ( x e − c ln ⁡ x ) , {\displaystyle n_{1}(x)-n_{0}(x)=O\left(xe^{-c{\sqrt {\ln x}}}\right),} so that the difference between the cardinalities of the two sets is small. On the other hand, if we let k ≥ 2 {\displaystyle k\geq 2} be a natural number, and l 1 , l 2 , … l r {\displaystyle l_{1},l_{2},\dots l_{r}} be a sequence of natural numbers, 1 ≤ r < φ ( k ) {\displaystyle 1\leq r<\varphi (k)} , such that 1 ≤ l j < k {\displaystyle 1\leq l_{j}<k} ; ( l j , k ) = 1 {\displaystyle (l_{j},k)=1} ; the l j {\displaystyle l_{j}} are pairwise distinct modulo k {\displaystyle k} ; j = 1 , 2 , … r . {\displaystyle j=1,2,\dots r.} Let A {\displaystyle \mathbb {A} } be the set of primes belonging to the progressions k n + l j {\displaystyle kn+l_{j}} ; j ≤ r {\displaystyle j\leq r} . ( A {\displaystyle \mathbb {A} } is then a subset of the set of all primes not dividing k {\displaystyle k} .) We denote by N ∗ {\displaystyle \mathbb {N} ^{*}} the set of natural numbers containing no prime factors from A {\displaystyle \mathbb {A} } , by N 0 ∗ {\displaystyle \mathbb {N} _{0}^{*}} the subset of numbers from N ∗ {\displaystyle \mathbb {N} ^{*}} with an even number of prime factors, and by N 1 ∗ {\displaystyle \mathbb {N} _{1}^{*}} the subset of numbers from N ∗ {\displaystyle \mathbb {N} ^{*}} with an odd number of prime factors. We define the functions n ∗ ( x ) = ∑ n ≤ x n ∈ N ∗ 1 ; n 0 ∗ ( x ) = ∑ n ≤ x n ∈ N 0 ∗ 1 ; n 1 ∗ ( x ) = ∑ n ≤ x n ∈ N 1 ∗ 1. {\displaystyle n^{*}(x)=\displaystyle \sum _{\begin{array}{c}n\leq x\\n\in \mathbb {N} ^{*}\end{array}}1;n_{0}^{*}(x)=\displaystyle \sum _{\begin{array}{c}n\leq x\\n\in \mathbb {N} _{0}^{*}\end{array}}1;n_{1}^{*}(x)=\displaystyle \sum _{\begin{array}{c}n\leq x\\n\in \mathbb {N} _{1}^{*}\end{array}}1.} Karatsuba proved that for x → + ∞ {\displaystyle x\to +\infty } , the asymptotic formula n 1 ∗ ( x ) − n 0 ∗ ( x ) ∼ C n ∗ ( x ) ( ln ⁡ x ) 2 ( r φ ( k ) − 1 ) , {\displaystyle n_{1}^{*}(x)-n_{0}^{*}(x)\sim Cn^{*}(x)(\ln x)^{2\left({\frac {r}{\varphi (k)}}-1\right)},} is valid, where C {\displaystyle C} is a positive constant. He also showed that analogous theorems can be proved for other sets of natural numbers, for example for numbers representable as a sum of two squares, and that sets of natural numbers all of whose prime factors belong to A {\displaystyle \mathbb {A} } display analogous asymptotic behavior. The Karatsuba theorem was generalized to the case when A {\displaystyle \mathbf {A} } is a certain infinite set of primes. The Karatsuba phenomenon is illustrated by the following example. We consider the natural numbers whose canonical representation does not include primes belonging to the progression 6 m + 1 {\displaystyle 6m+1} , m = 1 , 2 , … {\displaystyle m=1,2,\dots } . Then this phenomenon is expressed by the formula: n 1 ∗ ( x ) − n 0 ∗ ( x ) ∼ π 8 3 n ∗ ( x ) ln ⁡ x , x → + ∞ . {\displaystyle n_{1}^{*}(x)-n_{0}^{*}(x)\sim {\frac {\pi }{8{\sqrt {3}}}}{\frac {n^{*}(x)}{\ln x}},\qquad x\to +\infty .} == Notes ==
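Both Landau's near-equality and the Karatsuba imbalance are easy to observe numerically. The following rough Python sketch (our illustration, not from Karatsuba's papers; the bound and the use of SymPy are arbitrary choices) counts integers up to x by the parity of their number of prime factors, first over all integers and then over integers with no prime factor of the form 6m + 1:

from sympy import factorint

x = 100_000
n0 = n1 = r0 = r1 = 0
for n in range(2, x + 1):
    factors = factorint(n)
    even = sum(factors.values()) % 2 == 0  # parity of the number of prime factors
    if even:
        n0 += 1
    else:
        n1 += 1
    if all(p % 6 != 1 for p in factors):   # no prime factor in the progression 6m+1
        if even:
            r0 += 1
        else:
            r1 += 1

print(n0, n1)  # nearly equal, as Landau's formulas predict
print(r0, r1)  # r1 visibly exceeds r0: the Karatsuba phenomenon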
Wikipedia/Parity_problem_(sieve_theory)
A weight function is a mathematical device used when performing a sum, integral, or average to give some elements more "weight" or influence on the result than other elements in the same set. The result of this application of a weight function is a weighted sum or weighted average. Weight functions occur frequently in statistics and analysis, and are closely related to the concept of a measure. Weight functions can be employed in both discrete and continuous settings. They can be used to construct systems of calculus called "weighted calculus" and "meta-calculus". == Discrete weights == === General definition === In the discrete setting, a weight function w : A → R + {\displaystyle w\colon A\to \mathbb {R} ^{+}} is a positive function defined on a discrete set A {\displaystyle A} , which is typically finite or countable. The weight function w ( a ) := 1 {\displaystyle w(a):=1} corresponds to the unweighted situation in which all elements have equal weight. One can then apply this weight to various concepts. If the function f : A → R {\displaystyle f\colon A\to \mathbb {R} } is a real-valued function, then the unweighted sum of f {\displaystyle f} on A {\displaystyle A} is defined as ∑ a ∈ A f ( a ) ; {\displaystyle \sum _{a\in A}f(a);} but given a weight function w : A → R + {\displaystyle w\colon A\to \mathbb {R} ^{+}} , the weighted sum or conical combination is defined as ∑ a ∈ A f ( a ) w ( a ) . {\displaystyle \sum _{a\in A}f(a)w(a).} One common application of weighted sums arises in numerical integration. If B is a finite subset of A, one can replace the unweighted cardinality |B| of B by the weighted cardinality ∑ a ∈ B w ( a ) . {\displaystyle \sum _{a\in B}w(a).} If A is a finite non-empty set, one can replace the unweighted mean or average 1 | A | ∑ a ∈ A f ( a ) {\displaystyle {\frac {1}{|A|}}\sum _{a\in A}f(a)} by the weighted mean or weighted average ∑ a ∈ A f ( a ) w ( a ) ∑ a ∈ A w ( a ) . {\displaystyle {\frac {\sum _{a\in A}f(a)w(a)}{\sum _{a\in A}w(a)}}.} In this case only the relative weights are relevant. === Statistics === Weighted means are commonly used in statistics to compensate for the presence of bias. For a quantity f {\displaystyle f} measured multiple independent times f i {\displaystyle f_{i}} with variance σ i 2 {\displaystyle \sigma _{i}^{2}} , the best estimate of the signal is obtained by averaging all the measurements with weight w i = 1 / σ i 2 {\textstyle w_{i}=1/{\sigma _{i}^{2}}} , and the resulting variance is smaller than each of the independent measurements σ 2 = 1 / ∑ i w i {\textstyle \sigma ^{2}=1/\sum _{i}w_{i}} . The maximum likelihood method weights the difference between fit and data using the same weights w i {\displaystyle w_{i}} . The expected value of a random variable is the weighted average of the possible values it might take on, with the weights being the respective probabilities. More generally, the expected value of a function of a random variable is the probability-weighted average of the values the function takes on for each possible value of the random variable. In regressions in which the dependent variable is assumed to be affected by both current and lagged (past) values of the independent variable, a distributed lag function is estimated, this function being a weighted average of the current and various lagged independent variable values. Similarly, a moving average model specifies an evolving variable as a weighted average of current and various lagged values of a random variable. 
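As a small illustration of the inverse-variance weighting described above, the following Python sketch (the numbers are made up for the example) combines three hypothetical measurements of the same quantity:

# hypothetical measurements f_i of one quantity, with variances sigma_i^2
measurements = [10.2, 9.8, 10.5]
variances = [0.4, 0.1, 0.9]

weights = [1 / v for v in variances]  # w_i = 1 / sigma_i^2
estimate = sum(w * f for w, f in zip(weights, measurements)) / sum(weights)
combined_variance = 1 / sum(weights)  # sigma^2 = 1 / sum(w_i)

print(estimate, combined_variance)
assert combined_variance < min(variances)  # better than any single measurement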
=== Mechanics === The terminology weight function arises from mechanics: if one has a collection of n {\displaystyle n} objects on a lever, with weights w 1 , … , w n {\displaystyle w_{1},\ldots ,w_{n}} (where weight is now interpreted in the physical sense) and locations x 1 , … , x n {\displaystyle {\boldsymbol {x}}_{1},\dotsc ,{\boldsymbol {x}}_{n}} , then the lever will be in balance if the fulcrum of the lever is at the center of mass ∑ i = 1 n w i x i ∑ i = 1 n w i , {\displaystyle {\frac {\sum _{i=1}^{n}w_{i}{\boldsymbol {x}}_{i}}{\sum _{i=1}^{n}w_{i}}},} which is also the weighted average of the positions x i {\displaystyle {\boldsymbol {x}}_{i}} . == Continuous weights == In the continuous setting, a weight is a positive measure such as w ( x ) d x {\displaystyle w(x)\,dx} on some domain Ω {\displaystyle \Omega } , which is typically a subset of a Euclidean space R n {\displaystyle \mathbb {R} ^{n}} , for instance Ω {\displaystyle \Omega } could be an interval [ a , b ] {\displaystyle [a,b]} . Here d x {\displaystyle dx} is Lebesgue measure and w : Ω → R + {\displaystyle w\colon \Omega \to \mathbb {R} ^{+}} is a non-negative measurable function. In this context, the weight function w ( x ) {\displaystyle w(x)} is sometimes referred to as a density. === General definition === If f : Ω → R {\displaystyle f\colon \Omega \to \mathbb {R} } is a real-valued function, then the unweighted integral ∫ Ω f ( x ) d x {\displaystyle \int _{\Omega }f(x)\ dx} can be generalized to the weighted integral ∫ Ω f ( x ) w ( x ) d x {\displaystyle \int _{\Omega }f(x)w(x)\,dx} Note that one may need to require f {\displaystyle f} to be absolutely integrable with respect to the weight w ( x ) d x {\displaystyle w(x)\,dx} in order for this integral to be finite. === Weighted volume === If E is a subset of Ω {\displaystyle \Omega } , then the volume vol(E) of E can be generalized to the weighted volume ∫ E w ( x ) d x , {\displaystyle \int _{E}w(x)\ dx,} === Weighted average === If Ω {\displaystyle \Omega } has finite non-zero weighted volume, then we can replace the unweighted average 1 v o l ( Ω ) ∫ Ω f ( x ) d x {\displaystyle {\frac {1}{\mathrm {vol} (\Omega )}}\int _{\Omega }f(x)\ dx} by the weighted average ∫ Ω f ( x ) w ( x ) d x ∫ Ω w ( x ) d x {\displaystyle {\frac {\displaystyle \int _{\Omega }f(x)\,w(x)\,dx}{\displaystyle \int _{\Omega }w(x)\,dx}}} === Bilinear form === If f : Ω → R {\displaystyle f\colon \Omega \to {\mathbb {R} }} and g : Ω → R {\displaystyle g\colon \Omega \to {\mathbb {R} }} are two functions, one can generalize the unweighted bilinear form ⟨ f , g ⟩ := ∫ Ω f ( x ) g ( x ) d x {\displaystyle \langle f,g\rangle :=\int _{\Omega }f(x)g(x)\ dx} to a weighted bilinear form ⟨ f , g ⟩ w := ∫ Ω f ( x ) g ( x ) w ( x ) d x . {\displaystyle {\langle f,g\rangle }_{w}:=\int _{\Omega }f(x)g(x)\ w(x)\ dx.} See the entry on orthogonal polynomials for examples of weighted orthogonal functions. == See also == Center of mass Numerical integration Orthogonality Weighted mean Linear combination Kernel (statistics) Measure (mathematics) Riemann–Stieltjes integral Weighting Window function == References ==
Wikipedia/Weight_function
In number theory, the fundamental lemma of sieve theory is any of several results that systematize the process of applying sieve methods to particular problems. Halberstam & Richert : 92–93  write: A curious feature of sieve literature is that while there is frequent use of Brun's method there are only a few attempts to formulate a general Brun theorem (such as Theorem 2.1); as a result there are surprisingly many papers which repeat in considerable detail the steps of Brun's argument. Diamond & Halberstam: 42  attribute the terminology Fundamental Lemma to Jonas Kubilius. == Common notation == We use these notations: A {\displaystyle A} is a set of X {\displaystyle X} positive integers, and A d {\displaystyle A_{d}} is its subset of integers divisible by d {\displaystyle d} w ( d ) {\displaystyle w(d)} and R d {\displaystyle R_{d}} are functions of A {\displaystyle A} and of d {\displaystyle d} that estimate the number of elements of A {\displaystyle A} that are divisible by d {\displaystyle d} , according to the formula | A d | = w ( d ) d X + R d . {\displaystyle \left\vert A_{d}\right\vert ={\frac {w(d)}{d}}X+R_{d}.} Thus w ( d ) / d {\displaystyle w(d)/d} represents an approximate density of members divisible by d {\displaystyle d} , and R d {\displaystyle R_{d}} represents an error or remainder term. P {\displaystyle P} is a set of primes, and P ( z ) {\displaystyle P(z)} is the product of those primes ≤ z {\displaystyle \leq z} S ( A , P , z ) {\displaystyle S(A,P,z)} is the number of elements of A {\displaystyle A} not divisible by any prime in P {\displaystyle P} that is ≤ z {\displaystyle \leq z} κ {\displaystyle \kappa } is a constant, called the sifting density,: 28  that appears in the assumptions below. It is a weighted average of the number of residue classes sieved out by each prime. == Fundamental lemma of the combinatorial sieve == This formulation is from Tenenbaum.: 60  Other formulations are in Halberstam & Richert,: 82  in Greaves,: 92  and in Friedlander & Iwaniec.: 732–733  We make the assumptions: w ( d ) {\displaystyle w(d)} is a multiplicative function. The sifting density κ {\displaystyle \kappa } satisfies, for some constant C {\displaystyle C} and any real numbers η {\displaystyle \eta } and ξ {\displaystyle \xi } with 2 ≤ η ≤ ξ {\displaystyle 2\leq \eta \leq \xi } : ∏ η ≤ p ≤ ξ ( 1 − w ( p ) p ) − 1 < ( ln ⁡ ξ ln ⁡ η ) κ ( 1 + C ln ⁡ η ) . {\displaystyle \prod _{\eta \leq p\leq \xi }\left(1-{\frac {w(p)}{p}}\right)^{-1}<\left({\frac {\ln \xi }{\ln \eta }}\right)^{\kappa }\left(1+{\frac {C}{\ln \eta }}\right).} There is a parameter u ≥ 1 {\displaystyle u\geq 1} that is at our disposal. We have uniformly in A {\displaystyle A} , X {\displaystyle X} , z {\displaystyle z} , and u {\displaystyle u} that S ( a , P , z ) = X ∏ p ≤ z , p ∈ P ( 1 − w ( p ) p ) { 1 + O ( u − u / 2 ) } + O ( ∑ d ≤ z u , d | P ( z ) | R d | ) . {\displaystyle S(a,P,z)=X\prod _{p\leq z,p\in P}\left(1-{\frac {w(p)}{p}}\right)\{1+O(u^{-u/2})\}+O\left(\sum _{d\leq z^{u},d|P(z)}|R_{d}|\right).} In applications we pick u {\displaystyle u} to get the best error term. In the sieve it is related to the number of levels of the inclusion–exclusion principle. == Fundamental lemma of the Selberg sieve == This formulation is from Halberstam & Richert.: 208–209  Another formulation is in Diamond & Halberstam.: 29  We make the assumptions: w ( d ) {\displaystyle w(d)} is a multiplicative function. 
The sifting density κ {\displaystyle \kappa } satisfies, for some constant C {\displaystyle C} and any real numbers η {\displaystyle \eta } and ξ {\displaystyle \xi } with 2 ≤ η ≤ ξ {\displaystyle 2\leq \eta \leq \xi } : ∑ η ≤ p ≤ ξ w ( p ) ln ⁡ p p < κ ln ⁡ ξ η + C . {\displaystyle \qquad \sum _{\eta \leq p\leq \xi }{\frac {w(p)\ln p}{p}}<\kappa \ln {\frac {\xi }{\eta }}+C.} w ( p ) p < 1 − c {\displaystyle {\frac {w(p)}{p}}<1-c} for some small fixed c {\displaystyle c} and all p {\displaystyle p} . | R ( d ) | ≤ w ( d ) {\displaystyle |R(d)|\leq w(d)} for all squarefree d {\displaystyle d} whose prime factors are in P {\displaystyle P} . The fundamental lemma has almost the same form as for the combinatorial sieve. Write u = ln ⁡ X / ln ⁡ z {\displaystyle u=\ln {X}/\ln {z}} . The conclusion is: S ( a , P , z ) = X ∏ p ≤ z , p ∈ P ( 1 − w ( p ) p ) { 1 + O ( e − u / 2 ) } . {\displaystyle S(a,P,z)=X\prod _{p\leq z,\ p\in P}\left(1-{\frac {w(p)}{p}}\right)\{1+O(e^{-u/2})\}.} Note that u {\displaystyle u} is no longer an independent parameter at our disposal, but is controlled by the choice of z {\displaystyle z} . Note that the error term here is weaker than for the fundamental lemma of the combinatorial sieve. Halberstam & Richert remark:: 221  "Thus it is not true to say, as has been asserted from time to time in the literature, that Selberg's sieve is always better than Brun's." == Notes ==
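To see the shape of these conclusions numerically, the following Python sketch (our illustration with arbitrary parameters, not part of either lemma) takes A = {1, ..., X}, P the set of all primes, and w(d) = 1, and compares the sifted count S(A, P, z) with the main term X ∏p≤z (1 − w(p)/p):

from math import prod
from sympy import primerange

X, z = 10**6, 30
primes = list(primerange(2, z + 1))

# S(A, P, z): elements of {1, ..., X} with no prime factor <= z
sifted = sum(1 for n in range(1, X + 1) if all(n % p for p in primes))
main_term = X * prod(1 - 1 / p for p in primes)

# here u = ln X / ln z is about 4, so the relative error O(u^{-u/2}) is small
print(sifted, round(main_term, 1))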
Wikipedia/Fundamental_lemma_of_sieve_theory
The chirp Z-transform (CZT) is a generalization of the discrete Fourier transform (DFT). While the DFT samples the Z plane at uniformly spaced points along the unit circle, the chirp Z-transform samples along spiral arcs in the Z-plane, corresponding to straight lines in the S plane. The DFT, real DFT, and zoom DFT can be calculated as special cases of the CZT. Specifically, the chirp Z transform calculates the Z transform at a finite number of points zk along a logarithmic spiral contour, defined as: X k = ∑ n = 0 N − 1 x ( n ) z k − n {\displaystyle X_{k}=\sum _{n=0}^{N-1}x(n)z_{k}^{-n}} z k = A ⋅ W − k , k = 0 , 1 , … , M − 1 {\displaystyle z_{k}=A\cdot W^{-k},k=0,1,\dots ,M-1} where A is the complex starting point, W is the complex ratio between points, and M is the number of points to calculate. Like the DFT, the chirp Z-transform can be computed in O(n log n) operations where n = max ( M , N ) {\displaystyle n=\max(M,N)} . An O(N log N) algorithm for the inverse chirp Z-transform (ICZT) was described in 2003, and in 2019. == Bluestein's algorithm == Bluestein's algorithm expresses the CZT as a convolution and implements it efficiently using FFT/IFFT. As the DFT is a special case of the CZT, this allows the efficient calculation of the discrete Fourier transform (DFT) of arbitrary sizes, including prime sizes. (The other algorithm for FFTs of prime sizes, Rader's algorithm, also works by rewriting the DFT as a convolution.) It was conceived in 1968 by Leo Bluestein. Bluestein's algorithm can be used to compute more general transforms than the DFT, based on the (unilateral) z-transform (Rabiner et al., 1969). Recall that the DFT is defined by the formula X k = ∑ n = 0 N − 1 x n e − 2 π i N n k k = 0 , … , N − 1. {\displaystyle X_{k}=\sum _{n=0}^{N-1}x_{n}e^{-{\frac {2\pi i}{N}}nk}\qquad k=0,\dots ,N-1.} If we replace the product nk in the exponent by the identity n k = − ( k − n ) 2 2 + n 2 2 + k 2 2 {\displaystyle nk={\frac {-(k-n)^{2}}{2}}+{\frac {n^{2}}{2}}+{\frac {k^{2}}{2}}} we thus obtain: X k = e − π i N k 2 ∑ n = 0 N − 1 ( x n e − π i N n 2 ) e π i N ( k − n ) 2 k = 0 , … , N − 1. {\displaystyle X_{k}=e^{-{\frac {\pi i}{N}}k^{2}}\sum _{n=0}^{N-1}\left(x_{n}e^{-{\frac {\pi i}{N}}n^{2}}\right)e^{{\frac {\pi i}{N}}(k-n)^{2}}\qquad k=0,\dots ,N-1.} This summation is precisely a convolution of the two sequences an and bn defined by: a n = x n e − π i N n 2 {\displaystyle a_{n}=x_{n}e^{-{\frac {\pi i}{N}}n^{2}}} b n = e π i N n 2 , {\displaystyle b_{n}=e^{{\frac {\pi i}{N}}n^{2}},} with the output of the convolution multiplied by the N phase factors bk*. That is: X k = b k ∗ ( ∑ n = 0 N − 1 a n b k − n ) k = 0 , … , N − 1. {\displaystyle X_{k}=b_{k}^{*}\left(\sum _{n=0}^{N-1}a_{n}b_{k-n}\right)\qquad k=0,\dots ,N-1.} This convolution, in turn, can be performed with a pair of FFTs (plus the pre-computed FFT of the complex chirp bn) via the convolution theorem. The key point is that these FFTs are not of the same length N: such a convolution can be computed exactly from FFTs only by zero-padding it to a length greater than or equal to 2N–1. In particular, one can pad to a power of two or some other highly composite size, for which the FFT can be efficiently performed by e.g. the Cooley–Tukey algorithm in O(N log N) time. Thus, Bluestein's algorithm provides an O(N log N) way to compute prime-size DFTs, albeit several times slower than the Cooley–Tukey algorithm for composite sizes. The use of zero-padding for the convolution in Bluestein's algorithm deserves some additional comment. 
Suppose we zero-pad to a length M ≥ 2N–1. This means that an is extended to an array An of length M, where An = an for 0 ≤ n < N and An = 0 otherwise—the usual meaning of "zero-padding". However, because of the bk–n term in the convolution, both positive and negative values of n are required for bn (noting that b–n = bn). The periodic boundaries implied by the DFT of the zero-padded array mean that –n is equivalent to M–n. Thus, bn is extended to an array Bn of length M, where B0 = b0, Bn = BM–n = bn for 0 < n < N, and Bn = 0 otherwise. A and B are then FFTed, multiplied pointwise, and inverse FFTed to obtain the convolution of a and b, according to the usual convolution theorem. Let us also be more precise about what type of convolution is required in Bluestein's algorithm for the DFT. If the sequence bn were periodic in n with period N, then it would be a cyclic convolution of length N, and the zero-padding would be for computational convenience only. However, this is not generally the case: b n + N = e π i N ( n + N ) 2 = b n [ e π i N ( 2 N n + N 2 ) ] = ( − 1 ) N b n . {\displaystyle b_{n+N}=e^{{\frac {\pi i}{N}}(n+N)^{2}}=b_{n}\left[e^{{\frac {\pi i}{N}}(2Nn+N^{2})}\right]=(-1)^{N}b_{n}.} Therefore, for N even the convolution is cyclic, but in this case N is composite and one would normally use a more efficient FFT algorithm such as Cooley–Tukey. For N odd, however, then bn is antiperiodic and we technically have a negacyclic convolution of length N. Such distinctions disappear when one zero-pads an to a length of at least 2N−1 as described above, however. It is perhaps easiest, therefore, to think of it as a subset of the outputs of a simple linear convolution (i.e. no conceptual "extensions" of the data, periodic or otherwise). == z-transforms == Bluestein's algorithm can also be used to compute a more general transform based on the (unilateral) z-transform (Rabiner et al., 1969). In particular, it can compute any transform of the form: X k = ∑ n = 0 N − 1 x n z n k k = 0 , … , M − 1 , {\displaystyle X_{k}=\sum _{n=0}^{N-1}x_{n}z^{nk}\qquad k=0,\dots ,M-1,} for an arbitrary complex number z and for differing numbers N and M of inputs and outputs. Given Bluestein's algorithm, such a transform can be used, for example, to obtain a more finely spaced interpolation of some portion of the spectrum (although the frequency resolution is still limited by the total sampling time, similar to a Zoom FFT), enhance arbitrary poles in transfer-function analyses, etc. The algorithm was dubbed the chirp z-transform algorithm because, for the Fourier-transform case (|z| = 1), the sequence bn from above is a complex sinusoid of linearly increasing frequency, which is called a (linear) chirp in radar systems. == See also == Fractional Fourier transform == References == === General === Leo I. Bluestein, "A linear filtering approach to the computation of the discrete Fourier transform," Northeast Electronics Research and Engineering Meeting Record 10, 218-219 (1968). Lawrence R. Rabiner, Ronald W. Schafer, and Charles M. Rader, "The chirp z-transform algorithm and its application," Bell Syst. Tech. J. 48, 1249-1292 (1969). Also published in: Rabiner, Shafer, and Rader, "The chirp z-transform algorithm," IEEE Trans. Audio Electroacoustics 17 (2), 86–92 (1969). D. H. Bailey and P. N. Swarztrauber, "The fractional Fourier transform and applications," SIAM Review 33, 389-404 (1991). 
(Note that this terminology for the z-transform is nonstandard: a fractional Fourier transform conventionally refers to an entirely different, continuous transform.) Lawrence Rabiner, "The chirp z-transform algorithm—a lesson in serendipity," IEEE Signal Processing Magazine 21, 118-119 (March 2004). (Historical commentary.) Vladimir Sukhoy and Alexander Stoytchev: "Generalizing the inverse FFT off the unit circle", (Oct 2019). (Open access.) Vladimir Sukhoy and Alexander Stoytchev: "Numerical error analysis of the ICZT algorithm for chirp contours on the unit circle", Sci Rep 10, 4852 (2020). == External links == A DSP algorithm for frequency analysis - the Chirp-Z Transform (CZT) Solving a 50-year-old puzzle in signal processing, part two
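The unfolding of the CZT into a convolution described above is short to implement. Here is a compact NumPy sketch (our illustration, not a reference implementation; the function name, the power-of-two padding length, and the final sanity check are our choices):

import numpy as np

def czt(x, M, W, A):
    # X_k = sum_n x[n] A**(-n) W**(n k) for k = 0..M-1, i.e. z_k = A W**(-k).
    # Bluestein: n k = (n**2 + k**2 - (k-n)**2) / 2 turns the sum into a
    # linear convolution of a_n = x[n] A**(-n) W**(n**2/2) with b_m = W**(-m**2/2).
    N = len(x)
    n, k = np.arange(N), np.arange(M)
    a = np.asarray(x) * A ** (-n) * W ** (n ** 2 / 2)
    m = np.arange(-(N - 1), M)                 # b is needed for m = -(N-1)..(M-1)
    b = W ** (-(m ** 2) / 2)
    L = 1 << int(np.ceil(np.log2(N + M - 1)))  # zero-pad to a power of two >= N+M-1
    conv = np.fft.ifft(np.fft.fft(a, L) * np.fft.fft(b, L))
    # the outputs for k = 0..M-1 sit at offset N-1 in the linear convolution
    return W ** (k ** 2 / 2) * conv[N - 1 : N - 1 + M]

# sanity check: with A = 1 and W = exp(-2*pi*i/N), the contour is the unit
# circle and the CZT reduces to the ordinary DFT
x = np.random.randn(7) + 1j * np.random.randn(7)
print(np.allclose(czt(x, 7, np.exp(-2j * np.pi / 7), 1.0), np.fft.fft(x)))  # True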
Wikipedia/Chirp_Z-transform
In mathematics, the discrete-time Fourier transform (DTFT) is a form of Fourier analysis that is applicable to a sequence of discrete values. The DTFT is often used to analyze samples of a continuous function. The term discrete-time refers to the fact that the transform operates on discrete data, often samples whose interval has units of time. From uniformly spaced samples it produces a function of frequency that is a periodic summation of the continuous Fourier transform of the original continuous function. In simpler terms, when you take the DTFT of regularly-spaced samples of a continuous signal, you get repeating (and possibly overlapping) copies of the signal's frequency spectrum, spaced at intervals corresponding to the sampling frequency. Under certain theoretical conditions, described by the sampling theorem, the original continuous function can be recovered perfectly from the DTFT and thus from the original discrete samples. The DTFT itself is a continuous function of frequency, but discrete samples of it can be readily calculated via the discrete Fourier transform (DFT) (see § Sampling the DTFT), which is by far the most common method of modern Fourier analysis. Both transforms are invertible. The inverse DTFT reconstructs the original sampled data sequence, while the inverse DFT produces a periodic summation of the original sequence. The fast Fourier transform (FFT) is an algorithm for computing one cycle of the DFT, and its inverse produces one cycle of the inverse DFT. == Relation to Fourier Transform == Let s ( t ) {\displaystyle s(t)} be a continuous function in the time domain. We begin with a common definition of the continuous Fourier transform, where f {\displaystyle f} represents frequency in hertz and t {\displaystyle t} represents time in seconds: S ( f ) ≜ ∫ − ∞ ∞ s ( t ) ⋅ e − i 2 π f t d t . {\displaystyle S(f)\triangleq \int _{-\infty }^{\infty }s(t)\cdot e^{-i2\pi ft}dt.} We can reduce the integral to a summation by sampling s ( t ) {\displaystyle s(t)} at intervals of T {\displaystyle T} seconds (see Fourier transform § Numerical integration of a series of ordered pairs). Specifically, we can replace s ( t ) {\displaystyle s(t)} with a discrete sequence of its samples, s ( n T ) {\displaystyle s(nT)} , for integer values of n {\displaystyle n} , and replace the differential element d t {\displaystyle dt} with the sampling period T {\displaystyle T} . Thus, we obtain one formulation for the discrete-time Fourier transform (DTFT): S 1 / T ( f ) ≜ ∑ n = − ∞ ∞ T ⋅ s ( n T ) ⏟ s [ n ] e − i 2 π f T n . {\displaystyle S_{1/T}(f)\triangleq \sum _{n=-\infty }^{\infty }\underbrace {T\cdot s(nT)} _{s[n]}\ e^{-i2\pi fTn}.} (Eq.1) This Fourier series (in frequency) is a continuous periodic function, whose periodicity is the sampling frequency 1 / T {\displaystyle 1/T} . The subscript 1 / T {\displaystyle 1/T} distinguishes it from the continuous Fourier transform S ( f ) {\displaystyle S(f)} , and from the angular frequency form of the DTFT. 
The latter is obtained by defining an angular frequency variable, ω ≜ 2 π f T {\displaystyle \omega \triangleq 2\pi fT} (which has normalized units of radians/sample), giving us a periodic function of angular frequency, with periodicity 2 π {\displaystyle 2\pi } : S 2 π ( ω ) ≜ ∑ n = − ∞ ∞ s [ n ] ⋅ e − i ω n {\displaystyle S_{2\pi }(\omega )\triangleq \sum _{n=-\infty }^{\infty }s[n]\cdot e^{-i\omega n}} (Eq.2) The utility of the DTFT is rooted in the Poisson summation formula, which tells us that the periodic function represented by the Fourier series is a periodic summation of the continuous Fourier transform: S 1 / T ( f ) = ∑ k = − ∞ ∞ S ( f − k / T ) {\displaystyle S_{1/T}(f)=\sum _{k=-\infty }^{\infty }S\left(f-k/T\right)} The components of the periodic summation are centered at integer values (denoted by k {\displaystyle k} ) of a normalized frequency (cycles per sample). Ordinary/physical frequency (cycles per second) is the product of k {\displaystyle k} and the sample-rate, f s = 1 / T . {\displaystyle f_{s}=1/T.} For sufficiently large f s , {\displaystyle f_{s},} the k = 0 {\displaystyle k=0} term can be observed in the region [ − f s / 2 , f s / 2 ] {\displaystyle [-f_{s}/2,f_{s}/2]} with little or no distortion (aliasing) from the other terms. Fig.1 depicts an example where 1 / T {\displaystyle 1/T} is not large enough to prevent aliasing. We also note that e − i 2 π f T n {\displaystyle e^{-i2\pi fTn}} is the Fourier transform of δ ( t − n T ) . {\displaystyle \delta (t-nT).} Therefore, an alternative definition of DTFT is: S 1 / T ( f ) = F { ∑ n = − ∞ ∞ s [ n ] ⋅ δ ( t − n T ) } {\displaystyle S_{1/T}(f)={\mathcal {F}}\left\{\sum _{n=-\infty }^{\infty }s[n]\cdot \delta (t-nT)\right\}} (Eq.3) The modulated Dirac comb function is a mathematical abstraction sometimes referred to as impulse sampling. == Inverse transform == An operation that recovers the discrete data sequence from the DTFT function is called an inverse DTFT. For instance, the inverse continuous Fourier transform of both sides of Eq.3 produces the sequence in the form of a modulated Dirac comb function: ∑ n = − ∞ ∞ s [ n ] ⋅ δ ( t − n T ) = F − 1 { S 1 / T ( f ) } ≜ ∫ − ∞ ∞ S 1 / T ( f ) ⋅ e i 2 π f t d f . {\displaystyle \sum _{n=-\infty }^{\infty }s[n]\cdot \delta (t-nT)={\mathcal {F}}^{-1}\left\{S_{1/T}(f)\right\}\ \triangleq \int _{-\infty }^{\infty }S_{1/T}(f)\cdot e^{i2\pi ft}df.} However, noting that S 1 / T ( f ) {\displaystyle S_{1/T}(f)} is periodic, all the necessary information is contained within any interval of length 1 / T . {\displaystyle 1/T.} In both Eq.1 and Eq.2, the summations over n {\displaystyle n} are a Fourier series, with coefficients s [ n ] . {\displaystyle s[n].} The standard formulas for the Fourier coefficients are also the inverse transforms: s [ n ] = T ∫ 1 / T S 1 / T ( f ) ⋅ e i 2 π f n T d f = 1 2 π ∫ 2 π S 2 π ( ω ) ⋅ e i ω n d ω {\displaystyle s[n]=T\int _{1/T}S_{1/T}(f)\cdot e^{i2\pi fnT}df={\frac {1}{2\pi }}\int _{2\pi }S_{2\pi }(\omega )\cdot e^{i\omega n}d\omega } (where the integrals are taken over any interval of length 1 / T {\displaystyle 1/T} or 2 π , {\displaystyle 2\pi ,} respectively). == Periodic data == When the input data sequence s [ n ] {\displaystyle s[n]} is N {\displaystyle N} -periodic, Eq.2 can be computationally reduced to a discrete Fourier transform (DFT), because: All the available information is contained within N {\displaystyle N} samples. S 1 / T ( f ) {\displaystyle S_{1/T}(f)} converges to zero everywhere except at integer multiples of 1 / ( N T ) , {\displaystyle 1/(NT),} known as harmonic frequencies. At those frequencies, the DTFT diverges at different frequency-dependent rates. And those rates are given by the DFT of one cycle of the s [ n ] {\displaystyle s[n]} sequence. The DTFT is periodic, so the maximum number of unique harmonic amplitudes is ( 1 / T ) / ( 1 / ( N T ) ) = N . {\displaystyle (1/T)/(1/(NT))=N.} The DFT of one cycle of the s [ n ] {\displaystyle s[n]} sequence is: S [ k ] ≜ ∑ N s [ n ] ⋅ e − i 2 π k N n ⏟ any n-sequence of length N , k ∈ Z . 
{\displaystyle S[k]\triangleq \underbrace {\sum _{N}s[n]\cdot e^{-i2\pi {\frac {k}{N}}n}} _{\text{any n-sequence of length N}},\quad k\in \mathbf {Z} .} And s [ n ] {\displaystyle s[n]} can be expressed in terms of the inverse transform, which is sometimes referred to as a Discrete Fourier series (DFS):: p 542  s [ n ] = 1 N ∑ N S [ k ] ⋅ e i 2 π k N n ⏟ any k-sequence of length N , n ∈ Z . {\displaystyle s[n]={\frac {1}{N}}\underbrace {\sum _{N}S[k]\cdot e^{i2\pi {\frac {k}{N}}n}} _{\text{any k-sequence of length N}},\quad n\in \mathbf {Z} .} With these definitions, we can demonstrate the relationship between the DTFT and the DFT: S 1 / T ( f ) ≜ ∑ n = − ∞ ∞ s [ n ] ⋅ e − i 2 π f n T = ∑ n = − ∞ ∞ [ 1 N ∑ k = 0 N − 1 S [ k ] ⋅ e i 2 π k N n ] ⋅ e − i 2 π f n T = 1 N ∑ k = 0 N − 1 S [ k ] [ ∑ n = − ∞ ∞ e i 2 π k N n ⋅ e − i 2 π f n T ] ⏟ DTFT ⁡ ( e i 2 π k N n ) = 1 N ∑ k = 0 N − 1 S [ k ] ⋅ 1 T ∑ M = − ∞ ∞ δ ( f − k N T − M T ) {\displaystyle {\begin{aligned}S_{1/T}(f)&\triangleq \sum _{n=-\infty }^{\infty }s[n]\cdot e^{-i2\pi fnT}\\&=\sum _{n=-\infty }^{\infty }\left[{\frac {1}{N}}\sum _{k=0}^{N-1}S[k]\cdot e^{i2\pi {\frac {k}{N}}n}\right]\cdot e^{-i2\pi fnT}\\&={\frac {1}{N}}\sum _{k=0}^{N-1}S[k]\underbrace {\left[\sum _{n=-\infty }^{\infty }e^{i2\pi {\frac {k}{N}}n}\cdot e^{-i2\pi fnT}\right]} _{\operatorname {DTFT} \left(e^{i2\pi {\frac {k}{N}}n}\right)}\\&={\frac {1}{N}}\sum _{k=0}^{N-1}S[k]\cdot {\frac {1}{T}}\sum _{M=-\infty }^{\infty }\delta \left(f-{\tfrac {k}{NT}}-{\tfrac {M}{T}}\right)\end{aligned}}} Due to the N {\displaystyle N} -periodicity of both functions of k , {\displaystyle k,} this can be simplified to: S 1 / T ( f ) = 1 N T ∑ k = − ∞ ∞ S [ k ] ⋅ δ ( f − k N T ) , {\displaystyle S_{1/T}(f)={\frac {1}{NT}}\sum _{k=-\infty }^{\infty }S[k]\cdot \delta \left(f-{\frac {k}{NT}}\right),} which satisfies the inverse transform requirement: s [ n ] = T ∫ 0 1 T S 1 / T ( f ) ⋅ e i 2 π f n T d f = 1 N ∑ k = − ∞ ∞ S [ k ] ∫ 0 1 T δ ( f − k N T ) e i 2 π f n T d f ⏟ zero for k ∉ [ 0 , N − 1 ] = 1 N ∑ k = 0 N − 1 S [ k ] ∫ 0 1 T δ ( f − k N T ) e i 2 π f n T d f = 1 N ∑ k = 0 N − 1 S [ k ] ⋅ e i 2 π k N T n T = 1 N ∑ k = 0 N − 1 S [ k ] ⋅ e i 2 π k N n {\displaystyle {\begin{aligned}s[n]&=T\int _{0}^{\frac {1}{T}}S_{1/T}(f)\cdot e^{i2\pi fnT}df\\&={\frac {1}{N}}\sum _{k=-\infty }^{\infty }S[k]\underbrace {\int _{0}^{\frac {1}{T}}\delta \left(f-{\tfrac {k}{NT}}\right)e^{i2\pi fnT}df} _{{\text{zero for }}k\ \notin \ [0,N-1]}\\&={\frac {1}{N}}\sum _{k=0}^{N-1}S[k]\int _{0}^{\frac {1}{T}}\delta \left(f-{\tfrac {k}{NT}}\right)e^{i2\pi fnT}df\\&={\frac {1}{N}}\sum _{k=0}^{N-1}S[k]\cdot e^{i2\pi {\tfrac {k}{NT}}nT}\\&={\frac {1}{N}}\sum _{k=0}^{N-1}S[k]\cdot e^{i2\pi {\tfrac {k}{N}}n}\end{aligned}}} == Sampling the DTFT == When the DTFT is continuous, a common practice is to compute an arbitrary number of samples ( N ) {\displaystyle (N)} of one cycle of the periodic function S 1 / T {\displaystyle S_{1/T}} : : pp 557–559 & 703  : p 76  S 1 / T ( k N T ) ⏟ S k = ∑ n = − ∞ ∞ s [ n ] ⋅ e − i 2 π k N n k = 0 , … , N − 1 = ∑ N s N [ n ] ⋅ e − i 2 π k N n , ⏟ DFT (sum over any n -sequence of length N ) {\displaystyle {\begin{aligned}\underbrace {S_{1/T}\left({\frac {k}{NT}}\right)} _{S_{k}}&=\sum _{n=-\infty }^{\infty }s[n]\cdot e^{-i2\pi {\frac {k}{N}}n}\quad \quad k=0,\dots ,N-1\\&=\underbrace {\sum _{N}s_{_{N}}[n]\cdot e^{-i2\pi {\frac {k}{N}}n},} _{\text{DFT}}\quad \scriptstyle {{\text{(sum over any }}n{\text{-sequence of length }}N)}\end{aligned}}} where s N {\displaystyle s_{_{N}}} is a 
periodic summation: s N [ n ] ≜ ∑ m = − ∞ ∞ s [ n − m N ] . {\displaystyle s_{_{N}}[n]\ \triangleq \ \sum _{m=-\infty }^{\infty }s[n-mN].} (see Discrete Fourier series) The s N {\displaystyle s_{_{N}}} sequence is the inverse DFT. Thus, our sampling of the DTFT causes the inverse transform to become periodic. The array of | S k | 2 {\displaystyle |S_{k}|^{2}} values is known as a periodogram, and the parameter N {\displaystyle N} is called NFFT in the Matlab function of the same name. In order to evaluate one cycle of s N {\displaystyle s_{_{N}}} numerically, we require a finite-length s [ n ] {\displaystyle s[n]} sequence. For instance, a long sequence might be truncated by a window function of length L {\displaystyle L} resulting in three cases worthy of special mention. For notational simplicity, consider the s [ n ] {\displaystyle s[n]} values below to represent the values modified by the window function. Case: Frequency decimation. L = N ⋅ I , {\displaystyle L=N\cdot I,} for some integer I {\displaystyle I} (typically 6 or 8) A cycle of s N {\displaystyle s_{_{N}}} reduces to a summation of I {\displaystyle I} segments of length N . {\displaystyle N.} The DFT then goes by various names, such as: window-presum FFT Weight, overlap, add (WOLA) polyphase DFT polyphase filter bank multiple block windowing and time-aliasing. Recall that decimation of sampled data in one domain (time or frequency) produces overlap (sometimes known as aliasing) in the other, and vice versa. Compared to an L {\displaystyle L} -length DFT, the s N {\displaystyle s_{_{N}}} summation/overlap causes decimation in frequency,: p.558  leaving only DTFT samples least affected by spectral leakage. That is usually a priority when implementing an FFT filter-bank (channelizer). With a conventional window function of length L , {\displaystyle L,} scalloping loss would be unacceptable. So multi-block windows are created using FIR filter design tools. Their frequency profile is flat at the highest point and falls off quickly at the midpoint between the remaining DTFT samples. The larger the value of parameter I , {\displaystyle I,} the better the potential performance. Case: L = N + 1 {\displaystyle L=N+1} When a symmetric, L {\displaystyle L} -length window function ( s {\displaystyle s} ) is truncated by 1 coefficient it is called periodic or DFT-even. That is a common practice, but the truncation affects the DTFT (spectral leakage) by a small amount. It is at least of academic interest to characterize that effect. An N {\displaystyle N} -length DFT of the truncated window produces frequency samples at intervals of 1 / N , {\displaystyle 1/N,} instead of 1 / L . {\displaystyle 1/L.} The samples are real-valued,: p.52  but their values do not exactly match the DTFT of the symmetric window. The periodic summation, s N , {\displaystyle s_{_{N}},} along with an N {\displaystyle N} -length DFT, can also be used to sample the DTFT at intervals of 1 / N . {\displaystyle 1/N.} Those samples are also real-valued and do exactly match the DTFT (example: File:Sampling the Discrete-time Fourier transform.svg). To use the full symmetric window for spectral analysis at the 1 / N {\displaystyle 1/N} spacing, one would combine the n = 0 {\displaystyle n=0} and n = N {\displaystyle n=N} data samples (by addition, because the symmetrical window weights them equally) and then apply the truncated symmetric window and the N {\displaystyle N} -length DFT. Case: Frequency interpolation. 
L ≤ N {\displaystyle L\leq N} In this case, the DFT simplifies to a more familiar form: S k = ∑ n = 0 N − 1 s [ n ] ⋅ e − i 2 π k N n . {\displaystyle S_{k}=\sum _{n=0}^{N-1}s[n]\cdot e^{-i2\pi {\frac {k}{N}}n}.} In order to take advantage of a fast Fourier transform algorithm for computing the DFT, the summation is usually performed over all N {\displaystyle N} terms, even though N − L {\displaystyle N-L} of them are zeros. Therefore, the case L < N {\displaystyle L<N} is often referred to as zero-padding. Spectral leakage, which increases as L {\displaystyle L} decreases, is detrimental to certain important performance metrics, such as resolution of multiple frequency components and the amount of noise measured by each DTFT sample. But those things don't always matter, for instance when the s [ n ] {\displaystyle s[n]} sequence is a noiseless sinusoid (or a constant), shaped by a window function. Then it is a common practice to use zero-padding to graphically display and compare the detailed leakage patterns of window functions. To illustrate that for a rectangular window, consider the sequence: s [ n ] = e i 2 π 1 8 n , {\displaystyle s[n]=e^{i2\pi {\frac {1}{8}}n},\quad } and L = 64. {\displaystyle L=64.} Figures 2 and 3 are plots of the magnitude of two different sized DFTs, as indicated in their labels. In both cases, the dominant component is at the signal frequency: f = 1 / 8 = 0.125 {\displaystyle f=1/8=0.125} . Also visible in Fig 2 is the spectral leakage pattern of the L = 64 {\displaystyle L=64} rectangular window. The illusion in Fig 3 is a result of sampling the DTFT at just its zero-crossings. Rather than the DTFT of a finite-length sequence, it gives the impression of an infinitely long sinusoidal sequence. Contributing factors to the illusion are the use of a rectangular window, and the choice of a frequency (1/8 = 8/64) with exactly 8 (an integer) cycles per 64 samples. A Hann window would produce a similar result, except the peak would be widened to 3 samples (see DFT-even Hann window). == Convolution == The convolution theorem for sequences is: s ∗ y = D T F T − 1 [ D T F T { s } ⋅ D T F T { y } ] . {\displaystyle s*y\ =\ \scriptstyle {\rm {DTFT}}^{-1}\displaystyle \left[\scriptstyle {\rm {DTFT}}\displaystyle \{s\}\cdot \scriptstyle {\rm {DTFT}}\displaystyle \{y\}\right].} : p.297  An important special case is the circular convolution of sequences s and y defined by s N ∗ y , {\displaystyle s_{_{N}}*y,} where s N {\displaystyle s_{_{N}}} is a periodic summation. The discrete-frequency nature of D T F T { s N } {\displaystyle \scriptstyle {\rm {DTFT}}\displaystyle \{s_{_{N}}\}} means that the product with the continuous function D T F T { y } {\displaystyle \scriptstyle {\rm {DTFT}}\displaystyle \{y\}} is also discrete, which results in considerable simplification of the inverse transform: s N ∗ y = D T F T − 1 [ D T F T { s N } ⋅ D T F T { y } ] = D F T − 1 [ D F T { s N } ⋅ D F T { y N } ] . {\displaystyle s_{_{N}}*y\ =\ \scriptstyle {\rm {DTFT}}^{-1}\displaystyle \left[\scriptstyle {\rm {DTFT}}\displaystyle \{s_{_{N}}\}\cdot \scriptstyle {\rm {DTFT}}\displaystyle \{y\}\right]\ =\ \scriptstyle {\rm {DFT}}^{-1}\displaystyle \left[\scriptstyle {\rm {DFT}}\displaystyle \{s_{_{N}}\}\cdot \scriptstyle {\rm {DFT}}\displaystyle \{y_{_{N}}\}\right].} : p.548  For s and y sequences whose non-zero duration is less than or equal to N, a final simplification is: s N ∗ y = D F T − 1 [ D F T { s } ⋅ D F T { y } ] . 
{\displaystyle s_{_{N}}*y\ =\ \scriptstyle {\rm {DFT}}^{-1}\displaystyle \left[\scriptstyle {\rm {DFT}}\displaystyle \{s\}\cdot \scriptstyle {\rm {DFT}}\displaystyle \{y\}\right].} The significance of this result is explained at Circular convolution and Fast convolution algorithms. == Relationship to the Z-transform == S 2 π ( ω ) {\displaystyle S_{2\pi }(\omega )} is a Fourier series that can also be expressed in terms of the bilateral Z-transform. I.e.: S 2 π ( ω ) = S z ( z ) | z = e i ω = S z ( e i ω ) , {\displaystyle S_{2\pi }(\omega )=\left.S_{z}(z)\,\right|_{z=e^{i\omega }}=S_{z}(e^{i\omega }),} where the S z {\displaystyle S_{z}} notation distinguishes the Z-transform from the Fourier transform. Therefore, we can also express a portion of the Z-transform in terms of the Fourier transform: S z ( e i ω ) = S 1 / T ( ω 2 π T ) = ∑ k = − ∞ ∞ S ( ω 2 π T − k / T ) = ∑ k = − ∞ ∞ S ( ω − 2 π k 2 π T ) . {\displaystyle {\begin{aligned}S_{z}(e^{i\omega })&=\ S_{1/T}\left({\tfrac {\omega }{2\pi T}}\right)\ =\ \sum _{k=-\infty }^{\infty }S\left({\tfrac {\omega }{2\pi T}}-k/T\right)\\&=\sum _{k=-\infty }^{\infty }S\left({\tfrac {\omega -2\pi k}{2\pi T}}\right).\end{aligned}}} Note that when parameter T changes, the terms of S 2 π ( ω ) {\displaystyle S_{2\pi }(\omega )} remain a constant separation 2 π {\displaystyle 2\pi } apart, and their width scales up or down. The terms of S1/T(f) remain a constant width and their separation 1/T scales up or down. == Table of discrete-time Fourier transforms == Some common transform pairs are shown in the table below. The following notation applies: ω = 2 π f T {\displaystyle \omega =2\pi fT} is a real number representing continuous angular frequency (in radians per sample). ( f {\displaystyle f} is in cycles/sec, and T {\displaystyle T} is in sec/sample.) In all cases in the table, the DTFT is 2π-periodic (in ω {\displaystyle \omega } ). S 2 π ( ω ) {\displaystyle S_{2\pi }(\omega )} designates a function defined on − ∞ < ω < ∞ {\displaystyle -\infty <\omega <\infty } . S o ( ω ) {\displaystyle S_{o}(\omega )} designates a function defined on − π < ω ≤ π {\displaystyle -\pi <\omega \leq \pi } , and zero elsewhere. Then: S 2 π ( ω ) ≜ ∑ k = − ∞ ∞ S o ( ω − 2 π k ) . {\displaystyle S_{2\pi }(\omega )\ \triangleq \sum _{k=-\infty }^{\infty }S_{o}(\omega -2\pi k).} δ ( ω ) {\displaystyle \delta (\omega )} is the Dirac delta function sinc ⁡ ( t ) {\displaystyle \operatorname {sinc} (t)} is the normalized sinc function rect ⁡ [ n L ] ≜ { 1 | n | ≤ L / 2 0 | n | > L / 2 {\displaystyle \operatorname {rect} \left[{n \over L}\right]\triangleq {\begin{cases}1&|n|\leq L/2\\0&|n|>L/2\end{cases}}} tri ⁡ ( t ) {\displaystyle \operatorname {tri} (t)} is the triangle function n is an integer representing the discrete-time domain (in samples) u [ n ] {\displaystyle u[n]} is the discrete-time unit step function δ [ n ] {\displaystyle \delta [n]} is the Kronecker delta δ n , 0 {\displaystyle \delta _{n,0}} == Properties == This table shows some mathematical operations in the time domain and the corresponding effects in the frequency domain. ∗ {\displaystyle *\!} is the discrete convolution of two sequences s ∗ [ n ] {\displaystyle s^{*}[n]} is the complex conjugate of s [ n ] . {\displaystyle s[n].} == See also == Least-squares spectral analysis Multidimensional transform Zak transform == Notes == == Page citations == == References ==
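The rectangular-window example of Figs 2 and 3 is easy to reproduce. The short NumPy sketch below (our illustration; the 1e-9 threshold is just a numerical cutoff) samples the DTFT of the L = 64 sinusoid first at L points and then, by zero-padding, at 256 points:

import numpy as np

L = 64
n = np.arange(L)
s = np.exp(2j * np.pi * n / 8)  # exactly 8 cycles in 64 samples

S64 = np.fft.fft(s)             # N = L: samples land on the DTFT zero crossings
S256 = np.fft.fft(s, 256)       # N = 4L: zero-padding reveals the leakage

print(np.sum(np.abs(S64) > 1e-9))   # 1 bin -> the "pure spike" illusion of Fig 3
print(np.sum(np.abs(S256) > 1e-9))  # many bins -> the sinc-like leakage of Fig 2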
Wikipedia/Discrete-time_Fourier_transform
The Goertzel algorithm is a technique in digital signal processing (DSP) for efficient evaluation of the individual terms of the discrete Fourier transform (DFT). It is useful in certain practical applications, such as recognition of dual-tone multi-frequency signaling (DTMF) tones produced by the push buttons of the keypad of a traditional analog telephone. The algorithm was first described by Gerald Goertzel in 1958. Like the DFT, the Goertzel algorithm analyses one selectable frequency component from a discrete signal. Unlike direct DFT calculations, the Goertzel algorithm applies a single real-valued coefficient at each iteration, using real-valued arithmetic for real-valued input sequences. For covering a full spectrum (except when used for a continuous stream of data, where coefficients are reused for subsequent calculations and the computational complexity is equivalent to that of a sliding DFT), the Goertzel algorithm has a higher order of complexity than fast Fourier transform (FFT) algorithms, but for computing a small number of selected frequency components, it is more numerically efficient. The simple structure of the Goertzel algorithm makes it well suited to small processors and embedded applications. The Goertzel algorithm can also be used "in reverse" as a sinusoid synthesis function, which requires only 1 multiplication and 1 subtraction per generated sample. == The algorithm == The main calculation in the Goertzel algorithm has the form of a digital filter, and for this reason the algorithm is often called a Goertzel filter. The filter operates on an input sequence x [ n ] {\displaystyle x[n]} in a cascade of two stages with a parameter ω 0 {\displaystyle \omega _{0}} , giving the frequency to be analysed, normalised to radians per sample. The first stage calculates an intermediate sequence, s [ n ] {\displaystyle s[n]} : s [ n ] = x [ n ] + 2 cos ⁡ ( ω 0 ) s [ n − 1 ] − s [ n − 2 ] {\displaystyle s[n]=x[n]+2\cos(\omega _{0})s[n-1]-s[n-2]} (1) The second stage applies the following filter to s [ n ] {\displaystyle s[n]} , producing output sequence y [ n ] {\displaystyle y[n]} : y [ n ] = s [ n ] − e − j ω 0 s [ n − 1 ] {\displaystyle y[n]=s[n]-e^{-j\omega _{0}}s[n-1]} (2) The first filter stage can be observed to be a second-order IIR filter with a direct-form structure. This particular structure has the property that its internal state variables equal the past output values from that stage. Input values x [ n ] {\displaystyle x[n]} for n < 0 {\displaystyle n<0} are presumed all equal to 0. To establish the initial filter state so that evaluation can begin at sample x [ 0 ] {\displaystyle x[0]} , the filter states are assigned initial values s [ − 2 ] = s [ − 1 ] = 0 {\displaystyle s[-2]=s[-1]=0} . To avoid aliasing hazards, frequency ω 0 {\displaystyle \omega _{0}} is often restricted to the range 0 to π (see Nyquist–Shannon sampling theorem); using a value outside this range is not meaningless, but is equivalent to using an aliased frequency inside this range, since the exponential function is periodic with a period of 2π in ω 0 {\displaystyle \omega _{0}} . The second-stage filter can be observed to be a FIR filter, since its calculations do not use any of its past outputs. Z-transform methods can be applied to study the properties of the filter cascade.
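To make the two-stage structure concrete, the following minimal Python sketch evaluates a single DFT bin. The function name goertzel_dft and its argument conventions are illustrative assumptions, not from the original description, and the final complex step uses the combined form derived as equation (11) in the DFT computations section below.

import math
import cmath

def goertzel_dft(x, k):
    # First (IIR) stage, per equation (1):
    #   s[n] = x[n] + 2*cos(w0)*s[n-1] - s[n-2]
    # Only two real state variables are kept, and the loop stays in real
    # arithmetic when the input block x is real-valued.
    N = len(x)
    w0 = 2.0 * math.pi * k / N
    coeff = 2.0 * math.cos(w0)
    s_prev = s_prev2 = 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # Second (FIR) stage, evaluated once at the end, per equation (11):
    #   X[k] = y[N] = exp(+j*w0)*s[N-1] - s[N-2]
    return cmath.exp(1j * w0) * s_prev - s_prev2

For a real-valued input block, the only complex operation is the single multiplication in the final step, which is the property the rest of the article exploits.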
The Z transform of the first filter stage given in equation (1) is S ( z ) X ( z ) = 1 1 − 2 cos ⁡ ( ω 0 ) z − 1 + z − 2 {\displaystyle {\frac {S(z)}{X(z)}}={\frac {1}{1-2\cos(\omega _{0})z^{-1}+z^{-2}}}} (3) The Z transform of the second filter stage given in equation (2) is Y ( z ) S ( z ) = 1 − e − j ω 0 z − 1 {\displaystyle {\frac {Y(z)}{S(z)}}=1-e^{-j\omega _{0}}z^{-1}} (4) The combined transfer function of the cascade of the two filter stages is then Y ( z ) X ( z ) = 1 − e − j ω 0 z − 1 1 − 2 cos ⁡ ( ω 0 ) z − 1 + z − 2 = 1 1 − e + j ω 0 z − 1 {\displaystyle {\frac {Y(z)}{X(z)}}={\frac {1-e^{-j\omega _{0}}z^{-1}}{1-2\cos(\omega _{0})z^{-1}+z^{-2}}}={\frac {1}{1-e^{+j\omega _{0}}z^{-1}}}} (5) This can be transformed back to an equivalent time-domain sequence, and the terms unrolled back to the first input term at index n = 0 {\displaystyle n=0} : y [ n ] = ∑ m = 0 n x [ m ] e + j ω 0 ( n − m ) {\displaystyle y[n]=\sum _{m=0}^{n}x[m]e^{+j\omega _{0}(n-m)}} (6) == Numerical stability == It can be observed that the poles of the filter's Z transform are located at e + j ω 0 {\displaystyle e^{+j\omega _{0}}} and e − j ω 0 {\displaystyle e^{-j\omega _{0}}} , on a circle of unit radius centered on the origin of the complex Z-transform plane. This property indicates that the filter process is marginally stable and vulnerable to numerical-error accumulation when computed using low-precision arithmetic and long input sequences. A numerically stable version was proposed by Christian Reinsch. == DFT computations == For the important case of computing a DFT term, the following special restrictions are applied. The filtering terminates at index n = N {\displaystyle n=N} , where N {\displaystyle N} is the number of terms in the input sequence of the DFT. The frequencies chosen for the Goertzel analysis are restricted to the special form ω 0 = 2 π k N {\displaystyle \omega _{0}={\frac {2\pi k}{N}}} (7) The index number k {\displaystyle k} indicating the "frequency bin" of the DFT is selected from the set of index numbers k ∈ { 0 , 1 , 2 , … , N − 1 } {\displaystyle k\in \{0,1,2,\dots ,N-1\}} (8) Making these substitutions into equation (6) and observing that the term e + j 2 π k = 1 {\displaystyle e^{+j2\pi k}=1} , equation (6) then takes the following form: y [ N ] = ∑ m = 0 N x [ m ] e − j 2 π k m N {\displaystyle y[N]=\sum _{m=0}^{N}x[m]e^{-j2\pi {\frac {km}{N}}}} (9) We can observe that the right side of equation (9) is extremely similar to the defining formula for DFT term X [ k ] {\displaystyle X[k]} , the DFT term for index number k {\displaystyle k} , but not exactly the same. The summation shown in equation (9) requires N + 1 {\displaystyle N+1} input terms, but only N {\displaystyle N} input terms are available when evaluating a DFT. A simple but inelegant expedient is to extend the input sequence x [ n ] {\displaystyle x[n]} with one more artificial value x [ N ] = 0 {\displaystyle x[N]=0} . We can see from equation (9) that the mathematical effect on the final result is the same as removing term x [ N ] {\displaystyle x[N]} from the summation, thus delivering the intended DFT value. However, there is a more elegant approach that avoids the extra filter pass. From equation (1), we can note that when the extended input term x [ N ] = 0 {\displaystyle x[N]=0} is used in the final step, s [ N ] = 2 cos ⁡ ( ω 0 ) s [ N − 1 ] − s [ N − 2 ] {\displaystyle s[N]=2\cos(\omega _{0})s[N-1]-s[N-2]} (10) Thus, the algorithm can be completed as follows: terminate the IIR filter after processing input term x [ N − 1 ] {\displaystyle x[N-1]} , apply equation (10) to construct s [ N ] {\displaystyle s[N]} from the prior outputs s [ N − 1 ] {\displaystyle s[N-1]} and s [ N − 2 ] {\displaystyle s[N-2]} , apply equation (2) with the calculated s [ N ] {\displaystyle s[N]} value and with s [ N − 1 ] {\displaystyle s[N-1]} produced by the final direct calculation of the filter. The last two mathematical operations are simplified by combining them algebraically: y [ N ] = e + j ω 0 s [ N − 1 ] − s [ N − 2 ] {\displaystyle y[N]=e^{+j\omega _{0}}s[N-1]-s[N-2]} (11) Note that stopping the filter updates at term N − 1 {\displaystyle N-1} and immediately applying equation (2) rather than equation (11) misses the final filter state updates, yielding a result with incorrect phase. The particular filtering structure chosen for the Goertzel algorithm is the key to its efficient DFT calculations. We can observe that only one output value y [ N ] {\displaystyle y[N]} is used for calculating the DFT, so calculations for all the other output terms are omitted.
Since the FIR filter is not calculated, the IIR stage calculations s [ 0 ] , s [ 1 ] {\displaystyle s[0],s[1]} , etc. can be discarded immediately after updating the first stage's internal state. This seems to leave a paradox: to complete the algorithm, the FIR filter stage must be evaluated once using the final two outputs from the IIR filter stage, while for computational efficiency the IIR filter iteration discards its output values. This is where the properties of the direct-form filter structure are applied. The two internal state variables of the IIR filter provide the last two values of the IIR filter output, which are the terms required to evaluate the FIR filter stage. == Applications == === Power-spectrum terms === Examining equation (6), a final IIR filter pass to calculate term y [ N ] {\displaystyle y[N]} using a supplemental input value x [ N ] = 0 {\displaystyle x[N]=0} applies a complex multiplier of magnitude 1 to the previous term y [ N − 1 ] {\displaystyle y[N-1]} . Consequently, y [ N ] {\displaystyle y[N]} and y [ N − 1 ] {\displaystyle y[N-1]} represent equivalent signal power. It is equally valid to apply equation (11) and calculate the signal power from term y [ N ] {\displaystyle y[N]} or to apply equation (2) and calculate the signal power from term y [ N − 1 ] {\displaystyle y[N-1]} . Both cases lead to the following expression for the signal power represented by DFT term X [ k ] {\displaystyle X[k]} : | X [ k ] | 2 = s [ N − 1 ] 2 + s [ N − 2 ] 2 − 2 cos ⁡ ( ω 0 ) s [ N − 1 ] s [ N − 2 ] {\displaystyle |X[k]|^{2}=s[N-1]^{2}+s[N-2]^{2}-2\cos(\omega _{0})s[N-1]s[N-2]} In the pseudocode below, the real-valued input data is stored in the array x and the variables sprev and sprev2 temporarily store output history from the IIR filter. Nterms is the number of samples in the array, and Kterm corresponds to the frequency of interest, multiplied by the sampling period. Nterms defined here Kterm selected here ω := 2 × π × Kterm / Nterms coeff := 2 × cos(ω) sprev := 0 sprev2 := 0 for each index n in range 0 to Nterms-1 do s := x[n] + coeff × sprev - sprev2 sprev2 := sprev sprev := s end power := sprev × sprev + sprev2 × sprev2 - coeff × sprev × sprev2 It is possible to organise the computations so that incoming samples are delivered singly to a software object that maintains the filter state between updates, with the final power result accessed after the other processing is done. === Single DFT term with real-valued arithmetic === The case of real-valued input data arises frequently, especially in embedded systems where the input streams result from direct measurements of physical processes. When the input data are real-valued, the filter internal state variables sprev and sprev2 can be observed also to be real-valued; consequently, no complex arithmetic is required in the first IIR stage. Optimizing for real-valued arithmetic typically is as simple as applying appropriate real-valued data types for the variables. After the calculations using input term x [ N − 1 ] {\displaystyle x[N-1]} are done and the filter iterations are terminated, equation (11) must be applied to evaluate the DFT term. The final calculation uses complex-valued arithmetic, but this can be converted into real-valued arithmetic by separating real and imaginary terms: X [ k ] = y [ N ] = ( cos ⁡ ( ω 0 ) s [ N − 1 ] − s [ N − 2 ] ) + j sin ⁡ ( ω 0 ) s [ N − 1 ] {\displaystyle X[k]=y[N]={\bigl (}\cos(\omega _{0})s[N-1]-s[N-2]{\bigr )}+j\sin(\omega _{0})s[N-1]} Comparing to the power-spectrum application, the only difference is the calculation used to finish: (Same IIR filter calculations as in the signal power implementation) cr := cos(ω) ci := sin(ω) XKreal := sprev × cr - sprev2 XKimag := sprev × ci === Phase detection === This application requires the same evaluation of DFT term X [ k ] {\displaystyle X[k]} , as discussed in the previous section, using a real-valued or complex-valued input stream.
Then the signal phase can be evaluated as phase = arg ⁡ X [ k ] = arctan ⁡ ( Im ⁡ { X [ k ] } / Re ⁡ { X [ k ] } ) {\displaystyle {\text{phase}}=\arg X[k]=\arctan \left(\operatorname {Im} \{X[k]\}/\operatorname {Re} \{X[k]\}\right)} , taking appropriate precautions for singularities, quadrant, and so forth when computing the inverse tangent function. === Complex signals in real arithmetic === Since complex signals decompose linearly into real and imaginary parts, the Goertzel algorithm can be computed in real arithmetic separately over the sequence of real parts, yielding y r [ n ] {\displaystyle y_{\text{r}}[n]} , and over the sequence of imaginary parts, yielding y i [ n ] {\displaystyle y_{\text{i}}[n]} . After that, the two complex-valued partial results can be recombined: y [ n ] = y r [ n ] + j y i [ n ] {\displaystyle y[n]=y_{\text{r}}[n]+jy_{\text{i}}[n]} == Computational complexity == According to computational complexity theory, computing a set of M {\displaystyle M} DFT terms using M {\displaystyle M} applications of the Goertzel algorithm on a data set with N {\displaystyle N} values with a "cost per operation" of K {\displaystyle K} has complexity O ( K N M ) {\displaystyle O(KNM)} . To compute a single DFT bin X ( f ) {\displaystyle X(f)} for a complex input sequence of length N {\displaystyle N} , the Goertzel algorithm requires 2 N {\displaystyle 2N} multiplications and 4 N {\displaystyle 4N} additions/subtractions within the loop, as well as 4 multiplications and 4 final additions/subtractions, for a total of 2 N + 4 {\displaystyle 2N+4} multiplications and 4 N + 4 {\displaystyle 4N+4} additions/subtractions. This is repeated for each of the M {\displaystyle M} frequencies. In contrast, using an FFT on a data set with N {\displaystyle N} values has complexity O ( K N log 2 ⁡ ( N ) ) {\displaystyle O(KN\log _{2}(N))} . This is harder to apply directly because it depends on the FFT algorithm used, but a typical example is a radix-2 FFT, which requires 2 log 2 ⁡ ( N ) {\displaystyle 2\log _{2}(N)} multiplications and 3 log 2 ⁡ ( N ) {\displaystyle 3\log _{2}(N)} additions/subtractions per DFT bin, for each of the N {\displaystyle N} bins. In the complexity order expressions, when the number of calculated terms M {\displaystyle M} is smaller than log ⁡ N {\displaystyle \log N} , the advantage of the Goertzel algorithm is clear. But because FFT code is comparatively complex, the "cost per unit of work" factor K {\displaystyle K} is often larger for an FFT, and the practical advantage favours the Goertzel algorithm even for M {\displaystyle M} several times larger than log 2 ⁡ ( N ) {\displaystyle \log _{2}(N)} . As a rule-of-thumb for determining whether a radix-2 FFT or a Goertzel algorithm is more efficient, adjust the number of terms N {\displaystyle N} in the data set upward to the nearest exact power of 2, calling this N 2 {\displaystyle N_{2}} , and the Goertzel algorithm is likely to be faster if M ≤ 5 N 2 6 N log 2 ⁡ ( N 2 ) {\displaystyle M\leq {\frac {5N_{2}}{6N}}\log _{2}(N_{2})} . FFT implementations and processing platforms have a significant impact on the relative performance. Some FFT implementations perform internal complex-number calculations to generate coefficients on-the-fly, significantly increasing their "cost K per unit of work." FFT and DFT algorithms can use tables of pre-computed coefficient values for better numerical efficiency, but this requires more accesses to coefficient values buffered in external memory, which can lead to increased cache contention that counters some of the numerical advantage. Both algorithms gain approximately a factor of 2 efficiency when using real-valued rather than complex-valued input data.
However, these gains are natural for the Goertzel algorithm but will not be achieved for the FFT without using certain algorithm variants specialised for transforming real-valued data. == See also == Bluestein's FFT algorithm (chirp-Z) Frequency-shift keying (FSK) Phase-shift keying (PSK) == References == == Further reading == Proakis, J. G.; Manolakis, D. G. (1996), Digital Signal Processing: Principles, Algorithms, and Applications, Upper Saddle River, NJ: Prentice Hall, pp. 480–481, Bibcode:1996dspp.book.....P == External links == Goertzel Algorithm at the Wayback Machine (archived 2018-06-28) A DSP algorithm for frequency analysis The Goertzel Algorithm by Kevin Banks Analysis of the Goertzel Algorithm by Uwe Beis in which he compares it to analog 2nd order Chebyshev low pass filter
Wikipedia/Goertzel_algorithm
Numerical analysis is the study of algorithms that use numerical approximation (as opposed to symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics). It is the study of numerical methods that attempt to find approximate solutions of problems rather than the exact ones. Numerical analysis finds application in all fields of engineering and the physical sciences, and in the 21st century also the life and social sciences like economics, medicine, business and even the arts. Current growth in computing power has enabled the use of more complex numerical analysis, providing detailed and realistic mathematical models in science and engineering. Examples of numerical analysis include: ordinary differential equations as found in celestial mechanics (predicting the motions of planets, stars and galaxies), numerical linear algebra in data analysis, and stochastic differential equations and Markov chains for simulating living cells in medicine and biology. Before modern computers, numerical methods often relied on hand interpolation formulas, using data from large printed tables. Since the mid-20th century, computers calculate the required functions instead, but many of the same formulas continue to be used in software algorithms. The numerical point of view goes back to the earliest mathematical writings. A tablet from the Yale Babylonian Collection (YBC 7289) gives a sexagesimal numerical approximation of the square root of 2, the length of the diagonal in a unit square. Numerical analysis continues this long tradition: rather than giving exact symbolic answers, which can be applied to real-world measurements only after being translated into digits, it gives approximate solutions within specified error bounds. == Applications == The overall goal of the field of numerical analysis is the design and analysis of techniques to give approximate but accurate solutions to a wide variety of hard problems, many of which are infeasible to solve symbolically: Advanced numerical methods are essential in making numerical weather prediction feasible. Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of ordinary differential equations. Car companies can improve the crash safety of their vehicles by using computer simulations of car crashes. Such simulations essentially consist of solving partial differential equations numerically. In the financial field, hedge funds (private investment funds) and other financial institutions use quantitative finance tools from numerical analysis to attempt to calculate the value of stocks and derivatives more precisely than other market participants. Airlines use sophisticated optimization algorithms to decide ticket prices, airplane and crew assignments and fuel needs. Historically, such algorithms were developed within the overlapping field of operations research. Insurance companies use numerical programs for actuarial analysis. == History == The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was already in use more than 2000 years ago. Many great mathematicians of the past were preoccupied by numerical analysis, as is obvious from the names of important algorithms like Newton's method, Lagrange interpolation polynomial, Gaussian elimination, or Euler's method. The origins of modern numerical analysis are often linked to a 1947 paper by John von Neumann and Herman Goldstine, but others consider modern numerical analysis to go back to work by E.
T. Whittaker in 1912. To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients. Using these tables, often calculated out to 16 decimal places or more for some functions, one could look up values to plug into the formulas given and achieve very good numerical estimates of some functions. The canonical work in the field is the NIST publication edited by Abramowitz and Stegun, a 1000-plus page book of a very large number of commonly used formulas and functions and their values at many points. The function values are no longer very useful when a computer is available, but the large listing of formulas can still be very handy. The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done. The Leslie Fox Prize for Numerical Analysis was initiated in 1985 by the Institute of Mathematics and its Applications. == Key concepts == === Direct and iterative methods === Direct methods compute the solution to a problem in a finite number of steps. These methods would give the precise answer if they were performed in infinite precision arithmetic. Examples include Gaussian elimination, the QR factorization method for solving systems of linear equations, and the simplex method of linear programming. In practice, finite precision is used and the result is an approximation of the true solution (assuming stability). In contrast to direct methods, iterative methods are not expected to terminate in a finite number of steps, even if infinite precision were possible. Starting from an initial guess, iterative methods form successive approximations that converge to the exact solution only in the limit. A convergence test, often involving the residual, is specified in order to decide when a sufficiently accurate solution has (hopefully) been found. Even using infinite precision arithmetic these methods would not reach the solution within a finite number of steps (in general). Examples include Newton's method, the bisection method, and Jacobi iteration. In computational matrix algebra, iterative methods are generally needed for large problems. Iterative methods are more common than direct methods in numerical analysis. Some methods are direct in principle but are usually used as though they were not, e.g. GMRES and the conjugate gradient method. For these methods the number of steps needed to obtain the exact solution is so large that an approximation is accepted in the same manner as for an iterative method. As an example, consider the problem of solving 3x³ + 4 = 28 for the unknown quantity x. For the direct method, subtract 4 from both sides and divide by 3 to obtain x³ = 8, so x = 2 exactly. For the iterative method, apply the bisection method to f(x) = 3x³ − 24. The initial values are a = 0, b = 3, f(a) = −24, f(b) = 57. The first few iterations give:
a        b        mid      f(mid)
0        3        1.5      −13.875
1.5      3        2.25     10.171875
1.5      2.25     1.875    −4.224609375
1.875    2.25     2.0625   2.321044921875
From this table it can be concluded that the solution is between 1.875 and 2.0625. The algorithm might return any number in that range with an error less than 0.2. === Conditioning === Ill-conditioned problem: Take the function f(x) = 1/(x − 1). Note that f(1.1) = 10 and f(1.001) = 1000: a change in x of less than 0.1 turns into a change in f(x) of nearly 1000. Evaluating f(x) near x = 1 is an ill-conditioned problem.
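A small numerical illustration of this sensitivity, as a Python sketch (the sample points are the ones quoted above; nothing here is specific to any library):

def f(x):
    # f has a pole at x = 1, so evaluation near x = 1 is ill-conditioned.
    return 1.0 / (x - 1.0)

# An input change of less than 0.1 produces an output change of nearly 1000:
print(f(1.1))    # approximately 10, as quoted above
print(f(1.001))  # approximately 1000; tiny round-off in the inputs is visibly amplified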
Well-conditioned problem: By contrast, evaluating the same function f(x) = 1/(x − 1) near x = 10 is a well-conditioned problem. For instance, f(10) = 1/9 ≈ 0.111 and f(11) = 0.1: a modest change in x leads to a modest change in f(x). === Discretization === Furthermore, a continuous problem must sometimes be replaced by a discrete problem whose solution is known to approximate that of the continuous problem; this process is called 'discretization'. For example, the solution of a differential equation is a function. This function must be represented by a finite amount of data, for instance by its value at a finite number of points in its domain, even though this domain is a continuum. == Generation and propagation of errors == The study of errors forms an important part of numerical analysis. There are several ways in which error can be introduced in the solution of the problem. === Round-off === Round-off errors arise because it is impossible to represent all real numbers exactly on a machine with finite memory (which is what all practical digital computers are). === Truncation and discretization error === Truncation errors are committed when an iterative method is terminated or a mathematical procedure is approximated and the approximate solution differs from the exact solution. Similarly, discretization induces a discretization error because the solution of the discrete problem does not coincide with the solution of the continuous problem. In the example above, to compute the solution of 3 x 3 + 4 = 28 {\displaystyle 3x^{3}+4=28} , after ten iterations, the calculated root is roughly 1.99. Therefore, the truncation error is roughly 0.01. Once an error is generated, it propagates through the calculation. For example, the operation + on a computer is inexact. A calculation of the type a + b + c + d + e {\displaystyle a+b+c+d+e} is even more inexact. A truncation error is created when a mathematical procedure is approximated. To integrate a function exactly, an infinite sum of regions must be found, but numerically only a finite sum of regions can be found, and hence the approximation of the exact solution. Similarly, to differentiate a function, the differential element approaches zero, but numerically only a nonzero value of the differential element can be chosen. === Numerical stability and well-posed problems === An algorithm is called numerically stable if an error, whatever its cause, does not grow to be much larger during the calculation. This happens if the problem is well-conditioned, meaning that the solution changes by only a small amount if the problem data are changed by a small amount. To the contrary, if a problem is 'ill-conditioned', then any small error in the data will grow to be a large error. Both the original problem and the algorithm used to solve that problem can be well-conditioned or ill-conditioned, and any combination is possible. So an algorithm that solves a well-conditioned problem may be either numerically stable or numerically unstable. An art of numerical analysis is to find a stable algorithm for solving a well-posed mathematical problem. == Areas of study == The field of numerical analysis includes many sub-disciplines. Some of the major ones are: === Computing values of functions === One of the simplest problems is the evaluation of a function at a given point. The most straightforward approach, of just plugging the number into the formula, is sometimes not very efficient.
For polynomials, a better approach is using the Horner scheme, since it reduces the necessary number of multiplications and additions. Generally, it is important to estimate and control round-off errors arising from the use of floating-point arithmetic. === Interpolation, extrapolation, and regression === Interpolation solves the following problem: given the value of some unknown function at a number of points, what value does that function have at some other point between the given points? Extrapolation is very similar to interpolation, except that now the value of the unknown function at a point which is outside the given points must be found. Regression is also similar, but it takes into account that the data are imprecise. Given some points, and a measurement of the value of some function at these points (with an error), the unknown function can be found. The least-squares method is one way to achieve this. === Solving equations and systems of equations === Another fundamental problem is computing the solution of some given equation. Two cases are commonly distinguished, depending on whether the equation is linear or not. For instance, the equation 2 x + 5 = 3 {\displaystyle 2x+5=3} is linear while 2 x 2 + 5 = 3 {\displaystyle 2x^{2}+5=3} is not. Much effort has been put into the development of methods for solving systems of linear equations. Standard direct methods, i.e., methods that use some matrix decomposition, are Gaussian elimination, LU decomposition, Cholesky decomposition for symmetric (or Hermitian) positive-definite matrices, and QR decomposition for non-square matrices. Iterative methods such as the Jacobi method, Gauss–Seidel method, successive over-relaxation and conjugate gradient method are usually preferred for large systems. General iterative methods can be developed using a matrix splitting. Root-finding algorithms are used to solve nonlinear equations (they are so named since a root of a function is an argument for which the function yields zero). If the function is differentiable and the derivative is known, then Newton's method is a popular choice. Linearization is another technique for solving nonlinear equations. === Solving eigenvalue or singular value problems === Several important problems can be phrased in terms of eigenvalue decompositions or singular value decompositions. For instance, the spectral image compression algorithm is based on the singular value decomposition. The corresponding tool in statistics is called principal component analysis. === Optimization === Optimization problems ask for the point at which a given function is maximized (or minimized). Often, the point also has to satisfy some constraints. The field of optimization is further split into several subfields, depending on the form of the objective function and the constraints. For instance, linear programming deals with the case that both the objective function and the constraints are linear. A famous method in linear programming is the simplex method. The method of Lagrange multipliers can be used to reduce optimization problems with constraints to unconstrained optimization problems. === Evaluating integrals === Numerical integration, in some instances also known as numerical quadrature, asks for the value of a definite integral. Popular methods use one of the Newton–Cotes formulas (like the midpoint rule or Simpson's rule) or Gaussian quadrature.
These methods rely on a "divide and conquer" strategy, whereby an integral on a relatively large set is broken down into integrals on smaller sets. In higher dimensions, where these methods become prohibitively expensive in terms of computational effort, one may use Monte Carlo or quasi-Monte Carlo methods (see Monte Carlo integration), or, in modestly large dimensions, the method of sparse grids. === Differential equations === Numerical analysis is also concerned with computing (in an approximate way) the solution of differential equations, both ordinary differential equations and partial differential equations. Partial differential equations are solved by first discretizing the equation, bringing it into a finite-dimensional subspace. This can be done by a finite element method, a finite difference method, or (particularly in engineering) a finite volume method. The theoretical justification of these methods often involves theorems from functional analysis. This reduces the problem to the solution of an algebraic equation. == Software == Since the late twentieth century, most algorithms are implemented in a variety of programming languages. The Netlib repository contains various collections of software routines for numerical problems, mostly in Fortran and C. Commercial products implementing many different numerical algorithms include the IMSL and NAG libraries; a free-software alternative is the GNU Scientific Library. Over the years the Royal Statistical Society published numerous algorithms in its Applied Statistics (code for these "AS" functions is here); ACM similarly, in its Transactions on Mathematical Software ("TOMS" code is here). The Naval Surface Warfare Center several times published its Library of Mathematics Subroutines (code here). There are several popular numerical computing applications such as MATLAB, TK Solver, S-PLUS, and IDL as well as free and open-source alternatives such as FreeMat, Scilab, GNU Octave (similar to Matlab), and IT++ (a C++ library). There are also programming languages such as R (similar to S-PLUS), Julia, and Python with libraries such as NumPy, SciPy and SymPy. Performance varies widely: while vector and matrix operations are usually fast, scalar loops may vary in speed by more than an order of magnitude. Many computer algebra systems such as Mathematica also benefit from the availability of arbitrary-precision arithmetic which can provide more accurate results. Also, any spreadsheet software can be used to solve simple problems relating to numerical analysis. Excel, for example, has hundreds of available functions, including for matrices, which may be used in conjunction with its built in "solver". == See also == == Notes == == References == === Citations === === Sources === == External links == === Journals === Numerische Mathematik, volumes 1–..., Springer, 1959– volumes 1–66, 1959–1994 (searchable; pages are images). (in English and German) Journal on Numerical Analysis (SINUM), volumes 1–..., SIAM, 1964– === Online texts === "Numerical analysis", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Numerical Recipes, William H. Press (free, downloadable previous editions) First Steps in Numerical Analysis (archived), R.J.Hosking, S.Joe, D.C.Joyce, and J.C.Turner CSEP (Computational Science Education Project), U.S. Department of Energy (archived 2017-08-01) Numerical Methods, ch 3. in the Digital Library of Mathematical Functions Numerical Interpolation, Differentiation and Integration, ch 25. 
in the Handbook of Mathematical Functions (Abramowitz and Stegun) Tobin A. Driscoll and Richard J. Braun: Fundamentals of Numerical Computation (free online version) === Online course material === Numerical Methods (Archived 28 July 2009 at the Wayback Machine), Stuart Dalziel University of Cambridge Lectures on Numerical Analysis, Dennis Deturck and Herbert S. Wilf University of Pennsylvania Numerical methods, John D. Fenton University of Karlsruhe Numerical Methods for Physicists, Anthony O’Hare Oxford University Lectures in Numerical Analysis (archived), R. Radok Mahidol University Introduction to Numerical Analysis for Engineering, Henrik Schmidt Massachusetts Institute of Technology Numerical Analysis for Engineering, D. W. Harder University of Waterloo Introduction to Numerical Analysis, Doron Levy University of Maryland Numerical Analysis - Numerical Methods (archived), John H. Mathews California State University Fullerton
Wikipedia/Numerical_algorithm
The short-time Fourier transform (STFT) is a Fourier-related transform used to determine the sinusoidal frequency and phase content of local sections of a signal as it changes over time. In practice, the procedure for computing STFTs is to divide a longer time signal into shorter segments of equal length and then compute the Fourier transform separately on each shorter segment. This reveals the Fourier spectrum on each shorter segment. One then usually plots the changing spectra as a function of time, known as a spectrogram or waterfall plot, such as is commonly used in software-defined radio (SDR) based spectrum displays. Full-bandwidth displays covering the whole range of an SDR commonly use fast Fourier transforms (FFTs) with 2^24 points on desktop computers. == Forward STFT == === Continuous-time STFT === In the continuous-time case, the function to be transformed is simply multiplied by a window function which is nonzero for only a short period of time. The Fourier transform (a one-dimensional function) of the resulting signal is taken, then the window is slid along the time axis until the end, resulting in a two-dimensional representation of the signal. Mathematically, this is written as: S T F T { x ( t ) } ( τ , ω ) ≡ X ( τ , ω ) = ∫ − ∞ ∞ x ( t ) w ( t − τ ) e − i ω t d t {\displaystyle \mathbf {STFT} \{x(t)\}(\tau ,\omega )\equiv X(\tau ,\omega )=\int _{-\infty }^{\infty }x(t)w(t-\tau )e^{-i\omega t}\,dt} where w ( τ ) {\displaystyle w(\tau )} is the window function, commonly a Hann window or Gaussian window centered around zero, and x ( t ) {\displaystyle x(t)} is the signal to be transformed (note the difference between the window function w {\displaystyle w} and the frequency ω {\displaystyle \omega } ). X ( τ , ω ) {\displaystyle X(\tau ,\omega )} is essentially the Fourier transform of x ( t ) w ( t − τ ) {\displaystyle x(t)w(t-\tau )} , a complex function representing the phase and magnitude of the signal over time and frequency. Often phase unwrapping is employed along either or both the time axis, τ {\displaystyle \tau } , and frequency axis, ω {\displaystyle \omega } , to suppress any jump discontinuity of the phase result of the STFT. The time index τ {\displaystyle \tau } is normally considered to be "slow" time and usually not expressed in as high resolution as time t {\displaystyle t} . Given that the STFT is essentially a Fourier transform times a window function, the STFT is also called the windowed Fourier transform or the time-dependent Fourier transform. === Discrete-time STFT === In the discrete-time case, the data to be transformed could be broken up into chunks or frames (which usually overlap each other, to reduce artifacts at the boundary). Each chunk is Fourier transformed, and the complex result is added to a matrix, which records magnitude and phase for each point in time and frequency. This can be expressed as: S T F T { x [ n ] } ( m , ω ) ≡ X ( m , ω ) = ∑ n = 0 N − 1 x [ n ] w [ n − m ] e − i ω n {\displaystyle \mathbf {STFT} \{x[n]\}(m,\omega )\equiv X(m,\omega )=\sum _{n=0}^{N-1}x[n]w[n-m]e^{-i\omega n}} likewise, with signal x [ n ] {\displaystyle x[n]} and window w [ n ] {\displaystyle w[n]} . In this case, m is discrete and ω is continuous, but in most typical applications the STFT is performed on a computer using the fast Fourier transform, so both variables are discrete and quantized.
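A minimal NumPy sketch of this frame-by-frame procedure follows; the frame length, hop size, Hann window, and the helper name stft_frames are illustrative choices, not prescriptions from the article.

import numpy as np

def stft_frames(x, frame_len=256, hop=128):
    # Slide a window along x and take the FFT of each windowed frame.
    # Row m of the result holds the spectrum of the frame starting at m*hop.
    w = np.hanning(frame_len)
    starts = range(0, len(x) - frame_len + 1, hop)
    return np.array([np.fft.fft(w * x[s:s + frame_len]) for s in starts])

# Example: a 50 Hz tone sampled at 400 Hz, matching the sample rate used
# in the example later in the article
fs = 400
t = np.arange(4 * fs) / fs
X = stft_frames(np.cos(2 * np.pi * 50 * t))
magnitudes = np.abs(X)   # one row of |X(m, w)| per window position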
The magnitude squared of the STFT yields the spectrogram representation of the power spectral density of the function: spectrogram ⁡ { x ( t ) } ( τ , ω ) ≡ | X ( τ , ω ) | 2 {\displaystyle \operatorname {spectrogram} \{x(t)\}(\tau ,\omega )\equiv |X(\tau ,\omega )|^{2}} See also the modified discrete cosine transform (MDCT), which is also a Fourier-related transform that uses overlapping windows. ==== Sliding DFT ==== If only a small number of ω are desired, or if the STFT is desired to be evaluated for every shift m of the window, then the STFT may be more efficiently evaluated using a sliding DFT algorithm. == Inverse STFT == The STFT is invertible, that is, the original signal can be recovered from the transform by the inverse STFT. The most widely accepted way of inverting the STFT is by using the overlap-add (OLA) method, which also allows for modifications to the STFT complex spectrum. This makes for a versatile signal processing method, referred to as the overlap and add with modifications method. === Continuous-time STFT === Given the width and definition of the window function w(t), we initially require the area of the window function to be scaled so that ∫ − ∞ ∞ w ( τ ) d τ = 1. {\displaystyle \int _{-\infty }^{\infty }w(\tau )\,d\tau =1.} It easily follows that ∫ − ∞ ∞ w ( t − τ ) d τ = 1 ∀ t {\displaystyle \int _{-\infty }^{\infty }w(t-\tau )\,d\tau =1\quad \forall \ t} and x ( t ) = x ( t ) ∫ − ∞ ∞ w ( t − τ ) d τ = ∫ − ∞ ∞ x ( t ) w ( t − τ ) d τ . {\displaystyle x(t)=x(t)\int _{-\infty }^{\infty }w(t-\tau )\,d\tau =\int _{-\infty }^{\infty }x(t)w(t-\tau )\,d\tau .} The continuous Fourier transform is X ( ω ) = ∫ − ∞ ∞ x ( t ) e − i ω t d t . {\displaystyle X(\omega )=\int _{-\infty }^{\infty }x(t)e^{-i\omega t}\,dt.} Substituting x(t) from above: X ( ω ) = ∫ − ∞ ∞ [ ∫ − ∞ ∞ x ( t ) w ( t − τ ) d τ ] e − i ω t d t {\displaystyle X(\omega )=\int _{-\infty }^{\infty }\left[\int _{-\infty }^{\infty }x(t)w(t-\tau )\,d\tau \right]\,e^{-i\omega t}\,dt} = ∫ − ∞ ∞ ∫ − ∞ ∞ x ( t ) w ( t − τ ) e − i ω t d τ d t . {\displaystyle =\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }x(t)w(t-\tau )\,e^{-i\omega t}\,d\tau \,dt.} Swapping order of integration: X ( ω ) = ∫ − ∞ ∞ ∫ − ∞ ∞ x ( t ) w ( t − τ ) e − i ω t d t d τ {\displaystyle X(\omega )=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }x(t)w(t-\tau )\,e^{-i\omega t}\,dt\,d\tau } = ∫ − ∞ ∞ [ ∫ − ∞ ∞ x ( t ) w ( t − τ ) e − i ω t d t ] d τ {\displaystyle =\int _{-\infty }^{\infty }\left[\int _{-\infty }^{\infty }x(t)w(t-\tau )\,e^{-i\omega t}\,dt\right]\,d\tau } = ∫ − ∞ ∞ X ( τ , ω ) d τ . {\displaystyle =\int _{-\infty }^{\infty }X(\tau ,\omega )\,d\tau .} So the Fourier transform can be seen as a sort of phase coherent sum of all of the STFTs of x(t). Since the inverse Fourier transform is x ( t ) = 1 2 π ∫ − ∞ ∞ X ( ω ) e + i ω t d ω , {\displaystyle x(t)={\frac {1}{2\pi }}\int _{-\infty }^{\infty }X(\omega )e^{+i\omega t}\,d\omega ,} then x(t) can be recovered from X(τ,ω) as x ( t ) = 1 2 π ∫ − ∞ ∞ ∫ − ∞ ∞ X ( τ , ω ) e + i ω t d τ d ω . {\displaystyle x(t)={\frac {1}{2\pi }}\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }X(\tau ,\omega )e^{+i\omega t}\,d\tau \,d\omega .} or x ( t ) = ∫ − ∞ ∞ [ 1 2 π ∫ − ∞ ∞ X ( τ , ω ) e + i ω t d ω ] d τ . 
{\displaystyle x(t)=\int _{-\infty }^{\infty }\left[{\frac {1}{2\pi }}\int _{-\infty }^{\infty }X(\tau ,\omega )e^{+i\omega t}\,d\omega \right]\,d\tau .} Comparing to the above, it can be seen that the windowed "grain" or "wavelet" of x(t) is x ( t ) w ( t − τ ) = 1 2 π ∫ − ∞ ∞ X ( τ , ω ) e + i ω t d ω . {\displaystyle x(t)w(t-\tau )={\frac {1}{2\pi }}\int _{-\infty }^{\infty }X(\tau ,\omega )e^{+i\omega t}\,d\omega .} the inverse Fourier transform of X(τ,ω) for τ fixed. With an alternative definition that is valid only in the vicinity of τ, the inverse transform is: x ( t ) = 1 w ( t − τ ) 1 2 π ∫ − ∞ ∞ X ( τ , ω ) e + i ω t d ω . {\displaystyle x(t)={\frac {1}{w(t-\tau )}}{\frac {1}{2\pi }}\int _{-\infty }^{\infty }X(\tau ,\omega )e^{+i\omega t}\,d\omega .} In general, the window function w ( t ) {\displaystyle w(t)} has the following properties: (a) even symmetry: w ( t ) = w ( − t ) {\displaystyle w(t)=w(-t)} ; (b) non-increasing (for positive time): w ( t ) ≥ w ( s ) {\displaystyle w(t)\geq w(s)} if | t | ≤ | s | {\displaystyle |t|\leq |s|} ; (c) compact support: w ( t ) {\displaystyle w(t)} is equal to zero when |t| is large. == Resolution issues == One of the pitfalls of the STFT is that it has a fixed resolution. The width of the windowing function relates to how the signal is represented: it determines whether there is good frequency resolution (frequency components close together can be separated) or good time resolution (the time at which frequencies change). A wide window gives better frequency resolution but poor time resolution. A narrower window gives good time resolution but poor frequency resolution. These are called narrowband and wideband transforms, respectively. This is one of the reasons for the creation of the wavelet transform and multiresolution analysis, which can give good time resolution for high-frequency events and good frequency resolution for low-frequency events, the combination best suited for many real signals. This property is related to the Heisenberg uncertainty principle, but not directly – see Gabor limit for discussion. The product of the standard deviation in time and frequency is limited. The boundary of the uncertainty principle (best simultaneous resolution of both) is reached with a Gaussian window function (or mask function), as the Gaussian minimizes the Fourier uncertainty principle. This is called the Gabor transform (and with modifications for multiresolution becomes the Morlet wavelet transform). One can consider the STFT for varying window size as a two-dimensional domain (time and frequency), as illustrated in the example below, which can be calculated by varying the window size. However, this is no longer a strictly time-frequency representation – the kernel is not constant over the entire signal. === Examples === Starting from the STFT definition: X ( t , f ) = ∫ − ∞ ∞ w ( t − τ ) x ( τ ) e − j 2 π f τ d τ {\displaystyle X(t,f)=\int _{-\infty }^{\infty }w(t-\tau )x(\tau )e^{-j2\pi f\tau }d\tau } We can have a simple example: w(t) = 1 for |t| less than or equal to B, and w(t) = 0 otherwise, where B is the half-width of the window. Now the short-time Fourier transform reduces to X ( t , f ) = ∫ t − B t + B x ( τ ) e − j 2 π f τ d τ {\displaystyle X(t,f)=\int _{t-B}^{t+B}x(\tau )e^{-j2\pi f\tau }d\tau } Another example uses the following sample signal x ( t ) {\displaystyle x(t)} , composed of a set of four sinusoidal waveforms joined together in sequence. Each waveform is only composed of one of four frequencies (10, 25, 50, 100 Hz).
The definition of x ( t ) {\displaystyle x(t)} is: x ( t ) = { cos ⁡ ( 2 π 10 t ) 0 s ≤ t < 5 s cos ⁡ ( 2 π 25 t ) 5 s ≤ t < 10 s cos ⁡ ( 2 π 50 t ) 10 s ≤ t < 15 s cos ⁡ ( 2 π 100 t ) 15 s ≤ t < 20 s {\displaystyle x(t)={\begin{cases}\cos(2\pi 10t)&0\,\mathrm {s} \leq t<5\,\mathrm {s} \\\cos(2\pi 25t)&5\,\mathrm {s} \leq t<10\,\mathrm {s} \\\cos(2\pi 50t)&10\,\mathrm {s} \leq t<15\,\mathrm {s} \\\cos(2\pi 100t)&15\,\mathrm {s} \leq t<20\,\mathrm {s} \\\end{cases}}} Then it is sampled at 400 Hz. The following spectrograms were produced: The 25 ms window allows us to identify a precise time at which the signals change but the precise frequencies are difficult to identify. At the other end of the scale, the 1000 ms window allows the frequencies to be precisely seen but the time between frequency changes is blurred. Other examples: w ( t ) = exp ⁡ ( − σ t 2 ) {\displaystyle w(t)=\exp(-\sigma t^{2})} Normally we call exp ⁡ ( − σ t 2 ) {\displaystyle \exp(-\sigma t^{2})} a Gaussian function or Gabor function. When we use it, the short-time Fourier transform is called the "Gabor transform". === Explanation === It can also be explained with reference to the sampling and Nyquist frequency. Take a window of N samples from an arbitrary real-valued signal at sampling rate fs. Taking the Fourier transform produces N complex coefficients. Of these coefficients only half are useful (the last N/2 being the complex conjugate of the first N/2 in reverse order, as this is a real-valued signal). These N/2 coefficients represent the frequencies 0 to fs/2 (Nyquist) and two consecutive coefficients are spaced apart by fs/N Hz. To increase the frequency resolution of the window, the frequency spacing of the coefficients needs to be reduced. There are only two variables, but decreasing fs (and keeping N constant) will cause the window size to increase, since there are now fewer samples per unit time. The other alternative is to increase N, but this again causes the window size to increase. So any attempt to increase the frequency resolution causes a larger window size and therefore a reduction in time resolution, and vice versa. == Rayleigh frequency == As the Nyquist frequency is a limitation in the maximum frequency that can be meaningfully analysed, so is the Rayleigh frequency a limitation on the minimum frequency. The Rayleigh frequency is the minimum frequency that can be resolved by a finite-duration time window. Given a time window that is Τ seconds long, the minimum frequency that can be resolved is 1/Τ Hz. The Rayleigh frequency is an important consideration in applications of the short-time Fourier transform (STFT), as well as any other method of harmonic analysis on a signal of finite record-length. == Application == STFTs as well as standard Fourier transforms and other tools are frequently used to analyze music. The spectrogram can, for example, show frequency on the horizontal axis, with the lowest frequencies at left, and the highest at the right. The height of each bar (augmented by color) represents the amplitude of the frequencies within that band. The depth dimension represents time, where each new bar is a separate distinct transform. Audio engineers use this kind of visual to gain information about an audio sample, for example, to locate the frequencies of specific noises (especially when used with greater frequency resolution) or to find frequencies which may be more or less resonant in the space where the signal was recorded.
This information can be used for equalization or tuning other audio effects. == Implementation == Original function X ( t , f ) = ∫ − ∞ ∞ w ( t − τ ) x ( τ ) e − j 2 π f τ d τ {\displaystyle X(t,f)=\int _{-\infty }^{\infty }w(t-\tau )x(\tau )e^{-j2\pi f\tau }d\tau } Converting into the discrete form: t = n Δ t , f = m Δ f , τ = p Δ t {\displaystyle t=n\Delta _{t},f=m\Delta _{f},\tau =p\Delta _{t}} X ( n Δ t , m Δ f ) = ∑ − ∞ ∞ w ( ( n − p ) Δ t ) x ( p Δ t ) e − j 2 π p m Δ t Δ f Δ t {\displaystyle X(n\Delta _{t},m\Delta _{f})=\sum _{-\infty }^{\infty }w((n-p)\Delta _{t})x(p\Delta _{t})e^{-j2\pi pm\Delta _{t}\Delta _{f}}\Delta _{t}} Suppose that w ( t ) ≅ 0 for | t | > B , B Δ t = Q {\displaystyle w(t)\cong 0{\text{ for }}|t|>B,{\frac {B}{\Delta _{t}}}=Q} Then we can write the original function into X ( n Δ t , m Δ f ) = ∑ p = n − Q n + Q w ( ( n − p ) Δ t ) x ( p Δ t ) e − j 2 π p m Δ t Δ f Δ t {\displaystyle X(n\Delta _{t},m\Delta _{f})=\sum _{p=n-Q}^{n+Q}w((n-p)\Delta _{t})x(p\Delta _{t})e^{-j2\pi pm\Delta _{t}\Delta _{f}}\Delta _{t}} === Direct implementation === ==== Constraints ==== a. Nyquist criterion (avoiding the aliasing effect): Δ t < 1 2 Ω {\displaystyle \Delta _{t}<{\frac {1}{2\Omega }}} , where Ω {\displaystyle \Omega } is the bandwidth of x ( τ ) w ( t − τ ) {\displaystyle x(\tau )w(t-\tau )} === FFT-based method === ==== Constraint ==== a. Δ t Δ f = 1 N {\displaystyle \Delta _{t}\Delta _{f}={\tfrac {1}{N}}} , where N {\displaystyle N} is an integer b. N ≥ 2 Q + 1 {\displaystyle N\geq 2Q+1} c. Nyquist criterion (avoiding the aliasing effect): Δ t < 1 2 Ω {\displaystyle \Delta _{t}<{\frac {1}{2\Omega }}} , Ω {\displaystyle \Omega } is the bandwidth of x ( τ ) w ( t − τ ) {\displaystyle x(\tau )w(t-\tau )} X ( n Δ t , m Δ f ) = ∑ p = n − Q n + Q w ( ( n − p ) Δ t ) x ( p Δ t ) e − 2 π j p m N Δ t {\displaystyle X(n\Delta _{t},m\Delta _{f})=\sum _{p=n-Q}^{n+Q}w((n-p)\Delta _{t})x(p\Delta _{t})e^{-{\frac {2\pi jpm}{N}}}\Delta _{t}} if we have q = p − ( n − Q ) , then p = ( n − Q ) + q {\displaystyle {\text{if we have }}q=p-(n-Q),{\text{ then }}p=(n-Q)+q} X ( n Δ t , m Δ f ) = Δ t e 2 π j ( Q − n ) m N ∑ q = 0 N − 1 x 1 ( q ) e − 2 π j q m N {\displaystyle X(n\Delta _{t},m\Delta _{f})=\Delta _{t}e^{\frac {2\pi j(Q-n)m}{N}}\sum _{q=0}^{N-1}x_{1}(q)e^{-{\frac {2\pi jqm}{N}}}} where x 1 ( q ) = { w ( ( Q − q ) Δ t ) x ( ( n − Q + q ) Δ t ) 0 ≤ q ≤ 2 Q 0 2 Q < q < N {\displaystyle {\text{where }}x_{1}(q)={\begin{cases}w((Q-q)\Delta _{t})x((n-Q+q)\Delta _{t})&0\leq q\leq 2Q\\0&2Q<q<N\end{cases}}} === Recursive method === ==== Constraint ==== a. Δ t Δ f = 1 N {\displaystyle \Delta _{t}\Delta _{f}={\tfrac {1}{N}}} , where N {\displaystyle N} is an integer b. N ≥ 2 Q + 1 {\displaystyle N\geq 2Q+1} c. Nyquist criterion (avoiding the aliasing effect): Δ t < 1 2 Ω {\displaystyle \Delta _{t}<{\frac {1}{2\Omega }}} , Ω {\displaystyle \Omega } is the bandwidth of x ( τ ) w ( t − τ ) {\displaystyle x(\tau )w(t-\tau )} d. 
Only for implementing the rectangular-STFT Rectangular window imposes the constraint w ( ( n − p ) Δ t ) = 1 {\displaystyle w((n-p)\Delta _{t})=1} Substitution gives: X ( n Δ t , m Δ f ) = ∑ p = n − Q n + Q w ( ( n − p ) Δ t ) x ( p Δ t ) e − j 2 π p m N Δ t = ∑ p = n − Q n + Q x ( p Δ t ) e − j 2 π p m N Δ t {\displaystyle {\begin{aligned}X(n\Delta _{t},m\Delta _{f})&=\sum _{p=n-Q}^{n+Q}w((n-p)\Delta _{t})&x(p\Delta _{t})e^{-{\frac {j2\pi pm}{N}}}\Delta _{t}\\&=\sum _{p=n-Q}^{n+Q}&x(p\Delta _{t})e^{-{\frac {j2\pi pm}{N}}}\Delta _{t}\\\end{aligned}}} Change of variable n-1 for n: X ( ( n − 1 ) Δ t , m Δ f ) = ∑ p = n − 1 − Q n − 1 + Q x ( p Δ t ) e − j 2 π p m N Δ t {\displaystyle X((n-1)\Delta _{t},m\Delta _{f})=\sum _{p=n-1-Q}^{n-1+Q}x(p\Delta _{t})e^{-{\frac {j2\pi pm}{N}}}\Delta _{t}} Calculate X ( min n Δ t , m Δ f ) {\displaystyle X(\min {n}\Delta _{t},m\Delta _{f})} by the N-point FFT: X ( n 0 Δ t , m Δ f ) = Δ t e j 2 π ( Q − n 0 ) m N ∑ q = 0 N − 1 x 1 ( q ) e − j 2 π q m N , n 0 = min ( n ) {\displaystyle X(n_{0}\Delta _{t},m\Delta _{f})=\Delta _{t}e^{\frac {j2\pi (Q-n_{0})m}{N}}\sum _{q=0}^{N-1}x_{1}(q)e^{-j{\frac {2\pi qm}{N}}},\qquad n_{0}=\min {(n)}} where x 1 ( q ) = { x ( ( n − Q + q ) Δ t ) q ≤ 2 Q 0 q > 2 Q {\displaystyle x_{1}(q)={\begin{cases}x((n-Q+q)\Delta _{t})&q\leq 2Q\\0&q>2Q\end{cases}}} Applying the recursive formula to calculate X ( n Δ t , m Δ f ) {\displaystyle X(n\Delta _{t},m\Delta _{f})} X ( n Δ t , m Δ f ) = X ( ( n − 1 ) Δ t , m Δ f ) − x ( ( n − Q − 1 ) Δ t ) e − j 2 π ( n − Q − 1 ) m N Δ t + x ( ( n + Q ) Δ t ) e − j 2 π ( n + Q ) m N Δ t {\displaystyle X(n\Delta _{t},m\Delta _{f})=X((n-1)\Delta _{t},m\Delta _{f})-x((n-Q-1)\Delta _{t})e^{-{\frac {j2\pi (n-Q-1)m}{N}}}\Delta _{t}+x((n+Q)\Delta _{t})e^{-{\frac {j2\pi (n+Q)m}{N}}}\Delta _{t}} === Chirp Z transform === ==== Constraint ==== exp ⁡ ( − j 2 π p m Δ t Δ f ) = exp ⁡ ( − j π p 2 Δ t Δ f ) ⋅ exp ⁡ ( j π ( p − m ) 2 Δ t Δ f ) ⋅ exp ⁡ ( − j π m 2 Δ t Δ f ) {\displaystyle \exp {(-j2\pi pm\Delta _{t}\Delta _{f})}=\exp {(-j\pi p^{2}\Delta _{t}\Delta _{f})}\cdot \exp {(j\pi (p-m)^{2}\Delta _{t}\Delta _{f})}\cdot \exp {(-j\pi m^{2}\Delta _{t}\Delta _{f})}} so X ( n Δ t , m Δ f ) = Δ t ∑ p = n − Q n + Q w ( ( n − p ) Δ t ) x ( p Δ t ) e − j 2 π p m Δ t Δ f {\displaystyle X(n\Delta _{t},m\Delta _{f})=\Delta _{t}\sum _{p=n-Q}^{n+Q}w((n-p)\Delta _{t})x(p\Delta _{t})e^{-j2\pi pm\Delta _{t}\Delta _{f}}} X ( n Δ t , m Δ f ) = Δ t e − j 2 π m 2 Δ t Δ f ∑ p = n − Q n + Q w ( ( n − p ) Δ t ) x ( p Δ t ) e − j π p 2 Δ t Δ f e j π ( p − m ) 2 Δ t Δ f {\displaystyle X(n\Delta _{t},m\Delta _{f})=\Delta _{t}e^{-j2\pi m^{2}\Delta _{t}\Delta _{f}}\sum _{p=n-Q}^{n+Q}w((n-p)\Delta _{t})x(p\Delta _{t})e^{-j\pi p^{2}\Delta _{t}\Delta _{f}}e^{j\pi (p-m)^{2}\Delta _{t}\Delta _{f}}} === Implementation comparison === == See also == Least-squares spectral analysis Spectral density estimation Time-frequency analysis Time-frequency representation Reassignment method Other time-frequency transforms: Cone-shape distribution function Constant-Q transform Fractional Fourier transform Gabor transform Newland transform S transform Wavelet transform Chirplet transform == References == == External links == DiscreteTFDs – software for computing the short-time Fourier transform and other time-frequency distributions Singular Spectral Analysis – MultiTaper Method Toolkit – a free software program to analyze short, noisy time series kSpectra Toolkit for Mac OS X from SpectraWorks Time stretched short time Fourier transform for time frequency 
analysis of ultra wideband signals A BSD-licensed Matlab class to perform STFT and inverse STFT LTFAT – A free (GPL) Matlab / Octave toolbox to work with short-time Fourier transforms and time-frequency analysis Sonogram visible speech – A free (GPL)Freeware for short-time Fourier transforms and time-frequency analysis National Taiwan University, Time-Frequency Analysis and Wavelet Transform 2021, Professor of Jian-Jiun Ding, Department of Electrical Engineering
Wikipedia/Short-time_Fourier_transform
In mathematics and signal processing, the Hilbert transform is a specific singular integral that takes a function u(t) of a real variable and produces another function of a real variable H(u)(t). The Hilbert transform is given by the Cauchy principal value of the convolution with the function 1 / ( π t ) {\displaystyle 1/(\pi t)} (see § Definition). The Hilbert transform has a particularly simple representation in the frequency domain: It imparts a phase shift of ±90° (π/2 radians) to every frequency component of a function, the sign of the shift depending on the sign of the frequency (see § Relationship with the Fourier transform). The Hilbert transform is important in signal processing, where it is a component of the analytic representation of a real-valued signal u(t). The Hilbert transform was first introduced by David Hilbert in this setting, to solve a special case of the Riemann–Hilbert problem for analytic functions. == Definition == The Hilbert transform of u can be thought of as the convolution of u(t) with the function h(t) = 1/(πt), known as the Cauchy kernel. Because 1/t is not integrable across t = 0, the integral defining the convolution does not always converge. Instead, the Hilbert transform is defined using the Cauchy principal value (denoted here by p.v.). Explicitly, the Hilbert transform of a function (or signal) u(t) is given by H ⁡ ( u ) ( t ) = 1 π p . v . ⁡ ∫ − ∞ + ∞ u ( τ ) t − τ d τ , {\displaystyle \operatorname {H} (u)(t)={\frac {1}{\pi }}\,\operatorname {p.v.} \int _{-\infty }^{+\infty }{\frac {u(\tau )}{t-\tau }}\,\mathrm {d} \tau ,} provided this integral exists as a principal value. This is precisely the convolution of u with the tempered distribution p.v. 1/(πt). Alternatively, by changing variables, the principal-value integral can be written explicitly as H ⁡ ( u ) ( t ) = 2 π lim ε → 0 ∫ ε ∞ u ( t − τ ) − u ( t + τ ) 2 τ d τ . {\displaystyle \operatorname {H} (u)(t)={\frac {2}{\pi }}\,\lim _{\varepsilon \to 0}\int _{\varepsilon }^{\infty }{\frac {u(t-\tau )-u(t+\tau )}{2\tau }}\,\mathrm {d} \tau .} When the Hilbert transform is applied twice in succession to a function u, the result is H ⁡ ( H ⁡ ( u ) ) ( t ) = − u ( t ) , {\displaystyle \operatorname {H} {\bigl (}\operatorname {H} (u){\bigr )}(t)=-u(t),} provided the integrals defining both iterations converge in a suitable sense. In particular, the inverse transform is − H {\displaystyle -\operatorname {H} } . This fact can most easily be seen by considering the effect of the Hilbert transform on the Fourier transform of u(t) (see § Relationship with the Fourier transform below). For an analytic function in the upper half-plane, the Hilbert transform describes the relationship between the real part and the imaginary part of the boundary values. That is, if f(z) is analytic in the upper half complex plane {z : Im{z} > 0}, and u(t) = Re{f (t + 0·i)}, then Im{f(t + 0·i)} = H(u)(t) up to an additive constant, provided this Hilbert transform exists. === Notation === In signal processing the Hilbert transform of u(t) is commonly denoted by u ^ ( t ) {\displaystyle {\hat {u}}(t)} . However, in mathematics, this notation is already extensively used to denote the Fourier transform of u(t). Occasionally, the Hilbert transform may be denoted by u ~ ( t ) {\displaystyle {\tilde {u}}(t)} . Furthermore, many sources define the Hilbert transform as the negative of the one defined here.
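For discrete data, the transform is typically approximated through its frequency-domain description (§ Relationship with the Fourier transform below): multiply the FFT by −i sgn(ω) and invert. The following NumPy sketch is a periodic, finite-length approximation, not the principal-value integral itself.

import numpy as np

def hilbert_transform(u):
    # Discrete approximation of H(u): scale each FFT coefficient by
    # -i*sgn(omega), with sgn evaluated on the signed FFT frequencies.
    U = np.fft.fft(u)
    mult = -1j * np.sign(np.fft.fftfreq(len(u)))
    return np.fft.ifft(mult * U).real

# Check on a pure tone: H(cos) = sin, and applying H twice negates the input
n = np.arange(1024)
u = np.cos(2 * np.pi * 8 * n / 1024)
v = hilbert_transform(u)                        # approximately sin(2*pi*8*n/1024)
assert np.allclose(hilbert_transform(v), -u)    # H(H(u)) = -u, as stated above

(For comparison, SciPy's scipy.signal.hilbert returns the analytic signal u + iH(u) rather than H(u) itself.)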
== History == The Hilbert transform arose in Hilbert's 1905 work on a problem Riemann posed concerning analytic functions, which has come to be known as the Riemann–Hilbert problem. Hilbert's work was mainly concerned with the Hilbert transform for functions defined on the circle. Some of his earlier work related to the Discrete Hilbert Transform dates back to lectures he gave in Göttingen. The results were later published by Hermann Weyl in his dissertation. Schur improved Hilbert's results about the discrete Hilbert transform and extended them to the integral case. These results were restricted to the spaces L2 and ℓ2. In 1928, Marcel Riesz proved that the Hilbert transform can be defined for u in L p ( R ) {\displaystyle L^{p}(\mathbb {R} )} (Lp space) for 1 < p < ∞, that the Hilbert transform is a bounded operator on L p ( R ) {\displaystyle L^{p}(\mathbb {R} )} for 1 < p < ∞, and that similar results hold for the Hilbert transform on the circle as well as the discrete Hilbert transform. The Hilbert transform was a motivating example for Antoni Zygmund and Alberto Calderón during their study of singular integrals. Their investigations have played a fundamental role in modern harmonic analysis. Various generalizations of the Hilbert transform, such as the bilinear and trilinear Hilbert transforms, are still active areas of research today. == Relationship with the Fourier transform == The Hilbert transform is a multiplier operator. The multiplier of H is σH(ω) = −i sgn(ω), where sgn is the signum function. Therefore: F ( H ⁡ ( u ) ) ( ω ) = − i sgn ⁡ ( ω ) ⋅ F ( u ) ( ω ) , {\displaystyle {\mathcal {F}}{\bigl (}\operatorname {H} (u){\bigr )}(\omega )=-i\operatorname {sgn}(\omega )\cdot {\mathcal {F}}(u)(\omega ),} where F {\displaystyle {\mathcal {F}}} denotes the Fourier transform. Since sgn(x) = sgn(2πx), it follows that this result applies to the three common definitions of F {\displaystyle {\mathcal {F}}} . By Euler's formula, σ H ( ω ) = { i = e + i π / 2 if ω < 0 0 if ω = 0 − i = e − i π / 2 if ω > 0 {\displaystyle \sigma _{\operatorname {H} }(\omega )={\begin{cases}~~i=e^{+i\pi /2}&{\text{if }}\omega <0\\~~0&{\text{if }}\omega =0\\-i=e^{-i\pi /2}&{\text{if }}\omega >0\end{cases}}} Therefore, H(u)(t) has the effect of shifting the phase of the negative frequency components of u(t) by +90° (π⁄2 radians) and the phase of the positive frequency components by −90°, and i·H(u)(t) has the effect of restoring the positive frequency components while shifting the negative frequency ones an additional +90°, resulting in their negation (i.e., a multiplication by −1). When the Hilbert transform is applied twice, the phases of the negative and positive frequency components of u(t) are respectively shifted by +180° and −180°, which are equivalent amounts. The signal is negated; i.e., H(H(u)) = −u, because ( σ H ( ω ) ) 2 = e ± i π = − 1 for ω ≠ 0. {\displaystyle \left(\sigma _{\operatorname {H} }(\omega )\right)^{2}=e^{\pm i\pi }=-1\quad {\text{for }}\omega \neq 0.} == Table of selected Hilbert transforms == In tables of Hilbert transform pairs, the frequency parameter ω {\displaystyle \omega } is real. An extensive table of Hilbert transforms is available; note that the Hilbert transform of a constant is zero. == Domain of definition == It is by no means obvious that the Hilbert transform is well-defined at all, as the improper integral defining it must converge in a suitable sense.
However, the Hilbert transform is well-defined for a broad class of functions, namely those in L p ( R ) {\displaystyle L^{p}(\mathbb {R} )} for 1 < p < ∞. More precisely, if u is in L p ( R ) {\displaystyle L^{p}(\mathbb {R} )} for 1 < p < ∞, then the limit defining the improper integral H ⁡ ( u ) ( t ) = 2 π lim ε → 0 ∫ ε ∞ u ( t − τ ) − u ( t + τ ) 2 τ d τ {\displaystyle \operatorname {H} (u)(t)={\frac {2}{\pi }}\lim _{\varepsilon \to 0}\int _{\varepsilon }^{\infty }{\frac {u(t-\tau )-u(t+\tau )}{2\tau }}\,d\tau } exists for almost every t. The limit function is also in L p ( R ) {\displaystyle L^{p}(\mathbb {R} )} and is in fact the limit in the mean of the improper integral as well. That is, 2 π ∫ ε ∞ u ( t − τ ) − u ( t + τ ) 2 τ d τ → H ⁡ ( u ) ( t ) {\displaystyle {\frac {2}{\pi }}\int _{\varepsilon }^{\infty }{\frac {u(t-\tau )-u(t+\tau )}{2\tau }}\,\mathrm {d} \tau \to \operatorname {H} (u)(t)} as ε → 0 in the Lp norm, as well as pointwise almost everywhere, by the Titchmarsh theorem. In the case p = 1, the Hilbert transform still converges pointwise almost everywhere, but may itself fail to be integrable, even locally. In particular, convergence in the mean does not in general happen in this case. The Hilbert transform of an L1 function does converge, however, in L1-weak, and the Hilbert transform is a bounded operator from L1 to L1,w. (In particular, since the Hilbert transform is also a multiplier operator on L2, Marcinkiewicz interpolation and a duality argument furnish an alternative proof that H is bounded on Lp.) == Properties == === Boundedness === If 1 < p < ∞, then the Hilbert transform on L p ( R ) {\displaystyle L^{p}(\mathbb {R} )} is a bounded linear operator, meaning that there exists a constant Cp such that ‖ H ⁡ u ‖ p ≤ C p ‖ u ‖ p {\displaystyle \left\|\operatorname {H} u\right\|_{p}\leq C_{p}\left\|u\right\|_{p}} for all u ∈ L p ( R ) {\displaystyle u\in L^{p}(\mathbb {R} )} . The best constant C p {\displaystyle C_{p}} is given by C p = { tan ⁡ π 2 p if 1 < p ≤ 2 cot ⁡ π 2 p if 2 < p < ∞ {\displaystyle C_{p}={\begin{cases}\tan {\frac {\pi }{2p}}&{\text{if}}~1<p\leq 2\\[4pt]\cot {\frac {\pi }{2p}}&{\text{if}}~2<p<\infty \end{cases}}} An easy way to find the best C p {\displaystyle C_{p}} for p {\displaystyle p} being a power of 2 is through the so-called Cotlar identity that ( H ⁡ f ) 2 = f 2 + 2 H ⁡ ( f H ⁡ f ) {\displaystyle (\operatorname {H} f)^{2}=f^{2}+2\operatorname {H} (f\operatorname {H} f)} for all real-valued f. The same best constants hold for the periodic Hilbert transform. The boundedness of the Hilbert transform implies the L p ( R ) {\displaystyle L^{p}(\mathbb {R} )} convergence of the symmetric partial sum operator S R f = ∫ − R R f ^ ( ξ ) e 2 π i x ξ d ξ {\displaystyle S_{R}f=\int _{-R}^{R}{\hat {f}}(\xi )e^{2\pi ix\xi }\,\mathrm {d} \xi } to f in L p ( R ) {\displaystyle L^{p}(\mathbb {R} )} . === Anti-self adjointness === The Hilbert transform is an anti-self adjoint operator relative to the duality pairing between L p ( R ) {\displaystyle L^{p}(\mathbb {R} )} and the dual space L q ( R ) {\displaystyle L^{q}(\mathbb {R} )} , where p and q are Hölder conjugates and 1 < p, q < ∞. Symbolically, ⟨ H ⁡ u , v ⟩ = ⟨ u , − H ⁡ v ⟩ {\displaystyle \langle \operatorname {H} u,v\rangle =\langle u,-\operatorname {H} v\rangle } for u ∈ L p ( R ) {\displaystyle u\in L^{p}(\mathbb {R} )} and v ∈ L q ( R ) {\displaystyle v\in L^{q}(\mathbb {R} )} .
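As a small illustration (not from the article), the sharp constants above, commonly attributed to Pichorides, are easy to tabulate; they equal 1 at p = 2, where the Hilbert transform is an isometry of L², and they coincide for Hölder-conjugate exponents:

    import math

    def sharp_hilbert_constant(p):
        # Best constant C_p in ||Hu||_p <= C_p ||u||_p:
        #   tan(pi/(2p)) for 1 < p <= 2,  cot(pi/(2p)) for 2 <= p < infinity.
        if p <= 1:
            raise ValueError("requires 1 < p < infinity")
        x = math.pi / (2.0 * p)
        return math.tan(x) if p <= 2 else 1.0 / math.tan(x)

    for p in (1.5, 2.0, 3.0, 4.0):
        print(p, sharp_hilbert_constant(p))
    # C_2 = 1; C_p = C_q for Holder conjugates, e.g. p = 4/3 and q = 4:
    print(sharp_hilbert_constant(4/3), sharp_hilbert_constant(4.0))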
=== Inverse transform === The Hilbert transform is an anti-involution, meaning that H ⁡ ( H ⁡ ( u ) ) = − u {\displaystyle \operatorname {H} {\bigl (}\operatorname {H} \left(u\right){\bigr )}=-u} provided each transform is well-defined. Since H preserves the space L p ( R ) {\displaystyle L^{p}(\mathbb {R} )} , this implies in particular that the Hilbert transform is invertible on L p ( R ) {\displaystyle L^{p}(\mathbb {R} )} , and that H − 1 = − H {\displaystyle \operatorname {H} ^{-1}=-\operatorname {H} } === Complex structure === Because H2 = −I ("I" is the identity operator) on the real Banach space of real-valued functions in L p ( R ) {\displaystyle L^{p}(\mathbb {R} )} , the Hilbert transform defines a linear complex structure on this Banach space. In particular, when p = 2, the Hilbert transform gives the Hilbert space of real-valued functions in L 2 ( R ) {\displaystyle L^{2}(\mathbb {R} )} the structure of a complex Hilbert space. The (complex) eigenstates of the Hilbert transform admit representations as holomorphic functions in the upper and lower half-planes in the Hardy space H2 by the Paley–Wiener theorem. === Differentiation === Formally, the derivative of the Hilbert transform is the Hilbert transform of the derivative, i.e. these two linear operators commute: H ⁡ ( d u d t ) = d d t H ⁡ ( u ) {\displaystyle \operatorname {H} \left({\frac {\mathrm {d} u}{\mathrm {d} t}}\right)={\frac {\mathrm {d} }{\mathrm {d} t}}\operatorname {H} (u)} Iterating this identity, H ⁡ ( d k u d t k ) = d k d t k H ⁡ ( u ) {\displaystyle \operatorname {H} \left({\frac {\mathrm {d} ^{k}u}{\mathrm {d} t^{k}}}\right)={\frac {\mathrm {d} ^{k}}{\mathrm {d} t^{k}}}\operatorname {H} (u)} This is rigorously true as stated provided u and its first k derivatives belong to L p ( R ) {\displaystyle L^{p}(\mathbb {R} )} . One can check this easily in the frequency domain, where differentiation becomes multiplication by ω. === Convolutions === The Hilbert transform can formally be realized as a convolution with the tempered distribution h ( t ) = p . v . ⁡ 1 π t {\displaystyle h(t)=\operatorname {p.v.} {\frac {1}{\pi \,t}}} Thus formally, H ⁡ ( u ) = h ∗ u {\displaystyle \operatorname {H} (u)=h*u} However, a priori this may only be defined for u a distribution of compact support. It is possible to work somewhat rigorously with this since compactly supported functions (which are distributions a fortiori) are dense in Lp. Alternatively, one may use the fact that h(t) is the distributional derivative of the function log|t|/π; to wit H ⁡ ( u ) ( t ) = d d t ( 1 π ( u ∗ log ⁡ | ⋅ | ) ( t ) ) {\displaystyle \operatorname {H} (u)(t)={\frac {\mathrm {d} }{\mathrm {d} t}}\left({\frac {1}{\pi }}\left(u*\log {\bigl |}\cdot {\bigr |}\right)(t)\right)} For most operational purposes the Hilbert transform can be treated as a convolution. For example, in a formal sense, the Hilbert transform of a convolution is the convolution of the Hilbert transform applied on only one of either of the factors: H ⁡ ( u ∗ v ) = H ⁡ ( u ) ∗ v = u ∗ H ⁡ ( v ) {\displaystyle \operatorname {H} (u*v)=\operatorname {H} (u)*v=u*\operatorname {H} (v)} This is rigorously true if u and v are compactly supported distributions since, in that case, h ∗ ( u ∗ v ) = ( h ∗ u ) ∗ v = u ∗ ( h ∗ v ) {\displaystyle h*(u*v)=(h*u)*v=u*(h*v)} By passing to an appropriate limit, it is thus also true if u ∈ Lp and v ∈ Lq provided that 1 < 1 p + 1 q {\displaystyle 1<{\frac {1}{p}}+{\frac {1}{q}}} from a theorem due to Titchmarsh. 
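The inverse-transform and convolution identities above become exact (up to floating-point roundoff) for the discrete, periodic analogue of the Hilbert transform, which makes them easy to verify numerically. The sketch below is an illustration, not code from the article; it realizes H as the Fourier multiplier −i·sgn(ω) on sampled data, and the test signal is chosen with zero mean so that the missing ω = 0 component plays no role:

    import numpy as np

    def H(u):
        # Discrete analogue of the Hilbert transform: apply the Fourier
        # multiplier -i*sgn(omega) to the FFT of the samples.
        w = np.fft.fftfreq(len(u))
        return np.real(np.fft.ifft(-1j * np.sign(w) * np.fft.fft(u)))

    def circ_conv(a, b):
        # Circular convolution via the convolution theorem.
        return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

    t = np.linspace(0, 1, 4096, endpoint=False)
    u = np.cos(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
    v = np.sin(2 * np.pi * 7 * t)

    print(np.allclose(H(H(u)), -u, atol=1e-10))                 # H(H(u)) = -u
    print(np.allclose(H(circ_conv(u, v)), circ_conv(H(u), v)))  # H(u*v) = H(u)*v

Both checks print True because, in the discrete periodic setting, H and circular convolution are diagonalized by the same Fourier basis, so the operator identities hold exactly rather than only asymptotically.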
=== Invariance === The Hilbert transform has the following invariance properties on L 2 ( R ) {\displaystyle L^{2}(\mathbb {R} )} . It commutes with translations. That is, it commutes with the operators Ta f(x) = f(x + a) for all a in R . {\displaystyle \mathbb {R} .} It commutes with positive dilations. That is, it commutes with the operators Mλ f (x) = f (λ x) for all λ > 0. It anticommutes with the reflection R f (x) = f (−x). Up to a multiplicative constant, the Hilbert transform is the only bounded operator on L2 with these properties. In fact, there is a wider set of operators that commute with the Hilbert transform. The group SL ( 2 , R ) {\displaystyle {\text{SL}}(2,\mathbb {R} )} acts by unitary operators Ug on the space L 2 ( R ) {\displaystyle L^{2}(\mathbb {R} )} by the formula U g − 1 ⁡ f ( x ) = 1 c x + d f ( a x + b c x + d ) , g = [ a b c d ] , for a d − b c = ± 1. {\displaystyle \operatorname {U} _{g}^{-1}f(x)={\frac {1}{cx+d}}\,f\left({\frac {ax+b}{cx+d}}\right)\,,\qquad g={\begin{bmatrix}a&b\\c&d\end{bmatrix}}~,\qquad {\text{ for }}~ad-bc=\pm 1.} This unitary representation is an example of a principal series representation of SL ( 2 , R ) . {\displaystyle ~{\text{SL}}(2,\mathbb {R} )~.} In this case it is reducible, splitting as the orthogonal sum of two invariant subspaces, Hardy space H 2 ( R ) {\displaystyle H^{2}(\mathbb {R} )} and its conjugate. These are the spaces of L2 boundary values of holomorphic functions on the upper and lower halfplanes. H 2 ( R ) {\displaystyle H^{2}(\mathbb {R} )} and its conjugate consist of exactly those L2 functions with Fourier transforms vanishing on the negative and positive parts of the real axis respectively. Since the Hilbert transform is equal to H = −i (2P − I), with P being the orthogonal projection from L 2 ( R ) {\displaystyle L^{2}(\mathbb {R} )} onto H 2 ⁡ ( R ) , {\displaystyle \operatorname {H} ^{2}(\mathbb {R} ),} and I the identity operator, it follows that H 2 ⁡ ( R ) {\displaystyle \operatorname {H} ^{2}(\mathbb {R} )} and its orthogonal complement are eigenspaces of H for the eigenvalues ±i. In other words, H commutes with the operators Ug. The restrictions of the operators Ug to H 2 ⁡ ( R ) {\displaystyle \operatorname {H} ^{2}(\mathbb {R} )} and its conjugate give irreducible representations of SL ( 2 , R ) {\displaystyle {\text{SL}}(2,\mathbb {R} )} – the so-called limit of discrete series representations. == Extending the domain of definition == === Hilbert transform of distributions === It is further possible to extend the Hilbert transform to certain spaces of distributions (Pandey 1996, Chapter 3). Since the Hilbert transform commutes with differentiation, and is a bounded operator on Lp, H restricts to give a continuous transform on the inverse limit of Sobolev spaces: D L p = lim ⟵ n → ∞ W n , p ( R ) {\displaystyle {\mathcal {D}}_{L^{p}}={\underset {n\to \infty }{\underset {\longleftarrow }{\lim }}}W^{n,p}(\mathbb {R} )} The Hilbert transform can then be defined on the dual space of D L p {\displaystyle {\mathcal {D}}_{L^{p}}} , denoted D L p ′ {\displaystyle {\mathcal {D}}_{L^{p}}'} , consisting of Lp distributions. This is accomplished by the duality pairing: for u ∈ D L p ′ {\displaystyle u\in {\mathcal {D}}'_{L^{p}}} , define H ⁡ ( u ) ∈ D L p ′ {\displaystyle \operatorname {H} (u)\in {\mathcal {D}}'_{L^{p}}} by ⟨ H ⁡ u , v ⟩ ≜ ⟨ u , − H ⁡ v ⟩ , for all v ∈ D L p {\displaystyle \langle \operatorname {H} u,v\rangle \ \triangleq \ \langle u,-\operatorname {H} v\rangle ,\ {\text{for all}}\ v\in {\mathcal {D}}_{L^{p}}} . It is possible to define the Hilbert transform on the space of tempered distributions as well by an approach due to Gel'fand and Shilov, but considerably more care is needed because of the singularity in the integral. === Hilbert transform of bounded functions === The Hilbert transform can be defined for functions in L ∞ ( R ) {\displaystyle L^{\infty }(\mathbb {R} )} as well, but it requires some modifications and caveats. Properly understood, the Hilbert transform maps L ∞ ( R ) {\displaystyle L^{\infty }(\mathbb {R} )} to the Banach space of bounded mean oscillation (BMO) classes. Interpreted naïvely, the Hilbert transform of a bounded function is clearly ill-defined. For instance, with u = sgn(x), the integral defining H(u) diverges almost everywhere to ±∞. To alleviate such difficulties, the Hilbert transform of an L∞ function is therefore defined by the following regularized form of the integral H ⁡ ( u ) ( t ) = p . v . ⁡ ∫ − ∞ ∞ u ( τ ) { h ( t − τ ) − h 0 ( − τ ) } d τ {\displaystyle \operatorname {H} (u)(t)=\operatorname {p.v.} \int _{-\infty }^{\infty }u(\tau )\left\{h(t-\tau )-h_{0}(-\tau )\right\}\,\mathrm {d} \tau } where as above h(x) = 1/πx and h 0 ( x ) = { 0 if | x | < 1 1 π x if | x | ≥ 1 {\displaystyle h_{0}(x)={\begin{cases}0&{\text{if}}~|x|<1\\{\frac {1}{\pi \,x}}&{\text{if}}~|x|\geq 1\end{cases}}} The modified transform H agrees with the original transform up to an additive constant on functions of compact support from a general result by Calderón and Zygmund. Furthermore, the resulting integral converges pointwise almost everywhere, and with respect to the BMO norm, to a function of bounded mean oscillation. A deep result of Fefferman's work is that a function is of bounded mean oscillation if and only if it has the form f + H(g) for some f , g ∈ L ∞ ( R ) {\displaystyle f,g\in L^{\infty }(\mathbb {R} )} . == Conjugate functions == The Hilbert transform can be understood in terms of a pair of functions f(x) and g(x) such that the function F ( x ) = f ( x ) + i g ( x ) {\displaystyle F(x)=f(x)+i\,g(x)} is the boundary value of a holomorphic function F(z) in the upper half-plane. Under these circumstances, if f and g are sufficiently integrable, then one is the Hilbert transform of the other. Suppose that f ∈ L p ( R ) . {\displaystyle f\in L^{p}(\mathbb {R} ).} Then, by the theory of the Poisson integral, f admits a unique harmonic extension into the upper half-plane, and this extension is given by u ( x + i y ) = u ( x , y ) = 1 π ∫ − ∞ ∞ f ( s ) y ( x − s ) 2 + y 2 d s {\displaystyle u(x+iy)=u(x,y)={\frac {1}{\pi }}\int _{-\infty }^{\infty }f(s)\;{\frac {y}{(x-s)^{2}+y^{2}}}\;\mathrm {d} s} which is the convolution of f with the Poisson kernel P ( x , y ) = y π ( x 2 + y 2 ) {\displaystyle P(x,y)={\frac {y}{\pi \,\left(x^{2}+y^{2}\right)}}} Furthermore, there is a unique harmonic function v defined in the upper half-plane such that F(z) = u(z) + i v(z) is holomorphic and lim y → ∞ v ( x + i y ) = 0 {\displaystyle \lim _{y\to \infty }v\,(x+i\,y)=0} This harmonic function is obtained from f by taking a convolution with the conjugate Poisson kernel Q ( x , y ) = x π ( x 2 + y 2 ) . {\displaystyle Q(x,y)={\frac {x}{\pi \,\left(x^{2}+y^{2}\right)}}.} Thus v ( x , y ) = 1 π ∫ − ∞ ∞ f ( s ) x − s ( x − s ) 2 + y 2 d s .
{\displaystyle v(x,y)={\frac {1}{\pi }}\int _{-\infty }^{\infty }f(s)\;{\frac {x-s}{\,(x-s)^{2}+y^{2}\,}}\;\mathrm {d} s.} Indeed, the real and imaginary parts of the Cauchy kernel are i π z = P ( x , y ) + i Q ( x , y ) {\displaystyle {\frac {i}{\pi \,z}}=P(x,y)+i\,Q(x,y)} so that F = u + i v is holomorphic by Cauchy's integral formula. The function v obtained from u in this way is called the harmonic conjugate of u. The (non-tangential) boundary limit of v(x,y) as y → 0 is the Hilbert transform of f. Thus, succinctly, H ⁡ ( f ) = lim y → 0 Q ( − , y ) ⋆ f {\displaystyle \operatorname {H} (f)=\lim _{y\to 0}Q(-,y)\star f} === Titchmarsh's theorem === Titchmarsh's theorem (named for E. C. Titchmarsh who included it in his 1937 work) makes precise the relationship between the boundary values of holomorphic functions in the upper half-plane and the Hilbert transform. It gives necessary and sufficient conditions for a complex-valued square-integrable function F(x) on the real line to be the boundary value of a function in the Hardy space H2(U) of holomorphic functions in the upper half-plane U. The theorem states that the following conditions for a complex-valued square-integrable function F : R → C {\displaystyle F:\mathbb {R} \to \mathbb {C} } are equivalent: F(x) is the limit as z → x of a holomorphic function F(z) in the upper half-plane such that ∫ − ∞ ∞ | F ( x + i y ) | 2 d x < K {\displaystyle \int _{-\infty }^{\infty }|F(x+i\,y)|^{2}\;\mathrm {d} x<K} The real and imaginary parts of F(x) are Hilbert transforms of each other. The Fourier transform F ( F ) ( x ) {\displaystyle {\mathcal {F}}(F)(x)} vanishes for x < 0. A weaker result is true for functions of class Lp for p > 1. Specifically, if F(z) is a holomorphic function such that ∫ − ∞ ∞ | F ( x + i y ) | p d x < K {\displaystyle \int _{-\infty }^{\infty }|F(x+i\,y)|^{p}\;\mathrm {d} x<K} for all y, then there is a complex-valued function F(x) in L p ( R ) {\displaystyle L^{p}(\mathbb {R} )} such that F(x + i y) → F(x) in the Lp norm as y → 0 (as well as holding pointwise almost everywhere). Furthermore, F ( x ) = f ( x ) + i g ( x ) {\displaystyle F(x)=f(x)+i\,g(x)} where f is a real-valued function in L p ( R ) {\displaystyle L^{p}(\mathbb {R} )} and g is the Hilbert transform (of class Lp) of f. This is not true in the case p = 1. In fact, the Hilbert transform of an L1 function f need not converge in the mean to another L1 function. Nevertheless, the Hilbert transform of f does converge almost everywhere to a finite function g such that ∫ − ∞ ∞ | g ( x ) | p 1 + x 2 d x < ∞ {\displaystyle \int _{-\infty }^{\infty }{\frac {|g(x)|^{p}}{1+x^{2}}}\;\mathrm {d} x<\infty } This result is directly analogous to one by Andrey Kolmogorov for Hardy functions in the disc. Although usually called Titchmarsh's theorem, the result aggregates much work of others, including Hardy, Paley and Wiener (see Paley–Wiener theorem), as well as work by Riesz, Hille, and Tamarkin === Riemann–Hilbert problem === One form of the Riemann–Hilbert problem seeks to identify pairs of functions F+ and F− such that F+ is holomorphic on the upper half-plane and F− is holomorphic on the lower half-plane, such that for x along the real axis, F + ( x ) − F − ( x ) = f ( x ) {\displaystyle F_{+}(x)-F_{-}(x)=f(x)} where f(x) is some given real-valued function of x ∈ R {\displaystyle x\in \mathbb {R} } . The left-hand side of this equation may be understood either as the difference of the limits of F± from the appropriate half-planes, or as a hyperfunction distribution. 
Two functions of this form are a solution of the Riemann–Hilbert problem. Formally, if F± solve the Riemann–Hilbert problem f ( x ) = F + ( x ) − F − ( x ) {\displaystyle f(x)=F_{+}(x)-F_{-}(x)} then the Hilbert transform of f(x) is given by H ( f ) ( x ) = − i ( F + ( x ) + F − ( x ) ) . {\displaystyle H(f)(x)=-i{\bigl (}F_{+}(x)+F_{-}(x){\bigr )}.} == Hilbert transform on the circle == For a periodic function f the circular Hilbert transform is defined: f ~ ( x ) ≜ 1 2 π p . v . ⁡ ∫ 0 2 π f ( t ) cot ⁡ ( x − t 2 ) d t {\displaystyle {\tilde {f}}(x)\triangleq {\frac {1}{2\pi }}\operatorname {p.v.} \int _{0}^{2\pi }f(t)\,\cot \left({\frac {x-t}{2}}\right)\,\mathrm {d} t} The circular Hilbert transform is used in giving a characterization of Hardy space and in the study of the conjugate function in Fourier series. The kernel, cot ⁡ ( x − t 2 ) {\displaystyle \cot \left({\frac {x-t}{2}}\right)} is known as the Hilbert kernel since it was in this form the Hilbert transform was originally studied. The Hilbert kernel (for the circular Hilbert transform) can be obtained by making the Cauchy kernel 1⁄x periodic. More precisely, for x ≠ 0 1 2 cot ⁡ ( x 2 ) = 1 x + ∑ n = 1 ∞ ( 1 x + 2 n π + 1 x − 2 n π ) {\displaystyle {\frac {1}{\,2\,}}\cot \left({\frac {x}{2}}\right)={\frac {1}{x}}+\sum _{n=1}^{\infty }\left({\frac {1}{x+2n\pi }}+{\frac {1}{\,x-2n\pi \,}}\right)} Many results about the circular Hilbert transform may be derived from the corresponding results for the Hilbert transform from this correspondence. Another more direct connection is provided by the Cayley transform C(x) = (x – i) / (x + i), which carries the real line onto the circle and the upper half plane onto the unit disk. It induces a unitary map U f ( x ) = 1 ( x + i ) π f ( C ( x ) ) {\displaystyle U\,f(x)={\frac {1}{(x+i)\,{\sqrt {\pi }}}}\,f\left(C\left(x\right)\right)} of L2(T) onto L 2 ( R ) . {\displaystyle L^{2}(\mathbb {R} ).} The operator U carries the Hardy space H2(T) onto the Hardy space H 2 ( R ) {\displaystyle H^{2}(\mathbb {R} )} . == Hilbert transform in signal processing == === Bedrosian's theorem === Bedrosian's theorem states that the Hilbert transform of the product of a low-pass and a high-pass signal with non-overlapping spectra is given by the product of the low-pass signal and the Hilbert transform of the high-pass signal, or H ⁡ ( f LP ( t ) ⋅ f HP ( t ) ) = f LP ( t ) ⋅ H ⁡ ( f HP ( t ) ) , {\displaystyle \operatorname {H} \left(f_{\text{LP}}(t)\cdot f_{\text{HP}}(t)\right)=f_{\text{LP}}(t)\cdot \operatorname {H} \left(f_{\text{HP}}(t)\right),} where fLP and fHP are the low- and high-pass signals respectively. A category of communication signals to which this applies is called the narrowband signal model. A member of that category is amplitude modulation of a high-frequency sinusoidal "carrier": u ( t ) = u m ( t ) ⋅ cos ⁡ ( ω t + φ ) , {\displaystyle u(t)=u_{m}(t)\cdot \cos(\omega t+\varphi ),} where um(t) is the narrow bandwidth "message" waveform, such as voice or music. 
Then by Bedrosian's theorem: H ⁡ ( u ) ( t ) = { + u m ( t ) ⋅ sin ⁡ ( ω t + φ ) if ω > 0 − u m ( t ) ⋅ sin ⁡ ( ω t + φ ) if ω < 0 {\displaystyle \operatorname {H} (u)(t)={\begin{cases}+u_{m}(t)\cdot \sin(\omega t+\varphi )&{\text{if }}\omega >0\\-u_{m}(t)\cdot \sin(\omega t+\varphi )&{\text{if }}\omega <0\end{cases}}} === Analytic representation === A specific type of conjugate function is: u a ( t ) ≜ u ( t ) + i ⋅ H ( u ) ( t ) , {\displaystyle u_{a}(t)\triangleq u(t)+i\cdot H(u)(t),} known as the analytic representation of u ( t ) . {\displaystyle u(t).} The name reflects its mathematical tractability, due largely to Euler's formula. Applying Bedrosian's theorem to the narrowband model, the analytic representation is: u a ( t ) = u m ( t ) ⋅ e i ( ω t + φ ) , ω > 0 {\displaystyle u_{a}(t)=u_{m}(t)\cdot e^{i(\omega t+\varphi )},\quad \omega >0} (Eq.1) A Fourier transform property indicates that this complex heterodyne operation can shift all the negative frequency components of um(t) above 0 Hz. In that case, the imaginary part of the result is a Hilbert transform of the real part. This is an indirect way to produce Hilbert transforms. === Angle (phase/frequency) modulation === The form: u ( t ) = A ⋅ cos ⁡ ( ω t + φ m ( t ) ) {\displaystyle u(t)=A\cdot \cos(\omega t+\varphi _{m}(t))} is called angle modulation, which includes both phase modulation and frequency modulation. The instantaneous frequency is ω + φ m ′ ( t ) . {\displaystyle \omega +\varphi _{m}^{\prime }(t).} For sufficiently large ω, compared to φ m ′ {\displaystyle \varphi _{m}^{\prime }} : H ⁡ ( u ) ( t ) ≈ A ⋅ sin ⁡ ( ω t + φ m ( t ) ) {\displaystyle \operatorname {H} (u)(t)\approx A\cdot \sin(\omega t+\varphi _{m}(t))} and: u a ( t ) ≈ A ⋅ e i ( ω t + φ m ( t ) ) . {\displaystyle u_{a}(t)\approx A\cdot e^{i(\omega t+\varphi _{m}(t))}.} === Single sideband modulation (SSB) === When um(t) in Eq.1 is also an analytic representation (of a message waveform), that is: u m ( t ) = m ( t ) + i ⋅ m ^ ( t ) {\displaystyle u_{m}(t)=m(t)+i\cdot {\widehat {m}}(t)} the result is single-sideband modulation: u a ( t ) = ( m ( t ) + i ⋅ m ^ ( t ) ) ⋅ e i ( ω t + φ ) {\displaystyle u_{a}(t)=(m(t)+i\cdot {\widehat {m}}(t))\cdot e^{i(\omega t+\varphi )}} whose transmitted component is: u ( t ) = Re ⁡ { u a ( t ) } = m ( t ) ⋅ cos ⁡ ( ω t + φ ) − m ^ ( t ) ⋅ sin ⁡ ( ω t + φ ) {\displaystyle {\begin{aligned}u(t)&=\operatorname {Re} \{u_{a}(t)\}\\&=m(t)\cdot \cos(\omega t+\varphi )-{\widehat {m}}(t)\cdot \sin(\omega t+\varphi )\end{aligned}}} === Causality === The function h ( t ) = 1 / ( π t ) {\displaystyle h(t)=1/(\pi t)} presents two causality-based challenges to practical implementation in a convolution (in addition to its undefined value at 0): Its duration is infinite (technically infinite support). Finite-length windowing reduces the effective frequency range of the transform; shorter windows result in greater losses at low and high frequencies. See also quadrature filter. It is a non-causal filter. So a delayed version, h ( t − τ ) , {\displaystyle h(t-\tau ),} is required. The corresponding output is subsequently delayed by τ . {\displaystyle \tau .} When creating the imaginary part of an analytic signal, the source (real part) must also be delayed by τ {\displaystyle \tau } . == Discrete Hilbert transform == For a discrete function, u [ n ] , {\displaystyle u[n],} with discrete-time Fourier transform (DTFT), U ( ω ) {\displaystyle U(\omega )} , and discrete Hilbert transform u ^ [ n ] , {\displaystyle {\widehat {u}}[n],} the DTFT of u ^ [ n ] {\displaystyle {\widehat {u}}[n]} in the region −π < ω < π is given by: DTFT ⁡ ( u ^ ) = U ( ω ) ⋅ ( − i ⋅ sgn ⁡ ( ω ) ) .
{\displaystyle \operatorname {DTFT} ({\widehat {u}})=U(\omega )\cdot (-i\cdot \operatorname {sgn}(\omega )).} The inverse DTFT, using the convolution theorem, is: u ^ [ n ] = D T F T − 1 ( U ( ω ) ) ∗ D T F T − 1 ( − i ⋅ sgn ⁡ ( ω ) ) = u [ n ] ∗ 1 2 π ∫ − π π ( − i ⋅ sgn ⁡ ( ω ) ) ⋅ e i ω n d ω = u [ n ] ∗ 1 2 π [ ∫ − π 0 i ⋅ e i ω n d ω − ∫ 0 π i ⋅ e i ω n d ω ] ⏟ h [ n ] , {\displaystyle {\begin{aligned}{\widehat {u}}[n]&={\scriptstyle \mathrm {DTFT} ^{-1}}(U(\omega ))\ *\ {\scriptstyle \mathrm {DTFT} ^{-1}}(-i\cdot \operatorname {sgn}(\omega ))\\&=u[n]\ *\ {\frac {1}{2\pi }}\int _{-\pi }^{\pi }(-i\cdot \operatorname {sgn}(\omega ))\cdot e^{i\omega n}\,\mathrm {d} \omega \\&=u[n]\ *\ \underbrace {{\frac {1}{2\pi }}\left[\int _{-\pi }^{0}i\cdot e^{i\omega n}\,\mathrm {d} \omega -\int _{0}^{\pi }i\cdot e^{i\omega n}\,\mathrm {d} \omega \right]} _{h[n]},\end{aligned}}} where h [ n ] ≜ { 0 , if n even 2 π n if n odd {\displaystyle h[n]\ \triangleq \ {\begin{cases}0,&{\text{if }}n{\text{ even}}\\{\frac {2}{\pi n}}&{\text{if }}n{\text{ odd}}\end{cases}}} which is an infinite impulse response (IIR). Practical considerations Method 1: Direct convolution of streaming u [ n ] {\displaystyle u[n]} data with an FIR approximation of h [ n ] , {\displaystyle h[n],} which we will designate by h ~ [ n ] . {\displaystyle {\tilde {h}}[n].} Examples of truncated h [ n ] {\displaystyle h[n]} are shown in figures 1 and 2. Fig 1 has an odd number of anti-symmetric coefficients and is called Type III. This type inherently exhibits responses of zero magnitude at frequencies 0 and Nyquist, resulting in a bandpass filter shape. A Type IV design (even number of anti-symmetric coefficients) is shown in Fig 2. It has a highpass frequency response. Type III is the usual choice, for these reasons: A typical (i.e. properly filtered and sampled) u [ n ] {\displaystyle u[n]} sequence has no useful components at the Nyquist frequency. The Type IV impulse response requires a 1 2 {\displaystyle {\tfrac {1}{2}}} sample shift in the h [ n ] {\displaystyle h[n]} sequence. That causes the zero-valued coefficients to become non-zero, as seen in Figure 2. So a Type III design is potentially twice as efficient as Type IV. The group delay of a Type III design is an integer number of samples, which facilitates aligning u ^ [ n ] {\displaystyle {\widehat {u}}[n]} with u [ n ] {\displaystyle u[n]} to create an analytic signal. The group delay of Type IV is halfway between two samples. The abrupt truncation of h [ n ] {\displaystyle h[n]} creates a rippling (Gibbs effect) of the flat frequency response. That can be mitigated by use of a window function to taper h ~ [ n ] {\displaystyle {\tilde {h}}[n]} to zero. Method 2: Piecewise convolution. It is well known that direct convolution is computationally much more intensive than methods like overlap-save that give access to the efficiencies of the Fast Fourier transform via the convolution theorem. Specifically, the discrete Fourier transform (DFT) of a segment of u [ n ] {\displaystyle u[n]} is multiplied pointwise with a DFT of the h ~ [ n ] {\displaystyle {\tilde {h}}[n]} sequence. An inverse DFT is done on the product, and the transient artifacts at the leading and trailing edges of the segment are discarded. Overlapping input segments prevent gaps in the output stream. An equivalent time domain description is that segments of length N {\displaystyle N} (an arbitrary parameter) are convolved with the periodic function: h ~ N [ n ] ≜ ∑ m = − ∞ ∞ h ~ [ n − m N ] .
{\displaystyle {\tilde {h}}_{N}[n]\ \triangleq \sum _{m=-\infty }^{\infty }{\tilde {h}}[n-mN].} When the duration of non-zero values of h ~ [ n ] {\displaystyle {\tilde {h}}[n]} is M < N , {\displaystyle M<N,} the output sequence includes N − M + 1 {\displaystyle N-M+1} samples of u ^ . {\displaystyle {\widehat {u}}.} M − 1 {\displaystyle M-1} outputs are discarded from each block of N , {\displaystyle N,} and the input blocks are overlapped by that amount to prevent gaps. Method 3: Same as method 2, except the DFT of h ~ [ n ] {\displaystyle {\tilde {h}}[n]} is replaced by samples of the − i sgn ⁡ ( ω ) {\displaystyle -i\operatorname {sgn} (\omega )} distribution (whose real and imaginary components are all just 0 {\displaystyle 0} or ± 1. {\displaystyle \pm 1.} ) That convolves u [ n ] {\displaystyle u[n]} with a periodic summation: h N [ n ] ≜ ∑ m = − ∞ ∞ h [ n − m N ] , {\displaystyle h_{N}[n]\ \triangleq \sum _{m=-\infty }^{\infty }h[n-mN],} for some arbitrary parameter, N . {\displaystyle N.} h [ n ] {\displaystyle h[n]} is not an FIR, so the edge effects extend throughout the entire transform. Deciding what to delete and the corresponding amount of overlap is an application-dependent design issue. Fig 3 depicts the difference between methods 2 and 3. Only half of the antisymmetric impulse response is shown, and only the non-zero coefficients. The blue graph corresponds to method 2 where h [ n ] {\displaystyle h[n]} is truncated by a rectangular window function, rather than tapered. It is generated by a Matlab function, hilb(65). Its transient effects are exactly known and readily discarded. The frequency response, which is determined by the function argument, is the only application-dependent design issue. The red graph is h 512 [ n ] , {\displaystyle h_{512}[n],} corresponding to method 3. It is the inverse DFT of the − i sgn ⁡ ( ω ) {\displaystyle -i\operatorname {sgn} (\omega )} distribution. Specifically, it is the function that is convolved with a segment of u [ n ] {\displaystyle u[n]} by the MATLAB function, hilbert(u,512). The real part of the output sequence is the original input sequence, so that the complex output is an analytic representation of u [ n ] . {\displaystyle u[n].} When the input is a segment of a pure cosine, the resulting convolution for two different values of N {\displaystyle N} is depicted in Fig 4 (red and blue plots). Edge effects prevent the result from being a pure sine function (green plot). Since h N [ n ] {\displaystyle h_{N}[n]} is not an FIR sequence, the theoretical extent of the effects is the entire output sequence. But the differences from a sine function diminish with distance from the edges. Parameter N {\displaystyle N} is the output sequence length. If it exceeds the length of the input sequence, the input is modified by appending zero-valued elements. In most cases, that reduces the magnitude of the edge distortions. But their duration is dominated by the inherent rise and fall times of the h [ n ] {\displaystyle h[n]} impulse response. Fig 5 is an example of piecewise convolution, using both methods 2 (in blue) and 3 (red dots). A sine function is created by computing the Discrete Hilbert transform of a cosine function, which was processed in four overlapping segments, and pieced back together. As the FIR result (blue) shows, the distortions apparent in the IIR result (red) are not caused by the difference between h [ n ] {\displaystyle h[n]} and h N [ n ] {\displaystyle h_{N}[n]} (green and red in Fig 3). 
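As a concrete illustration of methods 1 and 2 (a sketch, not code from the article or from MATLAB; the filter length 65, block size 512, and test frequency are arbitrary choices), the following Python/NumPy fragment designs a Hamming-tapered Type III FIR approximation h̃[n] of the ideal kernel and applies it to a long cosine stream with one common overlap-save convention, which discards the leading len(h)−1 transient samples of each block:

    import numpy as np

    def fir_hilbert(num_taps=65):
        # Type III FIR approximation of the ideal h[n] = 2/(pi*n) for odd n
        # (0 for even n), truncated and tapered with a Hamming window to
        # reduce the Gibbs ripple mentioned in the text.
        assert num_taps % 2 == 1               # odd length, antisymmetric
        n = np.arange(num_taps) - num_taps // 2
        h = np.zeros(num_taps)
        odd = (n % 2) != 0
        h[odd] = 2.0 / (np.pi * n[odd])
        return h * np.hamming(num_taps)

    def overlap_save(u, h, N=512):
        # Method 2: piecewise convolution with length-N FFTs; the first
        # len(h)-1 outputs of every block are transients and are discarded.
        M = len(h)
        step = N - (M - 1)
        H = np.fft.fft(h, N)
        u_pad = np.concatenate([np.zeros(M - 1), u])
        out = []
        for start in range(0, len(u), step):
            block = u_pad[start:start + N]
            block = np.pad(block, (0, N - len(block)))
            out.append(np.real(np.fft.ifft(np.fft.fft(block) * H))[M - 1:])
        return np.concatenate(out)[:len(u)]

    h = fir_hilbert(65)
    t = np.arange(4096)
    u = np.cos(2 * np.pi * 0.125 * t)
    v = overlap_save(u, h)[65 // 2:]           # compensate the group delay of 32
    err = np.abs(v - np.sin(2 * np.pi * 0.125 * t[:len(v)]))
    print(err[100:-100].max())                  # small mid-band error, away from edges

Because the Type III design is antisymmetric with odd length, its group delay is the integer 32 samples, so the output can be aligned with the input by a simple shift, as the text notes.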
The fact that h N [ n ] {\displaystyle h_{N}[n]} is tapered (windowed) is actually helpful in this context. The real problem is that it's not windowed enough. Effectively, M = N , {\displaystyle M=N,} whereas the overlap-save method needs M < N . {\displaystyle M<N.} == Number-theoretic Hilbert transform == The number theoretic Hilbert transform is an extension of the discrete Hilbert transform to integers modulo an appropriate prime number. In this it follows the generalization of discrete Fourier transform to number theoretic transforms. The number theoretic Hilbert transform can be used to generate sets of orthogonal discrete sequences. == See also == Analytic signal Harmonic conjugate Hilbert spectroscopy Hilbert transform in the complex plane Hilbert–Huang transform Kramers–Kronig relations Riesz transform Single-sideband modulation Singular integral operators of convolution type == Notes == == Page citations == == References == == Further reading == == External links == Derivation of the boundedness of the Hilbert transform Mathworld Hilbert transform — Contains a table of transforms Weisstein, Eric W. "Titchmarsh theorem". MathWorld. "GS256 Lecture 3: Hilbert Transformation" (PDF). Archived from the original (PDF) on 2012-02-27. an entry level introduction to Hilbert transformation.
Wikipedia/Discrete_Hilbert_transform
The fast Fourier transform (FFT) is an important tool in the fields of image and signal processing. The hexagonal fast Fourier transform (HFFT) uses existing FFT routines to compute the discrete Fourier transform (DFT) of images that have been captured with hexagonal sampling. The hexagonal grid serves as the optimal sampling lattice for isotropically band-limited two-dimensional signals and has a sampling efficiency which is 13.4% greater than the sampling efficiency obtained from rectangular sampling. Several other advantages of hexagonal sampling include consistent connectivity, higher symmetry, greater angular resolution, and equidistant neighbouring pixels. Sometimes, several of these advantages compound, increasing the efficiency by 50% in terms of computation and storage when compared to rectangular sampling. Despite all of these advantages of hexagonal sampling over rectangular sampling, its application has been limited because of the lack of an efficient coordinate system. However, that limitation has been removed with the recent development of the hexagonal efficient coordinate system (HECS, formerly known as array set addressing or ASA), which includes the benefit of a separable Fourier kernel. The existence of a separable Fourier kernel for a hexagonally sampled image allows the use of existing FFT routines to efficiently compute the DFT of such an image. == Preliminaries == === Hexagonal Efficient Coordinate System (HECS) === The hexagonal efficient coordinate system (formerly known as array set addressing (ASA)) was developed based on the fact that a hexagonal grid can be represented as a combination of two interleaved rectangular arrays. It is easy to address each individual array using familiar integer-valued row and column indices, and the individual arrays are distinguished by a single binary coordinate. Therefore, a full address for any point in the hexagonal grid can be uniquely represented by three coordinates. ( a , r , c ) ∈ { 0 , 1 } × Z × Z {\displaystyle (a,r,c)\in \{0,1\}\times \mathbb {Z} \times \mathbb {Z} } where the coordinates a, r and c represent the array, row and column respectively. The figure shows how the hexagonal grid is represented by two interleaved rectangular arrays in HECS coordinates. === Hexagonal discrete Fourier transform === The hexagonal discrete Fourier transform (HDFT) was developed by Mersereau, and it was converted to an HECS representation by Rummelt. Let x ( a , r , c ) {\displaystyle x(a,r,c)} be a two-dimensional hexagonally sampled signal and let both arrays be of size n × m {\displaystyle n\times m} . Let X ( b , s , d ) {\displaystyle X(b,s,d)} be the Fourier transform of x.
The HDFT equation for the forward transform is given by X ( b , s , d ) = ∑ a ∑ r ∑ c x ( a , r , c ) E ( ⋅ ) {\displaystyle X(b,s,d)=\sum _{a}\sum _{r}\sum _{c}x(a,r,c)E(\cdot )} where E ( ⋅ ) = exp ⁡ [ − j π ( ( a + 2 c ) ( b + 2 d ) 2 m + ( a + 2 r ) ( b + 2 s ) n ) ] {\displaystyle E(\cdot )=\exp \left[-j\pi \left({\frac {(a+2c)(b+2d)}{2m}}+{\frac {(a+2r)(b+2s)}{n}}\right)\right]} Note that the above equation is separable and hence can be expressed as X ( b , s , d ) = f 0 ( b , s , d ) + W ( ⋅ ) f 1 ( b , s , d ) {\displaystyle X(b,s,d)=f_{0}(b,s,d)+W(\cdot )f_{1}(b,s,d)} where W ( ⋅ ) = exp ⁡ [ − j π ( b + 2 d 2 m + b + 2 s n ) ] {\displaystyle W(\cdot )=\exp \left[-j\pi \left({\frac {b+2d}{2m}}+{\frac {b+2s}{n}}\right)\right]} and g a ( b , r , d ) = ∑ c x ( a , r , c ) exp ⁡ ( − j 2 π ( c ) ( b + 2 d ) 2 m ) {\displaystyle g_{a}(b,r,d)=\sum _{c}x(a,r,c)\exp \left(-j2\pi {\frac {(c)(b+2d)}{2m}}\right)} f a ( b , s , d ) = ∑ r g a ( b , r , d ) exp ⁡ ( − j 2 π ( r ) ( b + 2 s ) n ) {\displaystyle f_{a}(b,s,d)=\sum _{r}g_{a}(b,r,d)\exp \left(-j2\pi {\frac {(r)(b+2s)}{n}}\right)} == Hexagonal fast Fourier transform (HFFT) == The linear transforms g a {\displaystyle g_{a}} and f a {\displaystyle f_{a}} are similar to the rectangular Fourier kernel, where a linear transform is applied along each dimension of the 2-D rectangular data. Each of the equations discussed above is a combination of four rectangular arrays that serve as precursors to the HDFT; two of those four rectangular g a {\displaystyle g_{a}} terms contribute to each sub-array of the HFFT. Now, by switching the binary coordinate, we obtain four different forms of the equations. Three of those four expressions have been evaluated using what the author called "non-standard transforms" (NSTs, shown below), while one expression is computed using any correct and applicable FFT algorithm. g a ( 0 , r , d ) = ∑ c x ( a , r , c ) exp ⁡ ( − j 2 π ( c ) ( d ) m ) {\displaystyle g_{a}(0,r,d)=\sum _{c}x(a,r,c)\exp \left(-j2\pi {\frac {(c)(d)}{m}}\right)} g a ( 1 , r , d ) = ∑ c x ( a , r , c ) exp ⁡ ( − j 2 π ( c ) ( 2 d + 1 ) 2 m ) {\displaystyle g_{a}(1,r,d)=\sum _{c}x(a,r,c)\exp \left(-j2\pi {\frac {(c)(2d+1)}{2m}}\right)} f a ( 0 , s , d ) = ∑ r g a ( a , r , d ) exp ⁡ ( − j 2 π ( r ) ( 2 s ) n ) {\displaystyle f_{a}(0,s,d)=\sum _{r}g_{a}(a,r,d)\exp \left(-j2\pi {\frac {(r)(2s)}{n}}\right)} f a ( 1 , s , d ) = ∑ r g a ( a , r , d ) exp ⁡ ( − j 2 π ( r ) ( 2 s + 1 ) n ) {\displaystyle f_{a}(1,s,d)=\sum _{r}g_{a}(a,r,d)\exp \left(-j2\pi {\frac {(r)(2s+1)}{n}}\right)} Looking at the second expression, g a ( 1 , r , d ) {\displaystyle g_{a}(1,r,d)} , we see that it is nothing more than a standard discrete Fourier transform (DFT) with a constant offset along the rows of the rectangular sub-arrays of a hexagonally sampled image x ( a , r , c ) {\displaystyle x(a,r,c)} ; in other words, it is a circular rotation of the DFT. Note that the shift must be an integer number of samples for this property to hold. This way, the function g a {\displaystyle g_{a}} can be computed using the standard DFT, in the same number of operations, without introducing an NST. If we look at the 0-array f a {\displaystyle f_{a}} , the expression is always symmetric about half its spatial period. Because of this, it is enough to compute only half of it.
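The separability claimed above can be verified directly. The following Python/NumPy sketch (an illustration, not code from the cited papers) evaluates the quadruple-sum HDFT definition by brute force and compares it with the two-pass separable evaluation through g_a and f_a; the array size is kept small because the direct form costs O((nm)²) operations:

    import numpy as np

    def hdft_direct(x):
        # Brute-force evaluation of the HDFT definition, with x indexed
        # by HECS coordinates (a, r, c) and shape (2, n, m).
        _, n, m = x.shape
        X = np.zeros((2, n, m), dtype=complex)
        for b in range(2):
            for s in range(n):
                for d in range(m):
                    acc = 0.0 + 0.0j
                    for a in range(2):
                        for r in range(n):
                            for c in range(m):
                                phase = ((a + 2*c) * (b + 2*d) / (2*m)
                                         + (a + 2*r) * (b + 2*s) / n)
                                acc += x[a, r, c] * np.exp(-1j * np.pi * phase)
                    X[b, s, d] = acc
        return X

    def hdft_separable(x):
        # Two-pass evaluation: the g_a pass over columns, the f_a pass
        # over rows, then the two sub-arrays combined with the factor W.
        _, n, m = x.shape
        X = np.zeros((2, n, m), dtype=complex)
        c, r = np.arange(m), np.arange(n)
        for b in range(2):
            for d in range(m):
                g = x @ np.exp(-2j * np.pi * c * (b + 2*d) / (2*m))   # g_a(b, r, d)
                for s in range(n):
                    f = g @ np.exp(-2j * np.pi * r * (b + 2*s) / n)   # f_a(b, s, d)
                    W = np.exp(-1j * np.pi * ((b + 2*d) / (2*m) + (b + 2*s) / n))
                    X[b, s, d] = f[0] + W * f[1]
        return X

    x = np.random.default_rng(1).standard_normal((2, 4, 4))
    print(np.allclose(hdft_direct(x), hdft_separable(x)))   # True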
Examining the 0-array f a {\displaystyle f_{a}} further, we find that it is the standard DFT of the columns of g a {\displaystyle g_{a}} , decimated by a factor of 2 and then duplicated to span the space of s for the identical second period of the complex exponential. Mathematically, X even [ k ] = ∑ n = 0 N − 1 x [ n ] e − 2 j π N 2 k n = ∑ n = 0 N 2 − 1 x [ n ] e − 2 j π N / 2 k n + ∑ n = N 2 N − 1 x [ n ] e − 2 j π N / 2 k n = ∑ n = 0 N 2 − 1 x [ n ] e − 2 j π N / 2 k n + ∑ n = 0 N 2 − 1 x [ n + N 2 ] e − 2 j π N / 2 k n = ∑ n = 0 N 2 − 1 ( x [ n ] + x [ n + N 2 ] ) e − 2 j π N / 2 k n {\displaystyle {\begin{aligned}X_{\text{even}}[k]&=\sum _{n=0}^{N-1}x[n]e^{-{\tfrac {2j\pi }{N}}2kn}\\[5pt]&=\sum _{n=0}^{{\tfrac {N}{2}}-1}x[n]e^{-{\tfrac {2j\pi }{N/2}}kn}+\sum _{n={\tfrac {N}{2}}}^{N-1}x[n]e^{-{\tfrac {2j\pi }{N/2}}kn}\\[5pt]&=\sum _{n=0}^{{\tfrac {N}{2}}-1}x[n]e^{-{\tfrac {2j\pi }{N/2}}kn}+\sum _{n=0}^{{\tfrac {N}{2}}-1}x\left[n+{\tfrac {N}{2}}\right]e^{-{\tfrac {2j\pi }{N/2}}kn}\\[5pt]&=\sum _{n=0}^{{\tfrac {N}{2}}-1}\left(x[n]+x\left[n+{\tfrac {N}{2}}\right]\right)e^{-{\tfrac {2j\pi }{N/2}}kn}\end{aligned}}} The expression for the 1-array f a {\displaystyle f_{a}} is equivalent to the 0-array expression with a shift of one sample. Hence, the 1-array expression can be expressed as the columns of the DFT of g a {\displaystyle g_{a}} decimated by a factor of two, starting with the second sample to provide the constant offset needed by the 1-array, and then doubled in space to span the range of s. Thus, the method developed by James B. Birdsong and Nicholas I. Rummelt is able to compute the HFFT using standard FFT routines, unlike the previous work. == References ==
Wikipedia/Hexagonal_fast_Fourier_transform
Intel Integrated Performance Primitives (Intel IPP) is an extensive library of ready-to-use, domain-specific functions that are highly optimized for diverse Intel architectures. Its royalty-free APIs help developers take advantage of single instruction, multiple data (SIMD) instructions. The library supports Intel and compatible processors and is available for Linux and Windows. It is available separately or as a part of Intel oneAPI Base Toolkit. Intel IPP releases use a semantic versioning scheme, so even though the major version looks like a year (YYYY), it is not technically a year and might not change every calendar year. == Features == The library takes advantage of processor features including MMX, SSE, SSE2, SSE3, SSSE3, SSE4, AVX, AVX2, AVX-512, AES-NI, Intel Advanced Matrix Extensions (Intel AMX) and multi-core processors. Intel IPP includes functions for: Video decode/encode Audio decode/encode JPEG/JPEG2000/JPEG XR Computer vision Data compression Image color conversion Image processing Ray tracing and rendering Signal processing Speech coding Speech recognition String processing Vector and matrix mathematics With the launch of the Intel Cryptography Primitives Library in October 2024, the cryptography domain API has been split off and moved into the new library. == Organization == Intel IPP is divided into three major processing groups: signal processing (with linear array or vector data), image processing (with 2D arrays for typical color spaces) and data compression. Half the entry points are of the matrix type, a third are of the signal type, and the remainder are of the image type. Intel IPP functions are implemented for a range of data types, including 8u (8-bit unsigned), 8s (8-bit signed), 16s, 32f (32-bit floating-point) and 64f. Typically, an application developer works with only one dominant data type for most processing functions, converting between input, processing, and output formats at the end points. == History == Version 2.0 files are dated April 22, 2002. Version 3.0 Version 4.0 files are dated November 11, 2003. 4.0 runtime fully supports applications coded for 3.0 and 2.0. Version 5.1 files are dated March 9, 2006. 5.1 runtime does not support applications coded for 4.0 or before. Version 5.2 files are dated April 11, 2007. 5.2 runtime does not support applications coded for 5.1 or before. Introduced June 5, 2007, adding code samples for data compression, new video codec support, support for 64-bit applications on Mac OS X, support for Windows Vista, and new functions for ray-tracing and rendering. Version 6.1 was released with the Intel C++ Compiler on June 28, 2009. Update 1 for version 6.1 was released on July 28, 2009. Update 2 files are dated October 19, 2009.
Version 7.1 Version 8.0 Version 8.1 Version 8.2 Version 9.0 Initial Release, August 25, 2015 Version 9.0 Update 1, December 1, 2015 Version 9.0 Update 2 Version 9.0 Update 3 Version 9.0 Update 4 Version 2017 Initial Release Version 2017 Update 1 Version 2017 Update 2 Version 2017 Update 3, February 28, 2016 Version 2018 Initial Release Version 2018 Update 1 Version 2018 Update 2 Version 2018 Update 2.1 Version 2018 Update 3 Version 2018 Update 3.1 Version 2018 Update 4, September 20, 2018 Version 2019 Initial Release Version 2019 Update 1 Version 2019 Update 2 Version 2019 Update 3, February 14, 2019 Version 2019 Update 4 Version 2019 Update 5 Version 2020 Initial Release, December 12, 2019 Version 2020 Update 1, March 30, 2020 Version 2020 Update 2, July 16, 2020 Version 2020 Update 3 Version 2021 Initial Release Version 2021.1 Version 2021.2 Version 2021.3 Version 2021.4 Version 2021.5 Version 2021.6 Version 2021.7, December 2022 Version 2021.8, April 2023 Version 2021.9.0, July 2023 Version 2021.9.1, October 2023 Version 2021.10.0, November 2023 Version 2021.10.1, December 2023 Version 2021.11.0, March 2024 Version 2021.12.0, June 2024 Version 2022.0.0, October 2024 Version 2022.1.0 March 2025 == Counterparts == Sun: mediaLib for Solaris Apple: vDSP, vImage, Accelerate etc. for macOS AMD: Framewave (formerly the AMD Performance Library or APL) Khronos Group: OpenMAX DL NVIDIA Performance Primitives == See also == Intel oneAPI Base Toolkit Intel oneAPI HPC Toolkit Intel oneAPI Data Analytics Library (oneDAL) Intel oneAPI Math Kernel Library (oneMKL) Intel oneAPI Threading Building Blocks (oneTBB) Intel Advisor Intel VTune Profiler Intel Developer Zone (Intel DZ; support and discussion) == References == == External links == Official website Intel oneAPI Base Toolkit Home Page Stewart Taylor, "Intel Integrated Performance Primitives - How to Optimize Software Applications Using Intel IPP", Intel Press. Jpeg Delphi implementation using official JPEG Group C library or Intel Jpeg Library 1.5 (ijl.dll included) How To Install OpenCV using IPP (french). Archived 2020-08-08 at the Wayback Machine
Wikipedia/Integrated_Performance_Primitives
An astronomical interferometer or telescope array is a set of separate telescopes, mirror segments, or radio telescope antennas that work together as a single telescope to provide higher resolution images of astronomical objects such as stars, nebulas and galaxies by means of interferometry. The advantage of this technique is that it can theoretically produce images with the angular resolution of a huge telescope with an aperture equal to the separation, called baseline, between the component telescopes. The main drawback is that it does not collect as much light as the complete instrument's mirror. Thus it is mainly useful for fine resolution of more luminous astronomical objects, such as close binary stars. Another drawback is that the maximum angular size of a detectable emission source is limited by the minimum gap between detectors in the collector array. Interferometry is most widely used in radio astronomy, in which signals from separate radio telescopes are combined. A mathematical signal processing technique called aperture synthesis is used to combine the separate signals to create high-resolution images. In Very Long Baseline Interferometry (VLBI) radio telescopes separated by thousands of kilometers are combined to form a radio interferometer with a resolution which would be given by a hypothetical single dish with an aperture thousands of kilometers in diameter. At the shorter wavelengths used in infrared astronomy and optical astronomy it is more difficult to combine the light from separate telescopes, because the light must be kept coherent within a fraction of a wavelength over long optical paths, requiring very precise optics. Practical infrared and optical astronomical interferometers have only recently been developed, and are at the cutting edge of astronomical research. At optical wavelengths, aperture synthesis allows the atmospheric seeing resolution limit to be overcome, allowing the angular resolution to reach the diffraction limit of the optics. Astronomical interferometers can produce higher resolution astronomical images than any other type of telescope. At radio wavelengths, image resolutions of a few micro-arcseconds have been obtained, and image resolutions of a fractional milliarcsecond have been achieved at visible and infrared wavelengths. One simple layout of an astronomical interferometer is a parabolic arrangement of mirror pieces, giving a partially complete reflecting telescope but with a "sparse" or "dilute" aperture. In fact, the parabolic arrangement of the mirrors is not important, as long as the optical path lengths from the astronomical object to the beam combiner (focus) are the same as would be given by the complete mirror case. Instead, most existing arrays use a planar geometry, and Labeyrie's hypertelescope will use a spherical geometry. == History == One of the first uses of optical interferometry was applied by the Michelson stellar interferometer on the Mount Wilson Observatory's reflector telescope to measure the diameters of stars. The red giant star Betelgeuse was the first to have its diameter determined in this way on December 13, 1920. In the 1940s radio interferometry was used to perform the first high resolution radio astronomy observations. For the next three decades astronomical interferometry research was dominated by research at radio wavelengths, leading to the development of large instruments such as the Very Large Array and the Atacama Large Millimeter Array. 
Optical/infrared interferometry was extended to measurements using separated telescopes by Johnson, Betz and Townes (1974) in the infrared and by Labeyrie (1975) in the visible. In the late 1970s improvements in computer processing allowed for the first "fringe-tracking" interferometer, which operates fast enough to follow the blurring effects of astronomical seeing, leading to the Mk I, II and III series of interferometers. Similar techniques have now been applied at other astronomical telescope arrays, including the Keck Interferometer and the Palomar Testbed Interferometer. In the 1980s the aperture synthesis interferometric imaging technique was extended to visible light and infrared astronomy by the Cavendish Astrophysics Group, providing the first very high resolution images of nearby stars. In 1995 this technique was demonstrated on an array of separate optical telescopes for the first time, allowing a further improvement in resolution, and allowing even higher resolution imaging of stellar surfaces. Software packages such as BSMEM or MIRA are used to convert the measured visibility amplitudes and closure phases into astronomical images. The same techniques have now been applied at a number of other astronomical telescope arrays, including the Navy Precision Optical Interferometer, the Infrared Spatial Interferometer and the IOTA array. A number of other interferometers have made closure phase measurements and are expected to produce their first images soon, including the VLTI, the CHARA array and Le Coroller and Dejonghe's Hypertelescope prototype. If completed, the MRO Interferometer with up to ten movable telescopes will produce among the first higher fidelity images from a long baseline interferometer. The Navy Optical Interferometer took the first step in this direction in 1996, achieving 3-way synthesis of an image of Mizar; then a first-ever six-way synthesis of Eta Virginis in 2002; and most recently "closure phase" as a step to the first synthesized images produced by geostationary satellites. == Modern astronomical interferometry == Astronomical interferometry is principally conducted using Michelson (and sometimes other type) interferometers. The principal operational interferometric observatories which use this type of instrumentation include VLTI, NPOI, and CHARA. Current projects will use interferometers to search for extrasolar planets, either by astrometric measurements of the reciprocal motion of the star (as used by the Palomar Testbed Interferometer and the VLTI), through the use of nulling (as will be used by the Keck Interferometer and Darwin) or through direct imaging (as proposed for Labeyrie's Hypertelescope). Engineers at the European Southern Observatory ESO designed the Very Large Telescope VLT so that it can also be used as an interferometer. Along with the four 8.2-metre (320 in) unit telescopes, four mobile 1.8-metre auxiliary telescopes (ATs) were included in the overall VLT concept to form the Very Large Telescope Interferometer (VLTI). The ATs can move between 30 different stations, and at present, the telescopes can form groups of two or three for interferometry. When using interferometry, a complex system of mirrors brings the light from the different telescopes to the astronomical instruments where it is combined and processed. This is technically demanding as the light paths must be kept equal to within 1/1000 mm (the same order as the wavelength of light) over distances of a few hundred metres. 
For the Unit Telescopes, this gives an equivalent mirror diameter of up to 130 metres (430 ft), and when combining the auxiliary telescopes, equivalent mirror diameters of up to 200 metres (660 ft) can be achieved. This is up to 25 times better than the resolution of a single VLT unit telescope. The VLTI gives astronomers the ability to study celestial objects in unprecedented detail. It is possible to see details on the surfaces of stars and even to study the environment close to a black hole. With a spatial resolution of 4 milliarcseconds, the VLTI has allowed astronomers to obtain one of the sharpest images ever of a star. This is equivalent to resolving the head of a screw at a distance of 300 km (190 mi). Notable 1990s results included the Mark III measurement of diameters of 100 stars and many accurate stellar positions, COAST and NPOI producing many very high resolution images, and Infrared Stellar Interferometer measurements of stars in the mid-infrared for the first time. Additional results include direct measurements of the sizes of and distances to Cepheid variable stars, and young stellar objects. High on the Chajnantor plateau in the Chilean Andes, the European Southern Observatory (ESO), together with its international partners, is building ALMA, which will gather radiation from some of the coldest objects in the Universe. ALMA will be a single telescope of a new design, composed initially of 66 high-precision antennas and operating at wavelengths of 0.3 to 9.6 mm. Its main 12-meter array will have fifty antennas, 12 metres in diameter, acting together as a single telescope – an interferometer. An additional compact array of four 12-metre and twelve 7-meter antennas will complement this. The antennas can be spread across the desert plateau over distances from 150 metres to 16 kilometres, which will give ALMA a powerful variable "zoom". It will be able to probe the Universe at millimetre and submillimetre wavelengths with unprecedented sensitivity and resolution, with a resolution up to ten times greater than the Hubble Space Telescope, and complementing images made with the VLT interferometer. Optical interferometers are mostly seen by astronomers as very specialized instruments, capable of a very limited range of observations. It is often said that an interferometer achieves the effect of a telescope the size of the distance between the apertures; this is only true in the limited sense of angular resolution. The amount of light gathered—and hence the dimmest object that can be seen—depends on the real aperture size, so an interferometer would offer little improvement as the image is dim (the thinned-array curse). The combined effects of limited aperture area and atmospheric turbulence generally limits interferometers to observations of comparatively bright stars and active galactic nuclei. However, they have proven useful for making very high precision measurements of simple stellar parameters such as size and position (astrometry), for imaging the nearest giant stars and probing the cores of nearby active galaxies. For details of individual instruments, see the list of astronomical interferometers at visible and infrared wavelengths. At radio wavelengths, interferometers such as the Very Large Array and MERLIN have been in operation for many years. The distances between telescopes are typically 10–100 km (6.2–62.1 mi), although arrays with much longer baselines utilize the techniques of Very Long Baseline Interferometry. 
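The headline resolutions quoted above follow from the usual diffraction rule of thumb θ ≈ λ/B for observing wavelength λ on a baseline B. A small Python sketch (illustrative only; the baselines are round numbers, not the parameters of any particular facility):

    import math

    def resolution_mas(wavelength_m, baseline_m):
        # Rule-of-thumb interferometer resolution theta ~ lambda / B,
        # converted from radians to milliarcseconds.
        theta = wavelength_m / baseline_m
        return math.degrees(theta) * 3600.0 * 1000.0

    # 2.2-micron (K band) light across a 130 m baseline
    print(resolution_mas(2.2e-6, 130.0))    # ~3.5 mas, consistent with the VLTI's 4 mas
    # mm-wave VLBI: 1.3 mm across an intercontinental 10,000 km baseline
    print(resolution_mas(1.3e-3, 1.0e7))    # ~0.027 mas, i.e. tens of micro-arcseconds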
In the (sub)-millimetre, existing arrays include the Submillimeter Array and the IRAM Plateau de Bure facility. The Atacama Large Millimeter Array has been fully operational since March 2013. Max Tegmark and Matias Zaldarriaga have proposed the Fast Fourier Transform Telescope, which would rely on extensive computer power rather than standard lenses and mirrors. If Moore's law continues, such designs may become practical and cheap in a few years. Advances in quantum computing might eventually allow more extensive use of interferometry, as newer proposals suggest. == See also == Event Horizon Telescope (EHT) and Laser Interferometer Space Antenna (LISA) ExoLife Finder, a proposed hybrid interferometric telescope Hypertelescope Cambridge Optical Aperture Synthesis Telescope, an optical interferometer Navy Precision Optical Interferometer, a Michelson Optical Interferometer Radio astronomy § Radio interferometry Radio telescope § Radio interferometry List 4C Array Akeno Giant Air Shower Array (AGASA) Allen Telescope Array (ATA), formerly known as the One Hectare Telescope (1hT) Antarctic Muon And Neutrino Detector Array (AMANDA) Atacama Large Millimeter Array (ALMA) Australia Telescope Compact Array CHARA array Cherenkov Telescope Array (CTA) Chicago Air Shower Array (CASA) Infrared Optical Telescope Array (IOTA) Interplanetary Scintillation Array (IPS array) also called the Pulsar Array LOFAR (LOw Frequency ARray) Modular Neutron Array (MoNA) Murchison Widefield Array (MWA) Northern Extended Millimeter Array (NOEMA) Nuclear Spectroscopic Telescope Array (NuSTAR) Square Kilometre Array (SKA) Submillimeter Array (SMA) Sunyaev-Zel'dovich Array (SZA) Telescope Array Project Very Large Array (VLA) Very Long Baseline Array (VLBA) Very Small Array == References == == Further reading == Hariharan, P. (1991). Basics of Interferometry. Academic Press. ISBN 978-0123252180. Thompson, A. Richard; Moran, James M.; Swenson, George W. (2001). Interferometry and Synthesis in Radio Astronomy. Wiley-VCH. ISBN 978-0471254928. == External links == How to combine the light from multiple telescopes for astrometric measurements at NPOI... Why an Optical Interferometer? Remote Sensing the potential and limits of astronomical interferometry Antoine Labeyrie's hypertelescope project website
Wikipedia/Fast_Fourier_Transform_Telescope
The vector-radix FFT algorithm is a multidimensional fast Fourier transform (FFT) algorithm that generalizes the ordinary Cooley–Tukey FFT algorithm, dividing the transform dimensions by arbitrary radices. It breaks a multidimensional (MD) discrete Fourier transform (DFT) down into successively smaller MD DFTs until, ultimately, only trivial MD DFTs need to be evaluated. The most common multidimensional FFT algorithm is the row-column algorithm, which transforms the array first along one index and then along the other; see more in FFT. A radix-2 direct 2-D FFT was later developed that eliminates 25% of the multiplications compared with the conventional row-column approach, and this algorithm has been extended to rectangular arrays and arbitrary radices, giving the general vector-radix algorithm. The vector-radix FFT algorithm can significantly reduce the number of complex multiplications compared with the row-column algorithm. For example, for an N M {\displaystyle N^{M}} -element matrix (M dimensions, and size N on each dimension), the number of complex multiplications required by the radix-2 vector-radix FFT algorithm is 2 M − 1 2 M N M log 2 ⁡ N {\displaystyle {\frac {2^{M}-1}{2^{M}}}N^{M}\log _{2}N} , while for the row-column algorithm it is M N M 2 log 2 ⁡ N {\displaystyle {\frac {MN^{M}}{2}}\log _{2}N} . Generally, even larger savings in multiplications are obtained when this algorithm is operated on larger radices and on higher-dimensional arrays. Overall, the vector-radix algorithm significantly reduces the structural complexity of the traditional DFT by providing a better indexing scheme, at the expense of a slight increase in arithmetic operations. So this algorithm is widely used for many applications in engineering, science, and mathematics, for example in image processing and in the design of high-speed FFT processors. == 2-D DIT case == As with the Cooley–Tukey FFT algorithm, the two dimensional vector-radix FFT is derived by decomposing the regular 2-D DFT into sums of smaller DFT's multiplied by "twiddle" factors. A decimation-in-time (DIT) algorithm means the decomposition is based on the time domain x {\displaystyle x} ; see more in Cooley–Tukey FFT algorithm. We suppose the 2-D DFT is defined X ( k 1 , k 2 ) = ∑ n 1 = 0 N 1 − 1 ∑ n 2 = 0 N 2 − 1 x [ n 1 , n 2 ] ⋅ W N 1 k 1 n 1 W N 2 k 2 n 2 , {\displaystyle X(k_{1},k_{2})=\sum _{n_{1}=0}^{N_{1}-1}\sum _{n_{2}=0}^{N_{2}-1}x[n_{1},n_{2}]\cdot W_{N_{1}}^{k_{1}n_{1}}W_{N_{2}}^{k_{2}n_{2}},} where k 1 = 0 , … , N 1 − 1 {\displaystyle k_{1}=0,\dots ,N_{1}-1} , and k 2 = 0 , … , N 2 − 1 {\displaystyle k_{2}=0,\dots ,N_{2}-1} , and x [ n 1 , n 2 ] {\displaystyle x[n_{1},n_{2}]} is an N 1 × N 2 {\displaystyle N_{1}\times N_{2}} matrix, and W N = exp ⁡ ( − j 2 π / N ) {\displaystyle W_{N}=\exp(-j2\pi /N)} . For simplicity, let us assume that N 1 = N 2 = N {\displaystyle N_{1}=N_{2}=N} , and the radix- ( r × r ) {\displaystyle (r\times r)} is such that N / r {\displaystyle N/r} is an integer.
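As a quick sanity check of this convention, the 2-D DFT just defined can be evaluated directly as two matrix products and compared against a library FFT, which uses the same sign convention. The following is a minimal NumPy sketch; the array shape and variable names are illustrative, not from the original text.

```python
import numpy as np

# Direct evaluation of the 2-D DFT defined above, with W_N = exp(-j*2*pi/N).
N1, N2 = 4, 6
x = np.random.rand(N1, N2)
W1 = np.exp(-2j * np.pi * np.outer(np.arange(N1), np.arange(N1)) / N1)  # W_{N1}^{k1 n1}
W2 = np.exp(-2j * np.pi * np.outer(np.arange(N2), np.arange(N2)) / N2)  # W_{N2}^{k2 n2}
X = W1 @ x @ W2.T   # X(k1,k2) = sum over n1,n2 of x[n1,n2] W1^{k1 n1} W2^{k2 n2}
assert np.allclose(X, np.fft.fft2(x))  # NumPy's fft2 uses the same convention
```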
Using the change of variables: n i = r p i + q i {\displaystyle n_{i}=rp_{i}+q_{i}} , where p i = 0 , … , ( N / r ) − 1 ; q i = 0 , … , r − 1 ; {\displaystyle p_{i}=0,\ldots ,(N/r)-1;q_{i}=0,\ldots ,r-1;} k i = u i + v i N / r {\displaystyle k_{i}=u_{i}+v_{i}N/r} , where u i = 0 , … , ( N / r ) − 1 ; v i = 0 , … , r − 1 ; {\displaystyle u_{i}=0,\ldots ,(N/r)-1;v_{i}=0,\ldots ,r-1;} where i = 1 {\displaystyle i=1} or 2 {\displaystyle 2} , then the two dimensional DFT can be written as: X ( u 1 + v 1 N / r , u 2 + v 2 N / r ) = ∑ q 1 = 0 r − 1 ∑ q 2 = 0 r − 1 [ ∑ p 1 = 0 N / r − 1 ∑ p 2 = 0 N / r − 1 x [ r p 1 + q 1 , r p 2 + q 2 ] W N / r p 1 u 1 W N / r p 2 u 2 ] ⋅ W N q 1 u 1 + q 2 u 2 W r q 1 v 1 W r q 2 v 2 , {\displaystyle X(u_{1}+v_{1}N/r,u_{2}+v_{2}N/r)=\sum _{q_{1}=0}^{r-1}\sum _{q_{2}=0}^{r-1}\left[\sum _{p_{1}=0}^{N/r-1}\sum _{p_{2}=0}^{N/r-1}x[rp_{1}+q_{1},rp_{2}+q_{2}]W_{N/r}^{p_{1}u_{1}}W_{N/r}^{p_{2}u_{2}}\right]\cdot W_{N}^{q_{1}u_{1}+q_{2}u_{2}}W_{r}^{q_{1}v_{1}}W_{r}^{q_{2}v_{2}},} The equation above defines the basic structure of the 2-D DIT radix- ( r × r ) {\displaystyle (r\times r)} "butterfly". (See 1-D "butterfly" in Cooley–Tukey FFT algorithm) When r = 2 {\displaystyle r=2} , the equation can be broken into four summations, and this leads to: X ( k 1 , k 2 ) = S 00 ( k 1 , k 2 ) + S 01 ( k 1 , k 2 ) W N k 2 + S 10 ( k 1 , k 2 ) W N k 1 + S 11 ( k 1 , k 2 ) W N k 1 + k 2 {\displaystyle X(k_{1},k_{2})=S_{00}(k_{1},k_{2})+S_{01}(k_{1},k_{2})W_{N}^{k_{2}}+S_{10}(k_{1},k_{2})W_{N}^{k_{1}}+S_{11}(k_{1},k_{2})W_{N}^{k_{1}+k_{2}}} for 0 ≤ k 1 , k 2 < N 2 {\displaystyle 0\leq k_{1},k_{2}<{\frac {N}{2}}} , where S i j ( k 1 , k 2 ) = ∑ n 1 = 0 N / 2 − 1 ∑ n 2 = 0 N / 2 − 1 x [ 2 n 1 + i , 2 n 2 + j ] ⋅ W N / 2 n 1 k 1 W N / 2 n 2 k 2 {\displaystyle S_{ij}(k_{1},k_{2})=\sum _{n_{1}=0}^{N/2-1}\sum _{n_{2}=0}^{N/2-1}x[2n_{1}+i,2n_{2}+j]\cdot W_{N/2}^{n_{1}k_{1}}W_{N/2}^{n_{2}k_{2}}} . Each S i j {\displaystyle S_{ij}} can be viewed as an ( N / 2 × N / 2 ) {\displaystyle (N/2\times N/2)} -point 2-D DFT over a subset of the original samples: S 00 {\displaystyle S_{00}} is the DFT over those samples of x {\displaystyle x} for which both n 1 {\displaystyle n_{1}} and n 2 {\displaystyle n_{2}} are even; S 01 {\displaystyle S_{01}} is the DFT over the samples for which n 1 {\displaystyle n_{1}} is even and n 2 {\displaystyle n_{2}} is odd; S 10 {\displaystyle S_{10}} is the DFT over the samples for which n 1 {\displaystyle n_{1}} is odd and n 2 {\displaystyle n_{2}} is even; S 11 {\displaystyle S_{11}} is the DFT over the samples for which both n 1 {\displaystyle n_{1}} and n 2 {\displaystyle n_{2}} are odd.
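The radix-(2 × 2) identity above is easy to check numerically: the four parity sub-DFTs, recombined with the stated twiddle factors, reproduce the low-frequency quadrant of the full 2-D DFT. A minimal NumPy sketch, with an illustrative 8 × 8 array:

```python
import numpy as np

N = 8
h = N // 2
x = np.random.rand(N, N)
# S_ij: 2-D DFTs over the four parity subsets of the samples.
S = {(i, j): np.fft.fft2(x[i::2, j::2]) for i in (0, 1) for j in (0, 1)}
k1, k2 = np.meshgrid(np.arange(h), np.arange(h), indexing="ij")
W = np.exp(-2j * np.pi / N)
# X(k1,k2) = S00 + S01*W^k2 + S10*W^k1 + S11*W^(k1+k2), for 0 <= k1,k2 < N/2.
X_quad = S[0, 0] + S[0, 1] * W**k2 + S[1, 0] * W**k1 + S[1, 1] * W**(k1 + k2)
assert np.allclose(X_quad, np.fft.fft2(x)[:h, :h])
```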
Thanks to the periodicity of the complex exponential, we can obtain the following additional identities, valid for 0 ≤ k 1 , k 2 < N 2 {\displaystyle 0\leq k_{1},k_{2}<{\frac {N}{2}}} : X ( k 1 + N 2 , k 2 ) = S 00 ( k 1 , k 2 ) + S 01 ( k 1 , k 2 ) W N k 2 − S 10 ( k 1 , k 2 ) W N k 1 − S 11 ( k 1 , k 2 ) W N k 1 + k 2 {\displaystyle X{\biggl (}k_{1}+{\frac {N}{2}},k_{2}{\biggr )}=S_{00}(k_{1},k_{2})+S_{01}(k_{1},k_{2})W_{N}^{k_{2}}-S_{10}(k_{1},k_{2})W_{N}^{k_{1}}-S_{11}(k_{1},k_{2})W_{N}^{k_{1}+k_{2}}} ; X ( k 1 , k 2 + N 2 ) = S 00 ( k 1 , k 2 ) − S 01 ( k 1 , k 2 ) W N k 2 + S 10 ( k 1 , k 2 ) W N k 1 − S 11 ( k 1 , k 2 ) W N k 1 + k 2 {\displaystyle X{\biggl (}k_{1},k_{2}+{\frac {N}{2}}{\biggr )}=S_{00}(k_{1},k_{2})-S_{01}(k_{1},k_{2})W_{N}^{k_{2}}+S_{10}(k_{1},k_{2})W_{N}^{k_{1}}-S_{11}(k_{1},k_{2})W_{N}^{k_{1}+k_{2}}} ; X ( k 1 + N 2 , k 2 + N 2 ) = S 00 ( k 1 , k 2 ) − S 01 ( k 1 , k 2 ) W N k 2 − S 10 ( k 1 , k 2 ) W N k 1 + S 11 ( k 1 , k 2 ) W N k 1 + k 2 {\displaystyle X{\biggl (}k_{1}+{\frac {N}{2}},k_{2}+{\frac {N}{2}}{\biggr )}=S_{00}(k_{1},k_{2})-S_{01}(k_{1},k_{2})W_{N}^{k_{2}}-S_{10}(k_{1},k_{2})W_{N}^{k_{1}}+S_{11}(k_{1},k_{2})W_{N}^{k_{1}+k_{2}}} . == 2-D DIF case == Similarly, a decimation-in-frequency (DIF, also called the Sande–Tukey algorithm) algorithm means the decomposition is based on the frequency domain X {\displaystyle X} ; see more in Cooley–Tukey FFT algorithm. Using the change of variables: n i = p i + q i N / r {\displaystyle n_{i}=p_{i}+q_{i}N/r} , where p i = 0 , … , ( N / r ) − 1 ; q i = 0 , … , r − 1 ; {\displaystyle p_{i}=0,\ldots ,(N/r)-1;q_{i}=0,\ldots ,r-1;} k i = r u i + v i {\displaystyle k_{i}=ru_{i}+v_{i}} , where u i = 0 , … , ( N / r ) − 1 ; v i = 0 , … , r − 1 ; {\displaystyle u_{i}=0,\ldots ,(N/r)-1;v_{i}=0,\ldots ,r-1;} where i = 1 {\displaystyle i=1} or 2 {\displaystyle 2} , and the DFT equation can be written as: X ( r u 1 + v 1 , r u 2 + v 2 ) = ∑ p 1 = 0 N / r − 1 ∑ p 2 = 0 N / r − 1 [ ∑ q 1 = 0 r − 1 ∑ q 2 = 0 r − 1 x [ p 1 + q 1 N / r , p 2 + q 2 N / r ] W r q 1 v 1 W r q 2 v 2 ] ⋅ W N p 1 v 1 + p 2 v 2 W N / r p 1 u 1 W N / r p 2 u 2 , {\displaystyle X(ru_{1}+v_{1},ru_{2}+v_{2})=\sum _{p_{1}=0}^{N/r-1}\sum _{p_{2}=0}^{N/r-1}\left[\sum _{q_{1}=0}^{r-1}\sum _{q_{2}=0}^{r-1}x[p_{1}+q_{1}N/r,p_{2}+q_{2}N/r]W_{r}^{q_{1}v_{1}}W_{r}^{q_{2}v_{2}}\right]\cdot W_{N}^{p_{1}v_{1}+p_{2}v_{2}}W_{N/r}^{p_{1}u_{1}}W_{N/r}^{p_{2}u_{2}},} == Other approaches == The split-radix FFT algorithm has been proved to be a useful method for the 1-D DFT, and this method has been applied to the vector-radix FFT to obtain a split vector-radix FFT.
In the conventional 2-D vector-radix algorithm, we decompose the indices k 1 , k 2 {\displaystyle k_{1},k_{2}} into 4 groups: X ( 2 k 1 , 2 k 2 ) : even-even X ( 2 k 1 , 2 k 2 + 1 ) : even-odd X ( 2 k 1 + 1 , 2 k 2 ) : odd-even X ( 2 k 1 + 1 , 2 k 2 + 1 ) : odd-odd {\displaystyle {\begin{array}{lcl}X(2k_{1},2k_{2})&:&{\text{even-even}}\\X(2k_{1},2k_{2}+1)&:&{\text{even-odd}}\\X(2k_{1}+1,2k_{2})&:&{\text{odd-even}}\\X(2k_{1}+1,2k_{2}+1)&:&{\text{odd-odd}}\end{array}}} In the split vector-radix algorithm, the first three groups remain unchanged, while the fourth, odd-odd group is further decomposed into another four sub-groups, giving seven groups in total: X ( 2 k 1 , 2 k 2 ) : even-even X ( 2 k 1 , 2 k 2 + 1 ) : even-odd X ( 2 k 1 + 1 , 2 k 2 ) : odd-even X ( 4 k 1 + 1 , 4 k 2 + 1 ) : odd-odd X ( 4 k 1 + 1 , 4 k 2 + 3 ) : odd-odd X ( 4 k 1 + 3 , 4 k 2 + 1 ) : odd-odd X ( 4 k 1 + 3 , 4 k 2 + 3 ) : odd-odd {\displaystyle {\begin{array}{lcl}X(2k_{1},2k_{2})&:&{\text{even-even}}\\X(2k_{1},2k_{2}+1)&:&{\text{even-odd}}\\X(2k_{1}+1,2k_{2})&:&{\text{odd-even}}\\X(4k_{1}+1,4k_{2}+1)&:&{\text{odd-odd}}\\X(4k_{1}+1,4k_{2}+3)&:&{\text{odd-odd}}\\X(4k_{1}+3,4k_{2}+1)&:&{\text{odd-odd}}\\X(4k_{1}+3,4k_{2}+3)&:&{\text{odd-odd}}\end{array}}} That means the fourth term in the 2-D DIT radix- ( 2 × 2 ) {\displaystyle (2\times 2)} equation, S 11 ( k 1 , k 2 ) W N k 1 + k 2 {\displaystyle S_{11}(k_{1},k_{2})W_{N}^{k_{1}+k_{2}}} becomes: A 11 ( k 1 , k 2 ) W N k 1 + k 2 + A 13 ( k 1 , k 2 ) W N k 1 + 3 k 2 + A 31 ( k 1 , k 2 ) W N 3 k 1 + k 2 + A 33 ( k 1 , k 2 ) W N 3 ( k 1 + k 2 ) , {\displaystyle A_{11}(k_{1},k_{2})W_{N}^{k_{1}+k_{2}}+A_{13}(k_{1},k_{2})W_{N}^{k_{1}+3k_{2}}+A_{31}(k_{1},k_{2})W_{N}^{3k_{1}+k_{2}}+A_{33}(k_{1},k_{2})W_{N}^{3(k_{1}+k_{2})},} where A i j ( k 1 , k 2 ) = ∑ n 1 = 0 N / 4 − 1 ∑ n 2 = 0 N / 4 − 1 x [ 4 n 1 + i , 4 n 2 + j ] ⋅ W N / 4 n 1 k 1 W N / 4 n 2 k 2 {\displaystyle A_{ij}(k_{1},k_{2})=\sum _{n_{1}=0}^{N/4-1}\sum _{n_{2}=0}^{N/4-1}x[4n_{1}+i,4n_{2}+j]\cdot W_{N/4}^{n_{1}k_{1}}W_{N/4}^{n_{2}k_{2}}} The 2-D N by N DFT is then obtained by successive use of the above decomposition, up to the last stage. It has been shown that the split vector-radix algorithm saves about 30% of the complex multiplications, with about the same number of complex additions, for a typical 1024 × 1024 {\displaystyle 1024\times 1024} array, compared with the vector-radix algorithm. == References ==
Wikipedia/Vector-radix_FFT_algorithm
In quantum computing, the quantum Fourier transform (QFT) is a linear transformation on quantum bits, and is the quantum analogue of the discrete Fourier transform. The quantum Fourier transform is a part of many quantum algorithms, notably Shor's algorithm for factoring and computing the discrete logarithm, the quantum phase estimation algorithm for estimating the eigenvalues of a unitary operator, and algorithms for the hidden subgroup problem. The quantum Fourier transform was discovered by Don Coppersmith. With small modifications to the QFT, it can also be used for performing fast integer arithmetic operations such as addition and multiplication. The quantum Fourier transform can be performed efficiently on a quantum computer with a decomposition into the product of simpler unitary matrices. The discrete Fourier transform on 2 n {\displaystyle 2^{n}} amplitudes can be implemented as a quantum circuit consisting of only O ( n 2 ) {\displaystyle O(n^{2})} Hadamard gates and controlled phase shift gates, where n {\displaystyle n} is the number of qubits. This can be compared with the classical discrete Fourier transform, which takes O ( n 2 n ) {\displaystyle O(n2^{n})} gates (where n {\displaystyle n} is the number of bits), which is exponentially more than O ( n 2 ) {\displaystyle O(n^{2})} . The quantum Fourier transform acts on a quantum state vector (a quantum register), and the classical discrete Fourier transform acts on a vector. Both types of vectors can be written as lists of complex numbers. In the classical case, the vector can be represented with e.g. an array of floating-point numbers, and in the quantum case it is a sequence of probability amplitudes for all the possible outcomes upon measurement (the outcomes are the basis states, or eigenstates). Because measurement collapses the quantum state to a single basis state, not every task that uses the classical Fourier transform can take advantage of the quantum Fourier transform's exponential speedup. The best quantum Fourier transform algorithms known (as of late 2000) require only O ( n log ⁡ n ) {\displaystyle O(n\log n)} gates to achieve an efficient approximation, provided that a controlled phase gate is implemented as a native operation. == Definition == The quantum Fourier transform is the classical discrete Fourier transform applied to the vector of amplitudes of a quantum state, which has length N = 2 n {\displaystyle N=2^{n}} if it is applied to a register of n {\displaystyle n} qubits. The classical Fourier transform acts on a vector ( x 0 , x 1 , … , x N − 1 ) ∈ C N {\displaystyle (x_{0},x_{1},\ldots ,x_{N-1})\in \mathbb {C} ^{N}} and maps it to the vector ( y 0 , y 1 , … , y N − 1 ) ∈ C N {\displaystyle (y_{0},y_{1},\ldots ,y_{N-1})\in \mathbb {C} ^{N}} according to the formula y k = 1 N ∑ j = 0 N − 1 x j ω N − j k , k = 0 , 1 , 2 , … , N − 1 , {\displaystyle y_{k}={\frac {1}{\sqrt {N}}}\sum _{j=0}^{N-1}x_{j}\omega _{N}^{-jk},\quad k=0,1,2,\ldots ,N-1,} where ω N = e 2 π i N {\displaystyle \omega _{N}=e^{\frac {2\pi i}{N}}} is an N-th root of unity. Similarly, the quantum Fourier transform acts on a quantum state | x ⟩ = ∑ j = 0 N − 1 x j | j ⟩ {\textstyle |x\rangle =\sum _{j=0}^{N-1}x_{j}|j\rangle } and maps it to a quantum state ∑ j = 0 N − 1 y j | j ⟩ {\textstyle \sum _{j=0}^{N-1}y_{j}|j\rangle } according to the formula y k = 1 N ∑ j = 0 N − 1 x j ω N j k , k = 0 , 1 , 2 , … , N − 1. 
{\displaystyle y_{k}={\frac {1}{\sqrt {N}}}\sum _{j=0}^{N-1}x_{j}\omega _{N}^{jk},\quad k=0,1,2,\ldots ,N-1.} (Conventions for the sign of the phase factor exponent vary; here the quantum Fourier transform has the same effect as the inverse discrete Fourier transform, and conversely.) Since ω N l {\displaystyle \omega _{N}^{l}} is a rotation, the inverse quantum Fourier transform acts similarly but with x j = 1 N ∑ k = 0 N − 1 y k ω N − j k , j = 0 , 1 , 2 , … , N − 1 , {\displaystyle x_{j}={\frac {1}{\sqrt {N}}}\sum _{k=0}^{N-1}y_{k}\omega _{N}^{-jk},\quad j=0,1,2,\ldots ,N-1,} In case that | x ⟩ {\displaystyle |x\rangle } is a basis state, the quantum Fourier transform can also be expressed as the map QFT : | x ⟩ ↦ 1 N ∑ k = 0 N − 1 ω N x k | k ⟩ . {\displaystyle \operatorname {QFT} :|x\rangle \mapsto {\frac {1}{\sqrt {N}}}\sum _{k=0}^{N-1}\omega _{N}^{xk}|k\rangle .} Equivalently, the quantum Fourier transform can be viewed as a unitary matrix (or quantum gate) acting on quantum state vectors, where the unitary matrix F N {\displaystyle F_{N}} is the DFT matrix F N = 1 N [ 1 1 1 1 ⋯ 1 1 ω ω 2 ω 3 ⋯ ω N − 1 1 ω 2 ω 4 ω 6 ⋯ ω 2 ( N − 1 ) 1 ω 3 ω 6 ω 9 ⋯ ω 3 ( N − 1 ) ⋮ ⋮ ⋮ ⋮ ⋱ ⋮ 1 ω N − 1 ω 2 ( N − 1 ) ω 3 ( N − 1 ) ⋯ ω ( N − 1 ) ( N − 1 ) ] , {\displaystyle F_{N}={\frac {1}{\sqrt {N}}}{\begin{bmatrix}1&1&1&1&\cdots &1\\1&\omega &\omega ^{2}&\omega ^{3}&\cdots &\omega ^{N-1}\\1&\omega ^{2}&\omega ^{4}&\omega ^{6}&\cdots &\omega ^{2(N-1)}\\1&\omega ^{3}&\omega ^{6}&\omega ^{9}&\cdots &\omega ^{3(N-1)}\\\vdots &\vdots &\vdots &\vdots &\ddots &\vdots \\1&\omega ^{N-1}&\omega ^{2(N-1)}&\omega ^{3(N-1)}&\cdots &\omega ^{(N-1)(N-1)}\end{bmatrix}},} where ω = ω N {\displaystyle \omega =\omega _{N}} . For example, in the case of N = 4 = 2 2 {\displaystyle N=4=2^{2}} and phase ω = i {\displaystyle \omega =i} the transformation matrix is F 4 = 1 2 [ 1 1 1 1 1 i − 1 − i 1 − 1 1 − 1 1 − i − 1 i ] {\displaystyle F_{4}={\frac {1}{2}}{\begin{bmatrix}1&1&1&1\\1&i&-1&-i\\1&-1&1&-1\\1&-i&-1&i\end{bmatrix}}} == Properties == === Unitarity === Most of the properties of the quantum Fourier transform follow from the fact that it is a unitary transformation. This can be checked by performing matrix multiplication and ensuring that the relation F F † = F † F = I {\displaystyle FF^{\dagger }=F^{\dagger }F=I} holds, where F † {\displaystyle F^{\dagger }} is the Hermitian adjoint of F {\displaystyle F} . Alternately, one can check that orthogonal vectors of norm 1 get mapped to orthogonal vectors of norm 1. From the unitary property it follows that the inverse of the quantum Fourier transform is the Hermitian adjoint of the Fourier matrix, thus F − 1 = F † {\displaystyle F^{-1}=F^{\dagger }} . Since there is an efficient quantum circuit implementing the quantum Fourier transform, the circuit can be run in reverse to perform the inverse quantum Fourier transform. Thus both transforms can be efficiently performed on a quantum computer. == Circuit implementation == The quantum gates used in the circuit of n {\displaystyle n} qubits are the Hadamard gate and the dyadic rational phase gate R k {\displaystyle R_{k}} : H = 1 2 ( 1 1 1 − 1 ) and R k = ( 1 0 0 e i 2 π / 2 k ) {\displaystyle H={\frac {1}{\sqrt {2}}}{\begin{pmatrix}1&1\\1&-1\end{pmatrix}}\qquad {\text{and}}\qquad R_{k}={\begin{pmatrix}1&0\\0&e^{i2\pi /2^{k}}\end{pmatrix}}} The circuit is composed of H {\displaystyle H} gates and the controlled version of R k {\displaystyle R_{k}} : An orthonormal basis consists of the basis states | 0 ⟩ , … , | 2 n − 1 ⟩ . 
{\displaystyle |0\rangle ,\ldots ,|2^{n}-1\rangle .} These basis states span all possible states of the qubits: | x ⟩ = | x 1 x 2 … x n ⟩ = | x 1 ⟩ ⊗ | x 2 ⟩ ⊗ ⋯ ⊗ | x n ⟩ {\displaystyle |x\rangle =|x_{1}x_{2}\ldots x_{n}\rangle =|x_{1}\rangle \otimes |x_{2}\rangle \otimes \cdots \otimes |x_{n}\rangle } where, with tensor product notation ⊗ {\displaystyle \otimes } , | x j ⟩ {\displaystyle |x_{j}\rangle } indicates that qubit j {\displaystyle j} is in state x j {\displaystyle x_{j}} , with x j {\displaystyle x_{j}} either 0 or 1. By convention, the basis state index x {\displaystyle x} is the binary number encoded by the x j {\displaystyle x_{j}} , with x 1 {\displaystyle x_{1}} the most significant bit. The action of the Hadamard gate is H | x j ⟩ = ( 1 2 ) ( | 0 ⟩ + e 2 π i x j 2 − 1 | 1 ⟩ ) {\displaystyle H|x_{j}\rangle =\left({\frac {1}{\sqrt {2}}}\right)\left(|0\rangle +e^{2\pi ix_{j}2^{-1}}|1\rangle \right)} , where the sign depends on x j {\displaystyle x_{j}} . The quantum Fourier transform can be written as the tensor product of a series of terms: QFT ( | x ⟩ ) = 1 N ⨂ j = 1 n ( | 0 ⟩ + ω N x 2 n − j | 1 ⟩ ) . {\displaystyle {\text{QFT}}(|x\rangle )={\frac {1}{\sqrt {N}}}\bigotimes _{j=1}^{n}\left(|0\rangle +\omega _{N}^{x2^{n-j}}|1\rangle \right).} Using the fractional binary notation [ 0. x 1 … x m ] = ∑ k = 1 m x k 2 − k , {\displaystyle [0.x_{1}\ldots x_{m}]=\sum _{k=1}^{m}x_{k}2^{-k},} the action of the quantum Fourier transform can be expressed in a compact manner: QFT ( | x 1 x 2 … x n ⟩ ) = 1 N ( | 0 ⟩ + e 2 π i [ 0. x n ] | 1 ⟩ ) ⊗ ( | 0 ⟩ + e 2 π i [ 0. x n − 1 x n ] | 1 ⟩ ) ⊗ ⋯ ⊗ ( | 0 ⟩ + e 2 π i [ 0. x 1 x 2 … x n ] | 1 ⟩ ) . {\displaystyle {\text{QFT}}(|x_{1}x_{2}\ldots x_{n}\rangle )={\frac {1}{\sqrt {N}}}\ \left(|0\rangle +e^{2\pi i\,[0.x_{n}]}|1\rangle \right)\otimes \left(|0\rangle +e^{2\pi i\,[0.x_{n-1}x_{n}]}|1\rangle \right)\otimes \cdots \otimes \left(|0\rangle +e^{2\pi i\,[0.x_{1}x_{2}\ldots x_{n}]}|1\rangle \right).} To obtain this state from the circuit depicted above, a swap operation of the qubits must be performed to reverse their order. At most n / 2 {\displaystyle n/2} swaps are required. Because the discrete Fourier transform, an operation on n qubits, can be factored into the tensor product of n single-qubit operations, it is easily represented as a quantum circuit (up to an order reversal of the output). Each of those single-qubit operations can be implemented efficiently using one Hadamard gate and a linear number of controlled phase gates. The first term requires one Hadamard gate and ( n − 1 ) {\displaystyle (n-1)} controlled phase gates, the next term requires one Hadamard gate and ( n − 2 ) {\displaystyle (n-2)} controlled phase gates, and each following term requires one fewer controlled phase gate. Summing up the number of gates, excluding the ones needed for the output reversal, gives n + ( n − 1 ) + ⋯ + 1 = n ( n + 1 ) / 2 = O ( n 2 ) {\displaystyle n+(n-1)+\cdots +1=n(n+1)/2=O(n^{2})} gates, which is a quadratic polynomial in the number of qubits. This value is much smaller than for the classical Fourier transform. The circuit-level implementation of the quantum Fourier transform on a linear nearest-neighbor architecture has been studied previously. The circuit depth is linear in the number of qubits.
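The tensor-product factorization above is straightforward to verify numerically. The sketch below builds the DFT matrix with this article's sign convention (ω = e^{2πi/N}) and checks each basis-state column against the per-qubit product; the function names and the choice n = 3 are illustrative.

```python
import numpy as np

def qft_matrix(n):
    """DFT matrix F_N with this article's convention, omega = e^{+2*pi*i/N}."""
    N = 2**n
    k = np.arange(N)
    return np.exp(2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)

def qft_basis_state_tensor(x, n):
    """QFT|x> built from the tensor product of single-qubit factors."""
    N = 2**n
    state = np.array([1.0 + 0j])
    for j in range(1, n + 1):
        # Qubit j contributes |0> + omega_N^(x * 2^(n-j)) |1>; the j = 1
        # factor is the most significant qubit of the output index.
        w = np.exp(2j * np.pi * x * 2**(n - j) / N)
        state = np.kron(state, np.array([1.0, w]))
    return state / np.sqrt(N)

n = 3
F = qft_matrix(n)
for x in range(2**n):
    assert np.allclose(F[:, x], qft_basis_state_tensor(x, n))
```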
== Example == The quantum Fourier transform on three qubits, F 8 {\displaystyle F_{8}} with n = 3 , N = 8 = 2 3 {\displaystyle n=3,N=8=2^{3}} , is represented by the following transformation: QFT : | x ⟩ ↦ 1 8 ∑ k = 0 7 ω x k | k ⟩ , {\displaystyle {\text{QFT}}:|x\rangle \mapsto {\frac {1}{\sqrt {8}}}\sum _{k=0}^{7}\omega ^{xk}|k\rangle ,} where ω = ω 8 {\displaystyle \omega =\omega _{8}} is an eighth root of unity satisfying ω 8 = ( e i 2 π 8 ) 8 = 1 {\displaystyle \omega ^{8}=\left(e^{\frac {i2\pi }{8}}\right)^{8}=1} . The matrix representation of the Fourier transform on three qubits is: F 8 = 1 8 [ 1 1 1 1 1 1 1 1 1 ω ω 2 ω 3 ω 4 ω 5 ω 6 ω 7 1 ω 2 ω 4 ω 6 1 ω 2 ω 4 ω 6 1 ω 3 ω 6 ω ω 4 ω 7 ω 2 ω 5 1 ω 4 1 ω 4 1 ω 4 1 ω 4 1 ω 5 ω 2 ω 7 ω 4 ω ω 6 ω 3 1 ω 6 ω 4 ω 2 1 ω 6 ω 4 ω 2 1 ω 7 ω 6 ω 5 ω 4 ω 3 ω 2 ω ] . {\displaystyle F_{8}={\frac {1}{\sqrt {8}}}{\begin{bmatrix}1&1&1&1&1&1&1&1\\1&\omega &\omega ^{2}&\omega ^{3}&\omega ^{4}&\omega ^{5}&\omega ^{6}&\omega ^{7}\\1&\omega ^{2}&\omega ^{4}&\omega ^{6}&1&\omega ^{2}&\omega ^{4}&\omega ^{6}\\1&\omega ^{3}&\omega ^{6}&\omega &\omega ^{4}&\omega ^{7}&\omega ^{2}&\omega ^{5}\\1&\omega ^{4}&1&\omega ^{4}&1&\omega ^{4}&1&\omega ^{4}\\1&\omega ^{5}&\omega ^{2}&\omega ^{7}&\omega ^{4}&\omega &\omega ^{6}&\omega ^{3}\\1&\omega ^{6}&\omega ^{4}&\omega ^{2}&1&\omega ^{6}&\omega ^{4}&\omega ^{2}\\1&\omega ^{7}&\omega ^{6}&\omega ^{5}&\omega ^{4}&\omega ^{3}&\omega ^{2}&\omega \\\end{bmatrix}}.} The 3-qubit quantum Fourier transform can be rewritten as: QFT ( | x 1 , x 2 , x 3 ⟩ ) = 1 8 ( | 0 ⟩ + e 2 π i [ 0. x 3 ] | 1 ⟩ ) ⊗ ( | 0 ⟩ + e 2 π i [ 0. x 2 x 3 ] | 1 ⟩ ) ⊗ ( | 0 ⟩ + e 2 π i [ 0. x 1 x 2 x 3 ] | 1 ⟩ ) . {\displaystyle {\text{QFT}}(|x_{1},x_{2},x_{3}\rangle )={\frac {1}{\sqrt {8}}}\ \left(|0\rangle +e^{2\pi i\,[0.x_{3}]}|1\rangle \right)\otimes \left(|0\rangle +e^{2\pi i\,[0.x_{2}x_{3}]}|1\rangle \right)\otimes \left(|0\rangle +e^{2\pi i\,[0.x_{1}x_{2}x_{3}]}|1\rangle \right).} The following sketch shows the respective circuit for n = 3 {\displaystyle n=3} (with reversed order of output qubits with respect to the proper QFT): As calculated above, the number of gates used is n ( n + 1 ) / 2 {\displaystyle n(n+1)/2} which is equal to 6 {\displaystyle 6} , for n = 3 {\displaystyle n=3} . == Relation to quantum Hadamard transform == Using the generalized Fourier transform on finite (abelian) groups, there are actually two natural ways to define a quantum Fourier transform on an n-qubit quantum register. The QFT as defined above is equivalent to the DFT, which considers these n qubits as indexed by the cyclic group Z / 2 n Z {\displaystyle \mathbb {Z} /2^{n}\mathbb {Z} } . However, it also makes sense to consider the qubits as indexed by the Boolean group ( Z / 2 Z ) n {\displaystyle (\mathbb {Z} /2\mathbb {Z} )^{n}} , and in this case the Fourier transform is the Hadamard transform. This is achieved by applying a Hadamard gate to each of the n qubits in parallel. Shor's algorithm uses both types of Fourier transforms, an initial Hadamard transform as well as a QFT. == For other groups == The Fourier transform can be formulated for groups other than the cyclic group, and extended to the quantum setting. For example, consider the symmetric group S n {\displaystyle S_{n}} . The Fourier transform can be expressed in matrix form F n = ∑ λ ∈ Λ n ∑ p , q ∈ P ( λ ) ∑ g ∈ S n d λ n ! 
[ λ ( g ) ] q , p | λ , p , q ⟩ ⟨ g | , {\displaystyle {\mathfrak {F}}_{n}=\sum _{\lambda \in \Lambda _{n}}\sum _{p,q\in {\mathcal {P}}(\lambda )}\sum _{g\in S_{n}}{\sqrt {\frac {d_{\lambda }}{n!}}}[\lambda (g)]_{q,p}|\lambda ,p,q\rangle \langle g|,} where [ λ ( g ) ] q , p {\displaystyle [\lambda (g)]_{q,p}} is the ( q , p ) {\displaystyle (q,p)} element of the matrix representation of λ ( g ) {\displaystyle \lambda (g)} , P ( λ ) {\displaystyle {\mathcal {P}}(\lambda )} is the set of paths from the root node to λ {\displaystyle \lambda } in the Bratteli diagram of S n {\displaystyle S_{n}} , Λ n {\displaystyle \Lambda _{n}} is the set of representations of S n {\displaystyle S_{n}} indexed by Young diagrams, and g {\displaystyle g} is a permutation. == Over a finite field == The discrete Fourier transform can also be formulated over a finite field F q {\displaystyle F_{q}} , and a quantum version can be defined. Consider N = q = p n {\displaystyle N=q=p^{n}} . Let ϕ : G F ( q ) → G F ( p ) {\displaystyle \phi :GF(q)\to GF(p)} be an arbitrary linear map (trace, for example). Then for each x ∈ G F ( q ) {\displaystyle x\in GF(q)} define F q , ϕ : | x ⟩ ↦ 1 q ∑ y ∈ G F ( q ) ω ϕ ( x y ) | y ⟩ {\displaystyle F_{q,\phi }:|x\rangle \mapsto {\frac {1}{\sqrt {q}}}\sum _{y\in GF(q)}\omega ^{\phi (xy)}|y\rangle } for ω = e 2 π i / p {\displaystyle \omega =e^{2\pi i/p}} and extend F q , ϕ {\displaystyle F_{q,\phi }} linearly. == References == == Further reading == Parthasarathy, K. R. (2006). Lectures on Quantum Computation, Quantum Error Correcting Codes and Information Theory. Tata Institute of Fundamental Research. ISBN 978-81-7319-688-1. Preskill, John (September 1998). "Lecture Notes for Physics 229: Quantum Information and Computation" (PDF). == External links == Wolfram Demonstration Project: Quantum Circuit Implementing Grover's Search Algorithm Wolfram Demonstration Project: Quantum Circuit Implementing Quantum Fourier Transform Quirk online life quantum fourier transform
Wikipedia/Quantum_Fourier_transform
In applied mathematics, the non-uniform discrete Fourier transform (NUDFT or NDFT) of a signal is a type of Fourier transform, related to a discrete Fourier transform or discrete-time Fourier transform, but in which the input signal is not sampled at equally spaced points or frequencies (or both). It is a generalization of the shifted DFT. It has important applications in signal processing, magnetic resonance imaging, and the numerical solution of partial differential equations. As a generalized approach for nonuniform sampling, the NUDFT allows one to obtain frequency domain information of a finite length signal at any frequency. One of the reasons to adopt the NUDFT is that many signals have their energy distributed nonuniformly in the frequency domain. Therefore, a nonuniform sampling scheme could be more convenient and useful in many digital signal processing applications. For example, the NUDFT provides a variable spectral resolution controlled by the user. == Definition == The nonuniform discrete Fourier transform transforms a sequence of N {\displaystyle N} complex numbers x 0 , … , x N − 1 {\displaystyle x_{0},\ldots ,x_{N-1}} into another sequence of complex numbers X 0 , … , X N − 1 {\displaystyle X_{0},\ldots ,X_{N-1}} defined by X k = ∑ n = 0 N − 1 x n e − 2 π i p n f k , ( 1 ) {\displaystyle X_{k}=\sum _{n=0}^{N-1}x_{n}e^{-2\pi ip_{n}f_{k}},\qquad (1)} where p 0 , … , p N − 1 ∈ [ 0 , 1 ] {\displaystyle p_{0},\ldots ,p_{N-1}\in [0,1]} are sample points and f 0 , … , f N − 1 ∈ [ 0 , N ] {\displaystyle f_{0},\ldots ,f_{N-1}\in [0,N]} are frequencies. Note that if p n = n / N {\displaystyle p_{n}=n/N} and f k = k {\displaystyle f_{k}=k} , then equation (1) reduces to the discrete Fourier transform. There are three types of NUDFTs. Note that these types are not universal and different authors will refer to different types by different numbers. The nonuniform discrete Fourier transform of type I (NUDFT-I) uses uniform sample points p n = n / N {\displaystyle p_{n}=n/N} but nonuniform (i.e. non-integer) frequencies f k {\displaystyle f_{k}} . This corresponds to evaluating a generalized Fourier series at equispaced points. It is also known as NDFT or forward NDFT. The nonuniform discrete Fourier transform of type II (NUDFT-II) uses uniform (i.e. integer) frequencies f k = k {\displaystyle f_{k}=k} but nonuniform sample points p n {\displaystyle p_{n}} . This corresponds to evaluating a Fourier series at nonequispaced points. It is also known as adjoint NDFT. The nonuniform discrete Fourier transform of type III (NUDFT-III) uses both nonuniform sample points p n {\displaystyle p_{n}} and nonuniform frequencies f k {\displaystyle f_{k}} . This corresponds to evaluating a generalized Fourier series at nonequispaced points. It is also known as NNDFT. A similar set of NUDFTs can be defined by substituting + i {\displaystyle +i} for − i {\displaystyle -i} in equation (1). Unlike in the uniform case, however, this substitution is unrelated to the inverse Fourier transform. The inversion of the NUDFT is a separate problem, discussed below.
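Equation (1) can be evaluated directly in O(N²) operations, and with uniform points and integer frequencies it must reproduce the ordinary DFT. A minimal NumPy sketch, with illustrative sizes and names:

```python
import numpy as np

def nudft(x, p, f):
    """Direct evaluation of equation (1): X_k = sum_n x_n e^{-2*pi*i*p_n*f_k}."""
    p, f = np.asarray(p), np.asarray(f)
    return np.asarray(x) @ np.exp(-2j * np.pi * np.outer(p, f))

N = 8
x = np.random.rand(N)
# Uniform sample points p_n = n/N and integer frequencies f_k = k give the DFT:
assert np.allclose(nudft(x, np.arange(N) / N, np.arange(N)), np.fft.fft(x))
# NUDFT-II: integer frequencies but nonuniform sample points in [0, 1].
p = np.sort(np.random.rand(N))
X = nudft(x, p, np.arange(N))
```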
== Multidimensional NUDFT == The multidimensional NUDFT converts a d {\displaystyle d} -dimensional array of complex numbers x n {\displaystyle x_{\mathbf {n} }} into another d {\displaystyle d} -dimensional array of complex numbers X k {\displaystyle X_{\mathbf {k} }} defined by X k = ∑ n = 0 N − 1 x n e − 2 π i p n ⋅ f k {\displaystyle X_{\mathbf {k} }=\sum _{\mathbf {n} =\mathbf {0} }^{\mathbf {N} -1}x_{\mathbf {n} }e^{-2\pi i\mathbf {p} _{\mathbf {n} }\cdot {\boldsymbol {f}}_{\mathbf {k} }}} where p n ∈ [ 0 , 1 ] d {\displaystyle \mathbf {p} _{\mathbf {n} }\in [0,1]^{d}} are sample points, f k ∈ [ 0 , N 1 ] × [ 0 , N 2 ] × ⋯ × [ 0 , N d ] {\displaystyle {\boldsymbol {f}}_{\mathbf {k} }\in [0,N_{1}]\times [0,N_{2}]\times \cdots \times [0,N_{d}]} are frequencies, and n = ( n 1 , n 2 , … , n d ) {\displaystyle \mathbf {n} =(n_{1},n_{2},\ldots ,n_{d})} and k = ( k 1 , k 2 , … , k d ) {\displaystyle \mathbf {k} =(k_{1},k_{2},\ldots ,k_{d})} are d {\displaystyle d} -dimensional vectors of indices from 0 to N − 1 = ( N 1 − 1 , N 2 − 1 , … , N d − 1 ) {\displaystyle \mathbf {N} -1=(N_{1}-1,N_{2}-1,\ldots ,N_{d}-1)} . The multidimensional NUDFTs of types I, II, and III are defined analogously to the 1D case. == Relationship to Z-transform == The NUDFT-I can be expressed as a Z-transform. The NUDFT-I of a sequence x [ n ] {\displaystyle x[n]} of length N {\displaystyle N} is X ( z k ) = X ( z ) | z = z k = ∑ n = 0 N − 1 x [ n ] z k − n , k = 0 , 1 , . . . , N − 1 , {\displaystyle X(z_{k})=X(z)|_{z=z_{k}}=\sum _{n=0}^{N-1}x[n]z_{k}^{-n},\quad k=0,1,...,N-1,} where X ( z ) {\displaystyle X(z)} is the Z-transform of x [ n ] {\displaystyle x[n]} , and { z i } i = 0 , 1 , . . . , N − 1 {\displaystyle \{z_{i}\}_{i=0,1,...,N-1}} are arbitrarily distinct points in the z-plane. Note that the NUDFT reduces to the DFT when the sampling points are located on the unit circle at equally spaced angles. Expressing the above as a matrix, we get X = D x {\displaystyle \mathbf {X} =\mathbf {D} \mathbf {x} } where X = [ X ( z 0 ) X ( z 1 ) ⋮ X ( z N − 1 ) ] , x = [ x [ 0 ] x [ 1 ] ⋮ x [ N − 1 ] ] , and D = [ 1 z 0 − 1 z 0 − 2 ⋯ z 0 − ( N − 1 ) 1 z 1 − 1 z 1 − 2 ⋯ z 1 − ( N − 1 ) ⋮ ⋮ ⋮ ⋱ ⋮ 1 z N − 1 − 1 z N − 1 − 2 ⋯ z N − 1 − ( N − 1 ) ] . {\displaystyle \mathbf {X} ={\begin{bmatrix}X(z_{0})\\X(z_{1})\\\vdots \\X(z_{N-1})\end{bmatrix}},\quad \mathbf {x} ={\begin{bmatrix}x[0]\\x[1]\\\vdots \\x[N-1]\end{bmatrix}},{\text{ and}}\quad \mathbf {D} ={\begin{bmatrix}1&z_{0}^{-1}&z_{0}^{-2}&\cdots &z_{0}^{-(N-1)}\\1&z_{1}^{-1}&z_{1}^{-2}&\cdots &z_{1}^{-(N-1)}\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&z_{N-1}^{-1}&z_{N-1}^{-2}&\cdots &z_{N-1}^{-(N-1)}\end{bmatrix}}.} === Direct inversion of the NUDFT-I === As we can see, the NUDFT-I is characterized by D {\displaystyle \mathbf {D} } and hence the N {\displaystyle N} z k {\displaystyle {z_{k}}} points. If we further factorize det ( D ) {\displaystyle \det(\mathbf {D} )} , we can see that D {\displaystyle \mathbf {D} } is nonsingular provided the N {\displaystyle N} z k {\displaystyle {z_{k}}} points are distinct. If D {\displaystyle \mathbf {D} } is nonsingular, we can get a unique inverse NUDFT-I as follows: x = D − 1 X {\displaystyle \mathbf {x} =\mathbf {D^{-1}} \mathbf {X} } . Given X and D {\displaystyle \mathbf {X} {\text{ and }}\mathbf {D} } , we can use Gaussian elimination to solve for x {\displaystyle \mathbf {x} } . However, the complexity of this method is O ( N 3 ) {\displaystyle O(N^{3})} . 
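The matrix D and the O(N³) direct inversion just described can be sketched in a few lines before turning to the more efficient interpolation approach below. The point set and sizes here are illustrative; with equally spaced points on the unit circle, D reduces to the DFT matrix, as noted above.

```python
import numpy as np

def nudft1_matrix(z):
    """D[k, n] = z_k^{-n} for distinct points z_k (a Vandermonde-type matrix)."""
    z = np.asarray(z, dtype=complex)
    return z[:, None] ** (-np.arange(len(z))[None, :])

N = 6
z = np.exp(2j * np.pi * np.arange(N) / N)  # equally spaced angles -> ordinary DFT
D = nudft1_matrix(z)
x = np.random.rand(N)
X = D @ x
assert np.allclose(X, np.fft.fft(x))
# Direct inversion (Gaussian elimination, O(N^3)) recovers x from X:
assert np.allclose(np.linalg.solve(D, X), x)
```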
To solve this problem more efficiently, we first determine X ( z ) {\displaystyle X(z)} directly by polynomial interpolation: X ^ [ k ] = X ( z k ) , k = 0 , 1 , . . . , N − 1 {\displaystyle {\hat {X}}[k]=X(z_{k}),\quad k=0,1,...,N-1} . Then x [ n ] {\displaystyle x[n]} are the coefficients of the above interpolating polynomial. Expressing X ( z ) {\displaystyle X(z)} as the Lagrange polynomial of order N − 1 {\displaystyle N-1} , we get X ( z ) = ∑ k = 0 N − 1 L k ( z ) L k ( z k ) X ^ [ k ] , {\displaystyle X(z)=\sum _{k=0}^{N-1}{\frac {L_{k}(z)}{L_{k}(z_{k})}}{\hat {X}}[k],} where { L i ( z ) } i = 0 , 1 , . . . , N − 1 {\displaystyle \{L_{i}(z)\}_{i=0,1,...,N-1}} are the fundamental polynomials: L k ( z ) = ∏ i ≠ k ( 1 − z i z − 1 ) , k = 0 , 1 , . . . , N − 1 {\displaystyle L_{k}(z)=\prod _{i\neq k}(1-z_{i}z^{-1}),\quad k=0,1,...,N-1} . Expressing X ( z ) {\displaystyle X(z)} by the Newton interpolation method, we get X ( z ) = c 0 + c 1 ( 1 − z 0 z − 1 ) + c 2 ( 1 − z 0 z − 1 ) ( 1 − z 1 z − 1 ) + ⋯ + c N − 1 ∏ k = 0 N − 2 ( 1 − z k z − 1 ) , {\displaystyle X(z)=c_{0}+c_{1}(1-z_{0}z^{-1})+c_{2}(1-z_{0}z^{-1})(1-z_{1}z^{-1})+\cdots +c_{N-1}\prod _{k=0}^{N-2}(1-z_{k}z^{-1}),} where c j {\displaystyle c_{j}} is the divided difference of the j {\displaystyle j} th order of X ^ [ 0 ] , X ^ [ 1 ] , . . . , X ^ [ j ] {\displaystyle {\hat {X}}[0],{\hat {X}}[1],...,{\hat {X}}[j]} with respect to z 0 , z 1 , . . . , z j {\displaystyle z_{0},z_{1},...,z_{j}} : c 0 = X ^ [ 0 ] , {\displaystyle c_{0}={\hat {X}}[0],} c 1 = X ^ [ 1 ] − c 0 1 − z 0 z 1 − 1 , {\displaystyle c_{1}={\frac {{\hat {X}}[1]-c_{0}}{1-z_{0}z_{1}^{-1}}},} c 2 = X ^ [ 2 ] − c 0 − c 1 ( 1 − z 0 z 2 − 1 ) ( 1 − z 0 z 2 − 1 ) ( 1 − z 1 z 2 − 1 ) , {\displaystyle c_{2}={\frac {{\hat {X}}[2]-c_{0}-c_{1}(1-z_{0}z_{2}^{-1})}{(1-z_{0}z_{2}^{-1})(1-z_{1}z_{2}^{-1})}},} ⋮ {\displaystyle \vdots } The disadvantage of the Lagrange representation is that any additional point included will increase the order of the interpolating polynomial, leading to the need to recompute all the fundamental polynomials. However, any additional point included in the Newton representation only requires the addition of one more term. We can use a lower triangular system to solve { c j } {\displaystyle \{c_{j}\}} : L c = X {\displaystyle \mathbf {L} \mathbf {c} =\mathbf {X} } where X = [ X ^ [ 0 ] X ^ [ 1 ] ⋮ X ^ [ N − 1 ] ] , c = [ c 0 c 1 ⋮ c N − 1 ] , and L = [ 1 0 0 ⋯ 0 1 ( 1 − z 0 z 1 − 1 ) 0 ⋯ 0 1 ( 1 − z 0 z 2 − 1 ) ( 1 − z 0 z 2 − 1 ) ( 1 − z 1 z 2 − 1 ) ⋯ 0 ⋮ ⋮ ⋮ ⋱ ⋮ 1 ( 1 − z 0 z N − 1 − 1 ) ( 1 − z 0 z N − 1 − 1 ) ( 1 − z 1 z N − 1 − 1 ) ⋯ ∏ k = 0 N − 2 ( 1 − z k z N − 1 − 1 ) ] . {\displaystyle \mathbf {X} ={\begin{bmatrix}{\hat {X}}[0]\\{\hat {X}}[1]\\\vdots \\{\hat {X}}[N-1]\end{bmatrix}},\quad \mathbf {c} ={\begin{bmatrix}c_{0}\\c_{1}\\\vdots \\c_{N-1}\end{bmatrix}},{\text{ and}}\quad \mathbf {L} ={\begin{bmatrix}1&0&0&\cdots &0\\1&(1-z_{0}z_{1}^{-1})&0&\cdots &0\\1&(1-z_{0}z_{2}^{-1})&(1-z_{0}z_{2}^{-1})(1-z_{1}z_{2}^{-1})&\cdots &0\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&(1-z_{0}z_{N-1}^{-1})&(1-z_{0}z_{N-1}^{-1})(1-z_{1}z_{N-1}^{-1})&\cdots &\prod _{k=0}^{N-2}(1-z_{k}z_{N-1}^{-1})\end{bmatrix}}.} By the above equation, { c j } {\displaystyle \{c_{j}\}} can be computed within O ( N 2 ) {\displaystyle O(N^{2})} operations. In this way Newton interpolation is more efficient than Lagrange interpolation unless the latter is modified by L k + 1 ( z ) = ( 1 − z k + 1 z − 1 ) ( 1 − z k z − 1 ) L k ( z ) , k = 0 , 1 , . . .
, N − 1 {\displaystyle L_{k+1}(z)={\frac {(1-z_{k+1}z^{-1})}{(1-z_{k}z^{-1})}}L_{k}(z),\quad k=0,1,...,N-1} . == Nonuniform fast Fourier transform == While a naive application of equation (1) results in an O ( N 2 ) {\displaystyle O(N^{2})} algorithm for computing the NUDFT, O ( N log ⁡ N ) {\displaystyle O(N\log N)} algorithms based on the fast Fourier transform (FFT) do exist. Such algorithms are referred to as NUFFTs or NFFTs and have been developed based on oversampling and interpolation, min-max interpolation, and low-rank approximation. In general, NUFFTs leverage the FFT by converting the nonuniform problem into a uniform problem (or a sequence of uniform problems) to which the FFT can be applied. Software libraries for performing NUFFTs are available in 1D, 2D, and 3D. == Applications == The applications of the NUDFT include: Digital signal processing Magnetic resonance imaging Numerical partial differential equations Semi-Lagrangian schemes Spectral methods Spectral analysis Digital filter design Antenna array design Detection and decoding of dual-tone multi-frequency (DTMF) signals == See also == Discrete Fourier transform Fast Fourier transform Least-squares spectral analysis Lomb–Scargle periodogram Spectral estimation Unevenly spaced time series == References == == External links == Non-Uniform Fourier Transform: A Tutorial. NFFT 3.0 – Tutorial NUFFT software library
Wikipedia/Non-uniform_discrete_Fourier_transform
The prime-factor algorithm (PFA), also called the Good–Thomas algorithm (1958/1963), is a fast Fourier transform (FFT) algorithm that re-expresses the discrete Fourier transform (DFT) of a size N = N1N2 as a two-dimensional N1 × N2 DFT, but only for the case where N1 and N2 are relatively prime. These smaller transforms of size N1 and N2 can then be evaluated by applying PFA recursively or by using some other FFT algorithm. PFA should not be confused with the mixed-radix generalization of the popular Cooley–Tukey algorithm, which also subdivides a DFT of size N = N1N2 into smaller transforms of size N1 and N2. The latter algorithm can use any factors (not necessarily relatively prime), but it has the disadvantage that it also requires extra multiplications by roots of unity called twiddle factors, in addition to the smaller transforms. On the other hand, PFA has the disadvantages that it only works for relatively prime factors (e.g. it is useless for power-of-two sizes) and that it requires more complicated re-indexing of the data based on the additive group isomorphisms. Note, however, that PFA can be combined with mixed-radix Cooley–Tukey, with the former factorizing N into relatively prime components and the latter handling repeated factors. PFA is also closely related to the nested Winograd FFT algorithm, where the latter performs the decomposed N1 by N2 transform via more sophisticated two-dimensional convolution techniques. Some older papers therefore also call Winograd's algorithm a PFA FFT. (Although the PFA is distinct from the Cooley–Tukey algorithm, Good's 1958 work on the PFA was cited as inspiration by Cooley and Tukey in their 1965 paper, and there was initially some confusion about whether the two algorithms were different. In fact, it was the only prior FFT work cited by them, as they were not then aware of the earlier research by Gauss and others.) == Algorithm == Let a ( x ) {\displaystyle a(x)} be a polynomial and ω n {\displaystyle \omega _{n}} be a principal n {\displaystyle n} -th root of unity. We define the DFT of a ( x ) {\displaystyle a(x)} as the n {\displaystyle n} -tuple ( a ^ j ) = ( a ( ω n j ) ) {\displaystyle ({\hat {a}}_{j})=(a(\omega _{n}^{j}))} . In other words, a ^ j = ∑ i = 0 n − 1 a i ω n i j for all j = 0 , 1 , … , n − 1. {\displaystyle {\hat {a}}_{j}=\sum _{i=0}^{n-1}a_{i}\omega _{n}^{ij}\quad {\text{ for all }}j=0,1,\dots ,n-1.} For simplicity, we denote the transformation as DFT ω n {\displaystyle {\text{DFT}}_{\omega _{n}}} . The PFA relies on a coprime factorization of n = ∏ d = 0 D − 1 n d {\textstyle n=\prod _{d=0}^{D-1}n_{d}} and turns DFT ω n {\displaystyle {\text{DFT}}_{\omega _{n}}} into ⨂ d DFT ω n d {\textstyle \bigotimes _{d}{\text{DFT}}_{\omega _{n_{d}}}} for some choices of ω n d {\displaystyle \omega _{n_{d}}} 's where ⨂ {\textstyle \bigotimes } is the tensor product. === Mapping based on CRT === For a coprime factorization ⁠ n = ∏ d = 0 D − 1 n d {\displaystyle \textstyle n=\prod _{d=0}^{D-1}n_{d}} ⁠, we have the Chinese remainder map m ↦ ( m mod n d ) {\displaystyle m\mapsto (m{\bmod {n}}_{d})} from Z n {\displaystyle \mathbb {Z} _{n}} to ∏ d = 0 D − 1 Z n d {\textstyle \prod _{d=0}^{D-1}\mathbb {Z} _{n_{d}}} with ( m d ) ↦ ∑ d = 0 D − 1 e d m d {\textstyle (m_{d})\mapsto \sum _{d=0}^{D-1}e_{d}m_{d}} as its inverse where ⁠ e d {\displaystyle e_{d}} ⁠'s are the central orthogonal idempotent elements with ∑ d = 0 D − 1 e d = 1 ( mod n ) {\textstyle \sum _{d=0}^{D-1}e_{d}=1{\pmod {n}}} . 
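Before continuing the derivation, the re-indexing can be made concrete with a small numerical sketch. It uses the classic Good–Thomas pairing of the "Ruritanian" map on the input and the CRT map on the output, one of the valid index-map choices counted later in this article; the function name and the choice n = 2 × 3 are illustrative.

```python
import numpy as np
from math import gcd

def pfa_dft(x, n1, n2):
    """Good-Thomas PFA sketch: a size n = n1*n2 DFT (gcd(n1, n2) = 1)
    computed as an n1 x n2 two-dimensional DFT with no twiddle factors."""
    n = n1 * n2
    assert gcd(n1, n2) == 1 and len(x) == n
    t1 = pow(n1, -1, n2)  # n1^{-1} mod n2  (Python 3.8+)
    t2 = pow(n2, -1, n1)  # n2^{-1} mod n1
    A = np.empty((n1, n2), dtype=complex)
    for i1 in range(n1):
        for i2 in range(n2):
            A[i1, i2] = x[(n2 * i1 + n1 * i2) % n]   # Ruritanian input map
    A = np.fft.fft2(A)                               # row/column DFTs only
    X = np.empty(n, dtype=complex)
    for k1 in range(n1):
        for k2 in range(n2):
            X[(n2 * t2 * k1 + n1 * t1 * k2) % n] = A[k1, k2]  # CRT output map
    return X

x = np.random.rand(6)  # smallest nontrivial case: n = 2 * 3
assert np.allclose(pfa_dft(x, 2, 3), np.fft.fft(x))
```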
Choosing ω n d = ω n e d {\displaystyle \omega _{n_{d}}=\omega _{n}^{e_{d}}} (therefore, ⁠ ∏ d = 0 D − 1 ω n d = ω n ∑ d = 0 D − 1 e d = ω n {\displaystyle \prod _{d=0}^{D-1}\omega _{n_{d}}=\omega _{n}^{\sum _{d=0}^{D-1}e_{d}}=\omega _{n}} ⁠), we rewrite DFT ω n {\displaystyle {\text{DFT}}_{\omega _{n}}} as follows: a ^ j = ∑ i = 0 n − 1 a i ω n i j = ∑ i = 0 n − 1 a i ( ∏ d = 0 D − 1 ω n d ) i j = ∑ i = 0 n − 1 a i ∏ d = 0 D − 1 ω n d ( i mod n d ) ( j mod n d ) = ∑ i 0 = 0 n 0 − 1 ⋯ ∑ i D − 1 = 0 n D − 1 − 1 a ∑ d = 0 D − 1 e d i d ∏ d = 0 D − 1 ω n d i d ( j mod n d ) . {\displaystyle {\hat {a}}_{j}=\sum _{i=0}^{n-1}a_{i}\omega _{n}^{ij}=\sum _{i=0}^{n-1}a_{i}\left(\prod _{d=0}^{D-1}\omega _{n_{d}}\right)^{ij}=\sum _{i=0}^{n-1}a_{i}\prod _{d=0}^{D-1}\omega _{n_{d}}^{(i{\bmod {n}}_{d})(j{\bmod {n}}_{d})}=\sum _{i_{0}=0}^{n_{0}-1}\cdots \sum _{i_{D-1}=0}^{n_{D-1}-1}a_{\sum _{d=0}^{D-1}e_{d}i_{d}}\prod _{d=0}^{D-1}\omega _{n_{d}}^{i_{d}(j{\bmod {n}}_{d})}.} Finally, define a i 0 , … , i D − 1 = a ∑ d = 0 D − 1 i d e d {\displaystyle a_{i_{0},\dots ,i_{D-1}}=a_{\sum _{d=0}^{D-1}i_{d}e_{d}}} and ⁠ a ^ j 0 , … , j D − 1 = a ^ ∑ d = 0 D − 1 j d e d {\displaystyle {\hat {a}}_{j_{0},\dots ,j_{D-1}}={\hat {a}}_{\sum _{d=0}^{D-1}j_{d}e_{d}}} ⁠, we have a ^ j 0 , … , j D − 1 = ∑ i 0 = 0 n 0 − 1 ⋯ ∑ i D − 1 = 0 n D − 1 − 1 a i 0 , … , i D − 1 ∏ d = 0 D − 1 ω n d i d j d . {\displaystyle {\hat {a}}_{j_{0},\dots ,j_{D-1}}=\sum _{i_{0}=0}^{n_{0}-1}\cdots \sum _{i_{D-1}=0}^{n_{D-1}-1}a_{i_{0},\dots ,i_{D-1}}\prod _{d=0}^{D-1}\omega _{n_{d}}^{i_{d}j_{d}}.} Therefore, we have the multi-dimensional DFT, ⁠ ⊗ d = 0 D − 1 DFT ω n d {\displaystyle \textstyle \otimes _{d=0}^{D-1}{\text{DFT}}_{\omega _{n_{d}}}} ⁠. === As algebra isomorphisms === PFA can be stated in a high-level way in terms of algebra isomorphisms. We first recall that for a commutative ring R {\displaystyle R} and a group isomorphism from G {\displaystyle G} to ⁠ ∏ d G d {\displaystyle \textstyle \prod _{d}G_{d}} ⁠, we have the following algebra isomorphism R [ G ] ≅ ⨂ d R [ G d ] , {\displaystyle R[G]\cong \bigotimes _{d}R[G_{d}],} where ⨂ {\displaystyle \bigotimes } refers to the tensor product of algebras. To see how PFA works, we choose G = ( Z n , + , 0 ) {\displaystyle G=(\mathbb {Z} _{n},+,0)} and G d = ( Z n d , + , 0 ) {\displaystyle G_{d}=(\mathbb {Z} _{n_{d}},+,0)} be additive groups. We also identify R [ G ] {\displaystyle R[G]} as R [ x ] ⟨ x n − 1 ⟩ {\textstyle {\frac {R[x]}{\langle x^{n}-1\rangle }}} and R [ G d ] {\displaystyle R[G_{d}]} as ⁠ R [ x d ] ⟨ x d n d − 1 ⟩ {\displaystyle \textstyle {\frac {R[x_{d}]}{\langle x_{d}^{n_{d}}-1\rangle }}} ⁠. Choosing η = a ↦ ( a mod n d ) {\displaystyle \eta =a\mapsto (a{\bmod {n}}_{d})} as the group isomorphism ⁠ G ≅ ∏ d G d {\displaystyle \textstyle G\cong \prod _{d}G_{d}} ⁠, we have the algebra isomorphism η ∗ : R [ G ] ≅ ⨂ d R [ G d ] {\textstyle \eta ^{*}:R[G]\cong \bigotimes _{d}R[G_{d}]} , or alternatively, η ∗ : R [ x ] ⟨ x n − 1 ⟩ ≅ ⨂ d R [ x d ] ⟨ x d n d − 1 ⟩ . 
{\displaystyle \eta ^{*}:{\frac {R[x]}{\langle x^{n}-1\rangle }}\cong \bigotimes _{d}{\frac {R[x_{d}]}{\langle x_{d}^{n_{d}}-1\rangle }}.} Now observe that DFT ω n {\displaystyle {\text{DFT}}_{\omega _{n}}} is actually an algebra isomorphism from R [ x ] ⟨ x n − 1 ⟩ {\textstyle {\frac {R[x]}{\langle x^{n}-1\rangle }}} to ∏ i R [ x ] ⟨ x − ω n i ⟩ {\textstyle \prod _{i}{\frac {R[x]}{\langle x-\omega _{n}^{i}\rangle }}} , and each DFT ω n d {\displaystyle {\text{DFT}}_{\omega _{n_{d}}}} is an algebra isomorphism from R [ x ] ⟨ x d n d − 1 ⟩ {\textstyle {\frac {R[x]}{\langle {x_{d}}^{n_{d}}-1\rangle }}} to ∏ i d R [ x d ] ⟨ x d − ω n d i d ⟩ {\textstyle \prod _{i_{d}}{\frac {R[x_{d}]}{\langle x_{d}-\omega _{n_{d}}^{i_{d}}\rangle }}} ; hence we have an algebra isomorphism η ′ {\displaystyle \eta '} from ⨂ d ∏ i d R [ x d ] ⟨ x d − ω n d i d ⟩ {\textstyle \bigotimes _{d}\prod _{i_{d}}{\frac {R[x_{d}]}{\langle x_{d}-\omega _{n_{d}}^{i_{d}}\rangle }}} to ∏ i R [ x ] ⟨ x − ω n i ⟩ {\textstyle \prod _{i}{\frac {R[x]}{\langle x-\omega _{n}^{i}\rangle }}} . What PFA tells us is that DFT ω n = η ′ ∘ ⨂ d DFT ω n d ∘ η ∗ {\textstyle {\text{DFT}}_{\omega _{n}}=\eta '\circ \bigotimes _{d}{\text{DFT}}_{\omega _{n_{d}}}\circ \eta ^{*}} where η ∗ {\displaystyle \eta ^{*}} and η ′ {\displaystyle \eta '} are re-indexings involving no actual arithmetic in R {\displaystyle R} . === Counting the number of multi-dimensional transformations === Notice that the condition for transforming DFT ω n {\displaystyle {\text{DFT}}_{\omega _{n}}} into η ′ ∘ ⨂ d DFT ω n d ∘ η ∗ {\textstyle \eta '\circ \bigotimes _{d}{\text{DFT}}_{\omega _{n_{d}}}\circ \eta ^{*}} relies on "an" additive group isomorphism η {\displaystyle \eta } from ( Z n , + , 0 ) {\displaystyle (\mathbb {Z} _{n},+,0)} to ⁠ ∏ d ( Z n d , + , 0 ) {\displaystyle \textstyle \prod _{d}(\mathbb {Z} _{n_{d}},+,0)} ⁠. Any additive group isomorphism will work. To count the number of ways of transforming DFT ω n {\displaystyle {\text{DFT}}_{\omega _{n}}} into ⁠ η ′ ∘ ⨂ d DFT ω n d ∘ η ∗ {\displaystyle \textstyle \eta '\circ \bigotimes _{d}{\text{DFT}}_{\omega _{n_{d}}}\circ \eta ^{*}} ⁠, we only need to count the number of additive group isomorphisms from ( Z n , + , 0 ) {\displaystyle (\mathbb {Z} _{n},+,0)} to ∏ d ( Z n d , + , 0 ) {\textstyle \prod _{d}(\mathbb {Z} _{n_{d}},+,0)} , or alternatively, the number of additive group automorphisms on ⁠ ( Z n , + , 0 ) {\displaystyle (\mathbb {Z} _{n},+,0)} ⁠. Since ( Z n , + , 0 ) {\displaystyle (\mathbb {Z} _{n},+,0)} is cyclic, any automorphism can be written as 1 ↦ g {\displaystyle 1\mapsto g} where g {\displaystyle g} is a generator of ( Z n , + , 0 ) {\displaystyle (\mathbb {Z} _{n},+,0)} . The generators g {\displaystyle g} are exactly the elements coprime to n {\displaystyle n} . Therefore, there are exactly φ ( n ) {\displaystyle \varphi (n)} such maps, where φ {\displaystyle \varphi } is Euler's totient function. The smallest example is n = 6 {\displaystyle n=6} where φ ( n ) = 2 {\displaystyle \varphi (n)=2} , demonstrating the two maps in the literature: the "CRT mapping" and the "Ruritanian mapping". == See also == Bluestein's FFT algorithm Rader's FFT algorithm == Notes == == References ==
Wikipedia/Prime-factor_FFT_algorithm
In mathematical analysis and applications, multidimensional transforms are used to analyze the frequency content of signals in a domain of two or more dimensions. == Multidimensional Fourier transform == One of the more popular multidimensional transforms is the Fourier transform, which converts a signal from a time/space domain representation to a frequency domain representation. The discrete-domain multidimensional Fourier transform (FT) can be computed as follows: F ( w 1 , w 2 , … , w m ) = ∑ n 1 = − ∞ ∞ ∑ n 2 = − ∞ ∞ ⋯ ∑ n m = − ∞ ∞ f ( n 1 , n 2 , … , n m ) e − i w 1 n 1 − i w 2 n 2 ⋯ − i w m n m {\displaystyle F(w_{1},w_{2},\dots ,w_{m})=\sum _{n_{1}=-\infty }^{\infty }\sum _{n_{2}=-\infty }^{\infty }\cdots \sum _{n_{m}=-\infty }^{\infty }f(n_{1},n_{2},\dots ,n_{m})e^{-iw_{1}n_{1}-iw_{2}n_{2}\cdots -iw_{m}n_{m}}} where F stands for the multidimensional Fourier transform, m is the number of dimensions, and f is a multidimensional discrete-domain signal. The inverse multidimensional Fourier transform is given by f ( n 1 , n 2 , … , n m ) = ( 1 2 π ) m ∫ − π π ⋯ ∫ − π π F ( w 1 , w 2 , … , w m ) e i w 1 n 1 + i w 2 n 2 + ⋯ + i w m n m d w 1 ⋯ d w m {\displaystyle f(n_{1},n_{2},\dots ,n_{m})=\left({\frac {1}{2\pi }}\right)^{m}\int _{-\pi }^{\pi }\cdots \int _{-\pi }^{\pi }F(w_{1},w_{2},\ldots ,w_{m})e^{iw_{1}n_{1}+iw_{2}n_{2}+\cdots +iw_{m}n_{m}}\,dw_{1}\cdots \,dw_{m}} The multidimensional Fourier transform for continuous-domain signals is defined as follows: F ( Ω 1 , Ω 2 , … , Ω m ) = ∫ − ∞ ∞ ⋯ ∫ − ∞ ∞ f ( t 1 , t 2 , … , t m ) e − i Ω 1 t 1 − i Ω 2 t 2 ⋯ − i Ω m t m d t 1 ⋯ d t m {\displaystyle F(\Omega _{1},\Omega _{2},\ldots ,\Omega _{m})=\int _{-\infty }^{\infty }\cdots \int _{-\infty }^{\infty }f(t_{1},t_{2},\ldots ,t_{m})e^{-i\Omega _{1}t_{1}-i\Omega _{2}t_{2}\cdots -i\Omega _{m}t_{m}}\,dt_{1}\cdots \,dt_{m}} === Properties of Fourier transform === The properties of the 1-D FT carry over, but the input is now a multidimensional (MD) array or vector rather than a single variable; hence it is x(n1,…,nM) instead of x(n). ==== Linearity ==== if x 1 ( n 1 , … , n M ) ⟷ F T X 1 ( ω 1 , … , ω M ) {\displaystyle x_{1}(n_{1},\ldots ,n_{M}){\overset {\underset {\mathrm {FT} }{}}{\longleftrightarrow }}X_{1}(\omega _{1},\ldots ,\omega _{M})} , and x 2 ( n 1 , … , n M ) ⟷ F T X 2 ( ω 1 , … , ω M ) {\displaystyle x_{2}(n_{1},\ldots ,n_{M}){\overset {\underset {\mathrm {FT} }{}}{\longleftrightarrow }}X_{2}(\omega _{1},\ldots ,\omega _{M})} then, a x 1 ( n 1 , … , n M ) + b x 2 ( n 1 , … , n M ) ⟷ F T a X 1 ( ω 1 , … , ω M ) + b X 2 ( ω 1 , … , ω M ) {\displaystyle ax_{1}(n_{1},\ldots ,n_{M})+bx_{2}(n_{1},\ldots ,n_{M}){\overset {\underset {\mathrm {FT} }{}}{\longleftrightarrow }}aX_{1}(\omega _{1},\ldots ,\omega _{M})+bX_{2}(\omega _{1},\ldots ,\omega _{M})} ==== Shift ==== if x ( n 1 , . . . , n M ) ⟷ F T X ( ω 1 , . . . , ω M ) {\displaystyle x(n_{1},...,n_{M}){\overset {\underset {\mathrm {FT} }{}}{\longleftrightarrow }}X(\omega _{1},...,\omega _{M})} , then x ( n 1 − a 1 , . . . , n M − a M ) ⟷ F T e − i ( ω 1 a 1 + ⋯ + ω M a M ) X ( ω 1 , . . . , ω M ) {\displaystyle x(n_{1}-a_{1},...,n_{M}-a_{M}){\overset {\underset {\mathrm {FT} }{}}{\longleftrightarrow }}e^{-i(\omega _{1}a_{1}+\cdots +\omega _{M}a_{M})}X(\omega _{1},...,\omega _{M})} ==== Modulation ==== if x ( n 1 , … , n M ) ⟷ F T X ( ω 1 , … , ω M ) {\displaystyle x(n_{1},\ldots ,n_{M}){\overset {\underset {\mathrm {FT} }{}}{\longleftrightarrow }}X(\omega _{1},\ldots ,\omega _{M})} , then e i ( θ 1 n 1 + ⋯ + θ M n M ) x ( n 1 , … , n M ) ⟷ F T X ( ω 1 − θ 1 , … , ω M − θ M ) {\displaystyle e^{i(\theta _{1}n_{1}+\cdots +\theta _{M}n_{M})}x(n_{1},\ldots ,n_{M}){\overset {\underset {\mathrm {FT} }{}}{\longleftrightarrow }}X(\omega _{1}-\theta _{1},\ldots ,\omega _{M}-\theta _{M})} ==== Multiplication ==== if x 1 ( n 1 , … , n M ) ⟷ F T X 1 ( ω 1 , … , ω M ) {\displaystyle x_{1}(n_{1},\ldots ,n_{M}){\overset {\underset {\mathrm {FT} }{}}{\longleftrightarrow }}X_{1}(\omega _{1},\ldots ,\omega _{M})} , and x 2 ( n 1 , … , n M ) ⟷ F T X 2 ( ω 1 , … , ω M ) {\displaystyle x_{2}(n_{1},\ldots ,n_{M}){\overset {\underset {\mathrm {FT} }{}}{\longleftrightarrow }}X_{2}(\omega _{1},\ldots ,\omega _{M})} then, x 1 ( n 1 , … , n M ) x 2 ( n 1 , … , n M ) ⟷ F T 1 ( 2 π ) M ( X 1 ∗ X 2 ) ( ω 1 , … , ω M ) {\displaystyle x_{1}(n_{1},\ldots ,n_{M})x_{2}(n_{1},\ldots ,n_{M}){\overset {\underset {\mathrm {FT} }{}}{\longleftrightarrow }}{\frac {1}{(2\pi )^{M}}}(X_{1}*X_{2})(\omega _{1},\ldots ,\omega _{M})} , where ∗ {\displaystyle *} denotes the M-dimensional periodic convolution of the two spectra over [ − π , π ] M {\displaystyle [-\pi ,\pi ]^{M}} . ==== Differentiation ==== If x ( n 1 , … , n M ) ⟷ F T X ( ω 1 , … , ω M ) {\displaystyle x(n_{1},\ldots ,n_{M}){\overset {\underset {\mathrm {FT} }{}}{\longleftrightarrow }}X(\omega _{1},\ldots ,\omega _{M})} , then − i n 1 x ( n 1 , … , n M ) ⟷ F T ∂ ( ∂ ω 1 ) X ( ω 1 , … , ω M ) , {\displaystyle -in_{1}x(n_{1},\ldots ,n_{M}){\overset {\underset {\mathrm {FT} }{}}{\longleftrightarrow }}{\frac {\partial }{(\partial \omega _{1})}}X(\omega _{1},\ldots ,\omega _{M}),} − i n 2 x ( n 1 , … , n M ) ⟷ F T ∂ ( ∂ ω 2 ) X ( ω 1 , … , ω M ) , {\displaystyle -in_{2}x(n_{1},\ldots ,n_{M}){\overset {\underset {\mathrm {FT} }{}}{\longleftrightarrow }}{\frac {\partial }{(\partial \omega _{2})}}X(\omega _{1},\ldots ,\omega _{M}),} ( − i ) M ( n 1 n 2 ⋯ n M ) x ( n 1 , … , n M ) ⟷ F T ( ∂ ) M ( ∂ ω 1 ∂ ω 2 ⋯ ∂ ω M ) X ( ω 1 , … , ω M ) , {\displaystyle (-i)^{M}(n_{1}n_{2}\cdots n_{M})x(n_{1},\ldots ,n_{M}){\overset {\underset {\mathrm {FT} }{}}{\longleftrightarrow }}{\frac {(\partial )^{M}}{(\partial \omega _{1}\partial \omega _{2}\cdots \partial \omega _{M})}}X(\omega _{1},\ldots ,\omega _{M}),} ==== Transposition ==== If x ( n 1 , … , n M ) ⟷ F T X ( ω 1 , … , ω M ) {\displaystyle x(n_{1},\ldots ,n_{M}){\overset {\underset {\mathrm {FT} }{}}{\longleftrightarrow }}X(\omega _{1},\ldots ,\omega _{M})} , then x ( n M , … , n 1 ) ⟷ F T X ( ω M , … , ω 1 ) {\displaystyle x(n_{M},\ldots ,n_{1}){\overset {\underset {\mathrm {FT} }{}}{\longleftrightarrow }}X(\omega _{M},\ldots ,\omega _{1})} ==== Reflection ==== If x ( n 1 , … , n M ) ⟷ F T X ( ω 1 , … , ω M ) {\displaystyle x(n_{1},\ldots ,n_{M}){\overset {\underset {\mathrm {FT} }{}}{\longleftrightarrow }}X(\omega _{1},\ldots ,\omega _{M})} , then x ( ± n 1 , … , ± n M ) ⟷ F T X ( ± ω 1 , … , ± ω M ) {\displaystyle x(\pm n_{1},\ldots ,\pm n_{M}){\overset {\underset {\mathrm {FT} }{}}{\longleftrightarrow }}X(\pm \omega _{1},\ldots ,\pm \omega _{M})} ==== Complex conjugation ==== If x ( n 1 , … , n M ) ⟷ F T X ( ω 1 , … , ω M ) {\displaystyle x(n_{1},\ldots ,n_{M}){\overset {\underset {\mathrm {FT} }{}}{\longleftrightarrow }}X(\omega _{1},\ldots ,\omega _{M})} , then x ∗ ( ± n 1 , … , ± n M ) ⟷ F T X ∗ ( ∓ ω 1 , … , ∓ ω M ) {\displaystyle x^{*}(\pm n_{1},\ldots ,\pm n_{M}){\overset {\underset {\mathrm {FT} }{}}{\longleftrightarrow }}X^{*}(\mp \omega _{1},\ldots ,\mp \omega _{M})} ==== Parseval's theorem (MD) ====
if x 1 ( n 1 , . . . , n M ) ⟷ F T X 1 ( ω 1 , . . . , ω M ) {\displaystyle x_{1}(n_{1},...,n_{M}){\overset {\underset {\mathrm {FT} }{}}{\longleftrightarrow }}X_{1}(\omega _{1},...,\omega _{M})} , and x 2 ( n 1 , . . . , n M ) ⟷ F T X 2 ( ω 1 , . . . , ω M ) {\displaystyle x_{2}(n_{1},...,n_{M}){\overset {\underset {\mathrm {FT} }{}}{\longleftrightarrow }}X_{2}(\omega _{1},...,\omega _{M})} then, ∑ n 1 = − ∞ ∞ . . . ∑ n M = − ∞ ∞ x 1 ( n 1 , . . . , n M ) x 2 ∗ ( n 1 , . . . , n M ) = 1 ( 2 π ) M ∫ − π π . . . ∫ − π π X 1 ( ω 1 , . . . , ω M ) X 2 ∗ ( ω 1 , . . . , ω M ) d ω 1 . . . d ω M {\displaystyle \sum _{n_{1}=-\infty }^{\infty }...\sum _{n_{M}=-\infty }^{\infty }x_{1}(n_{1},...,n_{M})x_{2}^{*}(n_{1},...,n_{M}){=}{\frac {1}{(2\pi )^{M}}}\int \limits _{-\pi }^{\pi }...\int \limits _{-\pi }^{\pi }X_{1}(\omega _{1},...,\omega _{M})X_{2}^{*}(\omega _{1},...,\omega _{M})d\omega _{1}...d\omega _{M}} if x 1 ( n 1 , . . . , n M ) = x 2 ( n 1 , . . . , n M ) {\displaystyle x_{1}(n_{1},...,n_{M}){=}x_{2}(n_{1},...,n_{M})} , then ∑ n 1 = − ∞ ∞ . . . ∑ n M = − ∞ ∞ | x 1 ( n 1 , . . . , n M ) | 2 = 1 ( 2 π ) M ∫ − π π . . . ∫ − π π | X 1 ( ω 1 , . . . , ω M ) | 2 d ω 1 . . . d ω M {\displaystyle \sum _{n_{1}=-\infty }^{\infty }...\sum _{n_{M}=-\infty }^{\infty }|x_{1}(n_{1},...,n_{M})|^{2}{=}{\frac {1}{(2\pi )^{M}}}\int \limits _{-\pi }^{\pi }...\int \limits _{-\pi }^{\pi }|X_{1}(\omega _{1},...,\omega _{M})|^{2}d\omega _{1}...d\omega _{M}} A special case of Parseval's theorem is when the two multidimensional signals are the same; the theorem then expresses conservation of energy, and the term inside the summation or integral is the energy density of the signal. ==== Separability ==== A signal or system is said to be separable if it can be expressed as a product of 1-D functions with different independent variables. This property allows the FT to be computed as a product of 1-D FTs instead of a multidimensional FT. if x ( n 1 , . . . , n M ) ⟷ F T X ( ω 1 , . . . , ω M ) {\displaystyle x(n_{1},...,n_{M}){\overset {\underset {\mathrm {FT} }{}}{\longleftrightarrow }}X(\omega _{1},...,\omega _{M})} , a ( n 1 ) ⟷ F T A ( ω 1 ) {\displaystyle a(n_{1}){\overset {\underset {\mathrm {FT} }{}}{\longleftrightarrow }}A(\omega _{1})} , b ( n 2 ) ⟷ F T B ( ω 2 ) {\displaystyle b(n_{2}){\overset {\underset {\mathrm {FT} }{}}{\longleftrightarrow }}B(\omega _{2})} ... y ( n M ) ⟷ F T Y ( ω M ) {\displaystyle y(n_{M}){\overset {\underset {\mathrm {FT} }{}}{\longleftrightarrow }}Y(\omega _{M})} , and if x ( n 1 , . . . , n M ) = a ( n 1 ) b ( n 2 ) . . . y ( n M ) {\displaystyle x(n_{1},...,n_{M}){=}a(n_{1})b(n_{2})...y(n_{M})} , then X ( ω 1 , . . . , ω M ) ⟷ F T x ( n 1 , . . . , n M ) = a ( n 1 ) b ( n 2 ) . . . y ( n M ) ⟷ F T A ( ω 1 ) B ( ω 2 ) . . . Y ( ω M ) {\displaystyle X(\omega _{1},...,\omega _{M}){\overset {\underset {\mathrm {FT} }{}}{\longleftrightarrow }}x(n_{1},...,n_{M}){=}a(n_{1})b(n_{2})...y(n_{M}){\overset {\underset {\mathrm {FT} }{}}{\longleftrightarrow }}A(\omega _{1})B(\omega _{2})...Y(\omega _{M})} , so X ( ω 1 , . . . , ω M ) = A ( ω 1 ) B ( ω 2 ) . . . Y ( ω M ) {\displaystyle X(\omega _{1},...,\omega _{M}){=}A(\omega _{1})B(\omega _{2})...Y(\omega _{M})} === MD FFT === A fast Fourier transform (FFT) is an algorithm to compute the discrete Fourier transform (DFT) and its inverse. An FFT computes the DFT and produces exactly the same result as evaluating the DFT definition directly; the only difference is that an FFT is much faster.
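The separability and Parseval properties also hold for the sampled DFT (defined below), which makes them easy to check numerically. A minimal NumPy sketch with illustrative sizes:

```python
import numpy as np

# Separability: if x[n1, n2] = a[n1] * b[n2], then X[k1, k2] = A[k1] * B[k2].
a, b = np.random.rand(4), np.random.rand(5)
x = np.outer(a, b)
X = np.fft.fft2(x)
assert np.allclose(X, np.outer(np.fft.fft(a), np.fft.fft(b)))

# Parseval (DFT form): sum |x|^2 == sum |X|^2 / (N1*N2) for unnormalized FFTs.
assert np.allclose(np.sum(np.abs(x)**2), np.sum(np.abs(X)**2) / x.size)
```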
=== MD FFT === A fast Fourier transform (FFT) is an algorithm to compute the discrete Fourier transform (DFT) and its inverse. An FFT computes the DFT and produces exactly the same result as evaluating the DFT definition directly; the only difference is that an FFT is much faster. (In the presence of round-off error, many FFT algorithms are also much more accurate than evaluating the DFT definition directly.) There are many different FFT algorithms involving a wide range of mathematics, from simple complex-number arithmetic to group theory and number theory. See the article on the FFT for more. === MD DFT === The multidimensional discrete Fourier transform (DFT) is a sampled version of the discrete-domain FT, obtained by evaluating it at uniformly spaced sample frequencies. The N1 × N2 × ⋯ × Nm DFT is given by: {\displaystyle Fx(K_{1},K_{2},\ldots ,K_{m})=\sum _{n_{1}=0}^{N_{1}-1}\cdots \sum _{n_{m}=0}^{N_{m}-1}fx(n_{1},n_{2},\ldots ,n_{m})e^{-i{\frac {2\pi }{N_{1}}}n_{1}K_{1}-i{\frac {2\pi }{N_{2}}}n_{2}K_{2}-\cdots -i{\frac {2\pi }{N_{m}}}n_{m}K_{m}}} for 0 ≤ Ki ≤ Ni − 1, i = 1, 2, ..., m. The inverse multidimensional DFT equation is {\displaystyle fx(n_{1},n_{2},\ldots ,n_{m})={\frac {1}{N_{1}\cdots N_{m}}}\sum _{K_{1}=0}^{N_{1}-1}\cdots \sum _{K_{m}=0}^{N_{m}-1}Fx(K_{1},K_{2},\ldots ,K_{m})e^{i{\frac {2\pi }{N_{1}}}n_{1}K_{1}+i{\frac {2\pi }{N_{2}}}n_{2}K_{2}+\cdots +i{\frac {2\pi }{N_{m}}}n_{m}K_{m}}} for 0 ≤ ni ≤ Ni − 1, i = 1, 2, ..., m.
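The following sketch (sizes and data arbitrary, our illustration) evaluates the double-sum definition above directly and checks it against NumPy's library routine, which computes the same quantity.

import numpy as np
from itertools import product

N1, N2 = 3, 4                          # arbitrary small sizes
f = np.random.default_rng(1).standard_normal((N1, N2))
F = np.empty((N1, N2), dtype=complex)
for K1, K2 in product(range(N1), range(N2)):
    # direct evaluation of the MD DFT definition
    F[K1, K2] = sum(f[n1, n2] * np.exp(-2j * np.pi * (n1 * K1 / N1 + n2 * K2 / N2))
                    for n1, n2 in product(range(N1), range(N2)))
assert np.allclose(F, np.fft.fftn(f))  # matches the library MD DFT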
== Multidimensional discrete cosine transform == The discrete cosine transform (DCT) is used in a wide range of applications such as data compression, feature extraction, image reconstruction, multi-frame detection, and so on. The multidimensional DCT is given by: {\displaystyle Fx(K_{1},K_{2},\ldots ,K_{r})=\sum _{n_{1}=0}^{N_{1}-1}\sum _{n_{2}=0}^{N_{2}-1}\cdots \sum _{n_{r}=0}^{N_{r}-1}fx(n_{1},n_{2},\ldots ,n_{r})\cos {\frac {\pi (2n_{1}+1)K_{1}}{2N_{1}}}\cdots \cos {\frac {\pi (2n_{r}+1)K_{r}}{2N_{r}}}} for Ki = 0, 1, ..., Ni − 1, i = 1, 2, ..., r. == Multidimensional Laplace transform == The multidimensional Laplace transform is useful for the solution of boundary value problems. Boundary value problems in two or more variables characterized by partial differential equations can be solved by a direct use of the Laplace transform. The Laplace transform for the n-dimensional case is defined as {\displaystyle F(s_{1},s_{2},\ldots ,s_{n})=\int _{0}^{\infty }\cdots \int _{0}^{\infty }f(t_{1},t_{2},\ldots ,t_{n})e^{-s_{n}t_{n}-s_{n-1}t_{n-1}-\cdots -s_{1}t_{1}}\,dt_{1}\cdots \,dt_{n}} where F stands for the s-domain representation of the signal f(t). A special case (along 2 dimensions) of the multidimensional Laplace transform of a function f(x, y) is defined as {\displaystyle F(s_{1},s_{2})=\int \limits _{0}^{\infty }\int \limits _{0}^{\infty }\ f(x,y)e^{-s_{1}x-s_{2}y}\,dxdy} {\displaystyle F(s_{1},s_{2})} is called the image of {\displaystyle f(x,y)} and {\displaystyle f(x,y)} is known as the original of {\displaystyle F(s_{1},s_{2})} . This special case can be used, for example, to solve the Telegrapher's equations. == Multidimensional Z transform == The multidimensional Z transform is used to map a discrete-time multidimensional signal to the Z domain. This can be used to check the stability of filters. The equation of the multidimensional Z transform is given by {\displaystyle F(z_{1},z_{2},\ldots ,z_{m})=\sum _{n_{1}=-\infty }^{\infty }\cdots \sum _{n_{m}=-\infty }^{\infty }f(n_{1},n_{2},\ldots ,n_{m})z_{1}^{-n_{1}}z_{2}^{-n_{2}}\ldots z_{m}^{-n_{m}}} where F stands for the z-domain representation of the signal f(n). A special case of the multidimensional Z transform is the 2D Z transform, which is given as {\displaystyle F(z_{1},z_{2})=\sum _{n_{1}=-\infty }^{\infty }\sum _{n_{2}=-\infty }^{\infty }f(n_{1},n_{2})z_{1}^{-n_{1}}z_{2}^{-n_{2}}} The Fourier transform is a special case of the Z transform evaluated along the unit circle (in 1D) and unit bi-circle (in 2D), i.e., at {\displaystyle z=e^{i\omega }} , where z and ω are vectors and the relation holds componentwise. === Region of convergence === Points (z1, z2) for which {\displaystyle \sum _{n_{1}=-\infty }^{\infty }\sum _{n_{2}=-\infty }^{\infty }|f(n_{1},n_{2})||z_{1}|^{-n_{1}}|z_{2}|^{-n_{2}}<\infty } are located in the ROC; for all such points, |F(z1, z2)| < ∞. As an example, if a sequence has a support as shown in Figure 1.1a, then its ROC is as shown in Figure 1.1b. If {\displaystyle (z_{01},z_{02})} lies in the ROC, then all points {\displaystyle (z_{1},z_{2})} that satisfy |z1| ≥ |z01| and |z2| ≥ |z02| also lie in the ROC. Therefore, for Figures 1.1a and 1.1b, the ROC would be {\displaystyle \ln |z_{1}|\geq \ln |z_{01}|{\text{ and }}\ln |z_{2}|\geq L\ln |z_{1}|+\{\ln |z_{02}|-L\ln |z_{01}|\}} where L is the slope. The 2D Z-transform, like the 1-D Z-transform, is used in multidimensional signal processing to relate a two-dimensional discrete-time signal to the complex frequency domain; the 2-D surface in 4-D space on which the Fourier transform lies is known as the unit surface or unit bicircle. == Applications == The DCT and DFT are often used in signal processing and image processing, and they are also used to efficiently solve partial differential equations by spectral methods. The DFT can also be used to perform other operations such as convolutions or multiplying large integers. The DFT and DCT have seen wide usage across a large number of fields; we sketch only a few examples below. === Image processing === The DCT is used in JPEG image compression and in MJPEG, MPEG, DV, Daala, and Theora video compression. There, the two-dimensional DCT-II of N×N blocks is computed and the results are quantized and entropy coded. In this case, N is typically 8 and the DCT-II formula is applied to each row and column of the block. The result is an 8×8 transform coefficient array in which the (0,0) element (top-left) is the DC (zero-frequency) component, and entries with increasing vertical and horizontal index values represent higher vertical and horizontal spatial frequencies.
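A sketch of the 8×8 DCT-II block transform just described, with random data standing in for pixel values. Note, as an implementation detail, that SciPy's unnormalized DCT-II carries a factor of 2 per dimension relative to the cosine-kernel formula above, hence the division by 4 in the 2-D check.

import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(7)
block = rng.standard_normal((8, 8))    # stand-in for an 8x8 image block
N = 8
n = np.arange(N)
# DCT-II basis along one axis, matching the cosine kernel in the formula above
C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))   # C[k, n]
F_manual = C @ block @ C.T             # separable row/column application
assert np.allclose(F_manual, dctn(block, type=2) / 4)   # SciPy's factor of 2 per axis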
In image processing, 2D DCTs are also used in unconventional cryptographic methods for inserting non-visible binary watermarks into the 2D image plane, and, applied along different orientations, the 2-D directional DCT-DWT hybrid transform can be used for denoising ultrasound images. 3-D DCT can also be used to transform video data or 3-D image data in transform-domain watermark embedding schemes. === Spectral analysis === When the DFT is used for spectral analysis, the {xn} sequence usually represents a finite set of uniformly spaced time-samples of some signal x(t) where t represents time. The conversion from continuous time to samples (discrete-time) changes the underlying Fourier transform of x(t) into a discrete-time Fourier transform (DTFT), which generally entails a type of distortion called aliasing. Choice of an appropriate sample-rate (see Nyquist rate) is the key to minimizing that distortion. Similarly, the conversion from a very long (or infinite) sequence to a manageable size entails a type of distortion called leakage, which is manifested as a loss of detail (aka resolution) in the DTFT. Choice of an appropriate sub-sequence length is the primary key to minimizing that effect. When the available data (and time to process it) is more than the amount needed to attain the desired frequency resolution, a standard technique is to perform multiple DFTs, for example to create a spectrogram. If the desired result is a power spectrum and noise or randomness is present in the data, averaging the magnitude components of the multiple DFTs is a useful procedure to reduce the variance of the spectrum (also called a periodogram in this context); two examples of such techniques are the Welch method and the Bartlett method; the general subject of estimating the power spectrum of a noisy signal is called spectral estimation. A final source of distortion (or perhaps illusion) is the DFT itself, because it is just a discrete sampling of the DTFT, which is a function of a continuous frequency domain. That can be mitigated by increasing the resolution of the DFT. The procedure is sometimes referred to as zero-padding, which is a particular implementation used in conjunction with the fast Fourier transform (FFT) algorithm. The inefficiency of performing multiplications and additions with zero-valued "samples" is more than offset by the inherent efficiency of the FFT. As already noted, leakage imposes a limit on the inherent resolution of the DTFT. So there is a practical limit to the benefit that can be obtained from a fine-grained DFT. === Partial differential equations === Discrete Fourier transforms are often used to solve partial differential equations, where again the DFT is used as an approximation for the Fourier series (which is recovered in the limit of infinite N). The advantage of this approach is that it expands the signal in complex exponentials e^{inx}, which are eigenfunctions of differentiation: d/dx e^{inx} = in e^{inx}. Thus, in the Fourier representation, differentiation is simple: we just multiply by in. (Note, however, that the choice of n is not unique due to aliasing; for the method to be convergent, a choice analogous to that made in trigonometric interpolation should be used.) A linear differential equation with constant coefficients is transformed into an easily solvable algebraic equation. One then uses the inverse DFT to transform the result back into the ordinary spatial representation. Such an approach is called a spectral method.
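A minimal 1-D sketch of this idea (the grid size and test function are arbitrary choices of ours): differentiate a periodic function by multiplying its DFT coefficients by ik, where the integer wavenumbers k, including the aliased negative ones mentioned above, are exactly what NumPy's fftfreq encodes.

import numpy as np

N = 64                                    # arbitrary number of grid points
x = 2 * np.pi * np.arange(N) / N          # periodic grid on [0, 2*pi)
u = np.sin(3 * x)                         # test function with known derivative
k = np.fft.fftfreq(N, d=1.0 / N)          # integer wavenumbers 0, 1, ..., -2, -1
du = np.fft.ifft(1j * k * np.fft.fft(u)).real   # multiply by ik, transform back
assert np.allclose(du, 3 * np.cos(3 * x), atol=1e-10)   # exact for band-limited data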
DCTs are also widely employed in solving partial differential equations by spectral methods, where the different variants of the DCT correspond to slightly different even/odd boundary conditions at the two ends of the array. Laplace transforms are likewise used to solve partial differential equations; the general theory for obtaining solutions in this technique is developed by theorems on the Laplace transform in n dimensions. The multidimensional Z transform can also be used to solve partial differential equations. === Image processing for arts surface analysis by FFT === An important requirement is that the method be non-destructive, so that the rare and valuable information carried by a work of art (from the point of view of the human visual system, chiefly the overall colorimetric and spatial information) is obtained with zero damage to the object. The artwork can then be studied by looking at changes in color or by measuring changes in surface uniformity. Since the whole image is typically very large, a double raised-cosine window is used to truncate the image: {\displaystyle w(x,y)={\frac {1}{4}}\left(1+\cos {\frac {x\pi }{N}}\right)\left(1+\cos {\frac {y\pi }{N}}\right)} where N is the image dimension and x, y are the coordinates measured from the center of the image, each spanning from 0 to N/2. An aggregate amplitude for each spatial frequency is then computed as: {\displaystyle {\begin{aligned}A_{m}(f)^{2}=\left[\sum _{i=-f}^{f}\right.&\operatorname {FFT} (-f,i)^{2}+\sum _{i=-f}^{f}\operatorname {FFT} (f,i)^{2}\\[5pt]&\left.{}+\sum _{i=-f+1}^{f-1}\operatorname {FFT} (i,-f)^{2}+\sum _{i=-f+1}^{f-1}\operatorname {FFT} (i,f)^{2}\right]\end{aligned}}} where "FFT" denotes the fast Fourier transform, and f is the spatial frequency, spanning from 0 to N/2 − 1. The proposed FFT-based imaging approach is a diagnostic technology intended to help ensure a long and stable life for works of art. It is simple and inexpensive and can be used in museums without affecting their daily use. However, this method does not allow a quantitative measure of the corrosion rate. === Application to weakly nonlinear circuit simulation === The inverse multidimensional Laplace transform can be applied to simulate nonlinear circuits. This is done by formulating a circuit as a state-space system and expanding the inverse Laplace transform based on a Laguerre function expansion. The Laguerre method can be used to simulate a weakly nonlinear circuit, and it can invert a multidimensional Laplace transform efficiently with high accuracy. It has been observed that high accuracy and a significant speedup can be achieved for simulating large nonlinear circuits using multidimensional Laplace transforms.
== See also ==
Discrete cosine transform
List of Fourier-related transforms
List of Fourier analysis topics
Multidimensional discrete convolution
2D Z-transform
Multidimensional empirical mode decomposition
Multidimensional signal reconstruction
== References ==
Wikipedia/Multidimensional_transform
The fast multipole method (FMM) is a numerical technique that was developed to speed up the calculation of long-ranged forces in the n-body problem. It does this by expanding the system Green's function using a multipole expansion, which allows one to group sources that lie close together and treat them as if they are a single source. The FMM has also been applied in accelerating the iterative solver in the method of moments (MOM) as applied to computational electromagnetics problems, and in particular in computational bioelectromagnetism. The FMM was first introduced in this manner by Leslie Greengard and Vladimir Rokhlin Jr. and is based on the multipole expansion of the vector Helmholtz equation. By treating the interactions between far-away basis functions using the FMM, the corresponding matrix elements do not need to be explicitly stored, resulting in a significant reduction in required memory. If the FMM is then applied in a hierarchical manner, it can improve the complexity of matrix-vector products in an iterative solver from O ( N 2 ) {\displaystyle {\mathcal {O}}(N^{2})} to O ( N ) {\displaystyle {\mathcal {O}}(N)} in finite arithmetic, i.e., given a tolerance ε {\displaystyle \varepsilon } , the matrix-vector product is guaranteed to be within a tolerance ε . {\displaystyle \varepsilon .} The dependence of the complexity on the tolerance ε {\displaystyle \varepsilon } is O ( log ⁡ ( 1 / ε ) ) {\displaystyle {\mathcal {O}}(\log(1/\varepsilon ))} , i.e., the complexity of FMM is O ( N log ⁡ ( 1 / ε ) ) {\displaystyle {\mathcal {O}}(N\log(1/\varepsilon ))} . This has expanded the area of applicability of the MOM to far greater problems than were previously possible. The FMM, introduced by Rokhlin Jr. and Greengard has been said to be one of the top ten algorithms of the 20th century. The FMM algorithm reduces the complexity of matrix-vector multiplication involving a certain type of dense matrix which can arise out of many physical systems. The FMM has also been applied for efficiently treating the Coulomb interaction in the Hartree–Fock method and density functional theory calculations in quantum chemistry. == Sketch of the Algorithm == In its simplest form, the fast multipole method seeks to evaluate the following function: f ( y ) = ∑ α = 1 N ϕ α y − x α {\displaystyle f(y)=\sum _{\alpha =1}^{N}{\frac {\phi _{\alpha }}{y-x_{\alpha }}}} , where x α ∈ [ − 1 , 1 ] {\displaystyle x_{\alpha }\in [-1,1]} are a set of poles and ϕ α ∈ C {\displaystyle \phi _{\alpha }\in \mathbb {C} } are the corresponding pole weights on a set of points { y 1 , … , y M } {\displaystyle \{y_{1},\ldots ,y_{M}\}} with y β ∈ [ − 1 , 1 ] {\displaystyle y_{\beta }\in [-1,1]} . This is the one-dimensional form of the problem, but the algorithm can be easily generalized to multiple dimensions and kernels other than ( y − x ) − 1 {\displaystyle (y-x)^{-1}} . Naively, evaluating f ( y ) {\displaystyle f(y)} on M {\displaystyle M} points requires O ( M N ) {\displaystyle {\mathcal {O}}(MN)} operations. The crucial observation behind the fast multipole method is that if the distance between y {\displaystyle y} and x {\displaystyle x} is large enough, then ( y − x ) − 1 {\displaystyle (y-x)^{-1}} is well-approximated by a polynomial. Specifically, let − 1 < t 1 < … < t p < 1 {\displaystyle -1<t_{1}<\ldots <t_{p}<1} be the Chebyshev nodes of order p ≥ 2 {\displaystyle p\geq 2} and let u 1 ( y ) , … , u p ( y ) {\displaystyle u_{1}(y),\ldots ,u_{p}(y)} be the corresponding Lagrange basis polynomials. 
One can show that the interpolating polynomial {\displaystyle {\frac {1}{y-x}}=\sum _{i=1}^{p}{\frac {1}{t_{i}-x}}u_{i}(y)+\epsilon _{p}(y)} converges quickly with polynomial order, {\displaystyle |\epsilon _{p}(y)|<5^{-p}} , provided that the pole is far enough away from the region of interpolation, {\displaystyle |x|\geq 3} and {\displaystyle |y|<1} . This is known as the "local expansion". The speed-up of the fast multipole method derives from this interpolation: provided that all the poles are "far away", we evaluate the sum only on the Chebyshev nodes at a cost of {\displaystyle {\mathcal {O}}(Np)} , and then interpolate it onto all the desired points at a cost of {\displaystyle {\mathcal {O}}(Mp)} : {\displaystyle \sum _{\alpha =1}^{N}{\frac {\phi _{\alpha }}{y_{\beta }-x_{\alpha }}}=\sum _{i=1}^{p}u_{i}(y_{\beta })\sum _{\alpha =1}^{N}{\frac {1}{t_{i}-x_{\alpha }}}\phi _{\alpha }} Since {\displaystyle p=-\log _{5}\epsilon } , where {\displaystyle \epsilon } is the numerical tolerance, the total cost is {\displaystyle {\mathcal {O}}((M+N)\log(1/\epsilon ))} . To ensure that the poles are indeed well-separated, one recursively subdivides the unit interval such that only {\displaystyle {\mathcal {O}}(p)} poles end up in each interval. One then uses the explicit formula within each interval and interpolation for all intervals that are well-separated. This does not spoil the scaling, since one needs at most {\displaystyle \log(1/\epsilon )} levels within the given tolerance.
== See also ==
Barnes–Hut simulation
Multipole expansion
n-body simulation
== References ==
== Further reading ==
Yijun Liu: Fast Multipole Boundary Element Method: Theory and Applications in Engineering, Cambridge Univ. Press, ISBN 978-0-521-11659-6 (2009).
== External links ==
Gibson, Walton C. The Method of Moments in Electromagnetics (3rd ed.), Chapman & Hall/CRC, 2021. ISBN 9780367365066.
Abstract of Greengard and Rokhlin's original paper
A short course on fast multipole methods by Rick Beatson and Leslie Greengard.
JAVA Animation of the Fast Multipole Method
Nice animation of the Fast Multipole Method with different adaptations.
=== Free software ===
Puma-EM A high performance, parallelized, open source Method of Moments / Multilevel Fast Multipole Method electromagnetics code.
KIFMM3d The Kernel-Independent Fast Multipole 3d Method (kifmm3d) is a new FMM implementation which does not require the explicit multipole expansions of the underlying kernel, and it is based on kernel evaluations.
PVFMM An optimized parallel implementation of KIFMM for computing potentials from particle and volume sources.
FastBEM Free fast multipole boundary element programs for solving 2D/3D potential, elasticity, Stokes flow and acoustic problems.
FastFieldSolvers maintains the distribution of the tools, called FastHenry and FastCap, developed at M.I.T. for the solution of Maxwell equations and extraction of circuit parasitics (inductance and capacitance) using the FMM.
ExaFMM ExaFMM is a CPU/GPU capable 3D FMM code for Laplace/Helmholtz kernels that focuses on parallel scalability.
ScalFMM Archived 2017-05-02 at the Wayback Machine ScalFMM is a C++ software library developed at Inria Bordeaux with high emphasis on genericity and parallelization (using OpenMP/MPI).
DASHMM DASHMM is a C++ Software library developed at Indiana University using Asynchronous Multi-Tasking HPX-5 runtime system. It provides a unified execution on shared and distributed memory computers and provides 3D Laplace, Yukawa, and Helmholtz kernels. RECFMM Adaptive FMM with dynamic parallelism on multicores. FMM3D A library for efficient 3D N-body interaction computation on multicore machines.
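Returning to the one-dimensional sketch of the algorithm above, the following Python snippet illustrates the single-level interpolation idea numerically. The order p, the random source/target placement, and the tolerance in the final check are arbitrary choices of ours; a full FMM would add the recursive subdivision described above.

import numpy as np
from scipy.interpolate import BarycentricInterpolator

rng = np.random.default_rng(2)
p = 10                                        # interpolation order (arbitrary choice)
x = rng.uniform(3.0, 5.0, size=200)           # poles with |x| >= 3 (well separated)
phi = rng.standard_normal(200)                # pole weights
y = rng.uniform(-1.0, 1.0, size=500)          # evaluation points with |y| < 1

t = np.cos((2.0 * np.arange(1, p + 1) - 1.0) * np.pi / (2.0 * p))  # Chebyshev nodes
S = np.array([np.sum(phi / (ti - x)) for ti in t])   # field at the p nodes: O(N p) work
f_interp = BarycentricInterpolator(t, S)(y)          # spread to all targets: O(M p) work
f_direct = np.array([np.sum(phi / (yi - x)) for yi in y])   # naive O(M N) evaluation
assert np.allclose(f_interp, f_direct, atol=1e-4)    # error ~ 5**(-p) per source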
Wikipedia/Fast_multipole_method
The Cooley–Tukey algorithm, named after J. W. Cooley and John Tukey, is the most common fast Fourier transform (FFT) algorithm. It re-expresses the discrete Fourier transform (DFT) of an arbitrary composite size N = N 1 N 2 {\displaystyle N=N_{1}N_{2}} in terms of N1 smaller DFTs of sizes N2, recursively, to reduce the computation time to O(N log N) for highly composite N (smooth numbers). Because of the algorithm's importance, specific variants and implementation styles have become known by their own names, as described below. Because the Cooley–Tukey algorithm breaks the DFT into smaller DFTs, it can be combined arbitrarily with any other algorithm for the DFT. For example, Rader's or Bluestein's algorithm can be used to handle large prime factors that cannot be decomposed by Cooley–Tukey, or the prime-factor algorithm can be exploited for greater efficiency in separating out relatively prime factors. The algorithm, along with its recursive application, was invented by Carl Friedrich Gauss. Cooley and Tukey independently rediscovered and popularized it 160 years later. == History == This algorithm, including its recursive application, was invented around 1805 by Carl Friedrich Gauss, who used it to interpolate the trajectories of the asteroids Pallas and Juno, but his work was not widely recognized (being published only posthumously and in Neo-Latin). Gauss did not analyze the asymptotic computational time, however. Various limited forms were also rediscovered several times throughout the 19th and early 20th centuries. FFTs became popular after James Cooley of IBM and John Tukey of Princeton published a paper in 1965 reinventing the algorithm and describing how to perform it conveniently on a computer. Tukey reportedly came up with the idea during a meeting of President Kennedy's Science Advisory Committee discussing ways to detect nuclear-weapon tests in the Soviet Union by employing seismometers located outside the country. These sensors would generate seismological time series. However, analysis of this data would require fast algorithms for computing DFTs due to the number of sensors and length of time. This task was critical for the ratification of the proposed nuclear test ban so that any violations could be detected without need to visit Soviet facilities. Another participant at that meeting, Richard Garwin of IBM, recognized the potential of the method and put Tukey in touch with Cooley. However, Garwin made sure that Cooley did not know the original purpose. Instead, Cooley was told that this was needed to determine periodicities of the spin orientations in a 3-D crystal of helium-3. Cooley and Tukey subsequently published their joint paper, and wide adoption quickly followed due to the simultaneous development of Analog-to-digital converters capable of sampling at rates up to 300 kHz. The fact that Gauss had described the same algorithm (albeit without analyzing its asymptotic cost) was not realized until several years after Cooley and Tukey's 1965 paper. Their paper cited as inspiration only the work by I. J. Good on what is now called the prime-factor FFT algorithm (PFA); although Good's algorithm was initially thought to be equivalent to the Cooley–Tukey algorithm, it was quickly realized that PFA is a quite different algorithm (working only for sizes that have relatively prime factors and relying on the Chinese remainder theorem, unlike the support for any composite size in Cooley–Tukey). 
== The radix-2 DIT case == A radix-2 decimation-in-time (DIT) FFT is the simplest and most common form of the Cooley–Tukey algorithm, although highly optimized Cooley–Tukey implementations typically use other forms of the algorithm as described below. Radix-2 DIT divides a DFT of size N into two interleaved DFTs (hence the name "radix-2") of size N/2 with each recursive stage. The discrete Fourier transform (DFT) is defined by the formula: X k = ∑ n = 0 N − 1 x n e − 2 π i N n k , {\displaystyle X_{k}=\sum _{n=0}^{N-1}x_{n}e^{-{\frac {2\pi i}{N}}nk},} where k {\displaystyle k} is an integer ranging from 0 to N − 1 {\displaystyle N-1} . Radix-2 DIT first computes the DFTs of the even-indexed inputs ( x 2 m = x 0 , x 2 , … , x N − 2 ) {\displaystyle (x_{2m}=x_{0},x_{2},\ldots ,x_{N-2})} and of the odd-indexed inputs ( x 2 m + 1 = x 1 , x 3 , … , x N − 1 ) {\displaystyle (x_{2m+1}=x_{1},x_{3},\ldots ,x_{N-1})} , and then combines those two results to produce the DFT of the whole sequence. This idea can then be performed recursively to reduce the overall runtime to O(N log N). This simplified form assumes that N is a power of two; since the number of sample points N can usually be chosen freely by the application (e.g. by changing the sample rate or window, zero-padding, etcetera), this is often not an important restriction. The radix-2 DIT algorithm rearranges the DFT of the function x n {\displaystyle x_{n}} into two parts: a sum over the even-numbered indices n = 2 m {\displaystyle n={2m}} and a sum over the odd-numbered indices n = 2 m + 1 {\displaystyle n={2m+1}} : X k = ∑ m = 0 N / 2 − 1 x 2 m e − 2 π i N ( 2 m ) k + ∑ m = 0 N / 2 − 1 x 2 m + 1 e − 2 π i N ( 2 m + 1 ) k {\displaystyle X_{k}=\sum _{m=0}^{N/2-1}x_{2m}e^{-{\frac {2\pi i}{N}}(2m)k}+\sum _{m=0}^{N/2-1}x_{2m+1}e^{-{\frac {2\pi i}{N}}(2m+1)k}} One can factor a common multiplier e − 2 π i N k {\displaystyle e^{-{\frac {2\pi i}{N}}k}} out of the second sum, as shown in the equation below. It is then clear that the two sums are the DFT of the even-indexed part x 2 m {\displaystyle x_{2m}} and the DFT of odd-indexed part x 2 m + 1 {\displaystyle x_{2m+1}} of the function x n {\displaystyle x_{n}} . Denote the DFT of the Even-indexed inputs x 2 m {\displaystyle x_{2m}} by E k {\displaystyle E_{k}} and the DFT of the Odd-indexed inputs x 2 m + 1 {\displaystyle x_{2m+1}} by O k {\displaystyle O_{k}} and we obtain: X k = ∑ m = 0 N / 2 − 1 x 2 m e − 2 π i N / 2 m k ⏟ DFT of even-indexed part of x n + e − 2 π i N k ∑ m = 0 N / 2 − 1 x 2 m + 1 e − 2 π i N / 2 m k ⏟ DFT of odd-indexed part of x n = E k + e − 2 π i N k O k for k = 0 , … , N 2 − 1. {\displaystyle X_{k}=\underbrace {\sum \limits _{m=0}^{N/2-1}x_{2m}e^{-{\frac {2\pi i}{N/2}}mk}} _{{\text{DFT of even-indexed part of }}x_{n}}{}+e^{-{\frac {2\pi i}{N}}k}\underbrace {\sum \limits _{m=0}^{N/2-1}x_{2m+1}e^{-{\frac {2\pi i}{N/2}}mk}} _{{\text{DFT of odd-indexed part of }}x_{n}}=E_{k}+e^{-{\frac {2\pi i}{N}}k}O_{k}\qquad {\text{ for }}k=0,\dots ,{\frac {N}{2}}-1.} Note that the equalities hold for k = 0 , … , N − 1 {\displaystyle k=0,\dots ,N-1} , but the crux is that E k {\displaystyle E_{k}} and O k {\displaystyle O_{k}} are calculated in this way for k = 0 , … , N 2 − 1 {\displaystyle k=0,\dots ,{\frac {N}{2}}-1} only. 
Thanks to the periodicity of the complex exponential, X k + N 2 {\displaystyle X_{k+{\frac {N}{2}}}} is also obtained from E k {\displaystyle E_{k}} and O k {\displaystyle O_{k}} : X k + N 2 = ∑ m = 0 N / 2 − 1 x 2 m e − 2 π i N / 2 m ( k + N 2 ) + e − 2 π i N ( k + N 2 ) ∑ m = 0 N / 2 − 1 x 2 m + 1 e − 2 π i N / 2 m ( k + N 2 ) = ∑ m = 0 N / 2 − 1 x 2 m e − 2 π i N / 2 m k e − 2 π m i + e − 2 π i N k e − π i ∑ m = 0 N / 2 − 1 x 2 m + 1 e − 2 π i N / 2 m k e − 2 π m i = ∑ m = 0 N / 2 − 1 x 2 m e − 2 π i N / 2 m k − e − 2 π i N k ∑ m = 0 N / 2 − 1 x 2 m + 1 e − 2 π i N / 2 m k = E k − e − 2 π i N k O k {\displaystyle {\begin{aligned}X_{k+{\frac {N}{2}}}&=\sum \limits _{m=0}^{N/2-1}x_{2m}e^{-{\frac {2\pi i}{N/2}}m(k+{\frac {N}{2}})}+e^{-{\frac {2\pi i}{N}}(k+{\frac {N}{2}})}\sum _{m=0}^{N/2-1}x_{2m+1}e^{-{\frac {2\pi i}{N/2}}m(k+{\frac {N}{2}})}\\&=\sum _{m=0}^{N/2-1}x_{2m}e^{-{\frac {2\pi i}{N/2}}mk}e^{-2\pi mi}+e^{-{\frac {2\pi i}{N}}k}e^{-\pi i}\sum _{m=0}^{N/2-1}x_{2m+1}e^{-{\frac {2\pi i}{N/2}}mk}e^{-2\pi mi}\\&=\sum _{m=0}^{N/2-1}x_{2m}e^{-{\frac {2\pi i}{N/2}}mk}-e^{-{\frac {2\pi i}{N}}k}\sum _{m=0}^{N/2-1}x_{2m+1}e^{-{\frac {2\pi i}{N/2}}mk}\\&=E_{k}-e^{-{\frac {2\pi i}{N}}k}O_{k}\end{aligned}}} We can rewrite X k {\displaystyle X_{k}} and X k + N 2 {\displaystyle X_{k+{\frac {N}{2}}}} as: X k = E k + e − 2 π i N k O k X k + N 2 = E k − e − 2 π i N k O k {\displaystyle {\begin{aligned}X_{k}&=E_{k}+e^{-{\frac {2\pi i}{N}}{k}}O_{k}\\X_{k+{\frac {N}{2}}}&=E_{k}-e^{-{\frac {2\pi i}{N}}{k}}O_{k}\end{aligned}}} This result, expressing the DFT of length N recursively in terms of two DFTs of size N/2, is the core of the radix-2 DIT fast Fourier transform. The algorithm gains its speed by re-using the results of intermediate computations to compute multiple DFT outputs. Note that final outputs are obtained by a +/− combination of E k {\displaystyle E_{k}} and O k exp ⁡ ( − 2 π i k / N ) {\displaystyle O_{k}\exp(-2\pi ik/N)} , which is simply a size-2 DFT (sometimes called a butterfly in this context); when this is generalized to larger radices below, the size-2 DFT is replaced by a larger DFT (which itself can be evaluated with an FFT). This process is an example of the general technique of divide and conquer algorithms; in many conventional implementations, however, the explicit recursion is avoided, and instead one traverses the computational tree in breadth-first fashion. The above re-expression of a size-N DFT as two size-N/2 DFTs is sometimes called the Danielson–Lanczos lemma, since the identity was noted by those two authors in 1942 (influenced by Runge's 1903 work). They applied their lemma in a "backwards" recursive fashion, repeatedly doubling the DFT size until the transform spectrum converged (although they apparently didn't realize the linearithmic [i.e., order N log N] asymptotic complexity they had achieved). The Danielson–Lanczos work predated widespread availability of mechanical or electronic computers and required manual calculation (possibly with mechanical aids such as adding machines); they reported a computation time of 140 minutes for a size-64 DFT operating on real inputs to 3–5 significant digits. Cooley and Tukey's 1965 paper reported a running time of 0.02 minutes for a size-2048 complex DFT on an IBM 7094 (probably in 36-bit single precision, ~8 digits). Rescaling the time by the number of operations, this corresponds roughly to a speedup factor of around 800,000. 
(To put the time for the hand calculation in perspective, 140 minutes for size 64 corresponds to an average of at most 16 seconds per floating-point operation, around 20% of which are multiplications.) === Pseudocode === In pseudocode, the below procedure could be written:

X0,...,N−1 ← ditfft2(x, N, s):                       DFT of (x0, xs, x2s, ..., x(N-1)s):
    if N = 1 then
        X0 ← x0                                      trivial size-1 DFT base case
    else
        X0,...,N/2−1 ← ditfft2(x, N/2, 2s)           DFT of (x0, x2s, x4s, ..., x(N-2)s)
        XN/2,...,N−1 ← ditfft2(x+s, N/2, 2s)         DFT of (xs, xs+2s, xs+4s, ..., x(N-1)s)
        for k = 0 to N/2−1 do                        combine DFTs of two halves:
            p ← Xk
            q ← exp(−2πi/N k) Xk+N/2
            Xk ← p + q
            Xk+N/2 ← p − q
        end for
    end if

Here, ditfft2(x, N, 1) computes X=DFT(x) out-of-place by a radix-2 DIT FFT, where N is an integer power of 2 and s=1 is the stride of the input x array. x+s denotes the array starting with xs. (The results are in the correct order in X and no further bit-reversal permutation is required; the often-mentioned necessity of a separate bit-reversal stage only arises for certain in-place algorithms, as described below.) High-performance FFT implementations make many modifications to the implementation of such an algorithm compared to this simple pseudocode. For example, one can use a larger base case than N=1 to amortize the overhead of recursion, the twiddle factors {\displaystyle \exp[-2\pi ik/N]} can be precomputed, and larger radices are often used for cache reasons; these and other optimizations together can improve the performance by an order of magnitude or more. (In many textbook implementations the depth-first recursion is eliminated in favor of a nonrecursive breadth-first approach, although depth-first recursion has been argued to have better memory locality.) Several of these ideas are described in further detail below. == Idea == More generally, Cooley–Tukey algorithms recursively re-express a DFT of a composite size N = N1N2 as:
1. Perform N1 DFTs of size N2.
2. Multiply by complex roots of unity (often called the twiddle factors).
3. Perform N2 DFTs of size N1.
Typically, either N1 or N2 is a small factor (not necessarily prime), called the radix (which can differ between stages of the recursion). If N1 is the radix, it is called a decimation in time (DIT) algorithm, whereas if N2 is the radix, it is decimation in frequency (DIF, also called the Sande–Tukey algorithm). The version presented above was a radix-2 DIT algorithm; in the final expression, the phase multiplying the odd transform is the twiddle factor, and the +/- combination (butterfly) of the even and odd transforms is a size-2 DFT. (The radix's small DFT is sometimes known as a butterfly, so-called because of the shape of the dataflow diagram for the radix-2 case.) == Variations == There are many other variations on the Cooley–Tukey algorithm. Mixed-radix implementations handle composite sizes with a variety of (typically small) factors in addition to two, usually (but not always) employing the O(N2) algorithm for the prime base cases of the recursion (it is also possible to employ an N log N algorithm for the prime base cases, such as Rader's or Bluestein's algorithm). Split radix merges radices 2 and 4, exploiting the fact that the first transform of radix 2 requires no twiddle factor, in order to achieve what was long the lowest known arithmetic operation count for power-of-two sizes, although recent variations achieve an even lower count.
(On present-day computers, performance is determined more by cache and CPU pipeline considerations than by strict operation counts; well-optimized FFT implementations often employ larger radices and/or hard-coded base-case transforms of significant size.). Another way of looking at the Cooley–Tukey algorithm is that it re-expresses a size N one-dimensional DFT as an N1 by N2 two-dimensional DFT (plus twiddles), where the output matrix is transposed. The net result of all of these transpositions, for a radix-2 algorithm, corresponds to a bit reversal of the input (DIF) or output (DIT) indices. If, instead of using a small radix, one employs a radix of roughly √N and explicit input/output matrix transpositions, it is called a four-step FFT algorithm (or six-step, depending on the number of transpositions), initially proposed to improve memory locality, e.g. for cache optimization or out-of-core operation, and was later shown to be an optimal cache-oblivious algorithm. The general Cooley–Tukey factorization rewrites the indices k and n as k = N 2 k 1 + k 2 {\displaystyle k=N_{2}k_{1}+k_{2}} and n = N 1 n 2 + n 1 {\displaystyle n=N_{1}n_{2}+n_{1}} , respectively, where the indices ka and na run from 0..Na-1 (for a of 1 or 2). That is, it re-indexes the input (n) and output (k) as N1 by N2 two-dimensional arrays in column-major and row-major order, respectively; the difference between these indexings is a transposition, as mentioned above. When this re-indexing is substituted into the DFT formula for nk, the N 1 n 2 N 2 k 1 {\displaystyle N_{1}n_{2}N_{2}k_{1}} cross term vanishes (its exponential is unity), and the remaining terms give X N 2 k 1 + k 2 = ∑ n 1 = 0 N 1 − 1 ∑ n 2 = 0 N 2 − 1 x N 1 n 2 + n 1 e − 2 π i N 1 N 2 ⋅ ( N 1 n 2 + n 1 ) ⋅ ( N 2 k 1 + k 2 ) {\displaystyle X_{N_{2}k_{1}+k_{2}}=\sum _{n_{1}=0}^{N_{1}-1}\sum _{n_{2}=0}^{N_{2}-1}x_{N_{1}n_{2}+n_{1}}e^{-{\frac {2\pi i}{N_{1}N_{2}}}\cdot (N_{1}n_{2}+n_{1})\cdot (N_{2}k_{1}+k_{2})}} = ∑ n 1 = 0 N 1 − 1 [ e − 2 π i N 1 N 2 n 1 k 2 ] ( ∑ n 2 = 0 N 2 − 1 x N 1 n 2 + n 1 e − 2 π i N 2 n 2 k 2 ) e − 2 π i N 1 n 1 k 1 {\displaystyle =\sum _{n_{1}=0}^{N_{1}-1}\left[e^{-{\frac {2\pi i}{N_{1}N_{2}}}n_{1}k_{2}}\right]\left(\sum _{n_{2}=0}^{N_{2}-1}x_{N_{1}n_{2}+n_{1}}e^{-{\frac {2\pi i}{N_{2}}}n_{2}k_{2}}\right)e^{-{\frac {2\pi i}{N_{1}}}n_{1}k_{1}}} = ∑ n 1 = 0 N 1 − 1 ( ∑ n 2 = 0 N 2 − 1 x N 1 n 2 + n 1 e − 2 π i N 2 n 2 k 2 ) e − 2 π i N 1 N 2 n 1 ( N 2 k 1 + k 2 ) {\displaystyle =\sum _{n_{1}=0}^{N_{1}-1}\left(\sum _{n_{2}=0}^{N_{2}-1}x_{N_{1}n_{2}+n_{1}}e^{-{\frac {2\pi i}{N_{2}}}n_{2}k_{2}}\right)e^{-{\frac {2\pi i}{N_{1}N_{2}}}n_{1}(N_{2}k_{1}+k_{2})}} . where each inner sum is a DFT of size N2, each outer sum is a DFT of size N1, and the [...] bracketed term is the twiddle factor. An arbitrary radix r (as well as mixed radices) can be employed, as was shown by both Cooley and Tukey as well as Gauss (who gave examples of radix-3 and radix-6 steps). Cooley and Tukey originally assumed that the radix butterfly required O(r2) work and hence reckoned the complexity for a radix r to be O(r2 N/r logrN) = O(N log2(N) r/log2r); from calculation of values of r/log2r for integer values of r from 2 to 12 the optimal radix is found to be 3 (the closest integer to e, which minimizes r/log2r). 
This analysis was erroneous, however: the radix-butterfly is also a DFT and can be performed via an FFT algorithm in O(r log r) operations, hence the radix r actually cancels in the complexity O(r log(r) N/r logrN), and the optimal r is determined by more complicated considerations. In practice, quite large r (32 or 64) are important in order to effectively exploit e.g. the large number of processor registers on modern processors, and even an unbounded radix r=√N also achieves O(N log N) complexity and has theoretical and practical advantages for large N as mentioned above. == Data reordering, bit reversal, and in-place algorithms == Although the abstract Cooley–Tukey factorization of the DFT, above, applies in some form to all implementations of the algorithm, much greater diversity exists in the techniques for ordering and accessing the data at each stage of the FFT. Of special interest is the problem of devising an in-place algorithm that overwrites its input with its output data using only O(1) auxiliary storage. The best-known reordering technique involves explicit bit reversal for in-place radix-2 algorithms. Bit reversal is the permutation where the data at an index n, written in binary with digits b4b3b2b1b0 (e.g. 5 digits for N=32 inputs), is transferred to the index with reversed digits b0b1b2b3b4. Consider the last stage of a radix-2 DIT algorithm like the one presented above, where the output is written in-place over the input: when {\displaystyle E_{k}} and {\displaystyle O_{k}} are combined with a size-2 DFT, those two values are overwritten by the outputs. However, the two output values should go in the first and second halves of the output array, corresponding to the most significant bit b4 (for N=32); whereas the two inputs {\displaystyle E_{k}} and {\displaystyle O_{k}} are interleaved in the even and odd elements, corresponding to the least significant bit b0. Thus, in order to get the output in the correct place, b0 should take the place of b4 and the index becomes b0b4b3b2b1. For the next recursive stage, those 4 least significant bits will become b1b4b3b2. If one includes all of the recursive stages of a radix-2 DIT algorithm, all the bits must be reversed and thus one must pre-process the input (or post-process the output) with a bit reversal to get in-order output. (If each size-N/2 subtransform is to operate on contiguous data, the DIT input is pre-processed by bit-reversal.) Correspondingly, if you perform all of the steps in reverse order, you obtain a radix-2 DIF algorithm with bit reversal in post-processing (or pre-processing, respectively). The logarithm (log) used in this algorithm is a base-2 logarithm. The following is pseudocode for an iterative radix-2 FFT algorithm implemented using bit-reversal permutation.

algorithm iterative-fft is
    input: Array a of n complex values where n is a power of 2.
    output: Array A, the DFT of a.
    bit-reverse-copy(a, A)
    n ← a.length
    for s = 1 to log(n) do
        m ← 2^s
        ωm ← exp(−2πi/m)
        for k = 0 to n−1 by m do
            ω ← 1
            for j = 0 to m/2 − 1 do
                t ← ω A[k + j + m/2]
                u ← A[k + j]
                A[k + j] ← u + t
                A[k + j + m/2] ← u − t
                ω ← ω ωm
    return A

The bit-reverse-copy procedure can be implemented as follows.

algorithm bit-reverse-copy(a, A) is
    input: Array a of n complex values where n is a power of 2.
    output: Array A of size n.
    n ← a.length
    for k = 0 to n − 1 do
        A[rev(k)] := a[k]

Alternatively, some applications (such as convolution) work equally well on bit-reversed data, so one can perform forward transforms, processing, and then inverse transforms all without bit reversal to produce final results in the natural order. Many FFT users, however, prefer natural-order outputs, and a separate, explicit bit-reversal stage can have a non-negligible impact on the computation time, even though bit reversal can be done in O(N) time and has been the subject of much research. Also, while the permutation is a bit reversal in the radix-2 case, it is more generally an arbitrary (mixed-base) digit reversal for the mixed-radix case, and the permutation algorithms become more complicated to implement. Moreover, it is desirable on many hardware architectures to re-order intermediate stages of the FFT algorithm so that they operate on consecutive (or at least more localized) data elements. To these ends, a number of alternative implementation schemes have been devised for the Cooley–Tukey algorithm that do not require separate bit reversal and/or involve additional permutations at intermediate stages. The problem is greatly simplified if it is out-of-place: the output array is distinct from the input array or, equivalently, an equal-size auxiliary array is available. The Stockham auto-sort algorithm performs every stage of the FFT out-of-place, typically writing back and forth between two arrays, transposing one "digit" of the indices with each stage, and has been especially popular on SIMD architectures. Even greater potential SIMD advantages (more consecutive accesses) have been proposed for the Pease algorithm, which also reorders out-of-place with each stage, but this method requires separate bit/digit reversal and O(N log N) storage. One can also directly apply the Cooley–Tukey factorization definition with explicit (depth-first) recursion and small radices, which produces natural-order out-of-place output with no separate permutation step (as in the pseudocode above) and can be argued to have cache-oblivious locality benefits on systems with hierarchical memory. A typical strategy for in-place algorithms without auxiliary storage and without separate digit-reversal passes involves small matrix transpositions (which swap individual pairs of digits) at intermediate stages, which can be combined with the radix butterflies to reduce the number of passes over the data. == References == == External links ==
"Fast Fourier transform - FFT". Cooley-Tukey technique. A simple, pedagogical radix-2 algorithm in C++
"KISSFFT". GitHub. 11 February 2022. A simple mixed-radix Cooley–Tukey implementation in C
Dsplib on GitHub
"Radix-2 Decimation in Time FFT Algorithm". Archived from the original on October 31, 2017.
"Алгоритм БПФ по основанию два с прореживанием по времени" (in Russian).
"Radix-2 Decimation in Frequency FFT Algorithm". Archived from the original on November 14, 2017.
"Алгоритм БПФ по основанию два с прореживанием по частоте" (in Russian).
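As a supplement to the recursive pseudocode in the Pseudocode section above, here is a near line-for-line Python transcription (a sketch; the closing NumPy comparison is our addition, not part of the original presentation):

import numpy as np

def ditfft2(x, N, s):
    """Radix-2 DIT FFT of x[0], x[s], x[2s], ...; N must be a power of 2."""
    if N == 1:
        return [x[0]]                     # trivial size-1 DFT base case
    E = ditfft2(x, N // 2, 2 * s)         # DFT of the even-indexed inputs
    O = ditfft2(x[s:], N // 2, 2 * s)     # DFT of the odd-indexed inputs (x+s)
    X = [0] * N
    for k in range(N // 2):               # combine halves with twiddle factors
        q = np.exp(-2j * np.pi * k / N) * O[k]
        X[k] = E[k] + q
        X[k + N // 2] = E[k] - q
    return X

x = np.random.default_rng(3).standard_normal(16)
assert np.allclose(ditfft2(x, len(x), 1), np.fft.fft(x))   # natural-order output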
Wikipedia/Cooley–Tukey_FFT_algorithm
A discrete Hartley transform (DHT) is a Fourier-related transform of discrete, periodic data similar to the discrete Fourier transform (DFT), with analogous applications in signal processing and related fields. Its main distinction from the DFT is that it transforms real inputs to real outputs, with no intrinsic involvement of complex numbers. Just as the DFT is the discrete analogue of the continuous Fourier transform (FT), the DHT is the discrete analogue of the continuous Hartley transform (HT), introduced by Ralph V. L. Hartley in 1942. Because there are fast algorithms for the DHT analogous to the fast Fourier transform (FFT), the DHT was originally proposed by Ronald N. Bracewell in 1983 as a more efficient computational tool in the common case where the data are purely real. It was subsequently argued, however, that specialized FFT algorithms for real inputs or outputs can ordinarily be found with slightly fewer operations than any corresponding algorithm for the DHT. == Definition == Formally, the discrete Hartley transform is a linear, invertible function H: Rn → Rn (where R denotes the set of real numbers). The N real numbers x0, ..., xN−1 are transformed into the N real numbers H0, ..., HN−1 according to the formula H k = ∑ n = 0 N − 1 x n cas ⁡ ( 2 π N n k ) = ∑ n = 0 N − 1 x n [ cos ⁡ ( 2 π N n k ) + sin ⁡ ( 2 π N n k ) ] k = 0 , … , N − 1. {\displaystyle H_{k}=\sum _{n=0}^{N-1}x_{n}\operatorname {cas} \left({\frac {2\pi }{N}}nk\right)=\sum _{n=0}^{N-1}x_{n}\left[\cos \left({\frac {2\pi }{N}}nk\right)+\sin \left({\frac {2\pi }{N}}nk\right)\right]\quad \quad k=0,\dots ,N-1.} The combination cos ⁡ ( z ) + sin ⁡ ( z ) {\displaystyle \cos(z)+\sin(z)} = 2 cos ⁡ ( z − π 4 ) {\displaystyle ={\sqrt {2}}\cos \left(z-{\frac {\pi }{4}}\right)} is sometimes denoted cas(z), and should not be confused with cis(z) = eiz = cos(z) + i sin(z), or e−iz = cis(−z) which appears in the DFT definition (where i is the imaginary unit). As with the DFT, the overall scale factor in front of the transform and the sign of the sine term are a matter of convention. Although these conventions occasionally vary between authors, they do not affect the essential properties of the transform. == Properties == The transform can be interpreted as the multiplication of the vector (x0, ...., xN−1) by an N-by-N matrix; therefore, the discrete Hartley transform is a linear operator. The matrix is invertible; the inverse transformation, which allows one to recover the xn from the Hk, is simply the DHT of Hk multiplied by 1/N. That is, the DHT is its own inverse (involutory), up to an overall scale factor. The DHT can be used to compute the DFT, and vice versa. For real inputs xn, the DFT output Xk has a real part (Hk + HN−k)/2 and an imaginary part (HN−k − Hk)/2. Conversely, the DHT is equivalent to computing the DFT of xn multiplied by 1 + i, then taking the real part of the result. As with the DFT, a cyclic convolution z = x∗y of two vectors x = (xn) and y = (yn) to produce a vector z = (zn), all of length N, becomes a simple operation after the DHT. In particular, suppose that the vectors X, Y, and Z denote the DHT of x, y, and z respectively. 
Then the elements of Z are given by: Z k = [ X k ( Y k + Y N − k ) + X N − k ( Y k − Y N − k ) ] / 2 Z N − k = [ X N − k ( Y k + Y N − k ) − X k ( Y k − Y N − k ) ] / 2 {\displaystyle {\begin{matrix}Z_{k}&=&\left[X_{k}\left(Y_{k}+Y_{N-k}\right)+X_{N-k}\left(Y_{k}-Y_{N-k}\right)\right]/2\\Z_{N-k}&=&\left[X_{N-k}\left(Y_{k}+Y_{N-k}\right)-X_{k}\left(Y_{k}-Y_{N-k}\right)\right]/2\end{matrix}}} where we take all of the vectors to be periodic in N (XN = X0, et cetera). Thus, just as the DFT transforms a convolution into a pointwise multiplication of complex numbers (pairs of real and imaginary parts), the DHT transforms a convolution into a simple combination of pairs of real frequency components. The inverse DHT then yields the desired vector z. In this way, a fast algorithm for the DHT (see below) yields a fast algorithm for convolution. (This is slightly more expensive than the corresponding procedure for the DFT, not including the costs of the transforms below, because the pairwise operation above requires 8 real-arithmetic operations compared to the 6 of a complex multiplication. This count doesn't include the division by 2, which can be absorbed e.g. into the 1/N normalization of the inverse DHT.) == Fast algorithms == Just as for the DFT, evaluating the DHT definition directly would require O(N2) arithmetical operations (see Big O notation). There are fast algorithms similar to the FFT, however, that compute the same result in only O(N log N) operations. Nearly every FFT algorithm, from Cooley–Tukey to prime-factor to Winograd (1985) to Bruun's (1993), has a direct analogue for the discrete Hartley transform. (However, a few of the more exotic FFT algorithms, such as the QFT, have not yet been investigated in the context of the DHT.) In particular, the DHT analogue of the Cooley–Tukey algorithm is commonly known as the fast Hartley transform (FHT) algorithm, and was first described by Bracewell in 1984. This FHT algorithm, at least when applied to power-of-two sizes N, is the subject of the United States patent number 4,646,256, issued in 1987 to Stanford University. Stanford placed this patent in the public domain in 1994 (Bracewell, 1995). As mentioned above, DHT algorithms are typically slightly less efficient (in terms of the number of floating-point operations) than the corresponding DFT algorithm (FFT) specialized for real inputs (or outputs). This was first argued by Sorensen et al. (1987) and Duhamel & Vetterli (1987). The latter authors obtained what appears to be the lowest published operation count for the DHT of power-of-two sizes, employing a split-radix algorithm (similar to the split-radix FFT) that breaks a DHT of length N into a DHT of length N/2 and two real-input DFTs (not DHTs) of length N/4. In this way, they argued that a DHT of power-of-two length can be computed with, at best, 2 more additions than the corresponding number of arithmetic operations for the real-input DFT. On present-day computers, performance is determined more by cache and CPU pipeline considerations than by strict operation counts, and a slight difference in arithmetic cost is unlikely to be significant. Since FHT and real-input FFT algorithms have similar computational structures, neither appears to have a substantial a priori speed advantage (Popović and Šević, 1994). As a practical matter, highly optimized real-input FFT libraries are available from many sources (e.g. from CPU vendors such as Intel), whereas highly optimized DHT libraries are less common. 
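A short numerical check of the convolution rule stated above (arbitrary data; as a sketch, the DHT is obtained through the DFT relation from the Properties section, Hk = Re Xk − Im Xk):

import numpy as np

def dht(v):
    """Discrete Hartley transform via the DFT: H_k = Re(V_k) - Im(V_k)."""
    V = np.fft.fft(v)
    return V.real - V.imag

rng = np.random.default_rng(4)
N = 8
x = rng.standard_normal(N)
y = rng.standard_normal(N)
X, Y = dht(x), dht(y)

z = np.array([sum(x[m] * y[(n - m) % N] for m in range(N)) for n in range(N)])  # cyclic convolution
idx = (-np.arange(N)) % N                 # maps index k to N - k (mod N)
Z_rule = (X * (Y + Y[idx]) + X[idx] * (Y - Y[idx])) / 2
assert np.allclose(dht(z), Z_rule)        # matches the stated convolution rule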
On the other hand, the redundant computations in FFTs due to real inputs are more difficult to eliminate for large prime N, despite the existence of O(N log N) complex-data algorithms for such cases, because the redundancies are hidden behind intricate permutations and/or phase rotations in those algorithms. In contrast, a standard prime-size FFT algorithm, Rader's algorithm, can be directly applied to the DHT of real data for roughly a factor of two less computation than that of the equivalent complex FFT (Frigo and Johnson, 2005). On the other hand, a non-DHT-based adaptation of Rader's algorithm for real-input DFTs is also possible (Chu & Burrus, 1982). == Multi-Dimensional Discrete Hartley Transform (MD-DHT) == The rD-DHT (MD-DHT with "r" dimensions) is given by {\displaystyle X(k_{1},k_{2},...,k_{r})=\sum _{n_{1}=0}^{N_{1}-1}\sum _{n_{2}=0}^{N_{2}-1}\dots \sum _{n_{r}=0}^{N_{r}-1}x(n_{1},n_{2},...,n_{r}){\rm {cas}}({\frac {2\pi n_{1}k_{1}}{N_{1}}}+\dots +{\frac {2\pi n_{r}k_{r}}{N_{r}}}),} with {\displaystyle k_{i}=0,1,\ldots ,N_{i}-1} and where {\displaystyle {\rm {cas}}(x)=\cos(x)+\sin(x).} Similar to the 1-D case, as a real and symmetric transform, the MD-DHT is simpler than the MD-DFT. First, the inverse DHT is identical to the forward transform, up to a scaling factor; second, since the kernel is real, the computational cost of complex-number arithmetic is avoided. Additionally, the DFT is directly obtainable from the DHT by a simple additive operation (Bracewell, 1983). The MD-DHT is widely used in areas like image and optical signal processing. Specific applications include computer vision, high-definition television, and teleconferencing, areas that process or analyze motion images (Zeng, 2000). === Fast algorithms for the MD-DHT === As computing speed keeps increasing, bigger multidimensional problems become computationally feasible, creating the need for fast multidimensional algorithms. Three such algorithms follow. In pursuit of separability for efficiency, we consider the following transform (Bracewell, 1983), {\displaystyle {\hat {X}}(k_{1},k_{2},...,k_{r})=\sum _{n_{1}=0}^{N_{1}-1}\sum _{n_{2}=0}^{N_{2}-1}\dots \sum _{n_{r}=0}^{N_{r}-1}x(n_{1},n_{2},...,n_{r}){\rm {cas}}({\frac {2\pi n_{1}k_{1}}{N_{1}}})\dots {\rm {cas}}({\frac {2\pi n_{r}k_{r}}{N_{r}}}).} It was shown in Bortfeld (1995) that the two can be related by a few additions. For example, in 3-D, {\displaystyle X(k_{1},k_{2},k_{3})={\frac {1}{2}}[{\hat {X}}(k_{1},k_{2},-k_{3})+{\hat {X}}(k_{1},-k_{2},k_{3})+{\hat {X}}(-k_{1},k_{2},k_{3})-{\hat {X}}(-k_{1},-k_{2},-k_{3})].} For {\displaystyle {\hat {X}}} , row-column algorithms can then be implemented. This technique is commonly used due to the simplicity of such R-C algorithms, but they are not optimized for general M-D spaces. Other fast algorithms have been developed, such as radix-2, radix-4, and split radix. For example, Boussakta (2000) developed the 3-D vector radix,
{\displaystyle X(k_{1},k_{2},k_{3})=\sum _{n_{1}=0}^{N-1}\sum _{n_{2}=0}^{N-1}\sum _{n_{3}=0}^{N-1}x(n_{1},n_{2},n_{3}){\rm {cas}}({\frac {2\pi }{N}}(n_{1}k_{1}+n_{2}k_{2}+n_{3}k_{3}))} {\displaystyle =\sum _{n_{1}:even}\sum _{n_{2}:even}\sum _{n_{3}:even}+\sum _{n_{1}:even}\sum _{n_{2}:even}\sum _{n_{3}:odd}+\sum _{n_{1}:even}\sum _{n_{2}:odd}\sum _{n_{3}:even}} {\displaystyle +\sum _{n_{1}:even}\sum _{n_{2}:odd}\sum _{n_{3}:odd}+\sum _{n_{1}:odd}\sum _{n_{2}:even}\sum _{n_{3}:even}+\sum _{n_{1}:odd}\sum _{n_{2}:even}\sum _{n_{3}:odd}} {\displaystyle +\sum _{n_{1}:odd}\sum _{n_{2}:odd}\sum _{n_{3}:even}+\sum _{n_{1}:odd}\sum _{n_{2}:odd}\sum _{n_{3}:odd}.} It was also presented in Boussakta (2000) that this 3D-vector radix algorithm takes {\displaystyle ({\frac {7}{4}})N^{3}\log _{2}N} multiplications and {\displaystyle ({\frac {31}{8}})N^{3}\log _{2}N} additions compared to {\displaystyle 3N^{3}\log _{2}N} multiplications and {\displaystyle ({\frac {9}{2}})N^{3}\log _{2}N+3N^{2}} additions from the row-column approach. The drawback is that the implementation of these radix-type algorithms is hard to generalize for signals of arbitrary dimensions. Number-theoretic transforms have also been used for solving the MD-DHT, since they perform extremely fast convolutions. In Boussakta (1988), it was shown how to decompose the MD-DHT into a form consisting of convolutions. For the 2-D case (the 3-D case is also covered in the stated reference), {\displaystyle X(k,l)=\sum _{n=0}^{N-1}\sum _{m=0}^{M-1}x(n,m){\rm {cas}}({\frac {2\pi nk}{N}}+{\frac {2\pi ml}{M}}),\;} {\displaystyle k=0,1,\ldots ,N-1} , {\displaystyle l=0,1,\ldots ,M-1} can be decomposed into 1-D and 2-D circular convolutions as follows, {\displaystyle X(k,l)={\begin{cases}X_{1}(k,0)\\X_{2}(0,l)\\X_{3}(k,l)\end{cases}}} where {\displaystyle X_{1}(k,0)=\sum _{n=0}^{N-1}(\sum _{m=0}^{M-1}x(n,m)){\rm {cas}}({\frac {2\pi nk}{N}}),\;} {\displaystyle k=0,1,\ldots ,N-1} {\displaystyle X_{2}(0,l)=\sum _{m=0}^{M-1}(\sum _{n=0}^{N-1}x(n,m)){\rm {cas}}({\frac {2\pi ml}{M}}),\;} {\displaystyle l=1,2,\dots ,M-1} {\displaystyle X_{3}(k,l)=\sum _{n=0}^{N-1}\sum _{m=0}^{M-1}x(n,m){\rm {cas}}({\frac {2\pi nk}{N}}+{\frac {2\pi ml}{M}})\;,} {\displaystyle k=1,2,\ldots ,N-1}
{\displaystyle l=1,2,\ldots ,M-1.} Developing X 3 {\displaystyle X_{3}} further, X 3 ( k , l ) = ∑ n = 0 N − 1 x ( n , 0 ) c a s ( 2 π n k N ) + ∑ m = 1 M − 1 x ( 0 , m ) c a s ( 2 π m l M ) {\displaystyle X_{3}(k,l)=\sum _{n=0}^{N-1}x(n,0){\rm {cas}}({\frac {2\pi nk}{N}})+\sum _{m=1}^{M-1}x(0,m){\rm {cas}}({\frac {2\pi ml}{M}})} + ∑ n = 1 N − 1 ∑ m = 1 M − 1 x ( n , m ) c a s ( 2 π n k N + 2 π m l M ) . {\displaystyle +\sum _{n=1}^{N-1}\sum _{m=1}^{M-1}x(n,m){\rm {cas}}({\frac {2\pi nk}{N}}+{\frac {2\pi ml}{M}}).} At this point we present the Fermat number transform (FNT). The tth Fermat number is given by F t = 2 b + 1 {\displaystyle F_{t}=2^{b}+1} , with b = 2 t {\displaystyle b=2^{t}} . The well known Fermat numbers are for t = 0 , 1 , 2 , 3 , 4 , 5 , 6 {\displaystyle t=0,1,2,3,4,5,6} ( F t {\displaystyle F_{t}} is prime for 0 ≤ t ≤ 4 {\displaystyle 0\leq t\leq 4} ), (Boussakta, 1988). The Fermat number transform is given by X ( k , l ) = ∑ n = 0 N − 1 ∑ m = 0 M − 1 x ( n , m ) α 1 n k α 2 m l mod F t {\displaystyle X(k,l)=\sum _{n=0}^{N-1}\sum _{m=0}^{M-1}x(n,m)\alpha _{1}^{nk}\alpha _{2}^{ml}\mod F_{t}} with k = 0 , … , N − 1 , l = 0 , … , M − 1 {\displaystyle k=0,\ldots ,N-1,l=0,\ldots ,M-1} . α 1 {\displaystyle \alpha _{1}} and α 2 {\displaystyle \alpha _{2}} are roots of unity of order N {\displaystyle N} and M {\displaystyle M} respectively ( α 1 N = α 2 M = 1 mod F t ) {\displaystyle (\alpha _{1}^{N}=\alpha _{2}^{M}=1\mod F_{t})} . Going back to the decomposition, the last term for X 3 ( k , l ) {\displaystyle X_{3}(k,l)} will be denoted as X 4 ( k , l ) {\displaystyle X_{4}(k,l)} , then X 4 ( k , l ) = ∑ n = 1 N − 1 ∑ m = 1 M − 1 x ( n , m ) c a s ( 2 π n k N + 2 π m l M ) , {\displaystyle X_{4}(k,l)=\sum _{n=1}^{N-1}\sum _{m=1}^{M-1}x(n,m){\rm {cas}}({\frac {2\pi nk}{N}}+{\frac {2\pi ml}{M}}),} k = 1 , 2 , … , N − 1 {\displaystyle k=1,2,\ldots ,N-1} l = 1 , 2 , … , M − 1. {\displaystyle l=1,2,\ldots ,M-1.} If g 1 {\displaystyle g_{1}} and g 2 {\displaystyle g_{2}} are primitive roots of N {\displaystyle N} and M {\displaystyle M} (which are guaranteed to exist if M {\displaystyle M} and N {\displaystyle N} are prime) then g 1 {\displaystyle g_{1}} and g 2 {\displaystyle g_{2}} map ( n , m ) {\displaystyle (n,m)} to ( g 1 n mod N , g 2 m mod M ) . {\displaystyle (g_{1}^{n}\mod N,g_{2}^{m}\mod M).} So, mapping n , m , k {\displaystyle n,m,k} and l {\displaystyle l} to g 1 − n , g 2 − m , g 1 k {\displaystyle g_{1}^{-n},g_{2}^{-m},g_{1}^{k}} and g 2 l {\displaystyle g_{2}^{l}} , one gets the following, X 4 ( g 1 k , g 2 l ) = ∑ n = 0 N − 2 ∑ m = 0 M − 2 x ( g 1 − n , g 2 − m ) c a s ( 2 π g 1 ( − n + k ) N + 2 π g 2 ( − m + l ) M ) , {\displaystyle X_{4}(g_{1}^{k},g_{2}^{l})=\sum _{n=0}^{N-2}\sum _{m=0}^{M-2}x(g_{1}^{-n},g_{2}^{-m}){\rm {cas}}({\frac {2\pi g_{1}^{(-n+k)}}{N}}+{\frac {2\pi g_{2}^{(-m+l)}}{M}}),} k = 0 , 1 , … , N − 2 {\displaystyle k=0,1,\ldots ,N-2} l = 0 , 1 , … , M − 2 {\displaystyle l=0,1,\ldots ,M-2} . Which is now a circular convolution. 
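This can be verified numerically. Below is a minimal numpy sketch (not from the cited papers; the sizes N = 5, M = 7 and the primitive roots are chosen only for illustration, and ordinary complex FFTs stand in for the Fermat number transforms used in the cited method) confirming that the reindexed X4 is an (N−1) × (M−1) circular convolution:

import numpy as np

def cas(t):
    return np.cos(t) + np.sin(t)        # cas(x) = cos(x) + sin(x)

N, M = 5, 7                             # prime transform lengths
g1, g2 = 2, 3                           # primitive roots mod N and mod M
x = np.random.default_rng(0).standard_normal((N, M))

# Direct evaluation of X4(k, l) = sum_{n>=1, m>=1} x(n, m) cas(2*pi*n*k/N + 2*pi*m*l/M)
def X4(k, l):
    n = np.arange(1, N)[:, None]
    m = np.arange(1, M)[None, :]
    return np.sum(x[1:, 1:] * cas(2*np.pi*n*k/N + 2*np.pi*m*l/M))

# Reindexed data y and kernel h on an (N-1) x (M-1) grid
y = np.array([[x[pow(g1, -n, N), pow(g2, -m, M)] for m in range(M - 1)]
              for n in range(N - 1)])
h = np.array([[cas(2*np.pi*pow(g1, n, N)/N + 2*np.pi*pow(g2, m, M)/M)
               for m in range(M - 1)] for n in range(N - 1)])

# (N-1) x (M-1) circular convolution, here via complex FFTs; in the cited
# method an FNT plays this role using only integer arithmetic.
Y = np.fft.ifft2(np.fft.fft2(y) * np.fft.fft2(h)).real

# Check Y(k, l) == X4(g1^k mod N, g2^l mod M) for all k, l
for k in range(N - 1):
    for l in range(M - 1):
        assert np.isclose(Y[k, l], X4(pow(g1, k, N), pow(g2, l, M)))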
With Y ( k , l ) = X 4 ( g 1 k , g 2 l ) {\displaystyle Y(k,l)=X_{4}(g_{1}^{k},g_{2}^{l})} , y ( n , m ) = x ( g 1 − n , g 2 − m ) {\displaystyle y(n,m)=x(g_{1}^{-n},g_{2}^{-m})} , and h ( n , m ) = c a s ( 2 π g 1 n N + 2 π g 2 m M ) {\displaystyle h(n,m)={\rm {cas}}({\frac {2\pi g_{1}^{n}}{N}}+{\frac {2\pi g_{2}^{m}}{M}})} , one has Y ( k , l ) = ∑ n = 0 N − 2 ∑ m = 0 M − 2 y ( n , m ) h ( < k − n > N − 1 , < l − m > M − 1 ) {\displaystyle Y(k,l)=\sum _{n=0}^{N-2}\sum _{m=0}^{M-2}y(n,m)h(<k-n>_{N-1},<l-m>_{M-1})} Y ( k , l ) = F N T − 1 { F N T [ y ( n , m ) ] ⊗ F N T [ h ( n , m ) ] } {\displaystyle Y(k,l)=FNT^{-1}\{FNT[y(n,m)]\otimes FNT[h(n,m)]\}} where ⊗ {\displaystyle \otimes } denotes term-by-term multiplication. It was also stated in Boussakta (1988) that this algorithm reduces the number of multiplications by a factor of 8–20 over other DHT algorithms at the cost of a slight increase in the number of shift and add operations, which are assumed to be simpler than multiplications. The drawback of this algorithm is the constraint that each dimension of the transform has a primitive root. == References == == Further reading == Bracewell, Ronald N. (1986). The Hartley Transform (1 ed.). Oxford University Press. ISBN 978-0-19503969-6. Boussakta, Said; Holt, Alan G. J. (1988). "Fast Multidimensional Discrete Hartley Transform using Fermat Number Transform". IEE Proceedings G - Electronic Circuits and Systems. 135 (6): 235–237. doi:10.1049/ip-g-1.1988.0036. Hong, Jonathan; Vetterli, Martin; Duhamel, Pierre (1994). "Basefield transforms with the convolution property" (PDF). Proceedings of the IEEE. 82 (3): 400–412. doi:10.1109/5.272145. O'Neill, Mark A. (1988). "Faster than Fast Fourier". BYTE. 13 (4): 293–300. Olejniczak, Kraig J.; Heydt, Gerald T. (March 1994). "Scanning the Special Section on the Hartley transform". Proceedings of the IEEE. 82: 372–380. (NB. Contains extensive bibliography.)
Wikipedia/Discrete_Hartley_transform
In signal processing, overlap–save is the traditional name for an efficient way to evaluate the discrete convolution between a very long signal x [ n ] {\displaystyle x[n]} and a finite impulse response (FIR) filter h [ n ] {\displaystyle h[n]} : y [ n ] = x [ n ] ∗ h [ n ] ≜ ∑ m = 1 M h [ m ] ⋅ x [ n − m ] , {\displaystyle y[n]=x[n]*h[n]\ \triangleq \ \sum _{m=1}^{M}h[m]\cdot x[n-m],} (Eq.1) where h[m] = 0 for m outside the region [1, M]. This article uses common abstract notations, such as y ( t ) = x ( t ) ∗ h ( t ) , {\textstyle y(t)=x(t)*h(t),} or y ( t ) = H { x ( t ) } , {\textstyle y(t)={\mathcal {H}}\{x(t)\},} in which it is understood that the functions should be thought of in their totality, rather than at specific instants t {\textstyle t} (see Convolution#Notation). The concept is to compute short segments of y[n] of an arbitrary length L, and concatenate the segments together. That requires longer input segments that overlap the next input segment. The overlapped data gets "saved" and used a second time. First we describe that process with just conventional convolution for each output segment. Then we describe how to replace that convolution with a more efficient method. Consider a segment that begins at n = kL + M, for any integer k, and define: x k [ n ] ≜ { x [ n + k L ] , 1 ≤ n ≤ L + M − 1 0 , otherwise . {\displaystyle x_{k}[n]\ \triangleq {\begin{cases}x[n+kL],&1\leq n\leq L+M-1\\0,&{\textrm {otherwise}}.\end{cases}}} y k [ n ] ≜ x k [ n ] ∗ h [ n ] = ∑ m = 1 M h [ m ] ⋅ x k [ n − m ] . {\displaystyle y_{k}[n]\ \triangleq \ x_{k}[n]*h[n]=\sum _{m=1}^{M}h[m]\cdot x_{k}[n-m].} Then, for k L + M + 1 ≤ n ≤ k L + L + M {\displaystyle kL+M+1\leq n\leq kL+L+M} , and equivalently M + 1 ≤ n − k L ≤ L + M {\displaystyle M+1\leq n-kL\leq L+M} , we can write: y [ n ] = ∑ m = 1 M h [ m ] ⋅ x k [ n − k L − m ] ≜ y k [ n − k L ] . {\displaystyle y[n]=\sum _{m=1}^{M}h[m]\cdot x_{k}[n-kL-m]\ \ \triangleq \ \ y_{k}[n-kL].} With the substitution j = n − k L {\displaystyle j=n-kL} , the task is reduced to computing y k [ j ] {\displaystyle y_{k}[j]} for M + 1 ≤ j ≤ L + M {\displaystyle M+1\leq j\leq L+M} . These steps are illustrated in the first 3 traces of Figure 1, except that the desired portion of the output (third trace) corresponds to 1 ≤ j ≤ L. If we periodically extend xk[n] with period N ≥ L + M − 1, according to: x k , N [ n ] ≜ ∑ ℓ = − ∞ ∞ x k [ n − ℓ N ] , {\displaystyle x_{k,N}[n]\ \triangleq \ \sum _{\ell =-\infty }^{\infty }x_{k}[n-\ell N],} the convolutions ( x k , N ) ∗ h {\displaystyle (x_{k,N})*h\,} and x k ∗ h {\displaystyle x_{k}*h\,} are equivalent in the region M + 1 ≤ n ≤ L + M {\displaystyle M+1\leq n\leq L+M} . It is therefore sufficient to compute the N-point circular (or cyclic) convolution of x k [ n ] {\displaystyle x_{k}[n]\,} with h [ n ] {\displaystyle h[n]\,} in the region [1, N]. The subregion [M + 1, L + M] is appended to the output stream, and the other values are discarded. The advantage is that the circular convolution can be computed more efficiently than linear convolution, according to the circular convolution theorem: y k [ n ] = I D F T N ( D F T N ( x k [ n ] ) ⋅ D F T N ( h [ n ] ) ) , {\displaystyle y_{k}[n]={\rm {IDFT}}_{N}\left({\rm {DFT}}_{N}(x_{k}[n])\cdot {\rm {DFT}}_{N}(h[n])\right),} (Eq.2) where: DFTN and IDFTN refer to the Discrete Fourier transform and its inverse, evaluated over N discrete points, and L is customarily chosen such that N = L+M-1 is an integer power-of-2, and the transforms are implemented with the FFT algorithm, for efficiency. In the circular convolution, the leading and trailing edge-effects wrap around and overlap; the samples affected by this aliasing are the ones subsequently discarded.
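As a concrete check of Eq.1 and Eq.2, here is a short numpy sketch (an illustrative implementation, not a tuned one: it uses 0-based indexing, a fixed FFT size N rather than an optimized one, and the function name and defaults are ours):

import numpy as np

def overlap_save(x, h, N=256):
    # Linear convolution of x with h by the overlap-save method.
    # 0-based indexing; the formulas above are written 1-based.
    M = len(h)
    L = N - (M - 1)                        # new output samples per block
    H = np.fft.fft(h, N)                   # DFT of the zero-padded filter (Eq.2)
    n_out = len(x) + M - 1                 # length of the full linear convolution
    total = -(-n_out // L) * L             # round up to a whole number of blocks
    # Prepend M-1 zeros (the "saved" history for the first block) and pad the tail.
    xp = np.concatenate([np.zeros(M - 1), x, np.zeros(total - len(x))])
    y = np.empty(total)
    for pos in range(0, total, L):
        block = xp[pos : pos + N]                      # overlaps previous block by M-1
        yk = np.fft.ifft(np.fft.fft(block) * H).real   # N-point circular convolution
        y[pos : pos + L] = yk[M - 1 :]                 # keep L valid samples, discard M-1
    return y[:n_out]

rng = np.random.default_rng(1)
x = rng.standard_normal(1000)
h = rng.standard_normal(33)
assert np.allclose(overlap_save(x, h), np.convolve(x, h))   # matches direct Eq.1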
== Pseudocode ==

(Overlap-save algorithm for linear convolution)
h = FIR_impulse_response
M = length(h)
overlap = M − 1
N = 8 × overlap    (see next section for a better choice)
step_size = N − overlap
H = DFT(h, N)
position = 0
while position + N ≤ length(x)
    yt = IDFT(DFT(x(position+(1:N))) × H)
    y(position+(1:step_size)) = yt(M : N)    (discard M−1 y-values)
    position = position + step_size
end

== Efficiency considerations == When the DFT and IDFT are implemented by the FFT algorithm, the pseudocode above requires about N (log2(N) + 1) complex multiplications for the FFT, product of arrays, and IFFT. Each iteration produces N−M+1 output samples, so the number of complex multiplications per output sample is about: N ( log 2 ⁡ ( N ) + 1 ) N − M + 1 . {\displaystyle {\frac {N(\log _{2}(N)+1)}{N-M+1}}.} (Eq.3) For example, when M = 201 {\displaystyle M=201} and N = 1024 , {\displaystyle N=1024,} Eq.3 equals 13.67 , {\displaystyle 13.67,} whereas direct evaluation of Eq.1 would require up to 201 {\displaystyle 201} complex multiplications per output sample, the worst case being when both x {\displaystyle x} and h {\displaystyle h} are complex-valued. Also note that for any given M , {\displaystyle M,} Eq.3 has a minimum with respect to N . {\displaystyle N.} Figure 2 is a graph of the values of N {\displaystyle N} that minimize Eq.3 for a range of filter lengths ( M {\displaystyle M} ). Instead of Eq.1, we can also consider applying Eq.2 to a long sequence of length N x {\displaystyle N_{x}} samples. The total number of complex multiplications would be: N x ⋅ ( log 2 ⁡ ( N x ) + 1 ) . {\displaystyle N_{x}\cdot (\log _{2}(N_{x})+1).} Comparatively, the number of complex multiplications required by the pseudocode algorithm is: N x ⋅ ( log 2 ⁡ ( N ) + 1 ) ⋅ N N − M + 1 . {\displaystyle N_{x}\cdot (\log _{2}(N)+1)\cdot {\frac {N}{N-M+1}}.} Hence the cost of the overlap–save method scales almost as O ( N x log 2 ⁡ N ) {\displaystyle O\left(N_{x}\log _{2}N\right)} while the cost of a single, large circular convolution is almost O ( N x log 2 ⁡ N x ) {\displaystyle O\left(N_{x}\log _{2}N_{x}\right)} . == Overlap–discard == Overlap–discard and Overlap–scrap are less commonly used labels for the same method described here. However, these labels are actually better (than overlap–save) to distinguish from overlap–add, because both methods "save", but only one discards. "Save" merely refers to the fact that M − 1 input (or output) samples from segment k are needed to process segment k + 1. === Extending overlap–save === The overlap–save algorithm can be extended to include other common operations of a system: additional IFFT channels can be processed more cheaply than the first by reusing the forward FFT; sampling rates can be changed by using different sized forward and inverse FFTs; frequency translation (mixing) can be accomplished by rearranging frequency bins. == See also == Overlap–add method Circular convolution#Example == Notes == == References == == External links == Dr. Deepa Kundur, Overlap Add and Overlap Save, University of Toronto
Wikipedia/Overlap–save_method
Rader's algorithm (1968), named for Charles M. Rader of MIT Lincoln Laboratory, is a fast Fourier transform (FFT) algorithm that computes the discrete Fourier transform (DFT) of prime sizes by re-expressing the DFT as a cyclic convolution (the other algorithm for FFTs of prime sizes, Bluestein's algorithm, also works by rewriting the DFT as a convolution). Since Rader's algorithm only depends upon the periodicity of the DFT kernel, it is directly applicable to any other transform (of prime order) with a similar property, such as a number-theoretic transform or the discrete Hartley transform. The algorithm can be modified to gain a factor of two savings for the case of DFTs of real data, using a slightly modified re-indexing/permutation to obtain two half-size cyclic convolutions of real data; an alternative adaptation for DFTs of real data uses the discrete Hartley transform. Winograd extended Rader's algorithm to include prime-power DFT sizes p m {\displaystyle p^{m}} , and today Rader's algorithm is sometimes described as a special case of Winograd's FFT algorithm, also called the multiplicative Fourier transform algorithm (Tolimieri et al., 1997), which applies to an even larger class of sizes. However, for composite sizes such as prime powers, the Cooley–Tukey FFT algorithm is much simpler and more practical to implement, so Rader's algorithm is typically only used for large-prime base cases of Cooley–Tukey's recursive decomposition of the DFT. == Algorithm == Begin with the definition of the discrete Fourier transform: X k = ∑ n = 0 N − 1 x n e − 2 π i N n k k = 0 , … , N − 1. {\displaystyle X_{k}=\sum _{n=0}^{N-1}x_{n}e^{-{\frac {2\pi i}{N}}nk}\qquad k=0,\dots ,N-1.} If N is a prime number, then the set of non-zero indices n ∈ { 1 , … , N − 1 } {\displaystyle n\in {}\{1,\dots ,N-1\}} forms a group under multiplication modulo N. One consequence of the number theory of such groups is that there exists a generator of the group (sometimes called a primitive root, which can be found by exhaustive search or slightly better algorithms). This generator is an integer g such that n = g q ( mod N ) {\displaystyle n=g^{q}{\pmod {N}}} for any non-zero index n and for a unique q ∈ { 0 , … , N − 2 } {\displaystyle q\in {}\{0,\dots ,N-2\}} (forming a bijection from q to non-zero n). Similarly, k = g − p ( mod N ) {\displaystyle k=g^{-p}{\pmod {N}}} for any non-zero index k and for a unique p ∈ { 0 , … , N − 2 } {\displaystyle p\in {}\{0,\dots ,N-2\}} , where the negative exponent denotes the multiplicative inverse of g p mod N {\displaystyle g^{p}\mod N} . That means that we can rewrite the DFT using these new indices p and q as: X 0 = ∑ n = 0 N − 1 x n , {\displaystyle X_{0}=\sum _{n=0}^{N-1}x_{n},} X g − p = x 0 + ∑ q = 0 N − 2 x g q e − 2 π i N g − ( p − q ) p = 0 , … , N − 2. {\displaystyle X_{g^{-p}}=x_{0}+\sum _{q=0}^{N-2}x_{g^{q}}e^{-{\frac {2\pi i}{N}}g^{-(p-q)}}\qquad p=0,\dots ,N-2.} (Recall that xn and Xk are implicitly periodic in N, and also that e 2 π i = 1 {\displaystyle e^{2\pi i}=1} (Euler's identity). Thus, all indices and exponents are taken modulo N as required by the group arithmetic.) The final summation, above, is precisely a cyclic convolution of the two sequences aq and bq (of length N–1, because q ∈ { 0 , … , N − 2 } {\displaystyle q\in {}\{0,\dots ,N-2\}} ) defined by: a q = x g q {\displaystyle a_{q}=x_{g^{q}}} b q = e − 2 π i N g − q . 
{\displaystyle b_{q}=e^{-{\frac {2\pi i}{N}}g^{-q}}.} === Evaluating the convolution === Since N–1 is composite, this convolution can be performed directly via the convolution theorem and more conventional FFT algorithms. However, that may not be efficient if N–1 itself has large prime factors, requiring recursive use of Rader's algorithm. Instead, one can compute a length-(N–1) cyclic convolution exactly by zero-padding it to a length of at least 2(N–1)–1, say to a power of two, which can then be evaluated in O(N log N) time without the recursive application of Rader's algorithm. This algorithm, then, requires O(N) additions plus O(N log N) time for the convolution. In practice, the O(N) additions can often be performed by absorbing the additions into the convolution: if the convolution is performed by a pair of FFTs, then the sum of xn is given by the DC (0th) output of the FFT of aq plus x0, and x0 can be added to all the outputs by adding it to the DC term of the convolution prior to the inverse FFT. Still, this algorithm requires intrinsically more operations than FFTs of nearby composite sizes, and typically takes 3–10 times as long in practice. If Rader's algorithm is performed by using FFTs of size N–1 to compute the convolution, rather than by zero padding as mentioned above, the efficiency depends strongly upon N and the number of times that Rader's algorithm must be applied recursively. The worst case would be if N–1 were 2N2 where N2 is prime, with N2–1 = 2N3 where N3 is prime, and so on. Such Nj are called Sophie Germain primes, and such a sequence of them is called a Cunningham chain of the first kind. However, the alternative of zero padding can always be employed if N–1 has a large prime factor. == References ==
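As an illustration of the zero-padding approach just described, the following is a minimal numpy sketch (the primitive-root search is exhaustive and the sizes are small; this is not an optimized implementation, and the function names are ours):

import numpy as np

def primitive_root(N):
    # Exhaustive search for a generator of the multiplicative group mod prime N
    for g in range(2, N):
        if len({pow(g, q, N) for q in range(N - 1)}) == N - 1:
            return g
    raise ValueError("N must be prime")

def rader_dft(x):
    # DFT of a prime-length sequence via one length-(N-1) cyclic convolution
    N = len(x)
    g = primitive_root(N)
    X = np.empty(N, dtype=complex)
    X[0] = x.sum()
    a = np.array([x[pow(g, q, N)] for q in range(N - 1)])                     # a_q = x_{g^q}
    b = np.exp(-2j * np.pi * np.array([pow(g, -q, N) for q in range(N - 1)]) / N)
    # Cyclic convolution of a and b by zero-padding to a power of two
    # of length >= 2(N-1) - 1, so the wrap-around is reproduced exactly.
    P = 1 << (2 * (N - 1) - 2).bit_length()
    conv = np.fft.ifft(np.fft.fft(a, P) * np.fft.fft(b, P))
    c = conv[: N - 1].copy()
    c[: N - 2] += conv[N - 1 : 2 * (N - 1) - 1]   # fold the linear tail back (cyclic wrap)
    for p in range(N - 1):
        X[pow(g, -p, N)] = x[0] + c[p]
    return X

x = np.random.default_rng(2).standard_normal(17)   # N = 17, prime
assert np.allclose(rader_dft(x), np.fft.fft(x))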
Wikipedia/Rader's_FFT_algorithm
In the mathematical field of representation theory, group representations describe abstract groups in terms of bijective linear transformations of a vector space to itself (i.e. vector space automorphisms); in particular, they can be used to represent group elements as invertible matrices so that the group operation can be represented by matrix multiplication. In chemistry, a group representation can relate mathematical group elements to symmetric rotations and reflections of molecules. Representations of groups allow many group-theoretic problems to be reduced to problems in linear algebra. In physics, they describe how the symmetry group of a physical system affects the solutions of equations describing that system. The term representation of a group is also used in a more general sense to mean any "description" of a group as a group of transformations of some mathematical object. More formally, a "representation" means a homomorphism from the group to the automorphism group of an object. If the object is a vector space we have a linear representation. Some people use realization for the general notion and reserve the term representation for the special case of linear representations. The bulk of this article describes linear representation theory; see the last section for generalizations. == Branches of group representation theory == The representation theory of groups divides into subtheories depending on the kind of group being represented. The various theories are quite different in detail, though some basic definitions and concepts are similar. The most important divisions are: Finite groups — Group representations are a very important tool in the study of finite groups. They also arise in the applications of finite group theory to crystallography and to geometry. If the field of scalars of the vector space has characteristic p, and if p divides the order of the group, then this is called modular representation theory; this special case has very different properties. See Representation theory of finite groups. Compact groups or locally compact groups — Many of the results of finite group representation theory are proved by averaging over the group. These proofs can be carried over to infinite groups by replacement of the average with an integral, provided that an acceptable notion of integral can be defined. This can be done for locally compact groups, using the Haar measure. The resulting theory is a central part of harmonic analysis. The Pontryagin duality describes the theory for commutative groups, as a generalised Fourier transform. See also: Peter–Weyl theorem. Lie groups — Many important Lie groups are compact, so the results of compact representation theory apply to them. Other techniques specific to Lie groups are used as well. Most of the groups important in physics and chemistry are Lie groups, and their representation theory is crucial to the application of group theory in those fields. See Representations of Lie groups and Representations of Lie algebras. Linear algebraic groups (or more generally affine group schemes) — These are the analogues of Lie groups, but over more general fields than just R or C. Although linear algebraic groups have a classification that is very similar to that of Lie groups, and give rise to the same families of Lie algebras, their representations are rather different (and much less well understood). 
The analytic techniques used for studying Lie groups must be replaced by techniques from algebraic geometry, where the relatively weak Zariski topology causes many technical complications. Non-compact topological groups — The class of non-compact groups is too broad to construct any general representation theory, but specific special cases have been studied, sometimes using ad hoc techniques. The semisimple Lie groups have a deep theory, building on the compact case. The complementary solvable Lie groups cannot be classified in the same way. The general theory for Lie groups deals with semidirect products of the two types, by means of general results called Mackey theory, which is a generalization of Wigner's classification methods. Representation theory also depends heavily on the type of vector space on which the group acts. One distinguishes between finite-dimensional representations and infinite-dimensional ones. In the infinite-dimensional case, additional structures are important (e.g. whether or not the space is a Hilbert space, Banach space, etc.). One must also consider the type of field over which the vector space is defined. The most important case is the field of complex numbers. The other important cases are the field of real numbers, finite fields, and fields of p-adic numbers. In general, algebraically closed fields are easier to handle than non-algebraically closed ones. The characteristic of the field is also significant; many theorems for finite groups depend on the characteristic of the field not dividing the order of the group. == Definitions == A representation of a group G on a vector space V over a field K is a group homomorphism from G to GL(V), the general linear group on V. That is, a representation is a map ρ : G → G L ( V ) {\displaystyle \rho \colon G\to \mathrm {GL} \left(V\right)} such that ρ ( g 1 g 2 ) = ρ ( g 1 ) ρ ( g 2 ) , for all g 1 , g 2 ∈ G . {\displaystyle \rho (g_{1}g_{2})=\rho (g_{1})\rho (g_{2}),\qquad {\text{for all }}g_{1},g_{2}\in G.} Here V is called the representation space and the dimension of V is called the dimension or degree of the representation. It is common practice to refer to V itself as the representation when the homomorphism is clear from the context. In the case where V is of finite dimension n it is common to choose a basis for V and identify GL(V) with GL(n, K), the group of n × n {\displaystyle n\times n} invertible matrices on the field K. If G is a topological group and V is a topological vector space, a continuous representation of G on V is a representation ρ such that the application Φ : G × V → V defined by Φ(g, v) = ρ(g)(v) is continuous. The kernel of a representation ρ of a group G is defined as the normal subgroup of G whose image under ρ is the identity transformation: ker ⁡ ρ = { g ∈ G ∣ ρ ( g ) = i d } . {\displaystyle \ker \rho =\left\{g\in G\mid \rho (g)=\mathrm {id} \right\}.} A faithful representation is one in which the homomorphism G → GL(V) is injective; in other words, one whose kernel is the trivial subgroup {e} consisting only of the group's identity element. Given two K vector spaces V and W, two representations ρ : G → GL(V) and π : G → GL(W) are said to be equivalent or isomorphic if there exists a vector space isomorphism α : V → W so that for all g in G, α ∘ ρ ( g ) ∘ α − 1 = π ( g ) . {\displaystyle \alpha \circ \rho (g)\circ \alpha ^{-1}=\pi (g).} == Examples == Consider the complex number u = e2πi / 3 which has the property u3 = 1. 
The set C3 = {1, u, u2} forms a cyclic group under multiplication. This group has a representation ρ on C 2 {\displaystyle \mathbb {C} ^{2}} given by: ρ ( 1 ) = [ 1 0 0 1 ] ρ ( u ) = [ 1 0 0 u ] ρ ( u 2 ) = [ 1 0 0 u 2 ] . {\displaystyle \rho \left(1\right)={\begin{bmatrix}1&0\\0&1\\\end{bmatrix}}\qquad \rho \left(u\right)={\begin{bmatrix}1&0\\0&u\\\end{bmatrix}}\qquad \rho \left(u^{2}\right)={\begin{bmatrix}1&0\\0&u^{2}\\\end{bmatrix}}.} This representation is faithful because ρ is a one-to-one map. Another representation for C3 on C 2 {\displaystyle \mathbb {C} ^{2}} , isomorphic to the previous one, is σ given by: σ ( 1 ) = [ 1 0 0 1 ] σ ( u ) = [ u 0 0 1 ] σ ( u 2 ) = [ u 2 0 0 1 ] . {\displaystyle \sigma \left(1\right)={\begin{bmatrix}1&0\\0&1\\\end{bmatrix}}\qquad \sigma \left(u\right)={\begin{bmatrix}u&0\\0&1\\\end{bmatrix}}\qquad \sigma \left(u^{2}\right)={\begin{bmatrix}u^{2}&0\\0&1\\\end{bmatrix}}.} The group C3 may also be faithfully represented on R 2 {\displaystyle \mathbb {R} ^{2}} by τ given by: τ ( 1 ) = [ 1 0 0 1 ] τ ( u ) = [ a − b b a ] τ ( u 2 ) = [ a b − b a ] {\displaystyle \tau \left(1\right)={\begin{bmatrix}1&0\\0&1\\\end{bmatrix}}\qquad \tau \left(u\right)={\begin{bmatrix}a&-b\\b&a\\\end{bmatrix}}\qquad \tau \left(u^{2}\right)={\begin{bmatrix}a&b\\-b&a\\\end{bmatrix}}} where a = Re ( u ) = − 1 2 , b = Im ( u ) = 3 2 . {\displaystyle a={\text{Re}}(u)=-{\tfrac {1}{2}},\qquad b={\text{Im}}(u)={\tfrac {\sqrt {3}}{2}}.} A possible representation on R 3 {\displaystyle \mathbb {R} ^{3}} is given by the set of cyclic permutation matrices v: υ ( 1 ) = [ 1 0 0 0 1 0 0 0 1 ] υ ( u ) = [ 0 1 0 0 0 1 1 0 0 ] υ ( u 2 ) = [ 0 0 1 1 0 0 0 1 0 ] . {\displaystyle \upsilon \left(1\right)={\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\\\end{bmatrix}}\qquad \upsilon \left(u\right)={\begin{bmatrix}0&1&0\\0&0&1\\1&0&0\\\end{bmatrix}}\qquad \upsilon \left(u^{2}\right)={\begin{bmatrix}0&0&1\\1&0&0\\0&1&0\\\end{bmatrix}}.} Another example: Let V {\displaystyle V} be the space of homogeneous degree-3 polynomials over the complex numbers in variables x 1 , x 2 , x 3 . {\displaystyle x_{1},x_{2},x_{3}.} Then S 3 {\displaystyle S_{3}} acts on V {\displaystyle V} by permutation of the three variables. For instance, ( 12 ) {\displaystyle (12)} sends x 1 3 {\displaystyle x_{1}^{3}} to x 2 3 {\displaystyle x_{2}^{3}} . == Reducibility == A subspace W of V that is invariant under the group action is called a subrepresentation. If V has exactly two subrepresentations, namely the zero-dimensional subspace and V itself, then the representation is said to be irreducible; if it has a proper subrepresentation of nonzero dimension, the representation is said to be reducible. The representation of dimension zero is considered to be neither reducible nor irreducible, just as the number 1 is considered to be neither composite nor prime. Under the assumption that the characteristic of the field K does not divide the size of the group, representations of finite groups can be decomposed into a direct sum of irreducible subrepresentations (see Maschke's theorem). This holds in particular for any representation of a finite group over the complex numbers, since the characteristic of the complex numbers is zero, which never divides the size of a group. In the example above, the first two representations given (ρ and σ) are both decomposable into two 1-dimensional subrepresentations (given by span{(1,0)} and span{(0,1)}), while the third representation (τ) is irreducible. 
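The stated properties of these representations can be checked numerically. The following is a small numpy sketch (the matrices are exactly those given above; the list index k stands for the group element u^k):

import numpy as np

u = np.exp(2j * np.pi / 3)                      # u^3 = 1

rho = [np.diag([1, u**0]), np.diag([1, u]), np.diag([1, u**2])]
a, b = u.real, u.imag                           # a = -1/2, b = sqrt(3)/2
tau = [np.eye(2),
       np.array([[a, -b], [b, a]]),             # rotation by 120 degrees
       np.array([[a, b], [-b, a]])]             # rotation by 240 degrees
ups = [np.eye(3), np.roll(np.eye(3), 1, axis=1), np.roll(np.eye(3), 2, axis=1)]

# Homomorphism property: R(u^j) R(u^k) = R(u^(j+k mod 3)) for each family
for R in (rho, tau, ups):
    for j in range(3):
        for k in range(3):
            assert np.allclose(R[j] @ R[k], R[(j + k) % 3])

# rho is reducible: span{(1,0)} is an invariant subspace
e0 = np.array([1, 0])
assert all(np.allclose(M @ e0, e0) for M in rho)

# tau is irreducible over R: a 120-degree rotation fixes no real line,
# i.e. tau(u) has no real eigenvector
assert not np.any(np.isclose(np.linalg.eigvals(tau[1]).imag, 0))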
== Generalizations == === Set-theoretical representations === A set-theoretic representation (also known as a group action or permutation representation) of a group G on a set X is given by a function ρ : G → XX, the set of functions from X to X, such that for all g1, g2 in G and all x in X: ρ ( 1 ) [ x ] = x {\displaystyle \rho (1)[x]=x} ρ ( g 1 g 2 ) [ x ] = ρ ( g 1 ) [ ρ ( g 2 ) [ x ] ] , {\displaystyle \rho (g_{1}g_{2})[x]=\rho (g_{1})[\rho (g_{2})[x]],} where 1 {\displaystyle 1} is the identity element of G. This condition and the axioms for a group imply that ρ(g) is a bijection (or permutation) for all g in G. Thus we may equivalently define a permutation representation to be a group homomorphism from G to the symmetric group SX of X. For more information on this topic see the article on group action. === Representations in other categories === Every group G can be viewed as a category with a single object; morphisms in this category are just the elements of G. Given an arbitrary category C, a representation of G in C is a functor from G to C. Such a functor selects an object X in C and a group homomorphism from G to Aut(X), the automorphism group of X. In the case where C is VectK, the category of vector spaces over a field K, this definition is equivalent to a linear representation. Likewise, a set-theoretic representation is just a representation of G in the category of sets. When C is Ab, the category of abelian groups, the objects obtained are called G-modules. For another example consider the category of topological spaces, Top. Representations in Top are homomorphisms from G to the homeomorphism group of a topological space X. Two types of representations closely related to linear representations are: projective representations: in the category of projective spaces. These can be described as "linear representations up to scalar transformations". affine representations: in the category of affine spaces. For example, the Euclidean group acts affinely upon Euclidean space. == See also == Irreducible representations Character table Character theory Molecular symmetry List of harmonic analysis topics List of representation theory topics Representation theory of finite groups Semisimple representation == Notes == == References == Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103.. Introduction to representation theory with emphasis on Lie groups. Yurii I. Lyubich. Introduction to the Theory of Banach Representations of Groups. Translated from the 1985 Russian-language edition (Kharkov, Ukraine). Birkhäuser Verlag. 1988.
Wikipedia/Group_representation_theory
In mathematics, approximation theory is concerned with how functions can best be approximated with simpler functions, and with quantitatively characterizing the errors introduced thereby. What is meant by best and simpler will depend on the application. A closely related topic is the approximation of functions by generalized Fourier series, that is, approximations based upon summation of a series of terms based upon orthogonal polynomials. One problem of particular interest is that of approximating a function in a computer mathematical library, using operations that can be performed on the computer or calculator (e.g. addition and multiplication), such that the result is as close to the actual function as possible. This is typically done with polynomial or rational (ratio of polynomials) approximations. The objective is to make the approximation as close as possible to the actual function, typically with an accuracy close to that of the underlying computer's floating point arithmetic. This is accomplished by using a polynomial of high degree, and/or narrowing the domain over which the polynomial has to approximate the function. Narrowing the domain can often be done through the use of various addition or scaling formulas for the function being approximated. Modern mathematical libraries often reduce the domain into many tiny segments and use a low-degree polynomial for each segment. == Optimal polynomials == Once the domain (typically an interval) and degree of the polynomial are chosen, the polynomial itself is chosen in such a way as to minimize the worst-case error. That is, the goal is to minimize the maximum value of ∣ P ( x ) − f ( x ) ∣ {\displaystyle \mid P(x)-f(x)\mid } , where P(x) is the approximating polynomial, f(x) is the actual function, and x varies over the chosen interval. For well-behaved functions, there exists an Nth-degree polynomial that will lead to an error curve that oscillates back and forth between + ε {\displaystyle +\varepsilon } and − ε {\displaystyle -\varepsilon } a total of N+2 times, giving a worst-case error of ε {\displaystyle \varepsilon } . It is seen that there exists an Nth-degree polynomial that can interpolate N+1 points in a curve. That such a polynomial is always optimal is asserted by the equioscillation theorem. It is possible to make contrived functions f(x) for which no such polynomial exists, but these occur rarely in practice. For example, the graphs shown to the right show the error in approximating log(x) and exp(x) for N = 4. The red curves, for the optimal polynomial, are level, that is, they oscillate between + ε {\displaystyle +\varepsilon } and − ε {\displaystyle -\varepsilon } exactly. In each case, the number of extrema is N+2, that is, 6. Two of the extrema are at the end points of the interval, at the left and right edges of the graphs. To prove this is true in general, suppose P is a polynomial of degree N having the property described, that is, it gives rise to an error function that has N + 2 extrema, of alternating signs and equal magnitudes. The red graph to the right shows what this error function might look like for N = 4. Suppose Q(x) (whose error function is shown in blue to the right) is another N-degree polynomial that is a better approximation to f than P. In particular, Q is closer to f than P for each value xi where an extreme of P−f occurs, so | Q ( x i ) − f ( x i ) | < | P ( x i ) − f ( x i ) | . 
{\displaystyle |Q(x_{i})-f(x_{i})|<|P(x_{i})-f(x_{i})|.} When a maximum of P−f occurs at xi, then Q ( x i ) − f ( x i ) ≤ | Q ( x i ) − f ( x i ) | < | P ( x i ) − f ( x i ) | = P ( x i ) − f ( x i ) , {\displaystyle Q(x_{i})-f(x_{i})\leq |Q(x_{i})-f(x_{i})|<|P(x_{i})-f(x_{i})|=P(x_{i})-f(x_{i}),} and when a minimum of P−f occurs at xi, then f ( x i ) − Q ( x i ) ≤ | Q ( x i ) − f ( x i ) | < | P ( x i ) − f ( x i ) | = f ( x i ) − P ( x i ) . {\displaystyle f(x_{i})-Q(x_{i})\leq |Q(x_{i})-f(x_{i})|<|P(x_{i})-f(x_{i})|=f(x_{i})-P(x_{i}).} So, as can be seen in the graph, [P(x) − f(x)] − [Q(x) − f(x)] must alternate in sign for the N + 2 values of xi. But [P(x) − f(x)] − [Q(x) − f(x)] reduces to P(x) − Q(x), which is a polynomial of degree at most N. This function changes sign at least N+1 times so, by the intermediate value theorem, it has at least N+1 zeroes, which is impossible for a nonzero polynomial of degree at most N. == Chebyshev approximation == One can obtain polynomials very close to the optimal one by expanding the given function in terms of Chebyshev polynomials and then cutting off the expansion at the desired degree. This is similar to the Fourier analysis of the function, using the Chebyshev polynomials instead of the usual trigonometric functions. If one calculates the coefficients in the Chebyshev expansion for a function: f ( x ) ∼ ∑ i = 0 ∞ c i T i ( x ) {\displaystyle f(x)\sim \sum _{i=0}^{\infty }c_{i}T_{i}(x)} and then cuts off the series after the T N {\displaystyle T_{N}} term, one gets an Nth-degree polynomial approximating f(x). The reason this polynomial is nearly optimal is that, for functions with rapidly converging power series, if the series is cut off after some term, the total error arising from the cutoff is close to the first term after the cutoff. That is, the first term after the cutoff dominates all later terms. The same is true if the expansion is in terms of Chebyshev polynomials. If a Chebyshev expansion is cut off after T N {\displaystyle T_{N}} , the error will take a form close to a multiple of T N + 1 {\displaystyle T_{N+1}} . The Chebyshev polynomials have the property that they are level – they oscillate between +1 and −1 in the interval [−1, 1]. T N + 1 {\displaystyle T_{N+1}} has N+2 level extrema. This means that the error between f(x) and its Chebyshev expansion out to T N {\displaystyle T_{N}} is close to a level function with N+2 extrema, so it is close to the optimal Nth-degree polynomial. In the graphs above, the blue error function is sometimes better than (inside of) the red function, but sometimes worse, meaning that it is not quite the optimal polynomial. The discrepancy is less serious for the exp function, which has an extremely rapidly converging power series, than for the log function. Chebyshev approximation is the basis for Clenshaw–Curtis quadrature, a numerical integration technique. == Remez's algorithm == The Remez algorithm (sometimes spelled Remes) is used to produce an optimal polynomial P(x) approximating a given function f(x) over a given interval. It is an iterative algorithm that converges to a polynomial that has an error function with N+2 level extrema. By the theorem above, that polynomial is optimal. Remez's algorithm uses the fact that one can construct an Nth-degree polynomial that leads to level and alternating error values, given N+2 test points. Given N+2 test points x 1 {\displaystyle x_{1}} , x 2 {\displaystyle x_{2}} , ...
x N + 2 {\displaystyle x_{N+2}} (where x 1 {\displaystyle x_{1}} and x N + 2 {\displaystyle x_{N+2}} are presumably the end points of the interval of approximation), these equations need to be solved: P ( x 1 ) − f ( x 1 ) = + ε P ( x 2 ) − f ( x 2 ) = − ε P ( x 3 ) − f ( x 3 ) = + ε ⋮ P ( x N + 2 ) − f ( x N + 2 ) = ± ε . {\displaystyle {\begin{aligned}P(x_{1})-f(x_{1})&=+\varepsilon \\P(x_{2})-f(x_{2})&=-\varepsilon \\P(x_{3})-f(x_{3})&=+\varepsilon \\&\ \ \vdots \\P(x_{N+2})-f(x_{N+2})&=\pm \varepsilon .\end{aligned}}} The right-hand sides alternate in sign. That is, P 0 + P 1 x 1 + P 2 x 1 2 + P 3 x 1 3 + ⋯ + P N x 1 N − f ( x 1 ) = + ε P 0 + P 1 x 2 + P 2 x 2 2 + P 3 x 2 3 + ⋯ + P N x 2 N − f ( x 2 ) = − ε ⋮ {\displaystyle {\begin{aligned}P_{0}+P_{1}x_{1}+P_{2}x_{1}^{2}+P_{3}x_{1}^{3}+\dots +P_{N}x_{1}^{N}-f(x_{1})&=+\varepsilon \\P_{0}+P_{1}x_{2}+P_{2}x_{2}^{2}+P_{3}x_{2}^{3}+\dots +P_{N}x_{2}^{N}-f(x_{2})&=-\varepsilon \\&\ \ \vdots \end{aligned}}} Since x 1 {\displaystyle x_{1}} , ..., x N + 2 {\displaystyle x_{N+2}} were given, all of their powers are known, and f ( x 1 ) {\displaystyle f(x_{1})} , ..., f ( x N + 2 ) {\displaystyle f(x_{N+2})} are also known. That means that the above equations are just N+2 linear equations in the N+2 variables P 0 {\displaystyle P_{0}} , P 1 {\displaystyle P_{1}} , ..., P N {\displaystyle P_{N}} , and ε {\displaystyle \varepsilon } . Given the test points x 1 {\displaystyle x_{1}} , ..., x N + 2 {\displaystyle x_{N+2}} , one can solve this system to get the polynomial P and the number ε {\displaystyle \varepsilon } . The graph below shows an example of this, producing a fourth-degree polynomial approximating e x {\displaystyle e^{x}} over [−1, 1]. The test points were set at −1, −0.7, −0.1, +0.4, +0.9, and 1. Those values are shown in green. The resultant value of ε {\displaystyle \varepsilon } is 4.43 × 10−4. The error graph does indeed take on the values ± ε {\displaystyle \pm \varepsilon } at the six test points, including the end points, but those points are not extrema. If the four interior test points had been extrema (that is, the function P(x) − f(x) had maxima or minima there), the polynomial would be optimal. The second step of Remez's algorithm consists of moving the test points to the approximate locations where the error function had its actual local maxima or minima. For example, one can tell from looking at the graph that the point at −0.1 should have been at about −0.28. The way to do this in the algorithm is to use a single round of Newton's method. Since one knows the first and second derivatives of P(x) − f(x), one can calculate approximately how far a test point has to be moved so that the derivative will be zero. Calculating the derivatives of a polynomial is straightforward. One must also be able to calculate the first and second derivatives of f(x). Remez's algorithm requires an ability to calculate f ( x ) {\displaystyle f(x)\,} , f ′ ( x ) {\displaystyle f'(x)\,} , and f ″ ( x ) {\displaystyle f''(x)\,} to extremely high precision. The entire algorithm must be carried out to higher precision than the desired precision of the result. After moving the test points, the linear equation part is repeated, getting a new polynomial, and Newton's method is used again to move the test points again. This sequence is continued until the result converges to the desired accuracy. The algorithm converges very rapidly.
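Before turning to the rate of convergence, here is a minimal numpy sketch of the two steps just described, using the example above (f = exp, N = 4, and the same six test points; the variable names are ours):

import numpy as np

f = np.exp
N = 4
t = np.array([-1.0, -0.7, -0.1, 0.4, 0.9, 1.0])   # the six test points used above

# Step 1: solve the N+2 linear equations P(t_i) - f(t_i) = (-1)^i * eps
# for the unknowns P_0, ..., P_N and eps.
signs = (-1.0) ** np.arange(len(t))
A = np.column_stack([t**j for j in range(N + 1)] + [-signs])
sol = np.linalg.solve(A, f(t))
P, eps = sol[:-1], sol[-1]
print(eps)        # the text reports eps = 4.43e-4 for these test points

# Step 2: one round of Newton's method moves each interior test point toward
# a true extremum of the error e(x) = P(x) - f(x), i.e. toward e'(x) = 0.
Pm = np.polynomial.Polynomial(P)
e1 = lambda x: Pm.deriv(1)(x) - f(x)   # e'(x); note f' = f'' = exp here
e2 = lambda x: Pm.deriv(2)(x) - f(x)   # e''(x)
t[1:-1] -= e1(t[1:-1]) / e2(t[1:-1])
print(t)          # the point at -0.1 moves toward about -0.28, as noted above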
Convergence is quadratic for well-behaved functions: if the test points are within 10 − 15 {\displaystyle 10^{-15}} of the correct result, they will be approximately within 10 − 30 {\displaystyle 10^{-30}} of the correct result after the next round. Remez's algorithm is typically started by choosing the extrema of the Chebyshev polynomial T N + 1 {\displaystyle T_{N+1}} as the initial points, since the final error function will be similar to that polynomial. == Main journals == Journal of Approximation Theory Constructive Approximation East Journal on Approximations == See also == == References == Achiezer (Akhiezer), N.I. (2013) [1956]. Theory of approximation. Translated by Hyman, C.J. Dover. ISBN 978-0-486-15313-1. OCLC 1067500225. Timan, A.F. (2014) [1963]. Theory of approximation of functions of a real variable. International Series in Pure and Applied Mathematics. Vol. 34. Elsevier. ISBN 978-1-4831-8481-4. Hastings, Jr., C. (2015) [1955]. Approximations for Digital Computers. Princeton University Press. ISBN 978-1-4008-7559-7. Hart, J.F.; Cheney, E.W.; Lawson, C.L.; Maehly, H.J.; Mesztenyi, C.K.; Rice, Jr., J.R.; Thacher, H.C.; Witzgall, C. (1968). Computer Approximations. Wiley. OCLC 0471356301. Fox, L.; Parker, I.B. (1968). Chebyshev Polynomials in Numerical Analysis. Oxford mathematical handbooks. Oxford University Press. ISBN 978-0-19-859614-1. OCLC 9036207. Press, W.H.; Teukolsky, S.A.; Vetterling, W.T.; Flannery, B.P. (2007). "§5.8 Chebyshev Approximation". Numerical Recipes: The Art of Scientific Computing (3rd ed.). Cambridge University Press. ISBN 978-0-521-88068-8. Cody, Jr., W.J.; Waite, W. (1980). Software Manual for the Elementary Functions. Prentice-Hall. ISBN 0-13-822064-6. OCLC 654695035. Remes (Remez), E. (1934). "Sur le calcul effectif des polynomes d'approximation de Tschebyschef". C. R. Acad. Sci. (in French). 199: 337–340. Steffens, K.-G. (2006). Anastassiou, George A. (ed.). The History of Approximation Theory: From Euler to Bernstein. Birkhauser. doi:10.1007/0-8176-4475-X. ISBN 0-8176-4353-2. Erdélyi, T. (2008). "Extensions of the Bloch-Pólya theorem on the number of distinct real zeros of polynomials". Journal de théorie des nombres de Bordeaux. 20: 281–7. doi:10.5802/jtnb.627. Erdélyi, T. (2009). "The Remez inequality for linear combinations of shifted Gaussians". Mathematical Proceedings of the Cambridge Philosophical Society. 146 (3): 523–530. doi:10.1017/S0305004108001849. Trefethen, L.N. (2020). Approximation theory and approximation practice. SIAM. ISBN 978-1-61197-594-9. Ch. 1–6 of 2013 edition == External links == History of Approximation Theory (HAT) Surveys in Approximation Theory (SAT)
Wikipedia/Chebyshev_approximation
The split-radix FFT is a fast Fourier transform (FFT) algorithm for computing the discrete Fourier transform (DFT), and was first described in an initially little-appreciated paper by R. Yavne (1968)[1] and subsequently rediscovered simultaneously by various authors in 1984. (The name "split radix" was coined by two of these reinventors, P. Duhamel and H. Hollmann.) In particular, split radix is a variant of the Cooley–Tukey FFT algorithm that uses a blend of radices 2 and 4: it recursively expresses a DFT of length N in terms of one smaller DFT of length N/2 and two smaller DFTs of length N/4. The split-radix FFT, along with its variations, long had the distinction of achieving the lowest published arithmetic operation count (total exact number of required real additions and multiplications) to compute a DFT of power-of-two sizes N. The arithmetic count of the original split-radix algorithm was improved upon in 2004 (with the initial gains made in unpublished work by J. Van Buskirk via hand optimization for N=64 [2] [3]), but it turns out that one can still achieve the new lowest count by a modification of split radix (Johnson and Frigo, 2007). Although the number of arithmetic operations is not the sole factor (or even necessarily the dominant factor) in determining the time required to compute a DFT on a computer, the question of the minimum possible count is of longstanding theoretical interest. (No tight lower bound on the operation count has currently been proven.) The split-radix algorithm can only be applied when N is a multiple of 4, but since it breaks a DFT into smaller DFTs it can be combined with any other FFT algorithm as desired. == Split-radix decomposition == Recall that the DFT is defined by the formula: X k = ∑ n = 0 N − 1 x n ω N n k {\displaystyle X_{k}=\sum _{n=0}^{N-1}x_{n}\omega _{N}^{nk}} where k {\displaystyle k} is an integer ranging from 0 {\displaystyle 0} to N − 1 {\displaystyle N-1} and ω N {\displaystyle \omega _{N}} denotes the primitive root of unity: ω N = e − 2 π i N , {\displaystyle \omega _{N}=e^{-{\frac {2\pi i}{N}}},} and thus: ω N N = 1 {\displaystyle \omega _{N}^{N}=1} . The split-radix algorithm works by expressing this summation in terms of three smaller summations. (Here, we give the "decimation in time" version of the split-radix FFT; the dual decimation in frequency version is essentially just the reverse of these steps.) First, a summation over the even indices x 2 n 2 {\displaystyle x_{2n_{2}}} . Second, a summation over the odd indices broken into two pieces: x 4 n 4 + 1 {\displaystyle x_{4n_{4}+1}} and x 4 n 4 + 3 {\displaystyle x_{4n_{4}+3}} , according to whether the index is 1 or 3 modulo 4. Here, n m {\displaystyle n_{m}} denotes an index that runs from 0 to N / m − 1 {\displaystyle N/m-1} . The resulting summations look like: X k = ∑ n 2 = 0 N / 2 − 1 x 2 n 2 ω N / 2 n 2 k + ω N k ∑ n 4 = 0 N / 4 − 1 x 4 n 4 + 1 ω N / 4 n 4 k + ω N 3 k ∑ n 4 = 0 N / 4 − 1 x 4 n 4 + 3 ω N / 4 n 4 k {\displaystyle X_{k}=\sum _{n_{2}=0}^{N/2-1}x_{2n_{2}}\omega _{N/2}^{n_{2}k}+\omega _{N}^{k}\sum _{n_{4}=0}^{N/4-1}x_{4n_{4}+1}\omega _{N/4}^{n_{4}k}+\omega _{N}^{3k}\sum _{n_{4}=0}^{N/4-1}x_{4n_{4}+3}\omega _{N/4}^{n_{4}k}} where we have used the fact that ω N m n k = ω N / m n k {\displaystyle \omega _{N}^{mnk}=\omega _{N/m}^{nk}} . These three sums correspond to portions of radix-2 (size N/2) and radix-4 (size N/4) Cooley–Tukey steps, respectively. 
(The underlying idea is that the even-index subtransform of radix-2 has no multiplicative factor in front of it, so it should be left as-is, while the odd-index subtransform of radix-2 benefits by combining a second recursive subdivision.) These smaller summations are now exactly DFTs of length N/2 and N/4, which can be performed recursively and then recombined. More specifically, let U k {\displaystyle U_{k}} denote the result of the DFT of length N/2 (for k = 0 , … , N / 2 − 1 {\displaystyle k=0,\ldots ,N/2-1} ), and let Z k {\displaystyle Z_{k}} and Z k ′ {\displaystyle Z'_{k}} denote the results of the DFTs of length N/4 (for k = 0 , … , N / 4 − 1 {\displaystyle k=0,\ldots ,N/4-1} ). Then the output X k {\displaystyle X_{k}} is simply: X k = U k + ω N k Z k + ω N 3 k Z k ′ . {\displaystyle X_{k}=U_{k}+\omega _{N}^{k}Z_{k}+\omega _{N}^{3k}Z'_{k}.} This, however, performs unnecessary calculations, since k ≥ N / 4 {\displaystyle k\geq N/4} turn out to share many calculations with k < N / 4 {\displaystyle k<N/4} . In particular, if we add N/4 to k, the size-N/4 DFTs are not changed (because they are periodic in N/4), while the size-N/2 DFT is unchanged if we add N/2 to k. So, the only things that change are the ω N k {\displaystyle \omega _{N}^{k}} and ω N 3 k {\displaystyle \omega _{N}^{3k}} terms, known as twiddle factors. Here, we use the identities: ω N k + N / 4 = − i ω N k {\displaystyle \omega _{N}^{k+N/4}=-i\omega _{N}^{k}} ω N 3 ( k + N / 4 ) = i ω N 3 k {\displaystyle \omega _{N}^{3(k+N/4)}=i\omega _{N}^{3k}} to finally arrive at: X k = U k + ( ω N k Z k + ω N 3 k Z k ′ ) , {\displaystyle X_{k}=U_{k}+\left(\omega _{N}^{k}Z_{k}+\omega _{N}^{3k}Z'_{k}\right),} X k + N / 2 = U k − ( ω N k Z k + ω N 3 k Z k ′ ) , {\displaystyle X_{k+N/2}=U_{k}-\left(\omega _{N}^{k}Z_{k}+\omega _{N}^{3k}Z'_{k}\right),} X k + N / 4 = U k + N / 4 − i ( ω N k Z k − ω N 3 k Z k ′ ) , {\displaystyle X_{k+N/4}=U_{k+N/4}-i\left(\omega _{N}^{k}Z_{k}-\omega _{N}^{3k}Z'_{k}\right),} X k + 3 N / 4 = U k + N / 4 + i ( ω N k Z k − ω N 3 k Z k ′ ) , {\displaystyle X_{k+3N/4}=U_{k+N/4}+i\left(\omega _{N}^{k}Z_{k}-\omega _{N}^{3k}Z'_{k}\right),} which gives all of the outputs X k {\displaystyle X_{k}} if we let k {\displaystyle k} range from 0 {\displaystyle 0} to N / 4 − 1 {\displaystyle N/4-1} in the above four expressions. Notice that these expressions are arranged so that we need to combine the various DFT outputs by pairs of additions and subtractions, which are known as butterflies. In order to obtain the minimal operation count for this algorithm, one needs to take into account special cases for k = 0 {\displaystyle k=0} (where the twiddle factors are unity) and for k = N / 8 {\displaystyle k=N/8} (where the twiddle factors are ( 1 ± i ) / 2 {\displaystyle (1\pm i)/{\sqrt {2}}} and can be multiplied more quickly); see, e.g. Sorensen et al. (1986). Multiplications by ± 1 {\displaystyle \pm 1} and ± i {\displaystyle \pm i} are ordinarily counted as free (all negations can be absorbed by converting additions into subtractions or vice versa). This decomposition is performed recursively when N is a power of two. The base cases of the recursion are N=1, where the DFT is just a copy X 0 = x 0 {\displaystyle X_{0}=x_{0}} , and N=2, where the DFT is an addition X 0 = x 0 + x 1 {\displaystyle X_{0}=x_{0}+x_{1}} and a subtraction X 1 = x 0 − x 1 {\displaystyle X_{1}=x_{0}-x_{1}} . 
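The recurrence above can be written down directly. The following is an illustrative Python sketch (unoptimized: the special twiddle cases k = 0 and k = N/8 mentioned above are not exploited, so it does not attain the minimal operation count):

import numpy as np

def split_radix_fft(x):
    # Recursive split-radix DFT for power-of-two lengths
    N = len(x)
    if N == 1:
        return x.copy()
    if N == 2:
        return np.array([x[0] + x[1], x[0] - x[1]])
    U  = split_radix_fft(x[::2])                   # length N/2: even indices
    Z  = split_radix_fft(x[1::4])                  # length N/4: indices 1 mod 4
    Zp = split_radix_fft(x[3::4])                  # length N/4: indices 3 mod 4
    k = np.arange(N // 4)
    w1 = np.exp(-2j * np.pi * k / N)               # omega_N^k
    w3 = np.exp(-2j * np.pi * 3 * k / N)           # omega_N^{3k}
    s = w1 * Z + w3 * Zp
    d = w1 * Z - w3 * Zp
    X = np.empty(N, dtype=complex)
    X[k]              = U[k] + s                   # the four butterfly outputs
    X[k + N // 2]     = U[k] - s
    X[k + N // 4]     = U[k + N // 4] - 1j * d
    X[k + 3 * N // 4] = U[k + N // 4] + 1j * d
    return X

x = np.random.default_rng(3).standard_normal(64) + 0j
assert np.allclose(split_radix_fft(x), np.fft.fft(x))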
These considerations result in a count: 4 N log 2 ⁡ N − 6 N + 8 {\displaystyle 4N\log _{2}N-6N+8} real additions and multiplications, for N>1 a power of two. This count assumes that, for odd powers of 2, the leftover factor of 2 (after all the split-radix steps, which divide N by 4) is handled directly by the DFT definition (4 real additions and multiplications), or equivalently by a radix-2 Cooley–Tukey FFT step. == References == R. Yavne, "An economical method for calculating the discrete Fourier transform," in Proc. AFIPS Fall Joint Computer Conf. 33, 115–125 (1968). P. Duhamel and H. Hollmann, "Split-radix FFT algorithm," Electron. Lett. 20 (1), 14–16 (1984). M. Vetterli and H. J. Nussbaumer, "Simple FFT and DCT algorithms with reduced number of operations," Signal Processing 6 (4), 267–278 (1984). J. B. Martens, "Recursive cyclotomic factorization—a new algorithm for calculating the discrete Fourier transform," IEEE Trans. Acoust., Speech, Signal Processing 32 (4), 750–761 (1984). P. Duhamel and M. Vetterli, "Fast Fourier transforms: a tutorial review and a state of the art," Signal Processing 19, 259–299 (1990). S. G. Johnson and M. Frigo, "A modified split-radix FFT with fewer arithmetic operations," IEEE Trans. Signal Process. 55 (1), 111–119 (2007). Douglas L. Jones, "Split-radix FFT algorithms," Connexions web site (Nov. 2, 2006). H. V. Sorensen, M. T. Heideman, and C. S. Burrus, "On computing the split-radix FFT", IEEE Trans. Acoust., Speech, Signal Processing 34 (1), 152–156 (1986).
Wikipedia/Split-radix_FFT_algorithm
In mathematics, the discrete Fourier transform over a ring generalizes the discrete Fourier transform (DFT), of a function whose values are commonly complex numbers, to an arbitrary ring. == Definition == Let R be any ring, let n ≥ 1 {\displaystyle n\geq 1} be an integer, and let α ∈ R {\displaystyle \alpha \in R} be a principal nth root of unity, defined by: α n = 1 {\displaystyle \alpha ^{n}=1} and ∑ j = 0 n − 1 α j k = 0 {\displaystyle \sum _{j=0}^{n-1}\alpha ^{jk}=0} for 1 ≤ k < n {\displaystyle 1\leq k<n} . (1) The discrete Fourier transform maps an n-tuple ( v 0 , … , v n − 1 ) {\displaystyle (v_{0},\ldots ,v_{n-1})} of elements of R to another n-tuple ( f 0 , … , f n − 1 ) {\displaystyle (f_{0},\ldots ,f_{n-1})} of elements of R according to the following formula: f k = ∑ j = 0 n − 1 v j α j k , k = 0 , … , n − 1. {\displaystyle f_{k}=\sum _{j=0}^{n-1}v_{j}\alpha ^{jk},\qquad k=0,\dots ,n-1.} (2) By convention, the tuple ( v 0 , … , v n − 1 ) {\displaystyle (v_{0},\ldots ,v_{n-1})} is said to be in the time domain and the index j is called time. The tuple ( f 0 , … , f n − 1 ) {\displaystyle (f_{0},\ldots ,f_{n-1})} is said to be in the frequency domain and the index k is called frequency. The tuple ( f 0 , … , f n − 1 ) {\displaystyle (f_{0},\ldots ,f_{n-1})} is also called the spectrum of ( v 0 , … , v n − 1 ) {\displaystyle (v_{0},\ldots ,v_{n-1})} . This terminology derives from the applications of Fourier transforms in signal processing. If R is an integral domain (which includes fields), it is sufficient to choose α {\displaystyle \alpha } as a primitive nth root of unity, which replaces the condition (1) by: α k ≠ 1 {\displaystyle \alpha ^{k}\neq 1} for 1 ≤ k < n {\displaystyle 1\leq k<n} Another simple condition applies in the case where n is a power of two: (1) may be replaced by α n / 2 = − 1 {\displaystyle \alpha ^{n/2}=-1} . == Inverse == The inverse of the discrete Fourier transform is given as: v j = 1 n ∑ k = 0 n − 1 f k α − j k , j = 0 , … , n − 1 , {\displaystyle v_{j}={\frac {1}{n}}\sum _{k=0}^{n-1}f_{k}\alpha ^{-jk},\qquad j=0,\dots ,n-1,} (3) where 1 / n {\displaystyle 1/n} is the multiplicative inverse of n in R (if this inverse does not exist, the DFT cannot be inverted). == Matrix formulation == Since the discrete Fourier transform is a linear operator, it can be described by matrix multiplication. In matrix notation, the discrete Fourier transform is expressed as follows: [ f 0 f 1 ⋮ f n − 1 ] = [ 1 1 1 ⋯ 1 1 α α 2 ⋯ α n − 1 1 α 2 α 4 ⋯ α 2 ( n − 1 ) ⋮ ⋮ ⋮ ⋱ ⋮ 1 α n − 1 α 2 ( n − 1 ) ⋯ α ( n − 1 ) ( n − 1 ) ] [ v 0 v 1 ⋮ v n − 1 ] . {\displaystyle {\begin{bmatrix}f_{0}\\f_{1}\\\vdots \\f_{n-1}\end{bmatrix}}={\begin{bmatrix}1&1&1&\cdots &1\\1&\alpha &\alpha ^{2}&\cdots &\alpha ^{n-1}\\1&\alpha ^{2}&\alpha ^{4}&\cdots &\alpha ^{2(n-1)}\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&\alpha ^{n-1}&\alpha ^{2(n-1)}&\cdots &\alpha ^{(n-1)(n-1)}\\\end{bmatrix}}{\begin{bmatrix}v_{0}\\v_{1}\\\vdots \\v_{n-1}\end{bmatrix}}.} The matrix for this transformation is called the DFT matrix. Similarly, the matrix notation for the inverse Fourier transform is [ v 0 v 1 ⋮ v n − 1 ] = 1 n [ 1 1 1 ⋯ 1 1 α − 1 α − 2 ⋯ α − ( n − 1 ) 1 α − 2 α − 4 ⋯ α − 2 ( n − 1 ) ⋮ ⋮ ⋮ ⋱ ⋮ 1 α − ( n − 1 ) α − 2 ( n − 1 ) ⋯ α − ( n − 1 ) ( n − 1 ) ] [ f 0 f 1 ⋮ f n − 1 ] . {\displaystyle {\begin{bmatrix}v_{0}\\v_{1}\\\vdots \\v_{n-1}\end{bmatrix}}={\frac {1}{n}}{\begin{bmatrix}1&1&1&\cdots &1\\1&\alpha ^{-1}&\alpha ^{-2}&\cdots &\alpha ^{-(n-1)}\\1&\alpha ^{-2}&\alpha ^{-4}&\cdots &\alpha ^{-2(n-1)}\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&\alpha ^{-(n-1)}&\alpha ^{-2(n-1)}&\cdots &\alpha ^{-(n-1)(n-1)}\end{bmatrix}}{\begin{bmatrix}f_{0}\\f_{1}\\\vdots \\f_{n-1}\end{bmatrix}}.} == Polynomial formulation == Sometimes it is convenient to identify an n-tuple ( v 0 , … , v n − 1 ) {\displaystyle (v_{0},\ldots ,v_{n-1})} with a formal polynomial p v ( x ) = v 0 + v 1 x + v 2 x 2 + ⋯ + v n − 1 x n − 1 .
{\displaystyle p_{v}(x)=v_{0}+v_{1}x+v_{2}x^{2}+\cdots +v_{n-1}x^{n-1}.\,} By writing out the summation in the definition of the discrete Fourier transform (2), we obtain: f k = v 0 + v 1 α k + v 2 α 2 k + ⋯ + v n − 1 α ( n − 1 ) k . {\displaystyle f_{k}=v_{0}+v_{1}\alpha ^{k}+v_{2}\alpha ^{2k}+\cdots +v_{n-1}\alpha ^{(n-1)k}.\,} This means that f k {\displaystyle f_{k}} is just the value of the polynomial p v ( x ) {\displaystyle p_{v}(x)} for x = α k {\displaystyle x=\alpha ^{k}} , i.e., The Fourier transform can therefore be seen to relate the coefficients and the values of a polynomial: the coefficients are in the time-domain, and the values are in the frequency domain. Here, of course, it is important that the polynomial is evaluated at the nth roots of unity, which are exactly the powers of α {\displaystyle \alpha } . Similarly, the definition of the inverse Fourier transform (3) can be written: With p f ( x ) = f 0 + f 1 x + f 2 x 2 + ⋯ + f n − 1 x n − 1 , {\displaystyle p_{f}(x)=f_{0}+f_{1}x+f_{2}x^{2}+\cdots +f_{n-1}x^{n-1},} this means that v j = 1 n p f ( α − j ) . {\displaystyle v_{j}={\frac {1}{n}}p_{f}(\alpha ^{-j}).} We can summarize this as follows: if the values of p v ( x ) {\displaystyle p_{v}(x)} are the coefficients of p f ( x ) {\displaystyle p_{f}(x)} , then the values of p f ( x ) {\displaystyle p_{f}(x)} are the coefficients of p v ( x ) {\displaystyle p_{v}(x)} , up to a scalar factor and reordering. == Special cases == === Complex numbers === If F = C {\displaystyle F={\mathbb {C} }} is the field of complex numbers, then the n {\displaystyle n} th roots of unity can be visualized as points on the unit circle of the complex plane. In this case, one usually takes α = e − 2 π i n , {\displaystyle \alpha =e^{\frac {-2\pi i}{n}},} which yields the usual formula for the complex discrete Fourier transform: f k = ∑ j = 0 n − 1 v j e − 2 π i n j k . {\displaystyle f_{k}=\sum _{j=0}^{n-1}v_{j}e^{{\frac {-2\pi i}{n}}jk}.} Over the complex numbers, it is often customary to normalize the formulas for the DFT and inverse DFT by using the scalar factor 1 n {\displaystyle {\frac {1}{\sqrt {n}}}} in both formulas, rather than 1 {\displaystyle 1} in the formula for the DFT and 1 n {\displaystyle {\frac {1}{n}}} in the formula for the inverse DFT. With this normalization, the DFT matrix is then unitary. Note that n {\displaystyle {\sqrt {n}}} does not make sense in an arbitrary field. === Finite fields === If F = G F ( q ) {\displaystyle F=\mathrm {GF} (q)} is a finite field, where q is a prime power, then the existence of a primitive nth root automatically implies that n divides q − 1 {\displaystyle q-1} , because the multiplicative order of each element must divide the size of the multiplicative group of F, which is q − 1 {\displaystyle q-1} . This in particular ensures that n = 1 + 1 + ⋯ + 1 ⏟ n t i m e s {\displaystyle n=\underbrace {1+1+\cdots +1} _{n\ {\rm {times}}}} is invertible, so that the notation 1 n {\displaystyle {\frac {1}{n}}} in (3) makes sense. An application of the discrete Fourier transform over G F ( q ) {\displaystyle \mathrm {GF} (q)} is the reduction of Reed–Solomon codes to BCH codes in coding theory. Such transform can be carried out efficiently with proper fast algorithms, for example, cyclotomic fast Fourier transform. ==== Polynomial formulation without nth root ==== Suppose F = G F ( p ) {\displaystyle F=\mathrm {GF} (p)} . If p ∤ n {\displaystyle p\nmid n} , it may be the case that n ∤ p − 1 {\displaystyle n\nmid p-1} . 
This means we cannot find an n t h {\displaystyle n^{th}} root of unity in F {\displaystyle F} . We may view the Fourier transform as an isomorphism F [ C n ] = F [ x ] / ( x n − 1 ) ≅ ⨁ i F [ x ] / ( P i ( x ) ) {\displaystyle \mathrm {F} [C_{n}]=\mathrm {F} [x]/(x^{n}-1)\cong \bigoplus _{i}\mathrm {F} [x]/(P_{i}(x))} for some polynomials P i ( x ) {\displaystyle P_{i}(x)} , in accordance with Maschke's theorem. The map is given by the Chinese remainder theorem, and the inverse is given by applying Bézout's identity for polynomials. Note that x n − 1 = ∏ d | n Φ d ( x ) {\displaystyle x^{n}-1=\prod _{d|n}\Phi _{d}(x)} , a product of cyclotomic polynomials. Factoring Φ d ( x ) {\displaystyle \Phi _{d}(x)} in F [ x ] {\displaystyle F[x]} is equivalent to factoring the prime ideal ( p ) {\displaystyle (p)} in Z [ ζ ] = Z [ x ] / ( Φ d ( x ) ) {\displaystyle \mathrm {Z} [\zeta ]=\mathrm {Z} [x]/(\Phi _{d}(x))} . We obtain g {\displaystyle g} polynomials P 1 … P g {\displaystyle P_{1}\ldots P_{g}} of degree f {\displaystyle f} where f g = φ ( d ) {\displaystyle fg=\varphi (d)} and f {\displaystyle f} is the order of p mod d {\displaystyle p{\text{ mod }}d} . As above, we may extend the base field to G F ( q ) {\displaystyle \mathrm {GF} (q)} in order to find a primitive root, i.e. a splitting field for x n − 1 {\displaystyle x^{n}-1} . Now x n − 1 = ∏ k ( x − α k ) {\displaystyle x^{n}-1=\prod _{k}(x-\alpha ^{k})} , so an element ∑ j = 0 n − 1 v j x j ∈ F [ x ] / ( x n − 1 ) {\displaystyle \sum _{j=0}^{n-1}v_{j}x^{j}\in F[x]/(x^{n}-1)} maps to ∑ j = 0 n − 1 v j x j mod ( x − α k ) ≡ ∑ j = 0 n − 1 v j ( α k ) j {\displaystyle \sum _{j=0}^{n-1}v_{j}x^{j}\mod (x-\alpha ^{k})\equiv \sum _{j=0}^{n-1}v_{j}(\alpha ^{k})^{j}} for each k {\displaystyle k} . ==== When p divides n ==== When p | n {\displaystyle p|n} , we may still define an F p {\displaystyle F_{p}} -linear isomorphism as above. Note that ( x n − 1 ) = ( x m − 1 ) p s {\displaystyle (x^{n}-1)=(x^{m}-1)^{p^{s}}} where n = m p s {\displaystyle n=mp^{s}} and p ∤ m {\displaystyle p\nmid m} . We apply the above factorization to x m − 1 {\displaystyle x^{m}-1} , and now obtain the decomposition F [ x ] / ( x n − 1 ) ≅ ⨁ i F [ x ] / ( P i ( x ) p s ) {\displaystyle F[x]/(x^{n}-1)\cong \bigoplus _{i}F[x]/(P_{i}(x)^{p^{s}})} . The modules occurring are now indecomposable rather than irreducible. ==== Order of the DFT matrix ==== Suppose p ∤ n {\displaystyle p\nmid n} so we have an n t h {\displaystyle n^{th}} root of unity α {\displaystyle \alpha } . Let A {\displaystyle A} be the above DFT matrix, a Vandermonde matrix with entries A i j = α i j {\displaystyle A_{ij}=\alpha ^{ij}} for 0 ≤ i , j < n {\displaystyle 0\leq i,j<n} . Recall that ∑ j = 0 n − 1 α ( k − l ) j = n δ k , l {\displaystyle \sum _{j=0}^{n-1}\alpha ^{(k-l)j}=n\delta _{k,l}} since if k = l {\displaystyle k=l} , then every entry is 1. If k ≠ l {\displaystyle k\neq l} , then we have a geometric series with common ratio α k − l {\displaystyle \alpha ^{k-l}} , so we obtain 1 − α n ( k − l ) 1 − α k − l {\displaystyle {\frac {1-\alpha ^{n(k-l)}}{1-\alpha ^{k-l}}}} . Since α n = 1 {\displaystyle \alpha ^{n}=1} the numerator is zero, but k − l ≠ 0 {\displaystyle k-l\neq 0} so the denominator is nonzero. First computing the square, ( A 2 ) i k = ∑ j = 0 n − 1 α j ( i + k ) = n δ i , − k {\displaystyle (A^{2})_{ik}=\sum _{j=0}^{n-1}\alpha ^{j(i+k)}=n\delta _{i,-k}} .
Computing A 4 = ( A 2 ) 2 {\displaystyle A^{4}=(A^{2})^{2}} similarly and simplifying the deltas, we obtain ( A 4 ) i k = n 2 δ i , k {\displaystyle (A^{4})_{ik}=n^{2}\delta _{i,k}} . Thus, A 4 = n 2 I n {\displaystyle A^{4}=n^{2}I_{n}} and the order is 4 ⋅ ord ( n 2 ) {\displaystyle 4\cdot {\text{ord}}(n^{2})} . ==== Normalizing the DFT matrix ==== In order to align with the complex case and ensure the matrix has order exactly 4, we can normalize the above DFT matrix A {\displaystyle A} with 1 n {\displaystyle {\frac {1}{\sqrt {n}}}} . Note that though n {\displaystyle {\sqrt {n}}} may not exist in the splitting field F q {\displaystyle F_{q}} of x n − 1 {\displaystyle x^{n}-1} , we may form a quadratic extension F q 2 ≅ F q [ x ] / ( x 2 − n ) {\displaystyle F_{q^{2}}\cong F_{q}[x]/(x^{2}-n)} in which the square root exists. We may then set U = 1 n A {\displaystyle U={\frac {1}{\sqrt {n}}}A} , and U 4 = I n {\displaystyle U^{4}=I_{n}} . ==== Unitarity ==== Suppose p ∤ n {\displaystyle p\nmid n} . One can ask whether the DFT matrix is unitary over a finite field. If the matrix entries are over F q {\displaystyle F_{q}} , then one must ensure q {\displaystyle q} is a perfect square or extend to F q 2 {\displaystyle F_{q^{2}}} in order to define the order two automorphism x ↦ x q {\displaystyle x\mapsto x^{q}} . Consider the above DFT matrix A i j = α i j {\displaystyle A_{ij}=\alpha ^{ij}} . Note that A {\displaystyle A} is symmetric. Conjugating and transposing, we obtain A i j ∗ = α q j i {\displaystyle A_{ij}^{*}=\alpha ^{qji}} . ( A A ∗ ) i k = ∑ j = 0 n − 1 α j ( i + q k ) = n δ i , − q k {\displaystyle (AA^{*})_{ik}=\sum _{j=0}^{n-1}\alpha ^{j(i+qk)}=n\delta _{i,-qk}} by a similar geometric series argument as above. We may remove the n {\displaystyle n} by normalizing so that U = 1 n A {\displaystyle U={\frac {1}{\sqrt {n}}}A} and ( U U ∗ ) i k = δ i , − q k {\displaystyle (UU^{*})_{ik}=\delta _{i,-qk}} . Thus U {\displaystyle U} is unitary iff q ≡ − 1 ( mod n ) {\displaystyle q\equiv -1\,({\text{mod}}\,n)} . Recall that since we have an n t h {\displaystyle n^{th}} root of unity, n | q 2 − 1 {\displaystyle n|q^{2}-1} . This means that q 2 − 1 ≡ ( q + 1 ) ( q − 1 ) ≡ 0 ( mod n ) {\displaystyle q^{2}-1\equiv (q+1)(q-1)\equiv 0\,({\text{mod}}\,n)} . Note if q {\displaystyle q} was not a perfect square to begin with, then n | q − 1 {\displaystyle n|q-1} and so q ≡ 1 ( mod n ) {\displaystyle q\equiv 1\,({\text{mod}}\,n)} . For example, when p = 3 , n = 5 {\displaystyle p=3,n=5} we need to extend to q 2 = 3 4 {\displaystyle q^{2}=3^{4}} to get a 5th root of unity. q = 9 ≡ − 1 ( mod 5 ) {\displaystyle q=9\equiv -1\,({\text{mod}}\,5)} . For a nonexample, when p = 3 , n = 8 {\displaystyle p=3,n=8} we extend to F 3 2 {\displaystyle F_{3^{2}}} to get an 8th root of unity. q 2 = 9 {\displaystyle q^{2}=9} , so q ≡ 3 ( mod 8 ) {\displaystyle q\equiv 3\,({\text{mod}}\,8)} , and in this case q + 1 ≢ 0 {\displaystyle q+1\not \equiv 0} and q − 1 ≢ 0 {\displaystyle q-1\not \equiv 0} . U U ∗ {\displaystyle UU^{*}} is a square root of the identity, so U {\displaystyle U} is not unitary. ==== Eigenvalues of the DFT matrix ==== When p ∤ n {\displaystyle p\nmid n} , we have an n t h {\displaystyle n^{th}} root of unity α {\displaystyle \alpha } in the splitting field F q {\displaystyle F_{q}} of x n − 1 {\displaystyle x^{n}-1} over F p {\displaystyle F_{p}} . Note that the characteristic polynomial of the above DFT matrix may not split over F q {\displaystyle F_{q}} . The DFT matrix has order 4.
We may need to go to a further extension F q ′ {\displaystyle F_{q'}} , the splitting extension of the characteristic polynomial of the DFT matrix, which at least contains fourth roots of unity. If a {\displaystyle a} is a generator of the multiplicative group of F q ′ {\displaystyle F_{q'}} , then the eigenvalues are { ± 1 , ± a ( q ′ − 1 ) / 4 } {\displaystyle \{\pm 1,\pm a^{(q'-1)/4}\}} , in exact analogy with the complex case. They occur with some nonnegative multiplicity. === Number-theoretic transform === The number-theoretic transform (NTT) is obtained by specializing the discrete Fourier transform to F = Z / p {\displaystyle F={\mathbb {Z} }/p} , the integers modulo a prime p. This is a finite field, and primitive nth roots of unity exist whenever n divides p − 1 {\displaystyle p-1} , so we have p = ξ n + 1 {\displaystyle p=\xi n+1} for a positive integer ξ. Specifically, let ω {\displaystyle \omega } be a primitive ( p − 1 ) {\displaystyle (p-1)} th root of unity; then an nth root of unity α {\displaystyle \alpha } can be found by letting α = ω ξ {\displaystyle \alpha =\omega ^{\xi }} . For example, for p = 5 {\displaystyle p=5} , α = 2 {\displaystyle \alpha =2} : 2 1 = 2 ( mod 5 ) 2 2 = 4 ( mod 5 ) 2 3 = 3 ( mod 5 ) 2 4 = 1 ( mod 5 ) {\displaystyle {\begin{aligned}2^{1}&=2{\pmod {5}}\\2^{2}&=4{\pmod {5}}\\2^{3}&=3{\pmod {5}}\\2^{4}&=1{\pmod {5}}\end{aligned}}} When N = 4 {\displaystyle N=4} : [ F ( 0 ) F ( 1 ) F ( 2 ) F ( 3 ) ] = [ 1 1 1 1 1 2 4 3 1 4 1 4 1 3 4 2 ] [ f ( 0 ) f ( 1 ) f ( 2 ) f ( 3 ) ] {\displaystyle {\begin{bmatrix}F(0)\\F(1)\\F(2)\\F(3)\end{bmatrix}}={\begin{bmatrix}1&1&1&1\\1&2&4&3\\1&4&1&4\\1&3&4&2\end{bmatrix}}{\begin{bmatrix}f(0)\\f(1)\\f(2)\\f(3)\end{bmatrix}}} The number theoretic transform may be meaningful in the ring Z / m {\displaystyle \mathbb {Z} /m} , even when the modulus m is not prime, provided a principal root of order n exists. Special cases of the number theoretic transform such as the Fermat Number Transform (m = 2k+1), used by the Schönhage–Strassen algorithm, or Mersenne Number Transform (m = 2k − 1) use a composite modulus. In general, if m = ∏ i p i e i {\displaystyle m=\prod _{i}p_{i}^{e_{i}}} , then one may find an n t h {\displaystyle n^{th}} root of unity mod m by finding primitive n t h {\displaystyle n^{th}} roots of unity g i {\displaystyle g_{i}} mod p i e i {\displaystyle p_{i}^{e_{i}}} , yielding a tuple g = ( g i ) i ∈ ∏ i ( Z / p i e i Z ) ∗ {\displaystyle g=\left(g_{i}\right)_{i}\in \prod _{i}\left(\mathbb {Z} /p_{i}^{e_{i}}\mathbb {Z} \right)^{\ast }} . The preimage of g {\displaystyle g} under the Chinese remainder theorem isomorphism is an n t h {\displaystyle n^{th}} root of unity α {\displaystyle \alpha } such that α n / 2 = − 1 mod m {\displaystyle \alpha ^{n/2}=-1\mod m} . This ensures that the above summation conditions are satisfied. We must have that n | φ ( p i e i ) {\displaystyle n|\varphi (p_{i}^{e_{i}})} for each i {\displaystyle i} , where φ {\displaystyle \varphi } is Euler's totient function. === Discrete weighted transform === The discrete weighted transform (DWT) is a variation on the discrete Fourier transform over arbitrary rings involving weighting the input before transforming it by multiplying elementwise by a weight vector, then weighting the result by another vector. The Irrational base discrete weighted transform is a special case of this.
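The forward and inverse transforms can be checked directly against the defining formulas (2) and (3). The following is a minimal Python sketch (not from the article) that uses the O(n²) matrix definition rather than a fast algorithm, with the p = 5, α = 2 example above:

```python
# Number-theoretic transform over Z/p, using the defining formulas
# (2) and (3). A sketch for illustration: O(n^2), not a fast NTT.

def ntt(v, alpha, p):
    """Forward transform: f_k = sum_j v_j * alpha^(j*k) mod p."""
    n = len(v)
    return [sum(v[j] * pow(alpha, j * k, p) for j in range(n)) % p
            for k in range(n)]

def intt(f, alpha, p):
    """Inverse transform: v_j = (1/n) * sum_k f_k * alpha^(-j*k) mod p."""
    n = len(f)
    n_inv = pow(n, -1, p)          # multiplicative inverse of n in Z/p
    alpha_inv = pow(alpha, -1, p)  # alpha^(-1) mod p
    return [n_inv * sum(f[k] * pow(alpha_inv, j * k, p) for k in range(n)) % p
            for j in range(n)]

p, alpha = 5, 2        # 2 is a primitive 4th root of unity mod 5
v = [1, 2, 3, 4]
f = ntt(v, alpha, p)   # [0, 4, 3, 2]: the spectrum of v over Z/5
assert intt(f, alpha, p) == v
```

The three-argument pow with a negative exponent (Python 3.8+) computes modular inverses; for a prime modulus these always exist, matching the invertibility condition discussed above.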
== Properties == Most of the important attributes of the complex DFT, including the inverse transform, the convolution theorem, and most fast Fourier transform (FFT) algorithms, depend only on the property that the kernel of the transform is a principal root of unity. These properties also hold, with identical proofs, over arbitrary rings. In the case of fields, this analogy can be formalized by the field with one element, considering any field with a primitive nth root of unity as an algebra over the extension field F 1 n . {\displaystyle \mathbf {F} _{1^{n}}.} In particular, the applicability of O ( n log ⁡ n ) {\displaystyle O(n\log n)} fast Fourier transform algorithms to compute the NTT, combined with the convolution theorem, means that the number-theoretic transform gives an efficient way to compute exact convolutions of integer sequences. While the complex DFT can perform the same task, it is susceptible to round-off error in finite-precision floating point arithmetic; the NTT has no round-off because it deals purely with fixed-size integers that can be exactly represented. == Fast algorithms == For the implementation of a "fast" algorithm (similar to how FFT computes the DFT), it is often desirable that the transform length is also highly composite, e.g., a power of two. However, there are specialized fast Fourier transform algorithms for finite fields, such as Wang and Zhu's algorithm, that are efficient regardless of the transform length factors. == See also == Discrete Fourier transform (complex) Fourier transform on finite groups Gauss sum Convolution Least-squares spectral analysis Multiplication algorithm == References == == External links == http://www.apfloat.org/ntt.html
Wikipedia/Number-theoretic_transform
The Schönhage–Strassen algorithm is an asymptotically fast multiplication algorithm for large integers, published by Arnold Schönhage and Volker Strassen in 1971. It works by recursively applying fast Fourier transform (FFT) over the integers modulo 2 n + 1 {\displaystyle 2^{n}+1} . The run-time bit complexity to multiply two n-digit numbers using the algorithm is O ( n ⋅ log ⁡ n ⋅ log ⁡ log ⁡ n ) {\displaystyle O(n\cdot \log n\cdot \log \log n)} in big O notation. The Schönhage–Strassen algorithm was the asymptotically fastest multiplication method known from 1971 until 2007. It is asymptotically faster than older methods such as Karatsuba and Toom–Cook multiplication, and starts to outperform them in practice for numbers beyond about 10,000 to 100,000 decimal digits. In 2007, Martin Fürer published an algorithm with faster asymptotic complexity. In 2019, David Harvey and Joris van der Hoeven demonstrated that multi-digit multiplication has theoretical O ( n log ⁡ n ) {\displaystyle O(n\log n)} complexity; however, their algorithm has constant factors which make it impossibly slow for any conceivable practical problem (see galactic algorithm). Applications of the Schönhage–Strassen algorithm include large computations done for their own sake such as the Great Internet Mersenne Prime Search and approximations of π, as well as practical applications such as Lenstra elliptic curve factorization via Kronecker substitution, which reduces polynomial multiplication to integer multiplication. == Description == This section has a simplified version of the algorithm, showing how to compute the product a b {\displaystyle ab} of two natural numbers a , b {\displaystyle a,b} , modulo a number of the form 2 n + 1 {\displaystyle 2^{n}+1} , where n = 2 k M {\displaystyle n=2^{k}M} is some fixed number. The integers a , b {\displaystyle a,b} are to be divided into D = 2 k {\displaystyle D=2^{k}} blocks of M {\displaystyle M} bits, so in practical implementations, it is important to strike the right balance between the parameters M , k {\displaystyle M,k} . In any case, this algorithm will provide a way to multiply two positive integers, provided n {\displaystyle n} is chosen so that a b < 2 n + 1 {\displaystyle ab<2^{n}+1} . Let n = D M {\displaystyle n=DM} be the number of bits in the signals a {\displaystyle a} and b {\displaystyle b} , where D = 2 k {\displaystyle D=2^{k}} is a power of two. Divide the signals a {\displaystyle a} and b {\displaystyle b} into D {\displaystyle D} blocks of M {\displaystyle M} bits each, storing the resulting blocks as arrays A , B {\displaystyle A,B} (whose entries we shall consider for simplicity as arbitrary precision integers). We now select a modulus for the Fourier transform, as follows. Let M ′ {\displaystyle M'} be such that D M ′ ≥ 2 M + k {\displaystyle DM'\geq 2M+k} . Also put n ′ = D M ′ {\displaystyle n'=DM'} , and regard the elements of the arrays A , B {\displaystyle A,B} as (arbitrary precision) integers modulo 2 n ′ + 1 {\displaystyle 2^{n'}+1} . Observe that since 2 n ′ + 1 ≥ 2 2 M + k + 1 = D 2 2 M + 1 {\displaystyle 2^{n'}+1\geq 2^{2M+k}+1=D2^{2M}+1} , the modulus is large enough to accommodate any carries that can result from multiplying a {\displaystyle a} and b {\displaystyle b} . Thus, the product a b {\displaystyle ab} (modulo 2 n + 1 {\displaystyle 2^{n}+1} ) can be calculated by evaluating the convolution of A , B {\displaystyle A,B} . 
Also, with g = 2 2 M ′ {\displaystyle g=2^{2M'}} , we have g D / 2 ≡ − 1 ( mod 2 n ′ + 1 ) {\displaystyle g^{D/2}\equiv -1{\pmod {2^{n'}+1}}} , and so g {\displaystyle g} is a primitive D {\displaystyle D} th root of unity modulo 2 n ′ + 1 {\displaystyle 2^{n'}+1} . We now take the discrete Fourier transform of the arrays A , B {\displaystyle A,B} in the ring Z / ( 2 n ′ + 1 ) Z {\displaystyle \mathbb {Z} /(2^{n'}+1)\mathbb {Z} } , using the root of unity g {\displaystyle g} for the Fourier basis, giving the transformed arrays A ^ , B ^ {\displaystyle {\widehat {A}},{\widehat {B}}} . Because D = 2 k {\displaystyle D=2^{k}} is a power of two, this can be achieved in O ( D log ⁡ D ) {\displaystyle O(D\log D)} operations using a fast Fourier transform. Let C ^ i = A ^ i B ^ i {\displaystyle {\widehat {C}}_{i}={\widehat {A}}_{i}{\widehat {B}}_{i}} (pointwise product), and compute the inverse transform C {\displaystyle C} of the array C ^ {\displaystyle {\widehat {C}}} , again using the root of unity g {\displaystyle g} . The array C {\displaystyle C} is now the convolution of the arrays A , B {\displaystyle A,B} . Finally, the product a b ( mod 2 n + 1 ) {\displaystyle ab{\pmod {2^{n}+1}}} is given by evaluating a b ≡ ∑ j C j 2 M j mod 2 n + 1. {\displaystyle ab\equiv \sum _{j}C_{j}2^{Mj}\mod {2^{n}+1}.} This basic algorithm can be improved in several ways. Firstly, it is not necessary to store the digits of a , b {\displaystyle a,b} to arbitrary precision, but rather only up to n ′ + 1 {\displaystyle n'+1} bits, which gives a more efficient machine representation of the arrays A , B {\displaystyle A,B} . Secondly, it is clear that the multiplications in the forward transforms are simple bit shifts. With some care, it is also possible to compute the inverse transform using only shifts. Taking care, it is thus possible to eliminate any true multiplications from the algorithm except for where the pointwise product C ^ i = A ^ i B ^ i {\displaystyle {\widehat {C}}_{i}={\widehat {A}}_{i}{\widehat {B}}_{i}} is evaluated. It is therefore advantageous to select the parameters D , M {\displaystyle D,M} so that this pointwise product can be performed efficiently, either because it is a single machine word or using some optimized algorithm for multiplying integers of a (ideally small) number of words. Selecting the parameters D , M {\displaystyle D,M} is thus an important area for further optimization of the method. == Details == Every number in base B can be written as a polynomial: X = ∑ i = 0 N x i B i {\displaystyle X=\sum _{i=0}^{N}{x_{i}B^{i}}} Furthermore, multiplication of two numbers could be thought of as a product of two polynomials (writing the digit sequences of the two factors as a i {\displaystyle a_{i}} and b j {\displaystyle b_{j}} ): X Y = ( ∑ i = 0 N a i B i ) ( ∑ j = 0 N b j B j ) {\displaystyle XY=\left(\sum _{i=0}^{N}{a_{i}B^{i}}\right)\left(\sum _{j=0}^{N}{b_{j}B^{j}}\right)} Because, for B k {\displaystyle B^{k}} : c k = ∑ ( i , j ) : i + j = k a i b j = ∑ i = 0 k a i b k − i {\displaystyle c_{k}=\sum _{(i,j):i+j=k}{a_{i}b_{j}}=\sum _{i=0}^{k}{a_{i}b_{k-i}}} , we have a convolution. By using the FFT (fast Fourier transform), which was used in the original version rather than the NTT (number-theoretic transform), together with the convolution rule, we get f ^ ( a ∗ b ) = f ^ ( ∑ i = 0 k a i b k − i ) = f ^ ( a ) ∙ f ^ ( b ) . {\displaystyle {\hat {f}}(a*b)={\hat {f}}\left(\sum _{i=0}^{k}a_{i}b_{k-i}\right)={\hat {f}}(a)\bullet {\hat {f}}(b).} That is, C k = a ^ k ∙ b ^ k {\displaystyle C_{k}={\hat {a}}_{k}\bullet {\hat {b}}_{k}} , where C k {\displaystyle C_{k}} is the corresponding coefficient in Fourier space.
This can also be written as: fft ( a ∗ b ) = fft ( a ) ∙ fft ( b ) {\displaystyle {\text{fft}}(a*b)={\text{fft}}(a)\bullet {\text{fft}}(b)} . We have the same coefficients due to linearity under the Fourier transform, and because these polynomials only consist of one unique term per coefficient: f ^ ( x n ) = ( i 2 π ) n δ ( n ) {\displaystyle {\hat {f}}(x^{n})=\left({\frac {i}{2\pi }}\right)^{n}\delta ^{(n)}} and f ^ ( a X ( ξ ) + b Y ( ξ ) ) = a X ^ ( ξ ) + b Y ^ ( ξ ) {\displaystyle {\hat {f}}(a\,X(\xi )+b\,Y(\xi ))=a\,{\hat {X}}(\xi )+b\,{\hat {Y}}(\xi )} Convolution rule: f ^ ( X ∗ Y ) = f ^ ( X ) ∙ f ^ ( Y ) {\displaystyle {\hat {f}}(X*Y)=\ {\hat {f}}(X)\bullet {\hat {f}}(Y)} We have reduced our convolution problem to a product problem, through FFT. By finding the inverse FFT (polynomial interpolation) of each C k {\displaystyle C_{k}} , one can determine the desired coefficients. This algorithm uses the divide-and-conquer method to divide the problem into subproblems. === Convolution under mod N === c k = ∑ ( i , j ) : i + j ≡ k ( mod n ) a i b j {\displaystyle c_{k}=\sum _{(i,j):i+j\equiv k{\pmod {n}}}a_{i}b_{j}} , with all values computed modulo N ( n ) = 2 n + 1 {\displaystyle N(n)=2^{n}+1} . By letting: a i ′ = θ i a i {\displaystyle a_{i}'=\theta ^{i}a_{i}} and b j ′ = θ j b j , {\displaystyle b_{j}'=\theta ^{j}b_{j},} where θ n = − 1 {\displaystyle \theta ^{n}=-1} (that is, θ is a primitive 2nth root of unity), one sees that: C k = ∑ ( i , j ) : i + j ≡ k ( mod n ) a i b j = θ − k ∑ ( i , j ) : i + j ≡ k ( mod n ) a i ′ b j ′ = θ − k ( ∑ ( i , j ) : i + j = k a i ′ b j ′ + ∑ ( i , j ) : i + j = k + n a i ′ b j ′ ) = θ − k ( ∑ ( i , j ) : i + j = k a i b j θ k + ∑ ( i , j ) : i + j = k + n a i b j θ n + k ) = ∑ ( i , j ) : i + j = k a i b j + θ n ∑ ( i , j ) : i + j = k + n a i b j . {\displaystyle {\begin{aligned}C_{k}&=\sum _{(i,j):i+j\equiv k{\pmod {n}}}a_{i}b_{j}=\theta ^{-k}\sum _{(i,j):i+j\equiv k{\pmod {n}}}a_{i}'b_{j}'\\[6pt]&=\theta ^{-k}\left(\sum _{(i,j):i+j=k}a_{i}'b_{j}'+\sum _{(i,j):i+j=k+n}a_{i}'b_{j}'\right)\\[6pt]&=\theta ^{-k}\left(\sum _{(i,j):i+j=k}a_{i}b_{j}\theta ^{k}+\sum _{(i,j):i+j=k+n}a_{i}b_{j}\theta ^{n+k}\right)\\[6pt]&=\sum _{(i,j):i+j=k}a_{i}b_{j}+\theta ^{n}\sum _{(i,j):i+j=k+n}a_{i}b_{j}.\end{aligned}}} This means one can use the weight θ i {\displaystyle \theta ^{i}} , and then multiply with θ − k {\displaystyle \theta ^{-k}} after. Instead of using the weight, since θ N = − 1 {\displaystyle \theta ^{N}=-1} in the first step of the recursion (when n = N {\displaystyle n=N} ), one can calculate: C k = ∑ ( i , j ) : i + j ≡ k ( mod N ) a i b j = ∑ ( i , j ) : i + j = k a i b j − ∑ ( i , j ) : i + j = k + n a i b j {\displaystyle C_{k}=\sum _{(i,j):i+j\equiv k{\pmod {N}}}a_{i}b_{j}=\sum _{(i,j):i+j=k}a_{i}b_{j}-\sum _{(i,j):i+j=k+n}a_{i}b_{j}} In a normal FFT which operates over complex numbers, one would use: exp ⁡ ( 2 k π i n ) = cos ⁡ 2 k π n + i sin ⁡ 2 k π n , k = 0 , 1 , … , n − 1.
{\displaystyle \exp \left({\frac {2k\pi i}{n}}\right)=\cos {\frac {2k\pi }{n}}+i\sin {\frac {2k\pi }{n}},\qquad k=0,1,\dots ,n-1.} C k = θ − k ( ∑ ( i , j ) : i + j = k a i b j θ k + ∑ ( i , j ) : i + j = k + n a i b j θ n + k ) = e − i 2 π k / n ( ∑ ( i , j ) : i + j = k a i b j e i 2 π k / n + ∑ ( i , j ) : i + j = k + n a i b j e i 2 π ( n + k ) / n ) {\displaystyle {\begin{aligned}C_{k}&=\theta ^{-k}\left(\sum _{(i,j):i+j=k}a_{i}b_{j}\theta ^{k}+\sum _{(i,j):i+j=k+n}a_{i}b_{j}\theta ^{n+k}\right)\\[6pt]&=e^{-i2\pi k/n}\left(\sum _{(i,j):i+j=k}a_{i}b_{j}e^{i2\pi k/n}+\sum _{(i,j):i+j=k+n}a_{i}b_{j}e^{i2\pi (n+k)/n}\right)\end{aligned}}} However, the FFT can also be used as an NTT (number theoretic transform) in Schönhage–Strassen. This means that we have to use θ to generate numbers in a finite field (for example G F ( 2 n + 1 ) {\displaystyle \mathrm {GF} (2^{n}+1)} ). A root of unity in a finite field GF(r) is an element θ such that θ r − 1 ≡ 1 {\displaystyle \theta ^{r-1}\equiv 1} or, equivalently, θ r ≡ θ {\displaystyle \theta ^{r}\equiv \theta } . For example, GF(p), where p is a prime number, gives { 1 , 2 , … , p − 1 } {\displaystyle \{1,2,\ldots ,p-1\}} . Notice that 2 n ≡ − 1 {\displaystyle 2^{n}\equiv -1} in GF ⁡ ( 2 n + 1 ) {\displaystyle \operatorname {GF} (2^{n}+1)} , and that 2 {\displaystyle {\sqrt {2}}} exists in GF ⁡ ( 2 n + 2 + 1 ) {\displaystyle \operatorname {GF} (2^{n+2}+1)} , where ( 2 ) 2 ( n + 2 ) ≡ − 1 {\displaystyle ({\sqrt {2}})^{2(n+2)}\equiv -1} . For these candidates, θ N ≡ − 1 {\displaystyle \theta ^{N}\equiv -1} in the corresponding finite field, and they therefore act the way we want. The same FFT algorithms can still be used, though, as long as θ is a root of unity of a finite field. To find the FFT/NTT transform, we do the following: C k ′ = f ^ ( k ) = f ^ ( θ − k ( ∑ ( i , j ) : i + j = k a i b j θ k + ∑ ( i , j ) : i + j = k + n a i b j θ n + k ) ) C k + k ′ = f ^ ( k + k ) = f ^ ( ∑ ( i , j ) : i + j = 2 k a i b j θ k + ∑ ( i , j ) : i + j = n + 2 k a i b j θ n + k ) = f ^ ( ∑ ( i , j ) : i + j = 2 k a i b j θ k + ∑ ( i , j ) : i + j = 2 k + n a i b j θ n + k ) = f ^ ( A k ← k ) ∙ f ^ ( B k ← k ) + f ^ ( A k ← k + n ) ∙ f ^ ( B k ← k + n ) {\displaystyle {\begin{aligned}C_{k}'&={\hat {f}}(k)={\hat {f}}\left(\theta ^{-k}\left(\sum _{(i,j):i+j=k}a_{i}b_{j}\theta ^{k}+\sum _{(i,j):i+j=k+n}a_{i}b_{j}\theta ^{n+k}\right)\right)\\[6pt]C_{k+k}'&={\hat {f}}(k+k)={\hat {f}}\left(\sum _{(i,j):i+j=2k}a_{i}b_{j}\theta ^{k}+\sum _{(i,j):i+j=n+2k}a_{i}b_{j}\theta ^{n+k}\right)\\[6pt]&={\hat {f}}\left(\sum _{(i,j):i+j=2k}a_{i}b_{j}\theta ^{k}+\sum _{(i,j):i+j=2k+n}a_{i}b_{j}\theta ^{n+k}\right)\\[6pt]&={\hat {f}}\left(A_{k\leftarrow k}\right)\bullet {\hat {f}}(B_{k\leftarrow k})+{\hat {f}}(A_{k\leftarrow k+n})\bullet {\hat {f}}(B_{k\leftarrow k+n})\end{aligned}}} The first product gives a contribution to c k {\displaystyle c_{k}} for each k. The second gives a contribution to c k {\displaystyle c_{k}} due to ( i + j ) {\displaystyle (i+j)} wrapping mod n {\displaystyle n} . To do the inverse: C k = 2 − m f − 1 ^ ( θ − k C k + k ′ ) {\displaystyle C_{k}=2^{-m}{\hat {f^{-1}}}(\theta ^{-k}C_{k+k}')} or C k = f − 1 ^ ( θ − k C k + k ′ ) {\displaystyle C_{k}={\hat {f^{-1}}}(\theta ^{-k}C_{k+k}')} depending on whether the data needs to be normalized. One multiplies by 2 − m {\displaystyle 2^{-m}} to normalize FFT data into a specific range, where 1 n ≡ 2 − m mod N ( n ) {\displaystyle {\frac {1}{n}}\equiv 2^{-m}{\bmod {N}}(n)} , with m found using the modular multiplicative inverse.
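To make the block-splitting-plus-convolution structure concrete, here is a minimal Python sketch (not part of the algorithm's standard presentation; the function names and the parameters M, D are illustrative). It computes the block convolution naively in O(D²); the whole point of Schönhage–Strassen is to replace that step with the modular FFT/NTT described above.

```python
# Sketch of integer multiplication via block convolution: the skeleton
# that Schoenhage-Strassen accelerates. The inner double loop is the
# step the real algorithm replaces with an NTT modulo 2^(n') + 1.

def to_blocks(x, M, D):
    """Split integer x into D little-endian blocks of M bits each."""
    mask = (1 << M) - 1
    return [(x >> (M * i)) & mask for i in range(D)]

def multiply_via_convolution(a, b, M=8, D=16):
    A = to_blocks(a, M, D)
    B = to_blocks(b, M, D)
    # Acyclic convolution C_k = sum over i + j = k of A_i * B_j.
    C = [0] * (2 * D - 1)
    for i in range(D):
        for j in range(D):
            C[i + j] += A[i] * B[j]
    # Recombine with carries: ab = sum_k C_k * 2^(M*k).
    return sum(c << (M * k) for k, c in enumerate(C))

a, b = 123456789, 987654321
assert multiply_via_convolution(a, b) == a * b
```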
== Implementation details == === Why N = 2M + 1 mod N === In the Schönhage–Strassen algorithm, N = 2 M + 1 {\displaystyle N=2^{M}+1} . This should be thought of as a binary tree, where one has values in 0 ≤ index ≤ 2 M = 2 i + j {\displaystyle 0\leq {\text{index}}\leq 2^{M}=2^{i+j}} . By letting K ∈ [ 0 , M ] {\displaystyle K\in [0,M]} , for each K one can find all i + j = K {\displaystyle i+j=K} , and group all ( i , j ) {\displaystyle (i,j)} pairs into M different groups. Using i + j = k {\displaystyle i+j=k} to group ( i , j ) {\displaystyle (i,j)} pairs through convolution is a classical problem in algorithms. Having this in mind, N = 2 M + 1 {\displaystyle N=2^{M}+1} helps us to group ( i , j ) {\displaystyle (i,j)} into M 2 k {\displaystyle {\frac {M}{2^{k}}}} groups for each group of subtasks in depth k in a tree with N = 2 M 2 k + 1 {\displaystyle N=2^{\frac {M}{2^{k}}}+1} . Notice that N = 2 M + 1 = 2 2 L + 1 {\displaystyle N=2^{M}+1=2^{2^{L}}+1} , for some L. This makes N a Fermat number. When doing mod N = 2 M + 1 = 2 2 L + 1 {\displaystyle N=2^{M}+1=2^{2^{L}}+1} , we have a Fermat ring. Because some Fermat numbers are Fermat primes, one can in some cases avoid calculations. There are other N that could have been used, of course, with the same prime number advantages. By letting N = 2 k − 1 {\displaystyle N=2^{k}-1} , one has the maximal number representable in a binary number with k bits. N = 2 k − 1 {\displaystyle N=2^{k}-1} is a Mersenne number, which in some cases is a Mersenne prime. It is a natural candidate against the Fermat number N = 2 2 L + 1 {\displaystyle N=2^{2^{L}}+1} . ==== In search of another N ==== Doing several mod calculations against different N can be helpful when it comes to solving integer product. By using the Chinese remainder theorem, after splitting M into smaller different types of N, one can find the answer of the multiplication xy. Fermat numbers and Mersenne numbers are just two types of numbers in something called the generalized Fermat Mersenne number (GSM), with formula: G q , p , n = ∑ i = 1 p q ( p − i ) n = q p n − 1 q n − 1 {\displaystyle G_{q,p,n}=\sum _{i=1}^{p}q^{(p-i)n}={\frac {q^{pn}-1}{q^{n}-1}}} M p , n = G 2 , p , n {\displaystyle M_{p,n}=G_{2,p,n}} In this formula, M 2 , 2 k {\displaystyle M_{2,2^{k}}} is a Fermat number, and M p , 1 {\displaystyle M_{p,1}} is a Mersenne number. This formula can be used to generate sets of equations that can be used in the CRT (Chinese remainder theorem): g ( M p , n − 1 ) 2 ≡ − 1 ( mod M p , n ) {\displaystyle g^{\frac {(M_{p,n}-1)}{2}}\equiv -1{\pmod {M_{p,n}}}} , where g is a number such that there exists an x where x 2 ≡ g ( mod M p , n ) {\displaystyle x^{2}\equiv g{\pmod {M_{p,n}}}} , assuming N = 2 n {\displaystyle N=2^{n}} . Furthermore, g 2 ( p − 1 ) n − 1 ≡ a 2 n − 1 ( mod M p , n ) {\displaystyle g^{2^{(p-1)n}-1}\equiv a^{2^{n}-1}{\pmod {M_{p,n}}}} , where a is an element that generates elements in { 1 , 2 , 4 , . . .2 n − 1 , 2 n } {\displaystyle \{1,2,4,...2^{n-1},2^{n}\}} in a cyclic manner. If N = 2 t {\displaystyle N=2^{t}} , where 1 ≤ t ≤ n {\displaystyle 1\leq t\leq n} , then g t = a ( 2 n − 1 ) 2 n − t {\displaystyle g_{t}=a^{(2^{n}-1)2^{n-t}}} . === How to choose K for a specific N === The following formula is helpful for finding a proper K (the number of groups to divide N bits into), given bit size N, by calculating efficiency: E = 2 N K + k n {\displaystyle E={\frac {{\frac {2N}{K}}+k}{n}}} N is bit size (the one used in 2 N + 1 {\displaystyle 2^{N}+1} ) at the outermost level.
K gives N K {\displaystyle {\frac {N}{K}}} groups of bits, where K = 2 k {\displaystyle K=2^{k}} . n is found through N, K and k by finding the smallest x such that 2 N / K + k ≤ n = K 2 x {\displaystyle 2N/K+k\leq n=K2^{x}} If one assumes efficiency above 50%, n 2 ≤ 2 N K , K ≤ n {\displaystyle {\frac {n}{2}}\leq {\frac {2N}{K}},K\leq n} , and k is very small compared to the rest of the formula, one gets K ≤ 2 N {\displaystyle K\leq 2{\sqrt {N}}} This means that when the transform is very efficient, K is bounded above by 2 N {\displaystyle 2{\sqrt {N}}} , or asymptotically bounded above by N {\displaystyle {\sqrt {N}}} . === Pseudocode === In overview, the standard modular Schönhage–Strassen multiplication algorithm (with some optimizations) is built from the following components: T3MUL = Toom–Cook multiplication, SMUL = Schönhage–Strassen multiplication, and Evaluate = FFT/IFFT. === Further study === For implementation details, one can read the book Prime Numbers: A Computational Perspective. This variant differs somewhat from Schönhage's original method in that it exploits the discrete weighted transform to perform negacyclic convolutions more efficiently. Another source for detailed information is Knuth's The Art of Computer Programming. == Optimizations == This section explains a number of important practical optimizations, when implementing Schönhage–Strassen. === Use of other multiplication algorithms inside the algorithm === Below a certain cutoff point, it's more efficient to use other multiplication algorithms, such as Toom–Cook multiplication. === Square root of 2 trick === The idea is to use 2 {\displaystyle {\sqrt {2}}} as a root of unity of order 2 n + 2 {\displaystyle 2^{n+2}} in the finite field G F ( 2 n + 2 + 1 ) {\displaystyle \mathrm {GF} (2^{n+2}+1)} (it is a solution to the equation θ 2 n + 2 ≡ 1 ( mod 2 n + 2 + 1 ) {\displaystyle \theta ^{2^{n+2}}\equiv 1{\pmod {2^{n+2}+1}}} ), when weighting values in the NTT (number theoretic transform) approach. It has been shown to save 10% in integer multiplication time. === Granlund's trick === By letting m = N + h {\displaystyle m=N+h} , one can compute u v mod 2 N + 1 {\displaystyle uv{\bmod {2^{N}+1}}} and ( u mod 2 h ) ( v mod 2 h ) {\displaystyle (u{\bmod {2^{h}}})(v{\bmod {2}}^{h})} , and combine them using the CRT (Chinese remainder theorem) to find the exact value of the product uv. == References ==
Wikipedia/Schönhage–Strassen_algorithm
The Fast-Folding Algorithm (FFA) is a computational method primarily utilized in the domain of astronomy for detecting periodic signals. FFA is designed to reveal repeating or cyclical patterns by "folding" data, which involves dividing the data set into numerous segments, aligning these segments to a common phase, and summing them together to enhance the signal of periodic events. This algorithm is particularly advantageous when dealing with non-uniformly sampled data or signals with a drifting period, that is, signals whose frequency or period drifts over time, so that the cycles are not stable and consistent. A quintessential application of FFA is in the detection and analysis of pulsars (highly magnetized, rotating neutron stars that emit beams of electromagnetic radiation). By employing FFA, astronomers can effectively sift through noisy data to identify the regular pulses of radiation emitted by these celestial bodies. Moreover, the Fast-Folding Algorithm is instrumental in detecting long-period signals, which is often a challenge for other algorithms like the FFT (fast Fourier transform) that operate under the assumption of a constant frequency. Through the process of folding and summing data segments, FFA provides a robust mechanism for unveiling periodicities despite noisy observational data, thereby playing a pivotal role in advancing our understanding of pulsar properties and behaviors. == History of the FFA == The Fast Folding Algorithm (FFA) has its roots dating back to 1969 when it was introduced by Professor David H. Staelin from the Massachusetts Institute of Technology (MIT). At the time, the scientific community was deeply involved in the study of pulsars, which are rapidly rotating neutron stars emitting beams of electromagnetic radiation. Professor Staelin recognized the potential of the FFA as a powerful instrument for detecting periodic signals within these pulsar surveys. These surveys were not just about understanding pulsars but held a much broader significance. They played a pivotal role in testing and validating Einstein's theory of general relativity, a cornerstone in the field of astronomy. As the years progressed, the FFA saw various refinements, with researchers making tweaks and optimizations to enhance its efficiency and accuracy. Despite its potential, the FFA was mostly underutilized owing to the dominance of Fast Fourier Transform (FFT)-based techniques, which were the preferred choice for many in signal processing during that era. As a result, while the FFA showed promise, its applications in the broader scientific community remained limited for several decades. == Technical Foundations of the FFA == The Fast Folding Algorithm (FFA) was initially developed as a method to search for periodic signals amidst noise in the time domain, contrasting with the FFT search technique that operates in the frequency domain. The primary advantage of the FFA is its efficiency in avoiding redundant summations (unnecessary additional computations). Specifically, the FFA is much faster than standard folding at all possible trial periods, achieving this by performing summations through N×log2(N/p−1) steps rather than N×(N/p−1). This efficiency arises because the logarithmic term log2(N/p−1) grows much slower than the linear term (N/p−1), making the number of steps more manageable as N increases. Here N represents the number of samples in the time series, and p is the trial folding period in units of samples.
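For reference, the basic operation being counted here is plain epoch folding at a single trial period. The following minimal Python sketch (illustrative names, synthetic data; not from the original surveys) shows that building block; the FFA's saving comes from sharing the partial sums across many nearby trial periods rather than redoing this work per period.

```python
import numpy as np

def fold(series, p):
    """Naive folding: sum the series into p phase bins (period p, in samples)."""
    n = (len(series) // p) * p            # drop the ragged tail
    segments = series[:n].reshape(-1, p)  # one row per full period
    return segments.sum(axis=0)           # aligned phase bins add up

rng = np.random.default_rng(0)
p_true = 50
t = np.arange(10_000)
signal = (t % p_true == 0) * 2.0          # weak periodic pulse train
noisy = signal + rng.normal(0.0, 1.0, t.size)

profile = fold(noisy, p_true)
print(profile.argmax())  # the pulse phase (bin 0) stands out after folding
```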
The FFA method involves folding each time series at multiple periods, performing partial summations in a series of log2(p) stages, and combining those sums to fold the data with a trial period between p and p+1. This approach retains all harmonic structures, making it especially effective for identifying narrow-pulsed signals in the long-period regime. One of the FFA's unique features is its hierarchical approach to folding, breaking the data down into smaller chunks, folding these chunks, and then combining them. This method, combined with its inherent tolerance to noise and adaptability for different types of data and hardware configurations, ensures the FFA remains a powerful tool for detecting periodic signals, especially in environments with significant noise or interference, which makes it especially useful for astronomical endeavours. In signal processing, the fast folding algorithm is an efficient algorithm for the detection of approximately-periodic events within time series data. It computes superpositions of the signal modulo various window sizes simultaneously. The FFA is best known for its use in the detection of pulsars, as popularised by SETI@home and Astropulse. It was also used by the Breakthrough Listen Initiative during their 2023 Investigation for Periodic Spectral Signals campaign. == See also == Pulsar == References == == External links == The search for unknown pulsars
Wikipedia/Fast_folding_algorithm
In computational mathematics, the Hadamard ordered fast Walsh–Hadamard transform (FWHTh) is an efficient algorithm to compute the Walsh–Hadamard transform (WHT). A naive implementation of the WHT of order n = 2 m {\displaystyle n=2^{m}} would have a computational complexity of O( n 2 {\displaystyle n^{2}} ). The FWHTh requires only n log ⁡ n {\displaystyle n\log n} additions or subtractions. The FWHTh is a divide-and-conquer algorithm that recursively breaks down a WHT of size n {\displaystyle n} into two smaller WHTs of size n / 2 {\displaystyle n/2} . This implementation follows the recursive definition of the 2 m × 2 m {\displaystyle 2^{m}\times 2^{m}} Hadamard matrix H m {\displaystyle H_{m}} : H m = 1 2 ( H m − 1 H m − 1 H m − 1 − H m − 1 ) . {\displaystyle H_{m}={\frac {1}{\sqrt {2}}}{\begin{pmatrix}H_{m-1}&H_{m-1}\\H_{m-1}&-H_{m-1}\end{pmatrix}}.} The 1 / 2 {\displaystyle 1/{\sqrt {2}}} normalization factors for each stage may be grouped together or even omitted. The sequency-ordered, also known as Walsh-ordered, fast Walsh–Hadamard transform, FWHTw, is obtained by computing the FWHTh as above, and then rearranging the outputs. A simple fast nonrecursive implementation of the Walsh–Hadamard transform follows from decomposition of the Hadamard transform matrix as H m = A m {\displaystyle H_{m}=A^{m}} , where A is an m-th root of H m {\displaystyle H_{m}} . == Python example code == == See also == Fast Fourier transform == References == == External links == Charles Constantine Gumas, A century old, the fast Hadamard transform proves useful in digital communications
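The "Python example code" section above is empty in this copy of the article. A minimal sketch of what it might contain, assuming the standard in-place butterfly formulation with the 1/√2 normalization factors omitted (as the article notes is permissible), is:

```python
def fwht(a):
    """In-place Hadamard-ordered fast Walsh-Hadamard transform.

    `a` is a list whose length is a power of two; the 1/sqrt(2)
    normalization factors are omitted, as noted above.
    """
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y  # butterfly: sum and difference
        h *= 2
    return a

print(fwht([1, 0, 1, 0, 0, 1, 1, 0]))  # [4, 2, 0, -2, 0, 2, 0, 2]
```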
Wikipedia/Fast_Walsh–Hadamard_transform
In mathematics, the Fourier transform on finite groups is a generalization of the discrete Fourier transform from cyclic to arbitrary finite groups. == Definitions == The Fourier transform of a function f : G → C {\displaystyle f:G\to \mathbb {C} } at a representation ϱ : G → G L d ϱ ( C ) {\displaystyle \varrho :G\to \mathrm {GL} _{d_{\varrho }}(\mathbb {C} )} of G {\displaystyle G} is f ^ ( ϱ ) = ∑ a ∈ G f ( a ) ϱ ( a ) . {\displaystyle {\widehat {f}}(\varrho )=\sum _{a\in G}f(a)\varrho (a).} For each representation ϱ {\displaystyle \varrho } of G {\displaystyle G} , f ^ ( ϱ ) {\displaystyle {\widehat {f}}(\varrho )} is a d ϱ × d ϱ {\displaystyle d_{\varrho }\times d_{\varrho }} matrix, where d ϱ {\displaystyle d_{\varrho }} is the degree of ϱ {\displaystyle \varrho } . Let G ^ {\displaystyle {\widehat {G}}} be the complete set of inequivalent irreducible representations of G {\displaystyle G} . Then the inverse Fourier transform at an element a {\displaystyle a} of G {\displaystyle G} is given by f ( a ) = 1 | G | ∑ ϱ ∈ G ^ d ϱ T r ( ϱ ( a − 1 ) f ^ ( ϱ ) ) . {\displaystyle f(a)={\frac {1}{|G|}}\sum _{\varrho \in {\widehat {G}}}d_{\varrho }\mathrm {Tr} \left(\varrho (a^{-1}){\widehat {f}}(\varrho )\right).} == Properties == === Transform of a convolution === The convolution of two functions f , g : G → C {\displaystyle f,g:G\to \mathbb {C} } is defined as ( f ∗ g ) ( a ) = ∑ b ∈ G f ( a b − 1 ) g ( b ) . {\displaystyle (f\ast g)(a)=\sum _{b\in G}f\!\left(ab^{-1}\right)g(b).} The Fourier transform of a convolution at any representation ϱ {\displaystyle \varrho } of G {\displaystyle G} is given by f ∗ g ^ ( ϱ ) = f ^ ( ϱ ) g ^ ( ϱ ) . {\displaystyle {\widehat {f\ast g}}(\varrho )={\hat {f}}(\varrho ){\hat {g}}(\varrho ).} === Plancherel formula === For functions f , g : G → C {\displaystyle f,g:G\to \mathbb {C} } , the Plancherel formula states ∑ a ∈ G f ( a − 1 ) g ( a ) = 1 | G | ∑ i d ϱ i Tr ( f ^ ( ϱ i ) g ^ ( ϱ i ) ) , {\displaystyle \sum _{a\in G}f(a^{-1})g(a)={\frac {1}{|G|}}\sum _{i}d_{\varrho _{i}}{\text{Tr}}\left({\hat {f}}(\varrho _{i}){\hat {g}}(\varrho _{i})\right),} where ϱ i {\displaystyle \varrho _{i}} are the irreducible representations of G {\displaystyle G} . == Fourier transform for finite abelian groups == If the group G is a finite abelian group, the situation simplifies considerably: all irreducible representations ϱ i {\displaystyle \varrho _{i}} are of degree 1 and hence equal to the irreducible characters of the group. Thus the matrix-valued Fourier transform becomes scalar-valued in this case. The set of irreducible G-representations has a natural group structure in its own right, which can be identified with the group G ^ := H o m ( G , S 1 ) {\displaystyle {\widehat {G}}:=\mathrm {Hom} (G,S^{1})} of group homomorphisms from G to S 1 = { z ∈ C , | z | = 1 } {\displaystyle S^{1}=\{z\in \mathbb {C} ,|z|=1\}} . This group is known as the Pontryagin dual of G. The Fourier transform of a function f : G → C {\displaystyle f:G\to \mathbb {C} } is the function f ^ : G ^ → C {\displaystyle {\widehat {f}}:{\widehat {G}}\to \mathbb {C} } given by f ^ ( χ ) = ∑ a ∈ G f ( a ) χ ¯ ( a ) . {\displaystyle {\widehat {f}}(\chi )=\sum _{a\in G}f(a){\bar {\chi }}(a).} The inverse Fourier transform is then given by f ( a ) = 1 | G | ∑ χ ∈ G ^ f ^ ( χ ) χ ( a ) . 
{\displaystyle f(a)={\frac {1}{|G|}}\sum _{\chi \in {\widehat {G}}}{\widehat {f}}(\chi )\chi (a).} For G = Z / n Z {\displaystyle G=\mathbb {Z} /n\mathbb {Z} } , a choice of a primitive n-th root of unity ζ {\displaystyle \zeta } yields an isomorphism G → G ^ , {\displaystyle G\to {\widehat {G}},} given by m ↦ ( r ↦ ζ m r ) {\displaystyle m\mapsto (r\mapsto \zeta ^{mr})} . In the literature, the common choice is ζ = e 2 π i / n {\displaystyle \zeta =e^{2\pi i/n}} , which explains the formula given in the article about the discrete Fourier transform. However, such an isomorphism is not canonical, similarly to the situation that a finite-dimensional vector space is isomorphic to its dual, but giving an isomorphism requires choosing a basis. A property that is often useful in probability is that the Fourier transform of the uniform distribution is simply δ a , 0 {\displaystyle \delta _{a,0}} , where 0 is the group identity and δ i , j {\displaystyle \delta _{i,j}} is the Kronecker delta. The Fourier transform can also be done on cosets of a group. == Relationship with representation theory == There is a direct relationship between the Fourier transform on finite groups and the representation theory of finite groups. The set of complex-valued functions on a finite group, G {\displaystyle G} , together with the operations of pointwise addition and convolution, form a ring that is naturally identified with the group ring of G {\displaystyle G} over the complex numbers, C [ G ] {\displaystyle \mathbb {C} [G]} . Modules of this ring are the same thing as representations. Maschke's theorem implies that C [ G ] {\displaystyle \mathbb {C} [G]} is a semisimple ring, so by the Artin–Wedderburn theorem it decomposes as a direct product of matrix rings. The Fourier transform on finite groups explicitly exhibits this decomposition, with a matrix ring of dimension d ϱ {\displaystyle d_{\varrho }} for each irreducible representation. More specifically, the Peter-Weyl theorem (for finite groups) states that there is an isomorphism C [ G ] ≅ ⨁ i E n d ( V i ) {\displaystyle \mathbb {C} [G]\cong \bigoplus _{i}\mathrm {End} (V_{i})} given by ∑ g ∈ G a g g ↦ ( ∑ a g ρ i ( g ) : V i → V i ) {\displaystyle \sum _{g\in G}a_{g}g\mapsto \left(\sum a_{g}\rho _{i}(g):V_{i}\to V_{i}\right)} The left hand side is the group algebra of G. The direct sum is over a complete set of inequivalent irreducible G-representations ϱ i : G → G L ( V i ) {\displaystyle \varrho _{i}:G\to \mathrm {GL} (V_{i})} . The Fourier transform for a finite group is just this isomorphism. The product formula mentioned above is equivalent to saying that this map is a ring isomorphism. == Over other fields == The above representation theoretic decomposition can be generalized to fields k {\displaystyle k} other than C {\displaystyle \mathbb {C} } as long as char ( k ) ∤ | G | {\displaystyle {\text{char}}(k)\nmid |G|} via Maschke's theorem. That is, the group algebra k [ G ] {\displaystyle k[G]} is semisimple. The same formulas may be used for the Fourier transform and its inverse, as crucially 1 | G | {\displaystyle {\frac {1}{|G|}}} is defined in k {\displaystyle k} . == Modular case == When char ( k ) ∣ | G | {\displaystyle {\text{char}}(k)\mid |G|} , k [ G ] {\displaystyle k[G]} is no longer semisimple and one must consider the modular representation theory of G {\displaystyle G} over k {\displaystyle k} . We can still decompose the group algebra into blocks via the Peirce decomposition using idempotents.
That is, k [ G ] ≅ ⨁ i k [ G ] e i {\displaystyle k[G]\cong \bigoplus _{i}k[G]e_{i}} and 1 = ∑ i e i {\displaystyle 1=\sum _{i}e_{i}} is a decomposition of the identity into central, primitive, orthogonal idempotents. Choosing a basis for the blocks span k { g e i | g ∈ G } {\displaystyle {\text{span}}_{k}\{ge_{i}|g\in G\}} and writing the projection maps v ↦ v e i {\displaystyle v\mapsto ve_{i}} as a matrix yields the modular DFT matrix. For example, for the symmetric group, the idempotents of F p [ S n ] {\displaystyle F_{p}[S_{n}]} are computed in Murphy and explicitly in SageMath. == Unitarity == One can normalize the above definition to obtain f ^ ( ρ ) = d ρ | G | ∑ g ∈ G f ( g ) ρ ( g ) {\displaystyle {\hat {f}}(\rho )={\sqrt {\frac {d_{\rho }}{|G|}}}\sum _{g\in G}f(g)\rho (g)} with inverse f ( g ) = 1 | G | ∑ ρ ∈ G ^ d ρ T r ( f ^ ( ρ ) ρ − 1 ( g ) ) {\displaystyle f(g)={\frac {1}{\sqrt {|G|}}}\sum _{\rho \in {\widehat {G}}}{\sqrt {d_{\rho }}}\mathrm {Tr} ({\hat {f}}(\rho )\rho ^{-1}(g))} . Two representations are considered equivalent if one may be obtained from the other by a change of basis. This is an equivalence relation, and each equivalence class contains a unitary representation. The unitary representations can be obtained via Weyl's unitarian trick in characteristic zero. If G ^ {\displaystyle {\widehat {G}}} consists of unitary representations, then the corresponding DFT will be unitary. Over finite fields F q 2 {\displaystyle F_{q^{2}}} , it is possible to find a change of basis in certain cases, for example for the symmetric group, by decomposing the matrix U {\displaystyle U} associated to a G {\displaystyle G} -invariant symmetric bilinear form as U = A A ∗ {\displaystyle U=AA^{*}} , where ∗ {\displaystyle ^{*}} denotes conjugate-transpose with respect to x ↦ x q {\displaystyle x\mapsto x^{q}} conjugation. The unitary representation is given by A ∗ ρ ( g ) A ∗ − 1 {\displaystyle A^{*}\rho (g)A^{*-1}} . To obtain the unitary DFT, note that, as defined above, D F T . D F T ∗ = S {\displaystyle DFT.DFT^{*}=S} , where S {\displaystyle S} is a diagonal matrix consisting of +1's and -1's. We can factor S = R R ∗ {\displaystyle S=RR^{*}} by factoring each sign c i = z i z i ∗ {\displaystyle c_{i}=z_{i}z_{i}^{*}} . Then u D F T = R − 1 . D F T {\displaystyle uDFT=R^{-1}.DFT} is unitary. == Applications == This generalization of the discrete Fourier transform is used in numerical analysis. A circulant matrix is a matrix where every column is a cyclic shift of the previous one. Circulant matrices can be diagonalized quickly using the fast Fourier transform, and this yields a fast method for solving systems of linear equations with circulant matrices. Similarly, the Fourier transform on arbitrary groups can be used to give fast algorithms for matrices with other symmetries (Åhlander & Munthe-Kaas 2005). These algorithms can be used for the construction of numerical methods for solving partial differential equations that preserve the symmetries of the equations (Munthe-Kaas 2006). When applied to the Boolean group ( Z / 2 Z ) n {\displaystyle (\mathbb {Z} /2\mathbb {Z} )^{n}} , the Fourier transform on this group is the Hadamard transform, which is commonly used in quantum computing and other fields. Shor's algorithm uses both the Hadamard transform (by applying a Hadamard gate to every qubit) as well as the quantum Fourier transform.
The former considers the qubits as indexed by the group ( Z / 2 Z ) n {\displaystyle (\mathbb {Z} /2\mathbb {Z} )^{n}} and the latter considers them as indexed by Z / 2 n Z {\displaystyle \mathbb {Z} /2^{n}\mathbb {Z} } for the purpose of the Fourier transform on finite groups. == See also == Fourier transform Least-squares spectral analysis Representation theory of finite groups Character theory == References == == Further reading ==
Wikipedia/Fourier_transform_on_finite_groups
Bruun's algorithm is a fast Fourier transform (FFT) algorithm based on an unusual recursive polynomial-factorization approach, proposed for powers of two by G. Bruun in 1978 and generalized to arbitrary even composite sizes by H. Murakami in 1996. Because its operations involve only real coefficients until the last computation stage, it was initially proposed as a way to efficiently compute the discrete Fourier transform (DFT) of real data. Bruun's algorithm has not seen widespread use, however, as approaches based on the ordinary Cooley–Tukey FFT algorithm have been successfully adapted to real data with at least as much efficiency. Furthermore, there is evidence that Bruun's algorithm may be intrinsically less accurate than Cooley–Tukey in the face of finite numerical precision (Storn 1993). Nevertheless, Bruun's algorithm illustrates an alternative algorithmic framework that can express both itself and the Cooley–Tukey algorithm, and thus provides an interesting perspective on FFTs that permits mixtures of the two algorithms and other generalizations. == A polynomial approach to the DFT == Recall that the DFT is defined by the formula: X k = ∑ n = 0 N − 1 x n e − 2 π i N n k k = 0 , … , N − 1. {\displaystyle X_{k}=\sum _{n=0}^{N-1}x_{n}e^{-{\frac {2\pi i}{N}}nk}\qquad k=0,\dots ,N-1.} For convenience, let us denote the N roots of unity by ωNn (n = 0, ..., N − 1): ω N n = e − 2 π i N n {\displaystyle \omega _{N}^{n}=e^{-{\frac {2\pi i}{N}}n}} and define the polynomial x(z) whose coefficients are xn: x ( z ) = ∑ n = 0 N − 1 x n z n . {\displaystyle x(z)=\sum _{n=0}^{N-1}x_{n}z^{n}.} The DFT can then be understood as a reduction of this polynomial; that is, Xk is given by: X k = x ( ω N k ) = x ( z ) mod ( z − ω N k ) {\displaystyle X_{k}=x(\omega _{N}^{k})=x(z)\mod (z-\omega _{N}^{k})} where mod denotes the polynomial remainder operation. The key to fast algorithms like Bruun's or Cooley–Tukey comes from the fact that one can perform this set of N remainder operations in recursive stages. == Recursive factorizations and FFTs == In order to compute the DFT, we need to evaluate the remainder of x ( z ) {\displaystyle x(z)} modulo N degree-1 polynomials as described above. Evaluating these remainders one by one is equivalent to evaluating the usual DFT formula directly, and requires O(N2) operations. However, one can combine these remainders recursively to reduce the cost, using the following trick: if we want to evaluate x ( z ) {\displaystyle x(z)} modulo two polynomials U ( z ) {\displaystyle U(z)} and V ( z ) {\displaystyle V(z)} , we can first take the remainder modulo their product U ( z ) {\displaystyle U(z)} V ( z ) {\displaystyle V(z)} , which reduces the degree of the polynomial x ( z ) {\displaystyle x(z)} and makes subsequent modulo operations less computationally expensive. The product of all of the monomials ( z − ω N k ) {\displaystyle (z-\omega _{N}^{k})} for k=0..N-1 is simply z N − 1 {\displaystyle z^{N}-1} (whose roots are clearly the N roots of unity). One then wishes to find a recursive factorization of z N − 1 {\displaystyle z^{N}-1} into polynomials of few terms and smaller and smaller degree. To compute the DFT, one takes x ( z ) {\displaystyle x(z)} modulo each level of this factorization in turn, recursively, until one arrives at the monomials and the final result.
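This polynomial view can be checked numerically in a few lines. The following NumPy sketch (illustrative only, not an efficient FFT) evaluates X_k = x(ω^k) directly, then verifies one radix-2 remainder stage of the kind described below:

```python
import numpy as np

N = 8
x = np.random.default_rng(1).standard_normal(N)
w = np.exp(-2j * np.pi / N)          # principal Nth root of unity

# X_k is the remainder of x(z) modulo (z - w^k), i.e. x(w^k).
# np.polyval takes coefficients highest-degree first, hence x[::-1].
X_eval = np.array([np.polyval(x[::-1], w**k) for k in range(N)])
assert np.allclose(X_eval, np.fft.fft(x))

# One remainder stage: z^N - 1 = (z^(N/2) - 1)(z^(N/2) + 1), so
# reducing x(z) modulo each factor needs only additions/subtractions.
low, high = x[:N // 2], x[N // 2:]
x_mod_minus = low + high             # x(z) mod (z^(N/2) - 1)
x_mod_plus = low - high              # x(z) mod (z^(N/2) + 1)

# The first remainder evaluated at the even roots gives X_0, X_2, ...
X_even = np.array([np.polyval(x_mod_minus[::-1], w**(2 * k))
                   for k in range(N // 2)])
assert np.allclose(X_even, np.fft.fft(x)[0::2])
```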
If each level of the factorization splits every polynomial into an O(1) (constant-bounded) number of smaller polynomials, each with an O(1) number of nonzero coefficients, then the modulo operations for that level take O(N) time; since there will be a logarithmic number of levels, the overall complexity is O(N log N). More explicitly, suppose for example that z N − 1 = F 1 ( z ) F 2 ( z ) F 3 ( z ) {\displaystyle z^{N}-1=F_{1}(z)F_{2}(z)F_{3}(z)} , and that F k ( z ) = F k , 1 ( z ) F k , 2 ( z ) {\displaystyle F_{k}(z)=F_{k,1}(z)F_{k,2}(z)} , and so on. The corresponding FFT algorithm would consist of first computing xk(z) = x(z) mod Fk(z), then computing xk,j(z) = xk(z) mod Fk,j(z), and so on, recursively creating more and more remainder polynomials of smaller and smaller degree until one arrives at the final degree-0 results. Moreover, as long as the polynomial factors at each stage are relatively prime (which for polynomials means that they have no common roots), one can construct a dual algorithm by reversing the process with the Chinese remainder theorem. === Cooley–Tukey as polynomial factorization === The standard decimation-in-frequency (DIF) radix-r Cooley–Tukey algorithm corresponds closely to a recursive factorization. For example, radix-2 DIF Cooley–Tukey factors z N − 1 {\displaystyle z^{N}-1} into F 1 = ( z N / 2 − 1 ) {\displaystyle F_{1}=(z^{N/2}-1)} and F 2 = ( z N / 2 + 1 ) {\displaystyle F_{2}=(z^{N/2}+1)} . These modulo operations reduce the degree of x ( z ) {\displaystyle x(z)} by a factor of 2, which corresponds to dividing the problem size by 2. Instead of recursively factorizing F 2 {\displaystyle F_{2}} directly, Cooley–Tukey first computes x2(z ωN), shifting all the roots (by a twiddle factor) so that it can apply the recursive factorization of F 1 {\displaystyle F_{1}} to both subproblems. That is, Cooley–Tukey ensures that all subproblems are also DFTs, whereas this is not generally true for an arbitrary recursive factorization (such as Bruun's, below). == The Bruun factorization == The basic Bruun algorithm for powers of two N = 2^n factorizes z^(2^n) − 1 recursively via the rules: z 2 M − 1 = ( z M − 1 ) ( z M + 1 ) {\displaystyle z^{2M}-1=(z^{M}-1)(z^{M}+1)\,} z 4 M + a z 2 M + 1 = ( z 2 M + 2 − a z M + 1 ) ( z 2 M − 2 − a z M + 1 ) {\displaystyle z^{4M}+az^{2M}+1=(z^{2M}+{\sqrt {2-a}}z^{M}+1)(z^{2M}-{\sqrt {2-a}}z^{M}+1)} where a is a real constant with |a| ≤ 2. If a = 2 cos ⁡ ( ϕ ) {\displaystyle a=2\cos(\phi )} , ϕ ∈ ( 0 , π ) {\displaystyle \phi \in (0,\pi )} , then 2 + a = 2 cos ⁡ ϕ 2 {\displaystyle {\sqrt {2+a}}=2\cos {\tfrac {\phi }{2}}} and 2 − a = 2 cos ⁡ ( π 2 − ϕ 2 ) {\displaystyle {\sqrt {2-a}}=2\cos({\tfrac {\pi }{2}}-{\tfrac {\phi }{2}})} .
At stage s, s = 0, 1, 2, …, n−1, the intermediate state consists of 2^s polynomials p s , 0 , … , p s , 2 s − 1 {\displaystyle p_{s,0},\dots ,p_{s,2^{s}-1}} of degree 2^(n−s) − 1 or less, where p s , 0 ( z ) = p ( z ) mod ( z 2 n − s − 1 ) and p s , m ( z ) = p ( z ) mod ( z 2 n − s − 2 cos ⁡ ( m 2 s π ) z 2 n − 1 − s + 1 ) m = 1 , 2 , … , 2 s − 1 {\displaystyle {\begin{aligned}p_{s,0}(z)&=p(z)\mod \left(z^{2^{n-s}}-1\right)&\quad &{\text{and}}\\p_{s,m}(z)&=p(z)\mod \left(z^{2^{n-s}}-2\cos \left({\tfrac {m}{2^{s}}}\pi \right)z^{2^{n-1-s}}+1\right)&m&=1,2,\dots ,2^{s}-1\end{aligned}}} By the construction of the factorization of z^(2^n) − 1, the polynomials p_{s,m}(z) each encode 2^(n−s) values X k = p ( e 2 π i k 2 n ) {\displaystyle X_{k}=p(e^{2\pi i{\tfrac {k}{2^{n}}}})} of the Fourier transform; for m = 0, the covered indices are k = 0, 2^s, 2·2^s, 3·2^s, …, (2^(n−s) − 1)·2^s, and for m > 0 the covered indices are k = m, 2^(s+1) − m, 2^(s+1) + m, 2·2^(s+1) − m, 2·2^(s+1) + m, …, 2^n − m. During the transition to the next stage, the polynomial p s , ℓ ( z ) {\displaystyle p_{s,\ell }(z)} is reduced to the polynomials p s + 1 , ℓ ( z ) {\displaystyle p_{s+1,\ell }(z)} and p s + 1 , 2 s + 1 − ℓ ( z ) {\displaystyle p_{s+1,2^{s+1}-\ell }(z)} via polynomial division (for ℓ = 0 the two parts are p s + 1 , 0 ( z ) {\displaystyle p_{s+1,0}(z)} and p s + 1 , 2 s ( z ) {\displaystyle p_{s+1,2^{s}}(z)} ). If one wants to keep the polynomials in increasing index order, this pattern requires an implementation with two arrays. An in-place implementation produces a predictable, but highly unordered, sequence of indices; for example, for N = 16 the final order of the 8 linear remainders is (0, 4, 2, 6, 1, 7, 3, 5). At the end of the recursion, for s = n−1, there remain 2^(n−1) linear polynomials, encoding two Fourier coefficients X_0 and X_{2^(n−1)} for the first polynomial and, for any other kth polynomial, the coefficients X_k and X_{2^n − k}. At each recursive stage, all of the polynomials of the common degree 4M−1 are reduced to two parts of half the degree, 2M−1. The divisor of this polynomial remainder computation is a quadratic polynomial in z^M, so that all reductions can be reduced to polynomial divisions of cubic by quadratic polynomials. There are N/2 = 2^(n−1) of these small divisions at each stage, leading to an O(N log N) algorithm for the FFT. Moreover, since all of these polynomials have purely real coefficients (until the very last stage), they automatically exploit the special case where the inputs x_n are purely real to save roughly a factor of two in computation and storage. One can also take straightforward advantage of the case of real-symmetric data for computing the discrete cosine transform (Chen & Sorensen 1992). === Generalization to arbitrary radices === The Bruun factorization, and thus the Bruun FFT algorithm, was generalized to handle arbitrary even composite lengths, i.e. dividing the polynomial degree by an arbitrary radix (factor), as follows. First, we define a set of polynomials φN,α(z) for positive integers N and for α in [0, 1) by: ϕ N , α ( z ) = { z 2 N − 2 cos ⁡ ( 2 π α ) z N + 1 if 0 < α < 1 z 2 N − 1 if α = 0 {\displaystyle \phi _{N,\alpha }(z)={\begin{cases}z^{2N}-2\cos(2\pi \alpha )z^{N}+1&{\text{if }}0<\alpha <1\\\\z^{2N}-1&{\text{if }}\alpha =0\end{cases}}} Note that all of the polynomials that appear in the Bruun factorization above can be written in this form.
The zeroes of these polynomials are e 2 π i ( ± α + k ) / N {\displaystyle e^{2\pi i(\pm \alpha +k)/N}} for k = 0 , 1 , … , N − 1 {\displaystyle k=0,1,\dots ,N-1} in the α ≠ 0 {\displaystyle \alpha \neq 0} case, and e 2 π i k / 2 N {\displaystyle e^{2\pi ik/2N}} for k = 0 , 1 , … , 2 N − 1 {\displaystyle k=0,1,\dots ,2N-1} in the α = 0 {\displaystyle \alpha =0} case. Hence these polynomials can be recursively factorized for a factor (radix) r via: ϕ r M , α ( z ) = { ∏ ℓ = 0 r − 1 ϕ M , ( α + ℓ ) / r if 0 < α ≤ 0.5 ∏ ℓ = 0 r − 1 ϕ M , ( 1 − α + ℓ ) / r if 0.5 < α < 1 ∏ ℓ = 0 r − 1 ϕ M , ℓ / ( 2 r ) if α = 0 {\displaystyle \phi _{rM,\alpha }(z)={\begin{cases}\prod _{\ell =0}^{r-1}\phi _{M,(\alpha +\ell )/r}&{\text{if }}0<\alpha \leq 0.5\\\\\prod _{\ell =0}^{r-1}\phi _{M,(1-\alpha +\ell )/r}&{\text{if }}0.5<\alpha <1\\\\\prod _{\ell =0}^{r-1}\phi _{M,\ell /(2r)}&{\text{if }}\alpha =0\end{cases}}}
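The factorization rule is easy to sanity-check numerically. The following short Python snippet (an illustration of ours; the helper names are assumptions) builds φN,α as a coefficient array and verifies the radix-r identity with NumPy polynomial multiplication:

import numpy as np

def phi(N, alpha):
    # Coefficients of phi_{N,alpha}(z), highest degree first.
    p = np.zeros(2 * N + 1)
    p[0] = 1.0
    if alpha == 0:
        p[-1] = -1.0                             # z^(2N) - 1
    else:
        p[N] = -2.0 * np.cos(2 * np.pi * alpha)  # z^(2N) - 2cos(2 pi alpha) z^N + 1
        p[-1] = 1.0
    return p

def phi_split(r, M, alpha):
    # Right-hand side of the radix-r factorization of phi_{rM,alpha}.
    if alpha == 0:
        parts = [phi(M, l / (2 * r)) for l in range(r)]
    elif alpha <= 0.5:
        parts = [phi(M, (alpha + l) / r) for l in range(r)]
    else:
        parts = [phi(M, (1 - alpha + l) / r) for l in range(r)]
    out = np.array([1.0])
    for q in parts:
        out = np.polymul(out, q)
    return out

for alpha in (0.0, 0.2, 0.5, 0.8):
    print(np.allclose(phi(12, alpha), phi_split(3, 4, alpha)))   # expect: True

Each case of the rule simply redistributes the same set of zeroes e^(2πi(±α+k)/N) among the r smaller factors.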
Wikipedia/Bruun's_FFT_algorithm
The chirp Z-transform (CZT) is a generalization of the discrete Fourier transform (DFT). While the DFT samples the Z plane at uniformly spaced points along the unit circle, the chirp Z-transform samples along spiral arcs in the Z-plane, corresponding to straight lines in the S plane. The DFT, real DFT, and zoom DFT can be calculated as special cases of the CZT. Specifically, the chirp Z-transform calculates the Z transform at a finite number of points zk along a logarithmic spiral contour, defined as: X k = ∑ n = 0 N − 1 x ( n ) z k − n {\displaystyle X_{k}=\sum _{n=0}^{N-1}x(n)z_{k}^{-n}} z k = A ⋅ W − k , k = 0 , 1 , … , M − 1 {\displaystyle z_{k}=A\cdot W^{-k},k=0,1,\dots ,M-1} where A is the complex starting point, W is the complex ratio between points, and M is the number of points to calculate. Like the DFT, the chirp Z-transform can be computed in O(n log n) operations where n = max ( M , N ) {\displaystyle n=\max(M,N)} . An O(N log N) algorithm for the inverse chirp Z-transform (ICZT) was described in 2003, and again in 2019. == Bluestein's algorithm == Bluestein's algorithm expresses the CZT as a convolution and implements it efficiently using FFT/IFFT. As the DFT is a special case of the CZT, this allows the efficient calculation of discrete Fourier transforms (DFTs) of arbitrary sizes, including prime sizes. (The other algorithm for FFTs of prime sizes, Rader's algorithm, also works by rewriting the DFT as a convolution.) It was conceived in 1968 by Leo Bluestein. Bluestein's algorithm can be used to compute more general transforms than the DFT, based on the (unilateral) z-transform (Rabiner et al., 1969). Recall that the DFT is defined by the formula X k = ∑ n = 0 N − 1 x n e − 2 π i N n k k = 0 , … , N − 1. {\displaystyle X_{k}=\sum _{n=0}^{N-1}x_{n}e^{-{\frac {2\pi i}{N}}nk}\qquad k=0,\dots ,N-1.} If we replace the product nk in the exponent by the identity n k = − ( k − n ) 2 2 + n 2 2 + k 2 2 {\displaystyle nk={\frac {-(k-n)^{2}}{2}}+{\frac {n^{2}}{2}}+{\frac {k^{2}}{2}}} we thus obtain: X k = e − π i N k 2 ∑ n = 0 N − 1 ( x n e − π i N n 2 ) e π i N ( k − n ) 2 k = 0 , … , N − 1. {\displaystyle X_{k}=e^{-{\frac {\pi i}{N}}k^{2}}\sum _{n=0}^{N-1}\left(x_{n}e^{-{\frac {\pi i}{N}}n^{2}}\right)e^{{\frac {\pi i}{N}}(k-n)^{2}}\qquad k=0,\dots ,N-1.} This summation is precisely a convolution of the two sequences an and bn defined by: a n = x n e − π i N n 2 {\displaystyle a_{n}=x_{n}e^{-{\frac {\pi i}{N}}n^{2}}} b n = e π i N n 2 , {\displaystyle b_{n}=e^{{\frac {\pi i}{N}}n^{2}},} with the output of the convolution multiplied by N phase factors bk*. That is: X k = b k ∗ ( ∑ n = 0 N − 1 a n b k − n ) k = 0 , … , N − 1. {\displaystyle X_{k}=b_{k}^{*}\left(\sum _{n=0}^{N-1}a_{n}b_{k-n}\right)\qquad k=0,\dots ,N-1.} This convolution, in turn, can be performed with a pair of FFTs (plus the pre-computed FFT of the complex chirp bn) via the convolution theorem. The key point is that these FFTs are not of length N: such a convolution can be computed exactly from FFTs only by zero-padding it to a length greater than or equal to 2N–1. In particular, one can pad to a power of two or some other highly composite size, for which the FFT can be efficiently performed by e.g. the Cooley–Tukey algorithm in O(N log N) time. Thus, Bluestein's algorithm provides an O(N log N) way to compute prime-size DFTs, albeit several times slower than the Cooley–Tukey algorithm for composite sizes. The use of zero-padding for the convolution in Bluestein's algorithm deserves some additional comment.
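Before examining the padding in detail, the whole pipeline is compact enough to sketch. The following Python fragment is a minimal illustration of ours (not a reference implementation; numpy.fft merely stands in for any efficient power-of-two FFT, and the wrap-around layout of the chirp array B is exactly the construction described next):

import numpy as np

def bluestein_dft(x):
    # DFT of arbitrary length N via Bluestein's chirp identity.
    N = len(x)
    n = np.arange(N)
    b = np.exp(1j * np.pi * n**2 / N)       # chirp b_n = e^(i pi n^2 / N)
    a = np.asarray(x) / b                   # a_n = x_n e^(-i pi n^2 / N)
    M = 1 << (2 * N - 2).bit_length()       # power of two, at least 2N - 1
    A = np.zeros(M, dtype=complex)
    A[:N] = a                               # zero-padded a
    B = np.zeros(M, dtype=complex)
    B[:N] = b
    B[M - N + 1:] = b[1:][::-1]             # wrap b_{-n} = b_n around to index M - n
    conv = np.fft.ifft(np.fft.fft(A) * np.fft.fft(B))
    return conv[:N] / b                     # multiply by b_k^* (note |b_k| = 1)

x = np.random.rand(7)                       # a prime size
print(np.allclose(bluestein_dft(x), np.fft.fft(x)))   # expect: True

Only the first N outputs of the length-M circular convolution are kept; the padding guarantees that they agree with the desired linear convolution.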
Suppose we zero-pad to a length M ≥ 2N–1. This means that an is extended to an array An of length M, where An = an for 0 ≤ n < N and An = 0 otherwise—the usual meaning of "zero-padding". However, because of the bk–n term in the convolution, both positive and negative values of n are required for bn (noting that b–n = bn). The periodic boundaries implied by the DFT of the zero-padded array mean that –n is equivalent to M–n. Thus, bn is extended to an array Bn of length M, where B0 = b0, Bn = BM–n = bn for 0 < n < N, and Bn = 0 otherwise. A and B are then FFTed, multiplied pointwise, and inverse FFTed to obtain the convolution of a and b, according to the usual convolution theorem. Let us also be more precise about what type of convolution is required in Bluestein's algorithm for the DFT. If the sequence bn were periodic in n with period N, then it would be a cyclic convolution of length N, and the zero-padding would be for computational convenience only. However, this is not generally the case: b n + N = e π i N ( n + N ) 2 = b n [ e π i N ( 2 N n + N 2 ) ] = ( − 1 ) N b n . {\displaystyle b_{n+N}=e^{{\frac {\pi i}{N}}(n+N)^{2}}=b_{n}\left[e^{{\frac {\pi i}{N}}(2Nn+N^{2})}\right]=(-1)^{N}b_{n}.} Therefore, for N even the convolution is cyclic, but in this case N is composite and one would normally use a more efficient FFT algorithm such as Cooley–Tukey. For N odd, however, bn is antiperiodic and we technically have a negacyclic convolution of length N. Such distinctions disappear, however, when one zero-pads an to a length of at least 2N−1 as described above. It is perhaps easiest, therefore, to think of it as a subset of the outputs of a simple linear convolution (i.e. no conceptual "extensions" of the data, periodic or otherwise). == z-transforms == Bluestein's algorithm can also be used to compute a more general transform based on the (unilateral) z-transform (Rabiner et al., 1969). In particular, it can compute any transform of the form: X k = ∑ n = 0 N − 1 x n z n k k = 0 , … , M − 1 , {\displaystyle X_{k}=\sum _{n=0}^{N-1}x_{n}z^{nk}\qquad k=0,\dots ,M-1,} for an arbitrary complex number z and for differing numbers N and M of inputs and outputs. Given Bluestein's algorithm, such a transform can be used, for example, to obtain a more finely spaced interpolation of some portion of the spectrum (although the frequency resolution is still limited by the total sampling time, similar to a Zoom FFT), enhance arbitrary poles in transfer-function analyses, etc. The algorithm was dubbed the chirp z-transform algorithm because, for the Fourier-transform case (|z| = 1), the sequence bn from above is a complex sinusoid of linearly increasing frequency, which is called a (linear) chirp in radar systems. == See also == Fractional Fourier transform == References == === General === Leo I. Bluestein, "A linear filtering approach to the computation of the discrete Fourier transform," Northeast Electronics Research and Engineering Meeting Record 10, 218-219 (1968). Lawrence R. Rabiner, Ronald W. Schafer, and Charles M. Rader, "The chirp z-transform algorithm and its application," Bell Syst. Tech. J. 48, 1249-1292 (1969). Also published in: Rabiner, Schafer, and Rader, "The chirp z-transform algorithm," IEEE Trans. Audio Electroacoustics 17 (2), 86–92 (1969). D. H. Bailey and P. N. Swarztrauber, "The fractional Fourier transform and applications," SIAM Review 33, 389-404 (1991).
(Note that this terminology for the z-transform is nonstandard: a fractional Fourier transform conventionally refers to an entirely different, continuous transform.) Lawrence Rabiner, "The chirp z-transform algorithm—a lesson in serendipity," IEEE Signal Processing Magazine 21, 118-119 (March 2004). (Historical commentary.) Vladimir Sukhoy and Alexander Stoytchev: "Generalizing the inverse FFT off the unit circle" (Oct 2019). Open access. Vladimir Sukhoy and Alexander Stoytchev: "Numerical error analysis of the ICZT algorithm for chirp contours on the unit circle", Sci Rep 10, 4852 (2020). == External links == A DSP algorithm for frequency analysis - the Chirp-Z Transform (CZT) Solving a 50-year-old puzzle in signal processing, part two
Wikipedia/Chirp-z_algorithm
In signal processing, the overlap–add method is an efficient way to evaluate the discrete convolution of a very long signal x [ n ] {\displaystyle x[n]} with a finite impulse response (FIR) filter h [ n ] {\displaystyle h[n]} : y [ n ] = ∑ m = 1 M h [ m ] ⋅ x [ n − m ] {\displaystyle y[n]=\sum _{m=1}^{M}h[m]\cdot x[n-m]}   (Eq.1) where h [ m ] = 0 {\displaystyle h[m]=0} for m {\displaystyle m} outside the region [ 1 , M ] . {\displaystyle [1,M].} This article uses common abstract notations, such as y ( t ) = x ( t ) ∗ h ( t ) , {\textstyle y(t)=x(t)*h(t),} or y ( t ) = H { x ( t ) } , {\textstyle y(t)={\mathcal {H}}\{x(t)\},} in which it is understood that the functions should be thought of in their totality, rather than at specific instants t {\textstyle t} (see Convolution#Notation). The concept is to divide the problem into multiple convolutions of h [ n ] {\displaystyle h[n]} with short segments of x [ n ] {\displaystyle x[n]} : x k [ n ] ≜ { x [ n + k L ] , n = 1 , 2 , … , L 0 , otherwise , {\displaystyle x_{k}[n]\ \triangleq \ {\begin{cases}x[n+kL],&n=1,2,\ldots ,L\\0,&{\text{otherwise}},\end{cases}}} where L {\displaystyle L} is an arbitrary segment length. Then: x [ n ] = ∑ k x k [ n − k L ] , {\displaystyle x[n]=\sum _{k}x_{k}[n-kL],\,} and y [ n ] {\displaystyle y[n]} can be written as a sum of short convolutions: y [ n ] = ( ∑ k x k [ n − k L ] ) ∗ h [ n ] = ∑ k ( x k [ n − k L ] ∗ h [ n ] ) = ∑ k y k [ n − k L ] , {\displaystyle {\begin{aligned}y[n]=\left(\sum _{k}x_{k}[n-kL]\right)*h[n]&=\sum _{k}\left(x_{k}[n-kL]*h[n]\right)\\&=\sum _{k}y_{k}[n-kL],\end{aligned}}} where the linear convolution y k [ n ] ≜ x k [ n ] ∗ h [ n ] {\displaystyle y_{k}[n]\ \triangleq \ x_{k}[n]*h[n]\,} is zero outside the region [ 1 , L + M − 1 ] . {\displaystyle [1,L+M-1].} And for any parameter N ≥ L + M − 1 , {\displaystyle N\geq L+M-1,\,} it is equivalent to the N {\displaystyle N} -point circular convolution of x k [ n ] {\displaystyle x_{k}[n]\,} with h [ n ] {\displaystyle h[n]\,} in the region [ 1 , N ] . {\displaystyle [1,N].} The advantage is that the circular convolution can be computed more efficiently than linear convolution, according to the circular convolution theorem: y k [ n ] = IDFT N ⁡ ( DFT N ⁡ ( x k [ n ] ) ⋅ DFT N ⁡ ( h [ n ] ) ) {\displaystyle y_{k}[n]=\operatorname {IDFT} _{N}\left(\operatorname {DFT} _{N}(x_{k}[n])\cdot \operatorname {DFT} _{N}(h[n])\right)}   (Eq.2) where: DFTN and IDFTN refer to the Discrete Fourier transform and its inverse, evaluated over N {\displaystyle N} discrete points, and L {\displaystyle L} is customarily chosen such that N = L + M − 1 {\displaystyle N=L+M-1} is an integer power-of-2, and the transforms are implemented with the FFT algorithm, for efficiency. == Pseudocode == The following is pseudocode for the algorithm:

(Overlap-add algorithm for linear convolution)
h = FIR_filter
M = length(h)
Nx = length(x)
N = 8 × 2^ceiling( log2(M) )    (8 times the smallest power of two bigger than filter length M. See next section for a slightly better choice.)
step_size = N - (M-1)           (L in the text above)
H = DFT(h, N)
position = 0
y(1 : Nx + M-1) = 0
while position + step_size ≤ Nx do
    y(position+(1:N)) = y(position+(1:N)) + IDFT(DFT(x(position+(1:step_size)), N) × H)
    position = position + step_size
end

== Efficiency considerations == When the DFT and IDFT are implemented by the FFT algorithm, the pseudocode above requires about N (log2(N) + 1) complex multiplications for the FFT, product of arrays, and IFFT.
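The pseudocode translates almost line for line into NumPy. The following sketch of ours (the function name and the trimming of the final partial block are our additions) also reproduces the per-output-sample cost figure quoted below:

import numpy as np

def overlap_add(x, h):
    # Linear convolution of a long signal x with FIR filter h via overlap-add.
    M, Nx = len(h), len(x)
    N = 8 * 2**int(np.ceil(np.log2(M)))   # FFT size, as in the pseudocode
    step = N - (M - 1)                    # L in the text above
    H = np.fft.fft(h, N)                  # filter spectrum, computed once
    y = np.zeros(Nx + M - 1)
    for pos in range(0, Nx, step):
        Xk = np.fft.fft(x[pos:pos + step], N)   # zero-pads the segment to length N
        yk = np.fft.ifft(Xk * H).real           # N-point circular convolution
        n = min(N, len(y) - pos)                # trim the last, partial block
        y[pos:pos + n] += yk[:n]
    return y

x = np.random.randn(10000)
h = np.random.randn(201)
print(np.allclose(overlap_add(x, h), np.convolve(x, h)))   # expect: True

M, N = 201, 1024
print(N * (np.log2(N) + 1) / (N - M + 1))                  # about 13.67 (Eq.3 below)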
Each iteration produces N-M+1 output samples, so the number of complex multiplications per output sample is about: N ( log 2 ⁡ ( N ) + 1 ) N − M + 1 {\displaystyle {\frac {N(\log _{2}(N)+1)}{N-M+1}}}   (Eq.3) For example, when M = 201 {\displaystyle M=201} and N = 1024 , {\displaystyle N=1024,} Eq.3 equals 13.67 , {\displaystyle 13.67,} whereas direct evaluation of Eq.1 would require up to 201 {\displaystyle 201} complex multiplications per output sample, the worst case being when both x {\displaystyle x} and h {\displaystyle h} are complex-valued. Also note that for any given M , {\displaystyle M,} Eq.3 has a minimum with respect to N . {\displaystyle N.} Figure 2 is a graph of the values of N {\displaystyle N} that minimize Eq.3 for a range of filter lengths ( M {\displaystyle M} ). Instead of Eq.1, we can also consider applying Eq.2 to a long sequence of length N x {\displaystyle N_{x}} samples. The total number of complex multiplications would be: N x ⋅ ( log 2 ⁡ ( N x ) + 1 ) . {\displaystyle N_{x}\cdot (\log _{2}(N_{x})+1).} Comparatively, the number of complex multiplications required by the pseudocode algorithm is: N x ⋅ ( log 2 ⁡ ( N ) + 1 ) ⋅ N N − M + 1 . {\displaystyle N_{x}\cdot (\log _{2}(N)+1)\cdot {\frac {N}{N-M+1}}.} Hence the cost of the overlap–add method scales almost as O ( N x log 2 ⁡ N ) {\displaystyle O\left(N_{x}\log _{2}N\right)} while the cost of a single, large circular convolution is almost O ( N x log 2 ⁡ N x ) {\displaystyle O\left(N_{x}\log _{2}N_{x}\right)} . The two methods are also compared in Figure 3, created by Matlab simulation. The contours are lines of constant ratio of the times it takes to perform both methods. When the overlap-add method is faster, the ratio exceeds 1, and ratios as high as 3 are seen. == See also == Overlap–save method Circular_convolution#Example == Further reading == Oppenheim, Alan V.; Schafer, Ronald W. (1975). Digital signal processing. Englewood Cliffs, N.J.: Prentice-Hall. ISBN 0-13-214635-5. Hayes, M. Horace (1999). Digital Signal Processing. Schaum's Outline Series. New York: McGraw Hill. ISBN 0-07-027389-8. Senobari, Nader Shakibay; Funning, Gareth J.; Keogh, Eamonn; Zhu, Yan; Yeh, Chin-Chia Michael; Zimmerman, Zachary; Mueen, Abdullah (2019). "Super-Efficient Cross-Correlation (SEC-C): A Fast Matched Filtering Code Suitable for Desktop Computers" (PDF). Seismological Research Letters. 90 (1): 322–334. doi:10.1785/0220180122. ISSN 0895-0695.
Wikipedia/Overlap–add_method
Hydrography is the branch of applied sciences which deals with the measurement and description of the physical features of oceans, seas, coastal areas, lakes and rivers, as well as with the prediction of their change over time, for the primary purpose of safety of navigation and in support of all other marine activities, including economic development, security and defense, scientific research, and environmental protection. == History == The origins of hydrography lie in the making of charts to aid navigation, by individual mariners as they navigated into new waters. These were usually the private property, even closely held secrets, of individuals who used them for commercial or military advantage. As transoceanic trade and exploration increased, hydrographic surveys started to be carried out as an exercise in their own right, and the commissioning of surveys was increasingly done by governments and special hydrographic offices. National organizations, particularly navies, realized that the collection, systematization and distribution of this knowledge gave them great organizational and military advantages. Thus were born dedicated national hydrographic organizations for the collection, organization, publication and distribution of hydrography incorporated into charts and sailing directions. Prior to the establishment of the United Kingdom Hydrographic Office, Royal Navy captains were responsible for the provision of their own charts. In practice this meant that ships often sailed with inadequate information for safe navigation, and that when new areas were surveyed, the data rarely reached all those who needed it. The Admiralty appointed Alexander Dalrymple as Hydrographer in 1795, with a remit to gather and distribute charts to HM Ships. Within a year, existing charts from the previous two centuries had been collated, and the first catalog published. The first chart produced under the direction of the Admiralty was a chart of Quiberon Bay in Brittany, which appeared in 1800. Under Captain Thomas Hurd the department received its first professional guidelines, and the first catalogs were published and made available to the public and to other nations as well. In 1829, Rear-Admiral Sir Francis Beaufort, as Hydrographer, developed the eponymous scale, and introduced the first official tide tables in 1833 and the first "Notices to Mariners" in 1834. The Hydrographic Office underwent steady expansion throughout the 19th century; by 1855, the Chart Catalogue listed 1,981 charts giving definitive coverage of the entire world, and the office produced over 130,000 charts annually, of which about half were sold. The word hydrography comes from the Ancient Greek ὕδωρ (hydor), "water", and γράφω (graphō), "to write". == Overview == Large-scale hydrography is usually undertaken by national or international organizations which sponsor data collection through precise surveys and publish charts and descriptive material for navigational purposes. The science of oceanography is, in part, an outgrowth of classical hydrography. In many respects the data are interchangeable, but marine hydrographic data will be particularly directed toward marine navigation and safety of that navigation. Marine resource exploration and exploitation is a significant application of hydrography, principally focused on the search for hydrocarbons. Hydrographical measurements include the tidal, current and wave information of physical oceanography.
They include bottom measurements, with particular emphasis on those marine geographical features that pose a hazard to navigation, such as rocks, shoals, reefs and other features that obstruct ship passage. Bottom measurements also include collection of the nature of the bottom as it pertains to effective anchoring. Unlike oceanography, hydrography will include shore features, natural and manmade, that aid in navigation. Therefore, a hydrographic survey may include the accurate positions and representations of hills, mountains and even lights and towers that will aid in fixing a ship's position, as well as the physical aspects of the sea and seabed. Hydrography, mostly for reasons of safety, adopted a number of conventions that have affected its portrayal of the data on nautical charts. For example, hydrographic charts are designed to portray what is safe for navigation, and therefore will usually tend to maintain least depths and occasionally de-emphasize the actual submarine topography that would be portrayed on bathymetric charts. The former are the mariner's tools to avoid accident. The latter are best representations of the actual seabed, as in a topographic map, for scientific and other purposes. Trends in hydrographic practice since c. 2003–2005 have led to a narrowing of this difference, with many more hydrographic offices maintaining "best observed" databases, and then making navigationally "safe" products as required. This has been coupled with a preference for multi-use surveys, so that the same data collected for nautical charting purposes can also be used for bathymetric portrayal. Even though, in places, hydrographic survey data may be collected in sufficient detail to portray bottom topography in some areas, hydrographic charts only show depth information relevant for safe navigation and should not be considered as a product that accurately portrays the actual shape of the bottom. The soundings selected from the raw source depth data for placement on the nautical chart are selected for safe navigation and are biased to show predominantly the shallowest depths that relate to safe navigation. For instance, if there is a deep area that cannot be reached because it is surrounded by shallow water, the deep area may not be shown. The color-filled areas that show different ranges of shallow water are not the equivalent of contours on a topographic map, since they are often drawn seaward of the actual shallowest depth portrayed. A bathymetric chart, by contrast, does show marine topography accurately. Details covering the above limitations can be found in Part 1 of Bowditch's American Practical Navigator. Another concept that affects safe navigation is the sparsity of detailed depth data from high-resolution sonar systems. In more remote areas, the only available depth information has been collected with lead lines. This collection method drops a weighted line to the bottom at intervals and records the depth, often from a rowboat or sailboat. There are no data between soundings or between sounding lines to guarantee that a hazard such as a wreck or a coral head is not waiting there to ruin a sailor's day. Often, the navigation of the collecting boat does not match today's GPS navigational accuracies. The hydrographic chart will use the best data available and will caveat its nature in a caution note or in the legend of the chart.
A hydrographic survey is quite different from a bathymetric survey in some important respects, particularly in a bias toward least depths, due to the safety requirements of the former and the geomorphologic descriptive requirements of the latter. Historically, this could include echosoundings being conducted under settings biased toward least depths, but in modern practice hydrographic surveys typically attempt to best measure the depths observed, with the adjustments for navigational safety being applied after the fact. Hydrography of streams will include information on the stream bed, flows, water quality and surrounding land. Basin or interior hydrography pays special attention to rivers and potable water; if the collected data are not for ship navigational uses but are intended for scientific usage, the work is more commonly called hydrometry or hydrology. Hydrography of rivers and streams is also an integral part of water management. Most reservoirs in the United States use dedicated stream gauging and rating tables to determine inflows into the reservoir and outflows to irrigation districts, water municipalities and other users of captured water. River and stream hydrographers use handheld and bank-mounted devices to capture the flow rate of moving water through a cross-section, and/or its current. === Equipment === Uncrewed surface vessels (USVs) are commonly used for hydrographic surveys; they are often equipped with some form of sonar. Single-beam echosounders, multibeam echosounders, and side-scan sonars are all frequently used in hydrographic applications. The knowledge gained from these surveys aids in disaster planning, port and harbor maintenance, and various other coastal planning activities. == Organizations == Hydrographic services in most countries are carried out by specialized hydrographic offices. The international coordination of hydrographic efforts lies with the International Hydrographic Organization. The United Kingdom Hydrographic Office is one of the oldest, supplying a wide range of charts covering the globe to other countries, allied military organizations and the public. In the United States, the hydrographic charting function has been carried out since 1807 by the Office of Coast Survey of the National Oceanic and Atmospheric Administration within the U.S. Department of Commerce and by the U.S. Army Corps of Engineers. == See also == Bathymetric chart – Map depicting the submerged terrain of bodies of water Coastal geography – Study of the region between the ocean and the land Challenger expedition – Oceanographic research expedition (1872–1876) Hydrology – Science of the movement, distribution, and quality of water on Earth Hydrometry – Monitoring the components of the hydrological cycle World Hydrography Day – Publicizes the importance of hydrography Associations focussing on ocean hydrography International Federation of Hydrographic Societies (formerly The Hydrographic Society) State Hydrography Service of Georgia The Hydrographic Society of America Australasian Hydrographic Society Associations focussing on river stream and lake hydrography Australian Hydrographic Association New Zealand Hydrological Society American Institute of Hydrology == External links == Hydro International, Lemmer, the Netherlands: Hydrographic Information. ISSN 1385-4569 What is hydrography? National Ocean Service
Wikipedia/Hydrography
Geographic Information Systems (GIS) have become an integral part of aquatic science and limnology. Water by its very nature is dynamic, and features associated with water are thus ever-changing. To be able to keep up with these changes, technological advancements have given scientists methods to enhance all aspects of scientific investigation, from satellite tracking of wildlife to computer mapping of habitats. Agencies like the US Geological Survey and the US Fish and Wildlife Service, as well as other federal and state agencies, are utilizing GIS to aid in their conservation efforts. GIS is being used in multiple fields of aquatic science, from limnology, hydrology, aquatic botany and stream ecology to oceanography and marine biology. Applications include using satellite imagery to identify, monitor and mitigate habitat loss. Imagery can also show the condition of inaccessible areas. Scientists can track movements and develop a strategy to locate areas of concern. GIS can be used to track invasive species, endangered species, and population changes. One of the advantages of the system is the ability for the information to be shared and updated at any time through the use of web-based data collection. == GIS and fish == In the past, GIS was not a practical source of analysis due to the difficulty in obtaining spatial data on habitats or organisms in underwater environments. With the advancement of radio telemetry, hydroacoustic telemetry and side-scan sonar, biologists have been able to track fish species and create databases that can be incorporated into a GIS program to create a geographical representation. Using radio and hydroacoustic telemetry, biologists are able to locate fish and acquire related data for those sites; these data may include substrate samples, temperature, and conductivity. Side-scan sonar allows biologists to map out a river bottom to gain a representation of possible habitats that are used. These two sets of data can be overlaid to delineate the distribution of fish and their habitats. This method has been used in the study of the pallid sturgeon. Over a period of time large amounts of data are collected and can be used to track patterns of migration, spawning locations and preferred habitat. Previously, these data would be mapped and overlaid manually. Now the data can be entered into a GIS program and be layered, organized and analyzed in a way that was not possible in the past. Layering within a GIS program allows a scientist to look at multiple species at once to find possible watersheds that are shared by these species, or to specifically choose one species for further examination. The US Geological Survey (USGS), in cooperation with other agencies, was able to use GIS to help map out habitat areas and movement patterns of pallid sturgeon. At the Columbia Environmental Research Center their effort relies on a customized ArcPad and ArcGIS, both ESRI (Environmental Systems Research Institute) applications, to record sturgeon movements and streamline data collection. A relational database was developed to manage tabular data for each individual sturgeon, including initial capture and reproductive physiology. Movement maps can be created for individual sturgeon. These maps help track the movements of each sturgeon through space and time, allowing researchers to prioritize and schedule field personnel efforts to track, map, and recapture sturgeon. == GIS and macrophytes == Macrophytes are an important part of healthy ecosystems.
They provide habitat, refuge, and food for fish, wildlife, and other organisms. Though naturally occurring species are of great interest, so are the invasive species that occur alongside them in our environment. GIS is being used by agencies and their respective resource managers as a tool to model these important macrophyte species. Through the use of GIS, resource managers can assess the distributions of this important aspect of aquatic environments on a spatial and temporal scale. The ability to track vegetation change through time and space, and to make predictions about vegetation change, is one of the many possibilities of GIS. Accurate maps of the aquatic plant distribution within an aquatic ecosystem are an essential part of resource management. It is possible to predict the likely occurrences of aquatic vegetation. For example, the USGS has created a model for the American wild celery (Vallisneria americana) by developing a statistical model that calculates the probability of submersed aquatic vegetation. They established a web link to an Environmental Systems Research Institute (ESRI) ArcGIS Server website, the Submersed Aquatic Vegetation Model, to make their model predictions available online. These predictions of the distribution of submerged aquatic vegetation can potentially benefit foraging birds by identifying zones for humans to avoid: if it is known where these areas are, birds can be left alone to feed undisturbed. In years when the aquatic vegetation is predicted to be limited in these important wildlife habitats, managers can be alerted. Invasive species have become a great conservation concern for resource managers. GIS allows managers to map out plant locations and abundances. These maps can then be used to determine the threat of these invasive plants and help the managers decide on management strategies. Surveys of these species can be conducted and then downloaded into a GIS system. Coupled with this, native species can be included to determine how these communities interact with each other. By using known data on preexisting invasive species, GIS models could predict future outbreaks by comparing biological factors. The Connecticut Agricultural Experiment Station Invasive Aquatic Species Program (CAES IAPP) is using GIS to evaluate such risk factors, georeferencing plant locations and abundance so that invasive communities can be displayed alongside native species for study and management. == See also == Acoustic tag Animal migration tracking Data storage tag Pop-up satellite archival tag == External links == Smithsonian National Zoological Park Missouri River InfoLINK Fisheries and Aquatics Bulletin Columbia Environmental Research Center Geographical Information Systems (GIS) for Fish Conservation GIS and Fish Population Dynamics ArcNews Online THE CONNECTICUT AGRICULTURAL EXPERIMENT STATION INVASIVE AQUATIC PLANT PROGRAM (CAES IAPP) Using GIS to Map Invasive Aquatic Plants in Connecticut Lakes Upper Midwest Environmental Sciences Center Smart River GIS for Improved Decision Making Archived 2009-04-30 at the Wayback Machine Marine web map tile service (Marine WMTS)
Wikipedia/GIS_and_aquatic_science
The Moderate Resolution Imaging Spectroradiometer (MODIS) is a satellite-based sensor used for earth and climate measurements. There are two MODIS sensors in Earth orbit: one on board the Terra (EOS AM) satellite, launched by NASA in 1999; and one on board the Aqua (EOS PM) satellite, launched in 2002. Since 2011, MODIS operations have been supplemented by VIIRS sensors, such as the one aboard Suomi NPP. The systems often conduct similar operations due to their similar designs and orbits (with VIIRS data systems designed to be compatible with MODIS), though they have subtle differences contributing to similar but not identical uses. The MODIS instruments were built by Santa Barbara Remote Sensing. They capture data in 36 spectral bands ranging in wavelength from 0.4 μm to 14.4 μm and at varying spatial resolutions (2 bands at 250 m, 5 bands at 500 m and 29 bands at 1 km). Together the instruments image the entire Earth every 1 to 2 days. They are designed to provide measurements of large-scale global dynamics, including changes in Earth's cloud cover, radiation budget and processes occurring in the oceans, on land, and in the lower atmosphere. Support and calibration are provided by the MODIS characterization support team (MCST). == Applications == With its high temporal resolution, though low spatial resolution, MODIS data are useful for tracking changes in the landscape over time. Examples of such applications are the monitoring of vegetation health by means of time-series analyses with vegetation indices, long-term land-cover changes (e.g. to monitor deforestation rates), global snow cover trends, water inundation from pluvial, riverine, or sea-level-rise flooding in coastal areas, changes of water levels of major lakes such as the Aral Sea, and the detection and mapping of wildland fires in the United States. The United States Forest Service's Remote Sensing Applications Center analyzes MODIS imagery on a continuous basis to provide information for the management and suppression of wildfires. == Specifications == === Calibration === MODIS utilizes four on-board calibrators in addition to the space view in order to provide in-flight calibration: solar diffuser (SD), solar diffuser stability monitor (SDSM), spectral radiometric calibration assembly (SRCA), and a v-groove black body. MODIS has used the marine optical buoy for vicarious calibration. == MODIS bands == == MODIS data == === MODIS Level 3 datasets === The following MODIS Level 3 (L3) datasets are available from NASA, as processed by the Collection 5 software. == See also == Imaging spectroscopy NASA WorldWind Aqua (satellite) Terra (satellite) Fire Information for Resource Management System == External links == ECHO Reverb – the next-generation metadata and service discovery tool, which has replaced the former Warehouse Inventory and Search Tool (WIST); LAADS Web – Level 1 and Atmosphere Archive and Distribution System (LAADS) web interface; LANCE-MODIS – Land Atmosphere Near real-time Capability for EOS; ladsftp.nascom.nasa.gov – LAADS underlying FTP server; http://e4ftl01.cr.usgs.gov/ – Earth land surface datasets; n4ftl01u.ecs.nasa.gov – snow and ice datasets. Official NASA site MODIS bands and spectral ranges (broken link) (archived 15 July 2007) MODIS Images of the Day MODIS Image of the Day – Google Gadget referring to MODIS image of the day.
Gallery of Images of Interest (archived 25 August 2001) MODIS Land Product Subsetting Tool for North America from Oak Ridge National Laboratory (archived 27 May 2010) MODIS Rapid Response system (near real time images) NASA OnEarth (Web service for MODIS imagery) (archived 12 July 2003) Visible Earth: Latest MODIS images (archived 1 July 2006) MODIS Sinusoidal: Projection 6842 – MODIS Sinusoidal Python: accessing near real-time MODIS images and fire data from NASA's Aqua and Terra satellites (Python)
Wikipedia/Moderate-resolution_imaging_spectroradiometer
Children's geographies is an area of study within human geography and childhood studies which involves researching the places and spaces of children's lives. == Context == Children's geographies is the branch of human geography which deals with the study of places and spaces of children's lives, characterised experientially, politically and ethically. Ever since the cultural turn in geography, there has been recognition that society is not homogeneous but heterogeneous: it is characterized by diversity, differences and subjectivities. While feminist geographers had been able to strengthen the need for examination of gender, class and race as issues affecting women, 'children' (an umbrella term encompassing children, teenagers, youths and young people) still relatively lack a 'frame of reference' in the complexities of 'geographies'. In the act of theorizing children and their geographies, the ways of doing research and the assumed ontological realities often "frame 'children' and 'adults' in ways that impose a bi-polar, hierarchical, and developmental model". This reproduces and enforces the hegemony of adult-centered discourses of children within knowledge production. Children's geographies has developed in academic human geography since the beginning of the 1990s, although there were notable studies in the area before that date. The earliest work done on children's geographies can largely be traced to William Bunge's work on the spatial oppression of children in Detroit and Toronto, where children are deemed the ones who suffer the most under an oppressive adult framework of social, cultural and political forces controlling the urban built environment. This development emerged from the realisation that previously human geography had largely ignored the everyday lives of children, who (obviously) form a significant section of society, and who have specific needs and capacities, and who may experience the world in very different ways. Thus children's geographies can in part be seen in parallel to an interest in gender in geography and feminist geography, in so much as their starting points were the gender blindness of mainstream academic geography. Children's geographies also shares many of the underpinning principles of Childhood Studies (and the so-called New Social Studies of Childhood) and Sociology of the family - namely, that childhood is a social construction and that scholars should pay greater attention to children's voices and agency - although recent 'new wave' scholarship has challenged these principles (Kraftl, 2013). Children's geographies rests on the idea that children as a social group share certain characteristics which are experientially, politically and ethically significant and which are worthy of study. The pluralisation in the title is intended to imply that children's lives will be markedly different in differing times and places and in differing circumstances such as gender, family, and class. The current developments in children's geographies are attempting to link the frame of analysing children's geographies to one that requires multiple perspectives and the willingness to acknowledge the 'multiplicity' of their geographies. Children's geographies is sometimes coupled with, and yet distinguished from, the geographies of childhood. The former has an interest in the everyday lives of children; the latter has an interest in how (adult) society conceives of the very idea of childhood and how this impinges on children's lives in many ways.
This includes imaginations about the nature of children and the related (spatial) implications. In an early article, Holloway and Valentine termed these 'spatial discourses'. Children's geographies can be observed through the various lenses provided by different foci, hence the plurality inspired by post-modern and post-structural social geographers (Panelli, 2009). These foci include, but are not limited to: the history of its emergence (key authors and texts), the nature of the child (geographical concepts, family contexts, society contexts, gender variation, age-based variation, cultural variation), children in the environment (home, school, play, neighbourhood, street, city, country, landscapes of consumption, cyberspace), designing environments for children (children as planners, utopian visions), environmental hazards (traffic, health and environment, accidents), indirect experience of place (not medium specific, literature, T.V. and cyberspace), social issues (children's fears, parents' fears for their children, poverty and deprivation, work, migration, social hazards, crime and deviance), citizenship and agency (environmental action, local politics, interest in the environment), and children's geographical knowledge (environmental cognition, understanding the physical environment) (McKendrick, 2000). Also, the methodologies of researching children's worlds and the ethics of doing so have been distinguished by the otherness of childhood. There is now a journal dedicated to work in the subdiscipline, Children's Geographies, which will give readers a good idea of the growing range of issues, theories and methodologies of this developing and vibrant sub-discipline. Another relevant journal is Children, Youth and Environments, published as an interdisciplinary tri-annual with a worldwide readership. === Theoretical trends in children's geographies === For some years, critics argued that scholarship in children's geographies was characterised by a lack of theoretical diversity and 'block politics'. However, since the mid-2000s, the subdiscipline has seen a proliferation and diversification of theoretical work away from the social constructivist principles of childhood studies and the New Social Studies of Childhood. A major, influential trend has been the development of Non-representational theory by children's geographers, and especially scholars such as Peter Kraftl, John Horton, Matej Blazek, Veronica Pacini-Ketchabaw, Affrica Taylor, Pauliina Rautio and Kim Kullman. This work shares many theoretical influences with a so-called 'new wave' of childhood studies, and especially the influence of poststructural, new materialist and feminist theorists such as Gilles Deleuze, Rosi Braidotti, Donna Haraway and Jane Bennett. For instance, in a series of articles, John Horton and Peter Kraftl have challenged a sense of 'what matters' in scholarship with children - from the material objects, emotions and affects that characterise 'participation' to the ways in which our embodied engagements with place in childhood are carried forward into adulthood, thereby scrambling any neat notion of 'transition' from childhood to adulthood. Elsewhere, Veronica Pacini-Ketchabaw and Affrica Taylor have developed innovative approaches to understanding the 'common worlds' of children and a range of nonhuman species, including both domestic and 'wild' animals.
Their vibrant 'common worlds' research collective brings together a range of scholars who seek to explore how children's lives are entangled with those of nonhumans in ways that challenge oppressive, colonial and/or neoliberal views of the human as an individuated subject somehow distanced from 'nature'. Recently, there has been vibrant debate about the political value of nonrepresentational approaches to childhood. Some scholars argue that nonrepresentational theories encourage a focus upon the banal, everyday, ephemeral and small-scale at the expense of understanding and critically interrogating wider-scaled and longer-standing processes of marginalisation. Others argue that, whilst valid, nonrepresentational and 'new wave' approaches extend beyond the small-scale, offering useful and in some cases fundamental ways to critically and creatively re-think the ways that we do research with children and their 'common worlds'. A second key conceptual trend has been in work on subjectivity, children's political geography and emotion. For instance, Louise Holt (2013) uses the work of Judith Butler to critically examine the emergence of the infant as a 'subject' through power relations that are often gendered, as well as how infanthood is a stage in the lifecourse that is subject to particular kinds of social construction. Elsewhere, there has been a surge in interest in children's political geographies, which has to some extent been informed both by developments in nonrepresentational theory and in theories of subjectivity. Central to this scholarship (especially in the work of Tracey Skelton, Kirsi Pauliina Kallio and Jouni Hakli) has been a move beyond a traditional concern with children's participation in decision-making processes to highlight the range of ways in which they may be 'political' - from 'micropolitical' engagements with ethnic or social difference in the school or the street to critical considerations of major policy documents such as the Convention on the Rights of the Child. Louise Holt's work on subjectivity also connects with a wider, ongoing interest in the emotional geography of childhood and youth (Bartos, 2012; Blazek, 2013), which, although overlapping with interests in nonrepresentational children's geographies, also has its roots in feminist theory. Notably, such approaches informed seminal texts that were important to the early development of children's geographies, particularly in Sarah Holloway's work on parenting and local childcare cultures. More recently, there has been a reinvigoration of interest in parenting, some of which has driven theorisations of the emotions that characterise the intimacy of parent/carer-child relationships - especially where these are cut across by policies designed to intervene into the spaces of parenting. This work has therefore been crucial in linking together the apparently small-scale concerns of intimate family life with 'bigger' concerns such as government policy-making and school-based interventions. == Children in the environment == Since the age range assumed to constitute childhood is quite vague within the cumulative research of children's geographies, it is evident that the multitude of environments children experience will be quite broad. The array of spaces and places experienced by children includes, but is not restricted to, homes, schools, playgrounds, neighbourhoods, streets, cities, countries, landscapes of consumption, and cyberspace.
As environment has been noted by a multitude of social geographers to entail a socio-spatial aspect, it is important to note that over time the recognition of the multiplicity of the term "environment" has both diverged and converged as social geography has evolved (Valentine, 2001; Bowlby, 2001). === Children at school === Although schools are a relatively large institution in society, it has been noted that this environment has received little recognition in comparison to institutions of health (Collins and Coleman, 2008). Collins and Coleman also note the centrality of schools in everyday life, as they are "found in almost every urban and suburban neighbourhood" and most children spend a considerable time within this environment in their day-to-day lives. The role of this environment in a child's life is pivotal to their development, especially in respect of the inclusionary and exclusionary processes of society experienced firsthand in schools (MacCrae, Maguire and Milbourne, 2002). The manifestation of social exclusion as bullying is an inter-personal socio-spatial aspect whose implications have been extensively researched, both within school boundaries and as it is enabled by technology (Olweus and Limber, 2010; Black, Washington, Trent, Harner and Pollock, 2009). School, therefore, is not only a place where children learn quantifiable subjects, but also a learning ground for the life and interaction skills needed later on. Research in children's geographies has been central to the development of scholarship on 'geographies of education'. For many commentators, this work - which spans Social geography, Cultural geography, Political geography and Urban geography - does not (yet) constitute an identifiable subdiscipline of human geography. However, geographers have held an enduring concern with education spaces, extending to and beyond school, as Collins and Coleman identify. This work has burgeoned in recent years, with a number of special issues dedicated to education and emotion, embodiment and the cultural geographies of education. Yet, as Holloway et al. (2010) argue, the role and significance of children, young people and families have been underplayed in debate on geographies of education. As they argue, children's geographers have not only undertaken a huge range of research in schools, but that work has been central in developing geographers' understandings both of education spaces more widely and of schools in particular. ==== As an institution ==== The implications of home schooling have largely been a field of assumptions, taking after common myths (Romanowki, 2010), although later work by geographers has examined in considerable detail the significance of space, place, emotion and materiality to the experiences of homeschoolers. The variance between public and private sector institutions and the implications of the social status of children within the school community have also been a contentious field (Nissan and Carter, 2010).
Whilst the majority of research by children's and youth geographers on education has focussed on institutions like schools and universities, that work has been challenged in a number of ways by scholarship on the geographies of alternative education. Examining a diverse range of non-state-funded, explicitly 'alternative' education spaces in the UK (like homeschooling, Waldorf education, Montessori education, forest schools and care farming), Peter Kraftl examines the connections and disconnections between 'mainstream' and 'alternative' education sectors. Drawing on nonrepresentational children's geographies, he explores how alternative educators work to intervene into children's bodily habits, how they create spaces in which mess and disorder are valorised, and how they work with conceptions of 'nature' that both resonate with and, critically, counter mainstream assumptions about children's disengagement with 'nature' in Western societies (see Nature deficit disorder). In doing so, alternative educators are attempting to create 'alter-childhoods' - alternative constructions, imaginations and ways of treating childhood that are knowingly different from a perceived mainstream. ==== Relevance to social interactions ==== As children grow they look to the influential adults in their lives for guidance (parents, caregivers and teachers). Most researchers and adults alike agree that communication is key to healthy child development across all modal environments, especially within schools (Lasky, 2000; Hargreaves, 2000; Hargreaves and Fullan, 1998; Hargreaves and Lasky, 2004). Lasky's focus remains on the cultural and emotional dynamic between teachers and the parents of their students, whereas Hargreaves repeatedly shows through his data the significant improvement in children's performance at school when communication between teachers and parents/caregivers takes place on an equal footing. Where there may be a lack of influential adults, children may look to older age groups within the school environment to observe acceptable behaviours and attention-seeking behaviours. Research has begun to identify the components of the "high quality experience" provided by controlled school-based mentoring relationships (Ahrens et al. 2011). However, other research disputes that the experience is as helpful as it is claimed to be, suggesting child-mentoring situations often fall short or are only temporarily beneficial (Spencer, 2007; Pryce, 2012). Pryce's research highlights that the mentor's attunement to the child's needs strongly dictates how beneficial the mentoring relationship is. ==== Relevance of technology ==== The introduction of technology into children's lives has provided a new platform upon which the school environment is no longer contained within a single space: the place's previous temporal and geographical constrictions have been mobilized by the use of the Internet. The outcomes of this mobilization have been both constructive and destructive, in the availability of material to learning children (Sancho, 2004) and in more extrapersonal interactions among children. The educational benefit of I.C.T. (Interactive Computer Technology) in the classroom has been a subject supported by various researchers (Aviram and Talmi, 2004). ==== Relevance to the creation of social identities ==== The school is an institution in which children observe one another and experiment continuously with their self-image (Hernandez, 2004).
Hernandez's research identified a need to recognize children as individuals, and to incorporate their "personal maps" into the educational process, so that the gap between the school environment and the external environment does not widen dangerously. Schools are also central to social geography. Public institutions in Canada and the USA were defined as "nation-building institutions, which sought to create common citizens from ethnically, linguistically and religiously diverse populations" (Moore, 2000; Sweet, 1997). The connection between nation-building and public education has sustained the view that schools shape the knowledge and identities of children (Collins and Coleman, 2008). Whether the connection is seen to create negative, destructive social norms or a positive construction of progressive values is dependent "on one's broader political/moral compass" (Collins, 2006; Hunter 1991). == See also == Children's culture Children's street culture Cultural geography Feminist geography Home zone Student transport
Wikipedia/Children's_geographies
In business intelligence, location intelligence (LI), or spatial intelligence, is the process of deriving meaningful insight from geospatial data relationships to solve a particular problem. It involves layering multiple data sets spatially and/or chronologically, for easy reference on a map, and its applications span industries, categories and organizations. Maps have been used to represent information throughout the ages, but what might be considered the first example of true location 'intelligence' occurred in London in 1854, when John Snow debunked theories about the spread of cholera by overlaying a map of the area with the locations of water pumps, narrowing the source to a single water pump. This layering of information over a map made it possible to identify relationships between different sets of geospatial data. Location or geographical information system (GIS) tools enable spatial experts to collect, store, analyze and visualize data. Location intelligence experts can use a variety of spatial and business analytical tools to measure optimal locations for operating a business or providing a service. Location intelligence experts begin by defining the business ecosystem, which has many interconnected economic influences. Such economic influences include, but are not limited to, culture, lifestyle, labor, healthcare, cost of living, crime, economic climate and education. == Further definitions == The term "location intelligence" is often used to describe the people, data and technology employed to geographically "map" information. These mapping applications, like Polaris Intelligence, can transform large amounts of data linked to location (e.g. POIs, demographics, geofences) into color-coded visual representations (heat maps and thematic maps of variables of interest) that make it easy to see trends and generate meaningful intelligence. The creation of location intelligence is directed by domain knowledge, formal frameworks, and a focus on decision support. Location cuts across everything - devices, platforms, software and apps - and is one of the most important ingredients of understanding context, together with social data, mobile data, user data and sensor data. Location intelligence is also used to describe the integration of a geographical component into business intelligence processes and tools, often incorporating spatial database and spatial OLAP tools. In 2012, Wayne Gearey from the real estate industry (JLL) offered the first applied course on location intelligence at the University of Texas at Dallas, in which he defined location intelligence as the process for selecting the optimal location that will support workplace success and address a variety of business and financial objectives. Pitney Bowes MapInfo Corporation describes location intelligence as follows: "Spatial information, commonly known as "Location", relates to involving, or having the nature of where. Spatial is not constrained to a geographic location however most common business uses of spatial information deal with how spatial information is tied to a location on the earth. Merriam-Webster defines Intelligence as "The ability to learn or understand, or the ability to apply knowledge to manipulate one's environment." Combining these terms alludes to how you achieve an understanding of the spatial aspect of information and apply it to achieve a significant competitive advantage." Definition by Esri is as follows: "Location intelligence is achieved via visualization and analysis of data.
By adding layers of geographic data—such as demographics, traffic, and weather—to a smart map or dashboard, organizations can use intelligence tools to identify where an event has taken place, understand why it is happening, and gain insight into what caused it." Yankee Group, in its white paper "Location Intelligence in Retail Banking", defines it as "...a business management term that refers to spatial data visualization, contextualization and analytical capabilities applied to solve a business problem." == Commercial applications == Location intelligence is used by a broad range of industries to improve overall business results. Applications include: Communications and telecommunications: network planning and design, boundary identification, identifying new customer markets. Financial services: optimize branch locations, market analysis, share of wallet and cross-sell activities, mergers & acquisitions, industry sector analysis, risk management. Government: census updates, law enforcement crime analysis, emergency response, environmental and land management, electoral redistricting, tax jurisdiction assignment, urban planning. Healthcare: site selection, market segmentation, network analysis, growth assessments, spread of disease. Higher education: student recruitment, alumni & donor tracking, campus mapping. Hotels and restaurants: customer profile analysis, site selection, target marketing, expansion planning. Insurance: address validation, underwriting and risk management, claims management, marketing and sales analysis, market penetration studies. K-12: school site selection, enrollment planning, school attendance area modification (boundary change), school consolidation, district consolidation, student achievement plotting. Media: target market identification, subscriber demographics, media planning. Real estate: site reports, comprehensive site analysis, demographic analysis, growth pattern analysis, retail modeling, presentation-quality maps. Retail: site selection, maximize per-store sales, identify under-performing stores, market analysis, retail leakage and supply gap analysis. Transportation: transport planning, route monitoring. == See also == Geographic information system (GIS) Geomarketing == References ==
Wikipedia/Spatial_intelligence_(business_method)
Photozincography, sometimes referred to as heliozincography (essentially the same process) and known commercially as zinco, is a photographic process developed by Sir Henry James FRS (1803–1877) in the mid-nineteenth century. This method enabled the accurate reproduction of images, manuscript text and outline engravings, which proved invaluable when it was first used to create maps for the Ordnance Survey of Great Britain in the 1850s, carried out by the government's Topographical Department, headed by Colonel Sir Henry James. == Basis == The foundation of this method is the insolubility of bichromate of potash (potassium dichromate) upon exposure to light, which allows the printing of images onto zinc from photographic negatives. == Method == At this time, high-contrast negatives were made using the wet plate collodion method (a solution of nitrocellulose in ether or acetone on glass). Once the negative had been made, a sheet of thin tracing paper was coated in a mixture of saturated potassium bichromate solution and gum water, and dried. This was then placed under the photographic negative and exposed to light for 2–3 minutes. The bichromate/gum mixture remained soluble on the parts of the tracing paper that were shielded from light by the opaque areas of the negative, allowing it to be removed and leaving an insoluble ‘positive’ image. This bichromate positive was then placed on a sheet of zinc covered in lithographic ink, and put through a printing press three or four times. After removal of the paper, the zinc plate was washed in a tray of hot water (containing a small amount of gum), using a camel-hair brush to remove all the soluble bichromate combined with ink. What remained on the zinc plate was a perfect representation in ink of the original composition, by virtue of the ink binding to the insoluble potassium bichromate. The main advantage and innovation of this process over lithography was the use of zinc plates rather than stone ones. Zinc plates were lighter and easier to transport, could produce more prints, and were far less brittle than the stone plates originally used. The use of zinc plates was also the origin of the name photozincography, which Sir Henry James claimed to have invented. == History == Zinco or photozincography developed at the Ordnance Survey out of a need to reduce large-scale maps more effectively. The original method, which used a pantograph, was overcomplicated, time-consuming and, due to the number of moving parts, inaccurate. While there was some concern that photography would distort the image, Sir Henry set out to explore the possibility of using it, setting up a photography department at the Ordnance Survey in 1855 and also securing funds to build the "glasshouse", a photography building with an all-glass roof to allow in as much natural light as possible. The development of photozincography, or zinco, came about four years later, being first mentioned in Sir Henry's report to Parliament in 1859. While Sir Henry James claimed to have invented the process, a similar system of document copying had been developed in Australia. John Walter Osborne (1828–1902) developed a similar process for the same reasons as Sir Henry: to avoid using the tracing system of the pantograph. Although the two processes were developed at about the same time, as Sir Henry explained to a representative of Mr. Osborne in the quote below, he had publicized his own first. 
I therefore handed this gentleman a copy of my Report, and desired him to read the account given of our process at page 6 of that Report, and to examine the copy of the Deed bound up with it, and not to show me the description of Mr. Osborne's process if it differed from ours. After reading it, he said at once it was the same process, and I then told him it was useless for him to attempt to take out a patent as my printed Report had everywhere been circulated. Sir Henry, despite being the person who oversaw and set up the photography department, was not the actual inventor. The head of the photography department at Southampton, Captain A. de C. Scott, did much of the ground work and basic development on photozincography. Sir Henry did acknowledge the work of Scott in the development and use of the system in the introduction to the photozincographied Domesday Book. Despite this, it was Sir Henry who gained most of the public attention through his pamphlet on photozincography. He was knighted in 1861 for services to science. The use of photozincography at the Ordnance Survey was a great success, with Sir Henry claiming that it saved over £2000 a year: from the invention of photozincography, the cost of producing a map of a rural district was reduced in the ratio of 4 to 1, and that of a map of a town in the ratio of 9 to 1. It was also claimed that up to 2000 or 3000 impressions could be taken from a single plate. Despite this, the process was not perfect: it did not reproduce a full colour picture, and until 1875 boys were employed to colour in the maps produced by this method. The process, while better than the pantograph, still required a large amount of labour to prepare the zinc plates for pressing. However, photozincography began to be used fairly rapidly in Europe. Sir Henry was even honoured by the Queen of Spain. Though originally developed to reproduce maps, the process was eventually used on a whole series of manuscripts, to preserve them and make them more available to the public. This included a reproduction of Domesday Book in 1861–64 and several volumes of historical manuscripts. Whilst the process of photozincography was invented mostly for use by the Ordnance Survey, The Photographic News stated that the process could also be used in the Patent Office and would save vast amounts of time and money. The use of photozincography began to decline in the 1880s as better methods of reproduction became available, and in the 1900s the glasshouse was pulled down to make way for new printing presses. == Gallery == == See also == Anastatic lithography Photography Gum bichromate == References ==
Wikipedia/Photozincography
A geographic information system (GIS) consists of integrated computer hardware and software that store, manage, analyze, edit, output, and visualize geographic data. Much of this often happens within a spatial database; however, this is not essential to meet the definition of a GIS. In a broader sense, one may consider such a system also to include human users and support staff, procedures and workflows, the body of knowledge of relevant concepts and methods, and institutional organizations. The uncounted plural, geographic information systems, also abbreviated GIS, is the most common term for the industry and profession concerned with these systems. The academic discipline that studies these systems and their underlying geographic principles may also be abbreviated as GIS, but the unambiguous GIScience is more common. GIScience is often considered a subdiscipline of geography within the branch of technical geography. Geographic information systems are utilized in multiple technologies, processes, techniques and methods. They support various operations and numerous applications that relate to engineering, planning, management, transport/logistics, insurance, telecommunications, and business, as well as the natural sciences such as forestry, ecology, and Earth science. For this reason, GIS and location intelligence applications are at the foundation of location-enabled services, which rely on geographic analysis and visualization. GIS provides the ability to relate previously unrelated information, through the use of location as the "key index variable". Locations and extents found in the Earth's space–time can be recorded through the date and time of occurrence, along with x, y, and z coordinates representing longitude (x), latitude (y), and elevation (z). All Earth-based spatial–temporal location and extent references should be relatable to one another, and ultimately, to a "real" physical location or extent. This key characteristic of GIS has begun to open new avenues of scientific inquiry and studies. == History and development == While digital GIS dates to the mid-1960s, when Roger Tomlinson first coined the phrase "geographic information system", many of the geographic concepts and methods that GIS automates date back decades earlier. One of the first known instances in which spatial analysis was used came from the field of epidemiology in the Rapport sur la marche et les effets du choléra dans Paris et le département de la Seine (1832). French cartographer and geographer Charles Picquet created a map outlining the forty-eight districts in Paris, using halftone color gradients, to provide a visual representation for the number of reported deaths due to cholera per every 1,000 inhabitants. In 1854, John Snow, an epidemiologist and physician, was able to determine the source of a cholera outbreak in London through the use of spatial analysis. Snow achieved this through plotting the residence of each casualty on a map of the area, as well as the nearby water sources. Once these points were marked, he was able to identify the water source within the cluster that was responsible for the outbreak. This was one of the earliest successful uses of a geographic methodology in pinpointing the source of an outbreak in epidemiology. While the basic elements of topography and theme existed previously in cartography, Snow's map was unique due to his use of cartographic methods, not only to depict, but also to analyze clusters of geographically dependent phenomena. 
The early 20th century saw the development of photozincography, which allowed maps to be split into layers, for example one layer for vegetation and another for water. This was particularly used for printing contours – drawing these was a labour-intensive task but having them on a separate layer meant they could be worked on without the other layers to confuse the draughtsman. This work was initially drawn on glass plates, but later plastic film was introduced, with the advantages of being lighter, using less storage space and being less brittle, among others. When all the layers were finished, they were combined into one image using a large process camera. Once color printing came in, the layers idea was also used for creating separate printing plates for each color. While the use of layers much later became one of the typical features of a contemporary GIS, the photographic process just described is not considered a GIS in itself – as the maps were just images with no database to link them to. Two additional developments are notable in the early days of GIS: Ian McHarg's publication Design with Nature and its map overlay method and the introduction of a street network into the U.S. Census Bureau's DIME (Dual Independent Map Encoding) system. The first publication detailing the use of computers to facilitate cartography was written by Waldo Tobler in 1959. Further computer hardware development spurred by nuclear weapon research led to more widespread general-purpose computer "mapping" applications by the early 1960s. In 1963, the world's first true operational GIS was developed in Ottawa, Ontario, Canada, by the federal Department of Forestry and Rural Development. Developed by Roger Tomlinson, it was called the Canada Geographic Information System (CGIS) and was used to store, analyze, and manipulate data collected for the Canada Land Inventory, an effort to determine the land capability for rural Canada by mapping information about soils, agriculture, recreation, wildlife, waterfowl, forestry and land use at a scale of 1:50,000. A rating classification factor was also added to permit analysis. CGIS was an improvement over "computer mapping" applications as it provided capabilities for data storage, overlay, measurement, and digitizing/scanning. It supported a national coordinate system that spanned the continent, coded lines as arcs having a true embedded topology and it stored the attribute and locational information in separate files. As a result of this, Tomlinson has become known as the "father of GIS", particularly for his use of overlays in promoting the spatial analysis of convergent geographic data. CGIS lasted into the 1990s and built a large digital land resource database in Canada. It was developed as a mainframe-based system in support of federal and provincial resource planning and management. Its strength was continent-wide analysis of complex datasets. The CGIS was never available commercially. In 1964, Howard T. Fisher formed the Laboratory for Computer Graphics and Spatial Analysis at the Harvard Graduate School of Design (LCGSA 1965–1991), where a number of important theoretical concepts in spatial data handling were developed, and which by the 1970s had distributed seminal software code and systems, such as SYMAP, GRID, and ODYSSEY, to universities, research centers and corporations worldwide. 
These programs were the first examples of general-purpose GIS software that was not developed for a particular installation, and was very influential on future commercial software, such as Esri ARC/INFO, released in 1983. By the late 1970s, two public domain GIS systems (MOSS and GRASS GIS) were in development, and by the early 1980s, M&S Computing (later Intergraph) along with Bentley Systems Incorporated for the CAD platform, Environmental Systems Research Institute (ESRI), CARIS (Computer Aided Resource Information System), and ERDAS (Earth Resource Data Analysis System) emerged as commercial vendors of GIS software, successfully incorporating many of the CGIS features, combining the first-generation approach to separation of spatial and attribute information with a second-generation approach to organizing attribute data into database structures. In 1986, Mapping Display and Analysis System (MIDAS), the first desktop GIS product, was released for the DOS operating system. This was renamed in 1990 to MapInfo for Windows when it was ported to the Microsoft Windows platform. This began the process of moving GIS from the research department into the business environment. By the end of the 20th century, the rapid growth in various systems had been consolidated and standardized on relatively few platforms and users were beginning to explore viewing GIS data over the Internet, requiring data format and transfer standards. More recently, a growing number of free, open-source GIS packages run on a range of operating systems and can be customized to perform specific tasks. The major trend of the 21st Century has been the integration of GIS capabilities with other Information technology and Internet infrastructure, such as relational databases, cloud computing, software as a service (SAAS), and mobile computing. == GIS software == The distinction must be made between a singular geographic information system, which is a single installation of software and data for a particular use, along with associated hardware, staff, and institutions (e.g., the GIS for a particular city government); and GIS software, a general-purpose application program that is intended to be used in many individual geographic information systems in a variety of application domains.: 16  Starting in the late 1970s, many software packages have been created specifically for GIS applications. Esri's ArcGIS, which includes ArcGIS Pro and the legacy software ArcMap, currently dominates the GIS market. Other examples of GIS include Autodesk and MapInfo Professional and open-source programs such as QGIS, GRASS GIS, MapGuide, and Hadoop-GIS. These and other desktop GIS applications include a full suite of capabilities for entering, managing, analyzing, and visualizing geographic data, and are designed to be used on their own. Starting in the late 1990s with the emergence of the Internet, as computer network technology progressed, GIS infrastructure and data began to move to servers, providing another mechanism for providing GIS capabilities.: 216  This was facilitated by standalone software installed on a server, similar to other server software such as HTTP servers and relational database management systems, enabling clients to have access to GIS data and processing tools without having to install specialized desktop software. These networks are known as distributed GIS. 
This strategy has been extended through the Internet and development of cloud-based GIS platforms such as ArcGIS Online and GIS-specialized software as a service (SAAS). The use of the Internet to facilitate distributed GIS is known as Internet GIS. An alternative approach is the integration of some or all of these capabilities into other software or information technology architectures. One example is a spatial extension to object-relational database software, which defines a geometry datatype so that spatial data can be stored in relational tables, and extensions to SQL for spatial analysis operations such as overlay. Another example is the proliferation of geospatial libraries and application programming interfaces (e.g., GDAL, Leaflet, D3.js) that extend programming languages to enable the incorporation of GIS data and processing into custom software, including web mapping sites and location-based services in smartphones. 
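To make the role of such libraries concrete, here is a minimal sketch of reading vector features through GDAL's Python bindings (OGR); the file name "roads.shp" and its "name" attribute are hypothetical and used only for illustration.

```python
# Sketch: iterate the features of a vector data source via GDAL/OGR.
# "roads.shp" and the "name" attribute are hypothetical examples.
from osgeo import ogr

dataset = ogr.Open("roads.shp")      # open any OGR-supported vector source
layer = dataset.GetLayer(0)          # first layer in the data source

for feature in layer:                # each feature = geometry + attributes
    geom = feature.GetGeometryRef()
    print(feature.GetField("name"), geom.ExportToWkt())
```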
== Geospatial data management == The core of any GIS is a database that contains representations of geographic phenomena, modeling their geometry (location and shape) and their properties or attributes. A GIS database may be stored in a variety of forms, such as a collection of separate data files or a single spatially-enabled relational database. Collecting and managing these data usually constitutes the bulk of the time and financial resources of a project, far more than other aspects such as analysis and mapping.: 175  === Aspects of geographic data === GIS uses spatio-temporal (space-time) location as the key index variable for all other information. Just as a relational database containing text or numbers can relate many different tables using common key index variables, GIS can relate otherwise unrelated information by using location as the key index variable. The key is the location and/or extent in space-time. Any variable that can be located spatially, and increasingly also temporally, can be referenced using a GIS. Locations or extents in Earth space–time may be recorded as dates/times of occurrence, and x, y, and z coordinates representing longitude, latitude, and elevation, respectively. These GIS coordinates may represent other quantified systems of temporo-spatial reference (for example, film frame number, stream gage station, highway mile-marker, surveyor benchmark, building address, street intersection, entrance gate, water depth sounding, POS or CAD drawing origin/units). Units applied to recorded temporal-spatial data can vary widely (even when using exactly the same data, see map projections), but all Earth-based spatial–temporal location and extent references should, ideally, be relatable to one another and ultimately to a "real" physical location or extent in space–time. Related by accurate spatial information, an incredible variety of real-world and projected past or future data can be analyzed, interpreted and represented. This key characteristic of GIS has begun to open new avenues of scientific inquiry into behaviors and patterns of real-world information that previously had not been systematically correlated. === Data modeling === GIS data represents phenomena that exist in the real world, such as roads, land use, elevation, trees, waterways, and states. The most common types of phenomena that are represented in data can be divided into two conceptualizations: discrete objects (e.g., a house, a road) and continuous fields (e.g., rainfall amount or population density).: 62–65  Other types of geographic phenomena, such as events (e.g., location of World War II battles), processes (e.g., extent of suburbanization), and masses (e.g., types of soil in an area) are represented less commonly or indirectly, or are modeled in analysis procedures rather than data. Traditionally, there are two broad methods used to store data in a GIS for both kinds of abstraction: raster images and vector data. Points, lines, and polygons are the vector primitives used to represent mapped locations and their attributes. A newer hybrid method of storing data is the point cloud, which combines three-dimensional points with RGB information at each point, returning a 3D color image. GIS thematic maps are thus becoming more and more realistic visual descriptions of what they set out to show or determine. === Data acquisition === GIS data acquisition includes several methods for gathering spatial data into a GIS database, which can be grouped into three categories: primary data capture, the direct measurement of phenomena in the field (e.g., remote sensing, the global positioning system); secondary data capture, the extraction of information from existing sources that are not in a GIS form, such as paper maps, through digitization; and data transfer, the copying of existing GIS data from external sources such as government agencies and private companies. All of these methods can consume significant time, finances, and other resources.: 173  ==== Primary data capture ==== Survey data can be directly entered into a GIS from digital data collection systems on survey instruments using a technique called coordinate geometry (COGO). Positions from a global navigation satellite system (GNSS) like the Global Positioning System can also be collected and then imported into a GIS. A current trend in data collection gives users the ability to utilize field computers with the ability to edit live data using wireless connections or disconnected editing sessions. The current trend is to utilize applications available on smartphones and PDAs in the form of mobile GIS. This has been enhanced by the availability of low-cost mapping-grade GPS units with decimeter accuracy in real time. This eliminates the need to post-process, import, and update the data in the office after fieldwork has been collected. This includes the ability to incorporate positions collected using a laser rangefinder. New technologies also allow users to create maps and perform analysis directly in the field, making projects more efficient and mapping more accurate. Remotely sensed data also plays an important role in data collection; it is gathered by sensors attached to a platform. Sensors include cameras, digital scanners and lidar, while platforms usually consist of aircraft and satellites. In England in the mid-1990s, hybrid kite/balloons called helikites first pioneered the use of compact airborne digital cameras as airborne geo-information systems. Aircraft measurement software, accurate to 0.4 mm, was used to link the photographs and measure the ground. Helikites are inexpensive and gather more accurate data than aircraft. Helikites can be used over roads, railways and towns where unmanned aerial vehicles (UAVs) are banned. Recently, aerial data collection has become more accessible with miniature UAVs and drones. For example, the Aeryon Scout was used to map a 50-acre area with a ground sample distance of 1 inch (2.54 cm) in only 12 minutes. 
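Returning to the GNSS field-capture workflow described earlier in this section, the sketch below re-projects raw WGS84 latitude/longitude fixes into projected (metric) coordinates before loading them into a GIS layer. It assumes the pyproj library; the sample coordinates and the target system EPSG:32614 (UTM zone 14N) are arbitrary choices for the example.

```python
# Sketch: project raw GNSS fixes (WGS84 lat/lon) into UTM metres so they
# can be stored in a projected GIS layer. EPSG:32614 is just an example.
from pyproj import Transformer

# always_xy=True fixes the axis order to (longitude, latitude)
to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32614", always_xy=True)

gnss_fixes = [(-97.7431, 30.2672), (-97.7404, 30.2711)]  # (lon, lat) samples

for lon, lat in gnss_fixes:
    easting, northing = to_utm.transform(lon, lat)
    print(f"{easting:.1f} m E, {northing:.1f} m N")
```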
The majority of digital data currently comes from photo interpretation of aerial photographs. Soft-copy workstations are used to digitize features directly from stereo pairs of digital photographs. These systems allow data to be captured in two and three dimensions, with elevations measured directly from a stereo pair using principles of photogrammetry. Analog aerial photos must be scanned before being entered into a soft-copy system; for high-quality digital cameras this step is skipped. Satellite remote sensing provides another important source of spatial data. Here satellites use different sensor packages to passively measure the reflectance from parts of the electromagnetic spectrum or radio waves that were sent out from an active sensor such as radar. Remote sensing collects raster data that can be further processed using different bands to identify objects and classes of interest, such as land cover. ==== Secondary data capture ==== The most common method of data creation is digitization, where a hard copy map or survey plan is transferred into a digital medium through the use of a CAD program, and geo-referencing capabilities. With the wide availability of ortho-rectified imagery (from satellites, aircraft, Helikites and UAVs), heads-up digitizing is becoming the main avenue through which geographic data is extracted. Heads-up digitizing involves the tracing of geographic data directly on top of the aerial imagery instead of by the traditional method of tracing the geographic form on a separate digitizing tablet (heads-down digitizing). Heads-down digitizing, or manual digitizing, uses a special magnetic pen, or stylus, that feeds information into a computer to create an identical, digital map. Some tablets use a mouse-like tool, called a puck, instead of a stylus. The puck has a small window with cross-hairs which allows for greater precision and pinpointing of map features. Though heads-up digitizing is more commonly used, heads-down digitizing is still useful for digitizing maps of poor quality. Existing data printed on paper or PET film maps can be digitized or scanned to produce digital data. A digitizer produces vector data as an operator traces points, lines, and polygon boundaries from a map. Scanning a map results in raster data that can be further processed to produce vector data. When data is captured, the user should consider whether it should be captured with relative or absolute accuracy, since this can influence not only how the information will be interpreted but also the cost of data capture. After entering data into a GIS, the data usually requires editing to remove errors, or further processing. Vector data must be made "topologically correct" before it can be used for some advanced analysis. For example, in a road network, lines must connect with nodes at an intersection. Errors such as undershoots and overshoots must also be removed. For scanned maps, blemishes on the source map may need to be removed from the resulting raster. For example, a fleck of dirt might connect two lines that should not be connected. === Projections, coordinate systems, and registration === The earth can be represented by various models, each of which may provide a different set of coordinates (e.g., latitude, longitude, elevation) for any given point on the Earth's surface. The simplest model is to assume the earth is a perfect sphere. As more measurements of the earth have accumulated, the models of the earth have become more sophisticated and more accurate. 
In fact, there are models called datums that apply to different areas of the earth to provide increased accuracy, like North American Datum of 1983 for U.S. measurements, and the World Geodetic System for worldwide measurements. The latitude and longitude on a map made against a local datum may not be the same as one obtained from a GPS receiver. Converting coordinates from one datum to another requires a datum transformation such as a Helmert transformation, although in certain situations a simple translation may be sufficient. In popular GIS software, data projected in latitude/longitude is often represented as a Geographic coordinate system. For example, data in latitude/longitude if the datum is the 'North American Datum of 1983' is denoted by 'GCS North American 1983'. === Data quality === While no digital model can be a perfect representation of the real world, it is important that GIS data be of a high quality. In keeping with the principle of homomorphism, the data must be close enough to reality so that the results of GIS procedures correctly correspond to the results of real world processes. This means that there is no single standard for data quality, because the necessary degree of quality depends on the scale and purpose of the tasks for which it is to be used. Several elements of data quality are important to GIS data: Accuracy The degree of similarity between a represented measurement and the actual value; conversely, error is the amount of difference between them.: 623  In GIS data, there is concern for accuracy in representations of location (positional accuracy), property (attribute accuracy), and time. For example, the US 2020 Census says that the population of Houston on April 1, 2020 was 2,304,580; if it was actually 2,310,674, this would be an error and thus a lack of attribute accuracy. Precision The degree of refinement in a represented value. In a quantitative property, this is the number of significant digits in the measured value.: 115  An imprecise value is vague or ambiguous, including a range of possible values. For example, if one were to say that the population of Houston on April 1, 2020 was "about 2.3 million," this statement would be imprecise, but likely accurate because the correct value (and many incorrect values) are included. As with accuracy, representations of location, property, and time can all be more or less precise. Resolution is a commonly used expression of positional precision, especially in raster data sets. Scale is closely related to precision in maps, as it dictates a desirable level of spatial precision, but is problematic in GIS, where a data set can be shown at a variety of display scales (including scales that would not be appropriate for the quality of the data). Uncertainty A general acknowledgement of the presence of error and imprecision in geographic data.: 99  That is, it is a degree of general doubt, given that it is difficult to know exactly how much error is present in a data set, although some form of estimate may be attempted (a confidence interval being such an estimate of uncertainty). This is sometimes used as a collective term for all or most aspects of data quality. 
Vagueness or fuzziness The degree to which an aspect (location, property, or time) of a phenomenon is inherently imprecise, rather than the imprecision being in a measured value.: 103  For example, the spatial extent of the Houston metropolitan area is vague, as there are places on the outskirts of the city that are less connected to the central city (measured by activities such as commuting) than places that are closer. Mathematical tools such as fuzzy set theory are commonly used to manage vagueness in geographic data. Completeness The degree to which a data set represents all of the actual features that it purports to include.: 623  For example, if a layer of "roads in Houston" is missing some actual streets, it is incomplete. Currency The most recent point in time at which a data set claims to be an accurate representation of reality. This is a concern for the majority of GIS applications, which attempt to represent the world "at present," in which case older data is of lower quality. Consistency The degree to which the representations of the many phenomena in a data set correctly correspond with each other.: 623  Consistency in topological relationships between spatial objects is an especially important aspect of consistency.: 117  For example, if all of the lines in a street network were accidentally moved 10 meters to the East, they would be inaccurate but still consistent, because they would still properly connect at each intersection, and network analysis tools such as shortest path would still give correct results. Propagation of uncertainty The degree to which the quality of the results of Spatial analysis methods and other processing tools derives from the quality of input data.: 118  For example, interpolation is a common operation used in many ways in GIS; because it generates estimates of values between known measurements, the results will always be more precise, but less certain (as each estimate has an unknown amount of error). The quality of a dataset is very dependent upon its sources, and the methods used to create it. Land surveyors have been able to provide a high level of positional accuracy utilizing high-end GPS equipment, but GPS locations on the average smartphone are much less accurate. Common datasets such as digital terrain and aerial imagery are available in a wide variety of levels of quality, especially spatial precision. Paper maps, which have been digitized for many years as a data source, can also be of widely varying quality. A quantitative analysis of maps brings accuracy issues into focus. The electronic and other equipment used to make measurements for GIS is far more precise than the machines of conventional map analysis. All geographical data are inherently inaccurate, and these inaccuracies will propagate through GIS operations in ways that are difficult to predict. === Raster-to-vector translation === Data restructuring can be performed by a GIS to convert data into different formats. For example, a GIS may be used to convert a satellite image map to a vector structure by generating lines around all cells with the same classification, while determining the cell spatial relationships, such as adjacency or inclusion. More advanced data processing can occur with image processing, a technique developed in the late 1960s by NASA and the private sector to provide contrast enhancement, false color rendering and a variety of other techniques including use of two dimensional Fourier transforms. 
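As a minimal sketch of the raster-to-vector conversion described above, the following traces boundaries around contiguous runs of identically classified cells. It assumes the rasterio library; the tiny in-memory class grid stands in for a real classified satellite image, which would also carry an affine transform.

```python
# Sketch: vectorize a classified raster by tracing polygon boundaries around
# contiguous regions of identically classified cells. A real workflow would
# read a GeoTIFF and pass its affine transform to shapes().
import numpy as np
from rasterio import features

classified = np.array([[1, 1, 2],
                       [1, 2, 2],
                       [3, 3, 2]], dtype=np.int32)   # toy land-cover classes

# shapes() yields a (GeoJSON-like geometry, cell value) pair per region
for geom, value in features.shapes(classified):
    print(int(value), geom["type"])
```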
Since digital data is collected and stored in various ways, two data sources may not be entirely compatible. So a GIS must be able to convert geographic data from one structure to another. In so doing, the implicit assumptions behind different ontologies and classifications require analysis. Object ontologies have gained increasing prominence as a consequence of object-oriented programming and sustained work by Barry Smith and co-workers. === Spatial ETL === Spatial ETL tools provide the data processing functionality of traditional extract, transform, load (ETL) software, but with a primary focus on the ability to manage spatial data. They provide GIS users with the ability to translate data between different standards and proprietary formats, whilst geometrically transforming the data en route. These tools can come in the form of add-ins to existing wider-purpose software such as spreadsheets. == Spatial analysis == GIS spatial analysis is a rapidly changing field, and GIS packages are increasingly including analytical tools as standard built-in facilities, as optional toolsets, as add-ins or 'analysts'. In many instances these are provided by the original software suppliers (commercial vendors or collaborative non-commercial development teams), while in other cases facilities have been developed and are provided by third parties. Furthermore, many products offer software development kits (SDKs), programming languages and language support, scripting facilities and/or special interfaces for developing one's own analytical tools or variants. The increased availability has created a new dimension to business intelligence termed "spatial intelligence" which, when openly delivered via intranet, democratizes access to geographic and social network data. Geospatial intelligence, based on GIS spatial analysis, has also become a key element for security. Much of this analysis operates on vector representations or on data produced by some other digitisation process. Geoprocessing is a GIS operation used to manipulate spatial data. A typical geoprocessing operation takes an input dataset, performs an operation on that dataset, and returns the result of the operation as an output dataset. Common geoprocessing operations include geographic feature overlay, feature selection and analysis, topology processing, raster processing, and data conversion. Geoprocessing allows for definition, management, and analysis of information used to form decisions. === Terrain analysis === Many geographic tasks involve the terrain, the shape of the surface of the earth, such as hydrology, earthworks, and biogeography. Thus, terrain data is often a core dataset in a GIS, usually in the form of a raster digital elevation model (DEM) or a triangulated irregular network (TIN). A variety of tools are available in most GIS software for analyzing terrain, often by creating derivative datasets that represent a specific aspect of the surface. Some of the most common, illustrated in the sketch below, include: Slope or grade is the steepness or gradient of a unit of terrain, usually measured as an angle in degrees or as a percentage. Aspect can be defined as the direction in which a unit of terrain faces. Aspect is usually expressed in degrees from north. Cut and fill is a computation of the difference between the surface before and after an excavation project to estimate costs. Hydrological modeling can provide a spatial element that other hydrological models lack, with the analysis of variables such as slope, aspect and watershed or catchment area. 
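The following sketch derives the slope and aspect surfaces defined above from a toy DEM using simple finite differences (np.gradient); production GIS packages typically use Horn's 3×3 neighborhood method instead, and the 10 m cell size is assumed for illustration.

```python
# Sketch: slope and aspect from a DEM via finite differences. Rows are
# assumed to run north to south, columns west to east. Values illustrative.
import numpy as np

dem = np.array([[100.0, 101.0, 103.0],
                [ 99.0, 100.0, 102.0],
                [ 97.0,  99.0, 101.0]])
cell = 10.0                                   # cell size in metres

dz_drow, dz_dcol = np.gradient(dem, cell)     # d(elev)/d(south), d(elev)/d(east)
slope_deg = np.degrees(np.arctan(np.hypot(dz_dcol, dz_drow)))

# Aspect: compass bearing of steepest descent, degrees clockwise from north
aspect_deg = np.degrees(np.arctan2(-dz_dcol, dz_drow)) % 360.0

print(slope_deg.round(1))
print(aspect_deg.round(0))
```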
Terrain analysis is fundamental to hydrology, since water always flows down a slope. As basic terrain analysis of a digital elevation model (DEM) involves calculation of slope and aspect, DEMs are very useful for hydrological analysis. Slope and aspect can then be used to determine direction of surface runoff, and hence flow accumulation for the formation of streams, rivers and lakes. Areas of divergent flow can also give a clear indication of the boundaries of a catchment. Once a flow direction and accumulation matrix has been created, queries can be performed that show contributing or dispersal areas at a certain point. More detail can be added to the model, such as terrain roughness, vegetation types and soil types, which can influence infiltration and evapotranspiration rates, and hence influence surface flow. One of the main uses of hydrological modeling is in environmental contamination research. Other applications of hydrological modeling include groundwater and surface water mapping, as well as flood risk maps. Viewshed analysis predicts the impact that terrain has on the visibility between locations, which is especially important for wireless communications. Shaded relief is a depiction of the surface as if it were a three-dimensional model lit from a given direction, which is very commonly used in maps. Most of these are generated using algorithms that are discrete simplifications of vector calculus. Slope, aspect, and surface curvature in terrain analysis are all derived from neighborhood operations using elevation values of a cell's adjacent neighbours. Each of these is strongly affected by the level of detail in the terrain data, such as the resolution of a DEM, which should be chosen carefully. === Proximity analysis === Distance is a key part of solving many geographic tasks, usually due to the friction of distance. Thus, a wide variety of analysis tools analyze distance in some form, such as buffers, Voronoi or Thiessen polygons, cost distance analysis, and network analysis. === Data analysis === It is difficult to relate wetlands maps to rainfall amounts recorded at different points such as airports, television stations, and schools. A GIS, however, can be used to depict two- and three-dimensional characteristics of the Earth's surface, subsurface, and atmosphere from information points. For example, a GIS can quickly generate a map with isopleth or contour lines that indicate differing amounts of rainfall. Such a map can be thought of as a rainfall contour map. Many sophisticated methods can estimate the characteristics of surfaces from a limited number of point measurements. A two-dimensional contour map created from the surface modeling of rainfall point measurements may be overlaid and analyzed with any other map in a GIS covering the same area. This GIS-derived map can then provide additional information, such as the viability of water power potential as a renewable energy source. Similarly, GIS can be used to compare other renewable energy resources to find the best geographic potential for a region. Additionally, from a series of three-dimensional points, or a digital elevation model, isopleth lines representing elevation contours can be generated, along with slope analysis, shaded relief, and other elevation products. Watersheds can be easily defined for any given reach by computing all of the areas contiguous and uphill from any given point of interest; a minimal flow-direction sketch follows below. 
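One common way to build such a flow-direction matrix is the D8 method, in which each cell drains to the steepest-descent neighbour among its eight adjacent cells. The following is a minimal sketch on a toy DEM; real implementations also resolve pits and flats before accumulating flow.

```python
# Sketch of D8 flow direction: each cell drains to its steepest-descent
# neighbour, a common first step before flow accumulation and watershed
# delineation. The DEM is a toy example.
import numpy as np

dem = np.array([[9.0, 8.0, 7.0],
                [8.0, 6.0, 5.0],
                [7.0, 5.0, 3.0]])

OFFSETS = [(-1, -1), (-1, 0), (-1, 1),
           ( 0, -1),          ( 0, 1),
           ( 1, -1), ( 1, 0), ( 1, 1)]

def d8_direction(dem, r, c):
    """Return (dr, dc) of the steepest downslope neighbour, or None for a pit."""
    best, best_drop = None, 0.0
    for dr, dc in OFFSETS:
        rr, cc = r + dr, c + dc
        if 0 <= rr < dem.shape[0] and 0 <= cc < dem.shape[1]:
            dist = np.hypot(dr, dc)                  # 1 or sqrt(2) cell widths
            drop = (dem[r, c] - dem[rr, cc]) / dist  # drop per unit distance
            if drop > best_drop:
                best, best_drop = (dr, dc), drop
    return best

print(d8_direction(dem, 1, 1))   # centre cell drains to (1, 1): the SE corner
```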
An expected thalweg, the path that surface water would take in intermittent and permanent streams, can likewise be computed from elevation data in the GIS. === Topological modeling === A GIS can recognize and analyze the spatial relationships that exist within digitally stored spatial data. These topological relationships allow complex spatial modelling and analysis to be performed. Topological relationships between geometric entities traditionally include adjacency (what adjoins what), containment (what encloses what), and proximity (how close something is to something else). === Geometric networks === Geometric networks are linear networks of objects that can be used to represent interconnected features, and to perform specialized spatial analysis on them. A geometric network is composed of edges, which are connected at junction points, similar to graphs in mathematics and computer science. Just like graphs, networks can have weight and flow assigned to their edges, which can be used to represent various interconnected features more accurately. Geometric networks are often used to model road networks and public utility networks, such as electric, gas, and water networks. Network modeling is also commonly employed in transportation planning, hydrology modeling, and infrastructure modeling. === Cartographic modeling === Dana Tomlin coined the term cartographic modeling in his PhD dissertation (1983); he later used it in the title of his book, Geographic Information Systems and Cartographic Modeling (1990). Cartographic modeling refers to a process where several thematic layers of the same area are produced, processed, and analyzed. Tomlin used raster layers, but the overlay method (see below) can be used more generally. Operations on map layers can be combined into algorithms, and eventually into simulation or optimization models. === Map overlay === The combination of several spatial datasets (points, lines, or polygons) creates a new output vector dataset, visually similar to stacking several maps of the same region. These overlays are similar to mathematical Venn diagram overlays. A union overlay combines the geographic features and attribute tables of both inputs into a single new output. An intersect overlay defines the area where both inputs overlap and retains a set of attribute fields for each. A symmetric difference overlay defines an output area that includes the total area of both inputs except for the overlapping area. Data extraction is a GIS process similar to vector overlay, though it can be used in either vector or raster data analysis. Rather than combining the properties and features of both datasets, data extraction involves using a "clip" or "mask" to extract the features of one data set that fall within the spatial extent of another dataset. In raster data analysis, the overlay of datasets is accomplished through a process known as "local operation on multiple rasters" or "map algebra", through a function that combines the values of each raster's matrix. This function may weigh some inputs more than others through use of an "index model" that reflects the influence of various factors upon a geographic phenomenon. 
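The three vector overlay operations can be sketched with the shapely geometry library (an assumption; a full GIS overlay would also merge the attribute tables, which is omitted here):

```python
# Sketch: union, intersect, and symmetric-difference overlays of two
# overlapping squares, mirroring the operations described above.
from shapely.geometry import Polygon

a = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])   # square A, area 16
b = Polygon([(2, 2), (6, 2), (6, 6), (2, 6)])   # overlapping square B, area 16

print(a.union(b).area)                 # 28.0: area covered by either input
print(a.intersection(b).area)          #  4.0: area common to both inputs
print(a.symmetric_difference(b).area)  # 24.0: either input, minus the overlap
```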
=== Geostatistics === Geostatistics is a branch of statistics that deals with field data, spatial data with a continuous index. It provides methods to model spatial correlation, and predict values at arbitrary locations (interpolation). When phenomena are measured, the observation methods dictate the accuracy of any subsequent analysis. Due to the nature of the data (e.g. traffic patterns in an urban environment; weather patterns over the Pacific Ocean), a constant or dynamic degree of precision is always lost in the measurement. This loss of precision is determined from the scale and distribution of the data collection. To determine the statistical relevance of the analysis, an average is determined so that points (gradients) outside of any immediate measurement can be included to determine their predicted behavior. This is due to the limitations of the applied statistic and data collection methods, and interpolation is required to predict the behavior of particles, points, and locations that are not directly measurable. Interpolation is the process by which a surface is created, usually a raster dataset, through the input of data collected at a number of sample points. There are several forms of interpolation, each of which treats the data differently, depending on the properties of the data set. In comparing interpolation methods, the first consideration should be whether or not the source data will change (exact or approximate). Next is whether the method is subjective, a human interpretation, or objective. Then there is the nature of transitions between points: whether they are abrupt or gradual. Finally, there is whether a method is global (it uses the entire data set to form the model), or local, where an algorithm is repeated for a small section of terrain. Interpolation is justified by the principle of spatial autocorrelation, which recognizes that data collected at any position will have a great similarity to, and influence on, locations within its immediate vicinity. Digital elevation models, triangulated irregular networks, edge-finding algorithms, Thiessen polygons, Fourier analysis, (weighted) moving averages, inverse distance weighting, kriging, spline, and trend surface analysis are all mathematical methods to produce interpolative data. === Address geocoding === Geocoding is interpolating spatial locations (X,Y coordinates) from street addresses or any other spatially referenced data such as ZIP Codes, parcel lots and address locations. A reference theme is required to geocode individual addresses, such as a road centerline file with address ranges. The individual address locations have historically been interpolated, or estimated, by examining address ranges along a road segment. These are usually provided in the form of a table or database. The software will then place a dot approximately where that address belongs along the segment of centerline. For example, an address point of 500 will be at the midpoint of a line segment that starts with address 1 and ends with address 1,000. Geocoding can also be applied against actual parcel data, typically from municipal tax maps. In this case, the result of the geocoding will be an actually positioned space as opposed to an interpolated point. This approach is being increasingly used to provide more precise location information. === Reverse geocoding === Reverse geocoding is the process of returning an estimated street address number as it relates to a given coordinate. For example, a user can click on a road centerline theme (thus providing a coordinate) and have information returned that reflects the estimated house number. This house number is interpolated from a range assigned to that road segment. If the user clicks at the midpoint of a segment that starts with address 1 and ends with 100, the returned value will be somewhere near 50. 
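The linear interpolation underlying both operations can be sketched as follows; the coordinates and address ranges are illustrative, following the worked examples above.

```python
# Sketch of range-based address geocoding and its inverse: addresses are
# assigned linearly along a road centreline segment. Coordinates illustrative.
def geocode(address, lo, hi, start_xy, end_xy):
    """Interpolate an (x, y) point for an address within the segment's range."""
    t = (address - lo) / (hi - lo)                      # fraction along segment
    return (start_xy[0] + t * (end_xy[0] - start_xy[0]),
            start_xy[1] + t * (end_xy[1] - start_xy[1]))

def reverse_geocode(xy, lo, hi, start_xy, end_xy):
    """Estimate the house number nearest a clicked point on the segment."""
    seg = (end_xy[0] - start_xy[0], end_xy[1] - start_xy[1])
    rel = (xy[0] - start_xy[0], xy[1] - start_xy[1])
    t = (rel[0] * seg[0] + rel[1] * seg[1]) / (seg[0] ** 2 + seg[1] ** 2)
    return round(lo + max(0.0, min(1.0, t)) * (hi - lo))

start, end = (0.0, 0.0), (1000.0, 0.0)
print(geocode(500, 1, 1000, start, end))                  # ~the segment midpoint
print(reverse_geocode((500.0, 0.0), 1, 1000, start, end)) # ~500
```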
Note that reverse geocoding does not return actual addresses, only estimates of what should be there based on the predetermined range. === Multi-criteria decision analysis === Coupled with GIS, multi-criteria decision analysis methods support decision-makers in analysing a set of alternative spatial solutions, such as the most likely ecological habitat for restoration, against multiple criteria, such as vegetation cover or roads. MCDA uses decision rules to aggregate the criteria, which allows the alternative solutions to be ranked or prioritised. GIS MCDA may reduce costs and time involved in identifying potential restoration sites. === GIS data mining === GIS or spatial data mining is the application of data mining methods to spatial data. Data mining, which is the partially automated search for hidden patterns in large databases, offers great potential benefits for applied GIS-based decision making. Typical applications include environmental monitoring. A characteristic of such applications is that spatial correlation between data measurements requires the use of specialized algorithms for more efficient data analysis. == Data output and cartography == Cartography is the design and production of maps, or visual representations of spatial data. The vast majority of modern cartography is done with the help of computers, usually using GIS, although quality cartography is also produced by importing layers into a design program for refinement. Most GIS software gives the user substantial control over the appearance of the data. Cartographic work serves two major functions: First, it produces graphics on the screen or on paper that convey the results of analysis to the people who make decisions about resources. Wall maps and other graphics can be generated, allowing the viewer to visualize and thereby understand the results of analyses or simulations of potential events. Web Map Servers facilitate distribution of generated maps through web browsers using various implementations of web-based application programming interfaces (AJAX, Java, Flash, etc.). Second, other database information can be generated for further analysis or use. An example would be a list of all addresses within one mile (1.6 km) of a toxic spill. An archeochrome is a new way of displaying spatial data. It is a thematic display on a 3D map that is applied to a specific building or a part of a building. It is suited to the visual display of heat-loss data. === Terrain depiction === Traditional maps are abstractions of the real world, a sampling of important elements portrayed on a sheet of paper with symbols to represent physical objects. People who use maps must interpret these symbols. Topographic maps show the shape of land surface with contour lines or with shaded relief. Today, graphic display techniques such as shading based on altitude in a GIS can make relationships among map elements visible, heightening one's ability to extract and analyze information. For example, two types of data were combined in a GIS to produce a perspective view of a portion of San Mateo County, California. The digital elevation model, consisting of surface elevations recorded on a 30-meter horizontal grid, shows high elevations as white and low elevations as black. The accompanying Landsat Thematic Mapper image shows a false-color infrared image looking down at the same area in 30-meter pixels, or picture elements, for the same coordinate points, pixel by pixel, as the elevation information. 
A GIS was used to register and combine the two images to render the three-dimensional perspective view looking down the San Andreas Fault, using the Thematic Mapper image pixels, but shaded using the elevation of the landforms. The GIS display depends on the viewing point of the observer and the time of day of the display, so as to properly render the shadows created by the sun's rays at that latitude, longitude, and time of day. === Web mapping === In recent years there has been a proliferation of free-to-use and easily accessible mapping software such as the proprietary web applications Google Maps and Bing Maps, as well as the free and open-source alternative OpenStreetMap. These services give the public access to huge amounts of geographic data, perceived by many users to be as trustworthy and usable as professional information. For example, during the COVID-19 pandemic, web maps hosted on dashboards were used to rapidly disseminate case data to the general public. Some of them, like Google Maps and OpenLayers, expose an application programming interface (API) that enables users to create custom applications. These toolkits commonly offer street maps, aerial/satellite imagery, geocoding, searches, and routing functionality. Web mapping has also uncovered the potential of crowdsourcing geodata in projects like OpenStreetMap, which is a collaborative project to create a free editable map of the world. These mashup projects have been proven to provide a high level of value and benefit to end users beyond what is possible through traditional geographic information. Web mapping is not without its drawbacks. It allows for the creation and distribution of maps by people without proper cartographic training. This has led to maps that ignore cartographic conventions and are potentially misleading, with one study finding that more than half of United States state government COVID-19 dashboards did not follow these conventions. == Uses == Since its origin in the 1960s, GIS has been used in an ever-increasing range of applications, corroborating the widespread importance of location and aided by the continuing reduction in the barriers to adopting geospatial technology. The perhaps hundreds of different uses of GIS can be classified in several ways: Goal: the purpose of an application can be broadly classified as either scientific research or resource management. The purpose of research, defined as broadly as possible, is to discover new knowledge; this may be performed by someone who considers themselves a scientist, but may also be done by anyone who is trying to learn why the world appears to work the way it does. A study as practical as deciphering why a business location has failed would be research in this sense. Management (sometimes called operational applications), also defined as broadly as possible, is the application of knowledge to make practical decisions on how to employ the resources one has control over to achieve one's goals. These resources could be time, capital, labor, equipment, land, mineral deposits, wildlife, and so on.: 791  Decision level: Management applications have been further classified as strategic, tactical, operational, a common classification in business management. Strategic tasks are long-term, visionary decisions about what goals one should have, such as whether a business should expand or not. Tactical tasks are medium-term decisions about how to achieve strategic goals, such as a national forest creating a grazing management plan. 
Operational decisions are concerned with day-to-day tasks, such as a person finding the shortest route to a pizza restaurant. Topic: the domains in which GIS is applied largely fall into those concerned with the human world (e.g., economics, politics, transportation, education, landscape architecture, archaeology, urban planning, real estate, public health, crime mapping, national defense), and those concerned with the natural world (e.g., geology, biology, oceanography, climate). That said, one of the powerful capabilities of GIS and the spatial perspective of geography is their integrative ability to compare disparate topics, and many applications are concerned with multiple domains. Examples of integrated human-natural application domains include deep mapping, natural hazard mitigation, wildlife management, sustainable development, natural resources, and climate change response. Institution: GIS has been implemented in a variety of different kinds of institutions: government (at all levels from municipal to international), business (of all types and sizes), non-profit organizations (even churches), as well as personal uses. The latter has become increasingly prominent with the rise of location-enabled smartphones. Lifespan: GIS implementations may be focused on a project or an enterprise. A Project GIS is focused on accomplishing a single task: data is gathered, analysis is performed, and results are produced separately from any other projects the person may perform, and the implementation is essentially transitory. An Enterprise GIS is intended to be a permanent institution, including a database that is carefully designed to be useful for a variety of projects over many years, and is likely used by many individuals across an enterprise, with some employed full-time just to maintain it. Integration: Traditionally, most GIS applications were standalone, using specialized GIS software, specialized hardware, specialized data, and specialized professionals. Although these remain common to the present day, integrated applications have greatly increased, as geospatial technology was merged into broader enterprise applications, sharing IT infrastructure, databases, and software, often using enterprise integration platforms such as SAP. The implementation of a GIS is often driven by the requirements of a jurisdiction (such as a city), a purpose, or an application. Generally, a GIS implementation may be custom-designed for an organization. Hence, a GIS deployment developed for one application, jurisdiction, enterprise, or purpose may not necessarily be interoperable or compatible with a GIS that has been developed for some other application, jurisdiction, enterprise, or purpose. GIS is also diverging into location-based services, which allow GPS-enabled mobile devices to display their location in relation to fixed objects (nearest restaurant, gas station, fire hydrant) or mobile objects (friends, children, police car), or to relay their position back to a central server for display or other processing. GIS is also used in digital marketing and SEO for audience segmentation based on location. === Topics === ==== Aquatic science ==== ==== Archaeology ==== ==== Disaster response ==== Geospatial disaster response uses geospatial data and tools to help emergency responders, land managers, and scientists respond to disasters. Geospatial data can help save lives, reduce damage, and improve communication.
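A first step in many such workflows is a simple exposure estimate: delineate an impact zone around an incident and count the mapped locations that fall inside it. The sketch below shows this with the open-source shapely library; the coordinates, buffer distance, and point set are all invented for illustration (projected coordinates in meters are assumed so that the buffer distance is metric).

```python
from shapely.geometry import Point

# Hypothetical incident location in a projected coordinate system (meters).
incident = Point(555000, 4185000)
impact_zone = incident.buffer(2000)  # assumed 2 km impact radius

# Hypothetical household locations; real data would come from a GIS layer.
households = [Point(554500, 4184800), Point(556900, 4186100), Point(560000, 4190000)]
at_risk = [p for p in households if impact_zone.contains(p)]
print(f"{len(at_risk)} of {len(households)} mapped locations fall inside the impact zone")
```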
Geospatial data can be used by federal authorities such as FEMA to create maps that show the extent of a disaster, the location of people in need, and the location of debris; to create models that estimate the number of people at risk and the amount of damage; to improve communication between emergency responders, land managers, and scientists; to help determine where to allocate resources, such as emergency medical teams or search and rescue teams; and to plan evacuation routes and identify which areas are most at risk. In the United States, FEMA's Response Geospatial Office is responsible for the agency's capture, analysis, and development of GIS products to enhance situational awareness and enable expeditious and effective decision making. The RGO's mission is to support decision makers in understanding the size, scope, and extent of disaster impacts so they can deliver resources to the communities most in need. ==== Environmental governance ==== ==== Environmental contamination ==== ==== Geological mapping ==== ==== Geospatial intelligence ==== ==== History ==== The use of digital maps generated by GIS has also influenced the development of an academic field known as spatial humanities. ==== Hydrology ==== ==== Participatory GIS ==== ==== Public health ==== ==== Traditional knowledge GIS ==== == Other aspects == === Open Geospatial Consortium standards === The Open Geospatial Consortium (OGC) is an international industry consortium of 384 companies, government agencies, universities, and individuals participating in a consensus process to develop publicly available geoprocessing specifications. Open interfaces and protocols defined by OpenGIS Specifications support interoperable solutions that "geo-enable" the Web, wireless and location-based services, and mainstream IT, and empower technology developers to make complex spatial information and services accessible and useful with all kinds of applications. Open Geospatial Consortium protocols include Web Map Service and Web Feature Service. GIS products are broken down by the OGC into two categories, based on how completely and accurately the software follows the OGC specifications. Compliant products are software products that comply with OGC's OpenGIS Specifications. When a product has been tested and certified as compliant through the OGC Testing Program, it is automatically registered as "compliant" in the OGC's product registry. Implementing products are software products that implement OpenGIS Specifications but have not yet passed a compliance test. Compliance tests are not available for all specifications. Developers can register their products as implementing draft or approved specifications, though OGC reserves the right to review and verify each entry. === Adding the dimension of time === The condition of the Earth's surface, atmosphere, and subsurface can be examined by feeding satellite data into a GIS. GIS technology gives researchers the ability to examine the variations in Earth processes over days, months, and years through the use of cartographic visualizations. As an example, the changes in vegetation vigor through a growing season can be animated to determine when drought was most extensive in a particular region. The resulting graphic represents a rough measure of plant health. Working with two variables over time would then allow researchers to detect regional differences in the lag between a decline in rainfall and its effect on vegetation.
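One simple way to estimate such a lag is to shift one series against the other and find the offset that maximizes their correlation. The sketch below does this with NumPy on invented monthly values; an actual study would use gridded rainfall and vegetation-index (e.g., NDVI) time series rather than these placeholders.

```python
import numpy as np

# Hypothetical monthly series for one region: rainfall anomaly and a
# vegetation-vigor index. Values are placeholders for illustration only.
rain = np.array([1.2, 0.8, -0.5, -1.1, -0.9, 0.1, 0.7, 1.0])
ndvi = np.array([0.60, 0.62, 0.58, 0.50, 0.42, 0.40, 0.45, 0.55])

def lagged_corr(x, y, lag):
    """Correlation of y against x shifted 'lag' steps earlier."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

# The lag with the strongest correlation estimates how long vegetation
# takes to respond to a change in rainfall in this region.
best_lag = max(range(4), key=lambda k: lagged_corr(rain, ndvi, k))
print(f"Estimated response lag: {best_lag} month(s)")
```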
GIS technology and the availability of digital data on regional and global scales enable such analyses. The satellite sensor output used to generate a vegetation graphic is produced, for example, by the advanced very-high-resolution radiometer (AVHRR). This sensor system detects the amounts of energy reflected from the Earth's surface across various bands of the spectrum for surface areas of about 1 km2 (0.39 sq mi). The satellite sensor produces images of a particular location on the Earth twice a day. AVHRR and more recently the moderate-resolution imaging spectroradiometer (MODIS) are only two of many sensor systems used for Earth surface analysis. In addition to the integration of time in environmental studies, GIS is also being explored for its ability to track and model the progress of humans throughout their daily routines. A concrete example of progress in this area is the recent release of time-specific population data by the U.S. Census. In this data set, the populations of cities are shown for daytime and evening hours, highlighting the pattern of concentration and dispersion generated by North American commuting patterns. The manipulation and generation of the data required to produce this data set would not have been possible without GIS. Using models to project the data held by a GIS forward in time has enabled planners to test policy decisions using spatial decision support systems. === Semantics === Tools and technologies emerging from the World Wide Web Consortium's Semantic Web are proving useful for data integration problems in information systems. Correspondingly, such technologies have been proposed as a means to facilitate interoperability and data reuse among GIS applications and also to enable new analysis mechanisms. Ontologies are a key component of this semantic approach as they allow a formal, machine-readable specification of the concepts and relationships in a given domain. This in turn allows a GIS to focus on the intended meaning of data rather than its syntax or structure. For example, reasoning that a land cover type classified as deciduous needleleaf trees in one dataset is a specialization or subset of land cover type forest in another more roughly classified dataset can help a GIS automatically merge the two datasets under the more general land cover classification. Tentative ontologies have been developed in areas related to GIS applications, for example the hydrology ontology developed by the Ordnance Survey in the United Kingdom and the SWEET ontologies developed by NASA's Jet Propulsion Laboratory. Also, simpler ontologies and semantic metadata standards are being proposed by the W3C Geo Incubator Group to represent geospatial data on the web. GeoSPARQL is a standard developed by the Ordnance Survey, United States Geological Survey, Natural Resources Canada, Australia's Commonwealth Scientific and Industrial Research Organisation and others to support ontology creation and reasoning using well-understood OGC literals (GML, WKT), topological relationships (Simple Features, RCC8, DE-9IM), RDF, and the SPARQL database query protocols. Recent research results in this area can be seen in the International Conference on Geospatial Semantics and the Terra Cognita – Directions to the Geospatial Semantic Web workshop at the International Semantic Web Conference. == Societal implications == With the popularization of GIS in decision making, scholars have begun to scrutinize the social and political implications of GIS.
GIS can also be misused to distort reality for individual and political gain. It has been argued that the production, distribution, utilization, and representation of geographic information are largely related to the social context, and that geographic information has the potential to increase citizen trust in government. Other related topics include discussion on copyright, privacy, and censorship. A more optimistic social approach to GIS adoption is to use it as a tool for public participation. === In education === At the end of the 20th century, GIS began to be recognized as a tool that could be used in the classroom. The benefits of GIS in education seem focused on developing spatial cognition, but there is not enough published literature or statistical data to show the concrete scope of the use of GIS in education around the world, although the expansion has been faster in those countries where the curriculum mentions them.: 36  GIS seems to provide many advantages in teaching geography because it allows for analysis based on real geographic data and also helps raise research questions from teachers and students in the classroom. It also contributes to improvement in learning by developing spatial and geographical thinking and, in many cases, student motivation.: 38  Courses in GIS are also offered by educational institutions. === In local government === GIS has proven to be an organization-wide, enterprise-level, and enduring technology that continues to change how local government operates. Government agencies have adopted GIS technology as a method to better manage the following areas of government organization: Economic development departments use interactive GIS mapping tools, aggregated with other data (demographics, labor force, business, industry, talent) along with a database of available commercial sites and buildings, in order to attract investment and support existing business. Businesses making location decisions can use the tools to choose communities and sites that best match their criteria for success. Public safety operations such as emergency operations centers, fire prevention, police and sheriff mobile technology and dispatch, and mapping weather risks. Parks and recreation departments and their functions in asset inventory, land conservation, land management, and cemetery management. Public works and utilities, tracking water and stormwater drainage, electrical assets, engineering projects, and public transportation assets and trends. Fiber network management for interdepartmental network assets. School analytical and demographic data, asset management, and improvement/expansion planning. Public administration for election data, property records, and zoning/management. The open data initiative is pushing local government to take advantage of technology such as GIS technology, as it encompasses the requirements to fit the open data/open government model of transparency. With open data, local government organizations can implement citizen engagement applications and online portals, allowing citizens to see land information, report potholes and signage issues, view and sort parks by assets, view real-time crime rates and utility repairs, and much more. The push for open data within government organizations is driving the growth in local government GIS technology spending and database management. == See also == == References == == Further reading == Bolstad, P. (2019). GIS Fundamentals: A First Text on Geographic Information Systems, Sixth Edition. Ann Arbor: XanEdu, 764 pp. Burrough, P. A. and McDonnell, R. A. (1998).
Principles of Geographical Information Systems. Oxford University Press, Oxford, 327 pp. DeMers, M. (2009). Fundamentals of Geographic Information Systems, 4th Edition. Wiley, ISBN 978-0-470-12906-7. Harvey, Francis (2008). A Primer of GIS: Fundamental Geographic and Cartographic Concepts. The Guilford Press, 31 pp. Heywood, I., Cornelius, S., and Carver, S. (2006). An Introduction to Geographical Information Systems, 3rd Edition. Prentice Hall. Ott, T. and Swiaczny, F. (2001). Time-Integrative GIS: Management and Analysis of Spatio-Temporal Data. Berlin/Heidelberg/New York: Springer. Thurston, J., Poiker, T. K., and J. Patrick Moore (2003). Integrated Geospatial Technologies: A Guide to GPS, GIS, and Data Logging. Hoboken, New Jersey: Wiley. Worboys, Michael; Duckham, Matt (2004). GIS: A Computing Perspective. Boca Raton: CRC Press. ISBN 978-0415283755. == External links == Media related to Geographic information systems at Wikimedia Commons
Wikipedia/Geographic_Information_Systems
A transport network, or transportation network, is a network or graph in geographic space, describing an infrastructure that permits and constrains movement or flow. Examples include but are not limited to road networks, railways, air routes, pipelines, aqueducts, and power lines. The digital representation of these networks, and the methods for their analysis, are a core part of spatial analysis, geographic information systems, public utilities, and transport engineering. Network analysis is an application of the theories and algorithms of graph theory and is a form of proximity analysis. == History == The applicability of graph theory to geographic phenomena was recognized at an early date. Many of the early problems and theories undertaken by graph theorists were inspired by geographic situations, such as the Seven Bridges of Königsberg problem, which was one of the original foundations of graph theory when it was solved by Leonhard Euler in 1736. In the 1970s, the connection was reestablished by the early developers of geographic information systems, who employed it in the topological data structures of polygons (which is not of relevance here), and the analysis of transport networks. Early works, such as Tinkler (1977), focused mainly on simple schematic networks, likely due to the lack of significant volumes of linear data and the computational complexity of many of the algorithms. The full implementation of network analysis algorithms in GIS software did not appear until the 1990s, but quite advanced tools are generally available today. == Network data == Network analysis requires detailed data representing the elements of the network and its properties. The core of a network dataset is a vector layer of polylines representing the paths of travel, either precise geographic routes or schematic diagrams, known as edges. In addition, information is needed on the network topology, representing the connections between the lines, thus enabling movement from one line to another to be modeled. Typically, these connection points, or nodes, are included as an additional dataset. Both the edges and nodes are attributed with properties related to the movement or flow: Capacity, measurements of any limitation on the volume of flow allowed, such as the number of lanes in a road, telecommunications bandwidth, or pipe diameter. Impedance, measurements of any resistance to flow or to the speed of flow, such as a speed limit or a forbidden turn direction at a street intersection. Cost accumulated through individual travel along the edge or through the node, commonly elapsed time, in keeping with the principle of friction of distance. For example, a node in a street network may require a different amount of time to make a particular left turn or right turn. Such costs can vary over time, such as the pattern of travel time along an urban street depending on diurnal cycles of traffic volume. Flow volume, measurements of the actual movement taking place. This may be specific time-encoded measurements collected using sensor networks such as traffic counters, or general trends over a period of time, such as annual average daily traffic (AADT). == Analysis methods == A wide range of methods, algorithms, and techniques have been developed for solving problems and tasks relating to network flow. Some of these are common to all types of transport networks, while others are specific to particular application domains.
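As a minimal sketch of how such a dataset and a routing query might look in code, the snippet below models a toy street network as a weighted graph using the open-source NetworkX library; the node names and travel-time values are invented for illustration. The weighted shortest-path call uses Dijkstra's algorithm, the method discussed in the next subsection.

```python
import networkx as nx  # assumes the NetworkX library is installed

# Edges of a toy street network, attributed with a travel-time impedance
# in minutes; all names and values are hypothetical.
G = nx.Graph()
G.add_edge("A", "B", time=4.0)
G.add_edge("B", "C", time=3.5)
G.add_edge("A", "C", time=9.0)
G.add_edge("C", "D", time=2.0)

# Least-cost route and its accumulated cost between two nodes.
route = nx.shortest_path(G, "A", "D", weight="time")
cost = nx.shortest_path_length(G, "A", "D", weight="time")
print(route, cost)  # ['A', 'B', 'C', 'D'] 9.5
```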
Many of these algorithms are implemented in commercial and open-source GIS software, such as GRASS GIS and the Network Analyst extension to Esri ArcGIS. === Optimal routing === One of the simplest and most common tasks in a network is to find the optimal route connecting two points along the network, with optimal defined as minimizing some form of cost, such as distance, energy expenditure, or time. A common example is finding directions in a street network, a feature of almost any web street mapping application such as Google Maps. The most popular method of solving this task, implemented in most GIS and mapping software, is Dijkstra's algorithm. In addition to basic point-to-point routing, composite routing problems are also common. The Traveling salesman problem asks for the optimal (least distance/cost) ordering and route to reach a number of destinations; it is an NP-hard problem, but somewhat easier to solve in network space than in unconstrained space due to the smaller solution set. The Vehicle routing problem is a generalization of this, allowing for multiple simultaneous routes to reach the destinations. The Route inspection or "Chinese Postman" problem asks for the optimal (least distance/cost) path that traverses every edge; a common application is the routing of garbage trucks. This turns out to be a much simpler problem to solve, with polynomial-time algorithms. === Location analysis === This class of problems aims to find the optimal location for one or more facilities along the network, with optimal defined as minimizing the aggregate or mean travel cost to (or from) another set of points in the network. A common example is determining the location of a warehouse to minimize shipping costs to a set of retail outlets, or the location of a retail outlet to minimize the travel time from the residences of its potential customers. In unconstrained (Cartesian coordinate) space, this is an NP-hard problem requiring heuristic solutions such as Lloyd's algorithm, but in a network space it can be solved deterministically. Particular applications often add further constraints to the problem, such as the location of pre-existing or competing facilities, facility capacities, or maximum cost. === Service areas === A network service area is analogous to a buffer in unconstrained space, a depiction of the area that can be reached from a point (typically a service facility) in less than a specified distance or other accumulated cost. For example, the preferred service area for a fire station would be the set of street segments it can reach in a small amount of time. When there are multiple facilities, each edge would be assigned to the nearest facility, producing a result analogous to a Voronoi diagram. === Fault analysis === A common application in public utility networks is the identification of possible locations of faults or breaks in the network (which is often buried or otherwise difficult to directly observe), deduced from reports that can be easily located, such as customer complaints. === Transport engineering === Traffic has been studied extensively using statistical physics methods. === Vertical analysis === To ensure a railway system is as efficient as possible, a complexity/vertical analysis should also be undertaken. This will aid in the analysis of future and existing systems, which is crucial in ensuring the sustainability of a system (Bednar, 2022, pp. 75–76).
Vertical analysis consists of understanding the operating activities (day-to-day operations) of the system, problem prevention, control activities, development activities, and coordination of activities. == See also == Braess's paradox Flow network Heuristic routing Interplanetary Transport Network Network science Percolation theory Street network Rail network Highway dimension Multimodal transport Supply chain Logistics == References ==
Wikipedia/Transport_network_analysis
Geographic data and information is defined in the ISO/TC 211 series of standards as data and information having an implicit or explicit association with a location relative to Earth (a geographic location or geographic position). It is also called geospatial data and information, georeferenced data and information, as well as geodata and geoinformation. Location information (known by the many names mentioned here) is stored in a geographic information system (GIS). There are also many different types of geodata, including vector files, raster files, geographic databases, web files, and multi-temporal data. Spatial data or spatial information is a broader class of data whose geometry is relevant but which is not necessarily georeferenced, such as in computer-aided design (CAD); see geometric modeling. == Fields of study == Geographic data and information are the subject of a number of overlapping fields of study, mainly: Geocomputation Geographic information science Geographic information science and technology Geoinformatics Geomatics Geovisualization Technical geography "Geospatial technology" may refer to any of "geomatics", "geoinformatics", or "geographic information technology." The above is in addition to other related fields, such as: Cartography Geodesy Geography Geostatistics Photogrammetry Remote sensing Spatial data analysis Surveying Topography == See also == Geomatics engineering Earth observation data Geographic feature Georeferencing Geospatial intelligence Ubiquitous geographic information == References == == Further reading == Roger A. Longhorn; Michael Blakemore (2007). Geographic Information: Value, Pricing, Production, and Consumption. CRC Press. == External links == Media related to Geographic data and information at Wikimedia Commons
Wikipedia/Geographic_data
Intergraph Corporation was an American software development and services company, which now forms part of Hexagon AB. It provides enterprise engineering and geospatially powered software to businesses, governments, and organizations around the world, and operates through three divisions: Hexagon Asset Lifecycle Intelligence (ALI, formerly PPM), Hexagon Safety & Infrastructure, and Hexagon Geospatial. The company's headquarters is in Huntsville, Alabama, United States. In 2008, Intergraph was one of the one hundred largest software companies in the world. In July 2010, Intergraph was acquired by Hexagon AB. == History == Intergraph was founded in 1969 as M&S Computing, Inc., by former IBM engineers Jim Meadlock, his wife Nancy, Terry Schansman (the S of M&S), Keith Schonrock, and Robert Thurber, who had been working with NASA and the U.S. Army on developing systems that would apply digital computing to real-time missile guidance. The company was renamed Intergraph Corporation in 1980. One of Intergraph's major hardware projects was developing a line of workstations using the Clipper architecture created by Fairchild Semiconductor. Intergraph was one of only two companies to use the chips in a major product line. Intergraph developed their own version of UNIX for the architecture, which they called CLIX. In 1987, Intergraph bought the Fairchild division responsible for the chip. In 1997, Intergraph began pursuing patent infringement litigation against Intel and other computer hardware manufacturers based on the intellectual property developed in Clipper. Intergraph negotiated major settlements with Intel, HP, Texas Instruments, and Gateway, earning the company over $394M. In 2000, Intergraph exited the hardware business and became purely a software company. On July 21, 2000, it sold its Intense3D graphics accelerator division to 3Dlabs, and its workstation and server division to Silicon Graphics. On November 29, 2006, Intergraph was acquired by an investor group led by Hellman & Friedman LLC, Texas Pacific Group, and JMI Equity, making the company privately held. On October 28, 2010, Intergraph was acquired by Hexagon AB. The transaction marked the return of Intergraph to being part of a publicly traded company. As part of the Hexagon acquisition, Hexagon moved the management of ERDAS, Inc. from under Leica Geosystems to Intergraph, and Z/I Imaging airborne imaging sensors from under Intergraph to Leica Geosystems. On December 2, 2013, the geospatial technology portfolio was split out from under the Intergraph Security, Government and Infrastructure division to form the Hexagon Geospatial division. On October 13, 2015, the Intergraph Security, Government & Infrastructure division was rebranded as Hexagon Safety & Infrastructure. On January 9, 2017, the Intergraph Government Solutions division was rebranded as Hexagon US Federal. On June 5, 2017, the Intergraph Process, Power & Marine division was rebranded as Hexagon PPM. On June 6, 2022, the Hexagon PPM division was rebranded as Hexagon Asset Lifecycle Intelligence. == References == == External links == Official website
Wikipedia/Intergraph
In geography and particularly in geographic information science, a geographic feature or simply feature (also called an object or entity) is a representation of a phenomenon that exists at a location in the space and scale of relevance to geography; that is, at or near the surface of Earth.: 62  It is an item of geographic information, and may be represented in maps, geographic information systems, remote sensing imagery, statistics, and other forms of geographic discourse. Such representations of phenomena consist of descriptions of their inherent nature, their spatial form and location, and their characteristics or properties. == Terminology == The term "feature" is broad and inclusive, and includes both natural and human-constructed objects. The term covers things which exist physically (e.g. a building) as well as those that are conceptual or social creations (e.g. a neighbourhood). Formally, the term is generally restricted to things which endure over a period of time. A feature is also discrete, meaning that it has a clear identity and location distinct from other objects, and is conceptualized as a whole, delimited more or less precisely by the boundary of its geographical extent. This differentiates features from geographic processes and events, which are perdurants that only exist in time; and from geographic masses and fields, which are continuous in that they are not conceptualized as a distinct whole. In geographic information science, the terms feature, object, and entity are generally used as roughly synonymous. In the 1992 Spatial Data Transfer Standard (SDTS), one of the first public standard models of geographic information, an attempt was made to formally distinguish them: an entity as the real-world phenomenon, an object as a representation thereof (e.g. on paper or digital), and a feature as the combination of both entity and representation objects. Although this distinction is often cited in textbooks, it has not gained lasting or widespread usage. In the ISO 19101 Geographic Information Reference Model and Open Geospatial Consortium (OGC) Simple Features Specification, international standards that form the basis for most modern geospatial technologies, a feature is defined as "an abstraction of a real-world phenomenon", essentially the object in SDTS. == Types of features == === Natural features === A natural feature is an object on the planet that was not created by humans, but is a part of the natural world. ==== Ecosystems ==== There are two different terms to describe habitats: ecosystem and biome. An ecosystem is a community of organisms. In contrast, biomes occupy large areas of the globe and often encompass many different kinds of geographical features, including mountain ranges. Biotic diversity within an ecosystem is the variability among living organisms from all sources, including inter alia, terrestrial, marine and other aquatic ecosystems. Living organisms are continually engaged in a set of relationships with every other element constituting the environment in which they exist, and "ecosystem" describes any situation where there is a relationship between organisms and their environment. Biomes represent large areas of ecologically similar communities of plants, animals, and soil organisms. Biomes are defined based on factors such as plant structures (such as trees, shrubs, and grasses), leaf types (such as broadleaf and needleleaf), plant spacing (forest, woodland, savanna), and climate.
Unlike biogeographic realms, biomes are not defined by genetic, taxonomic, or historical similarities. Biomes are often identified with particular patterns of ecological succession and climax vegetation. ==== Water bodies ==== A body of water is any significant and reasonably long-lasting accumulation of water, usually covering the land. The term "body of water" most often refers to oceans, seas, and lakes, but it may also include smaller pools of water such as ponds, creeks, or wetlands. Rivers, streams, canals, and other geographical features where water moves from one place to another are not always considered bodies of water, but they are included as geographical formations featuring water. Some of these are easily recognizable as distinct real-world entities (e.g. an isolated lake), while others are at least partially based on human conceptualizations. Examples of the latter are a branching stream network in which one of the branches has been arbitrarily designated as the continuation of the primary named stream; or a gulf or bay of a body of water (e.g. a lake or an ocean), which has no meaningful dividing line separating it from the rest of the lake or ocean. ==== Landforms ==== A landform is a geomorphological unit largely defined by its surface form and location in the landscape, as part of the terrain, and as such is typically an element of topography. Landforms are categorized by features such as elevation, slope, orientation, stratification, rock exposure, and soil type. They include berms, mounds, hills, cliffs, valleys, rivers, and numerous other elements. Oceans and continents are the highest-order landforms. === Artificial features === ==== Settlements ==== A settlement is a permanent or temporary community in which people live. Settlements range in size from a small number of dwellings grouped together to the largest of cities with surrounding urbanized areas. Other landscape features such as roads, enclosures, field systems, boundary banks and ditches, ponds, parks and woods, mills, manor houses, moats, and churches may be considered part of a settlement. ==== Administrative regions and other constructs ==== These include social constructions that are created to administer and organize the land, people, and other spatially-relevant resources. Examples are governmental units such as a state, cadastral land parcels, mining claims, zoning partitions of a city, and church parishes. There are also more informal social features, such as city neighbourhoods and other vernacular regions. These are purely conceptual entities established by edict or practice, although they may align with visible features (e.g. a river boundary), and may be subsequently manifested on the ground, such as by survey markers or fences. ==== Engineered constructs ==== Engineered geographic features include highways, bridges, airports, railroads, buildings, dams, and reservoirs, and are part of the anthroposphere because they are man-made geographic features. === Cartographic features === Cartographic features are types of abstract geographical features which appear on maps but not on the planet itself, even though they are located on the planet. For example, grid lines, latitudes, longitudes, the Equator, the prime meridian, and many types of boundary are shown on maps of Earth, but do not physically exist. They are theoretical lines used for reference, navigation, and measurement.
== Features and Geographic Information == In GIS, maps, statistics, databases, and other information systems, a geographic feature is represented by a set of descriptors of its various characteristics. A common classification of those characteristics has emerged based on developments by Peuquet, Mennis, and others, including the following: Identity, the fact that a feature is unique and distinct from all other features. This does not have an inherent description, but humans have created many systems for attempting to express identity, such as names and identification numbers/codes. Existence, the fact that a feature exists in the world. At first, this may seem trivial, but complex situations are common, such as features that are proposed or planned, abstract concepts (e.g., the Equator), under construction, or that no longer exist. Kind (also known as class, type, or category), one or more groups to which a feature belongs, typically focused on those that are most fundamental to its existence. It thus completes the sentence "This is a _________." These are generally in the form of common nouns (tree, dog, building, county, etc.), which may be isolated or part of a taxonomic hierarchy. Relationships to other features. These may be inherent if they are crucial to the existence and identity of the feature, or incidental if they are not crucial, but "just happen to be." These may be of at least three types: Spatial relations, those that can be visualized and measured in space. For example, the fact that the Potomac River is adjacent to Maryland is an inherent spatial relation because the river is part of the definition of the boundary of Maryland, but the overlap relation between Maryland and the Delmarva Peninsula is incidental, as each would exist unproblematically without the other. Meronomic relations (also known as partonomy), in which a feature may exist as a part of a larger whole, or may exist as a collection of parts. For example, the relationship between Maryland and the United States is a meronomic relation; one is not just spatially within the boundaries of the other, but is a component part of the other that in part defines the existence of both. Genealogical relations (also known as parent-child), which tie a feature to others that existed previously and created it (or from which it was formed by another agent), and in turn to any features it has created. For example, if a county were created by the subdivision of two existing counties, they would be considered its parents. Location, a description of where the feature exists, often including the shape of its extent. While a feature has an inherent location, measuring it for the purpose of representation as data can be a complex process, such as requiring the invention of abstract spatial reference systems, and the necessary employment of cartographic generalization, including an expedient choice of dimension (e.g., a city could be represented as a region or as a point, depending on scale and need). Attributes, characteristics of a feature other than location, often expressed as text or numbers; for example, the population of a city. In geography, the levels of measurement developed by Stanley Smith Stevens (and further extended by others) are a common system for understanding and using attribute data. Time is fundamental to the representation of a feature, although it does not have independent temporal descriptions.
Instead, expressions of time are attached to other characteristics, describing how they change (thus, they are analogous to adverbs in common discourse). Any of the above characteristics is mutable, with the possible exception of identity. For example, the lifespan of a feature could be considered as the temporal extent of its existence. The location of a city can change over time as annexations expand its extent. The resident population of a country changes frequently due to immigration, emigration, birth, and death. The descriptions of features (i.e., the measured values of each of the above characteristics) are typically collected in geographic databases, such as GIS datasets, using a variety of data models and file formats, often based on the vector logical model. == See also == Geographical field Geographical location Human geography Landscape Physical geography Simple Features == References ==
Wikipedia/Geographical_feature
Geographers on Film is an archival collection and series of more than 550 filmed interviews with experts of the geographic scholar community, a 40-year-long initiative. == Production == The series was created as an historical and educational resource by geographer and professor emeritus Maynard Weston Dow (1929–2011) of Plymouth State University, and his wife, Nancy Freeman Dow. The series was supported in part by the American Association of Geographers, the National Science Foundation, Plymouth State University, and the Marion and Jasper Whiting Foundation, of Boston. It was ongoing for 40 years. == Synopsis == The series "highlights leading voices that transformed the discipline of cartography and geography in the 20th century in America." A précis of the collection's purpose was penned by Maynard Weston Dow:"August 1970 marked the origin of Geographers on Film (GOF). Participants speak for the record (varying from ten to eighty-nine minutes) that samples of the geographical experience are maintained on video; the ultimate concomitant goal is full transcription. The project resulted from teaching thought and methodology courses; students therein would pore over the writings of cognoscenti to acquire an appreciation for the genesis and development of geography as a field of learning. After considering the advantage of having Aristotle on film it was decided to secure in a permanent medium something of the more fertile minds of modern geography. In the beginning concentration was on elder statespersons, thus coverage spans much of 20th Century geography." The Library of Congress and the American Association of Geographers hold the films in their collections and have both preserved and digitized them. Initial work for digitization of the films and hosting them on a publicly accessible website was undertaken by a student at Plymouth State College in 1997 as part of her senior project in her Computer Science degree program, on which she collaborated with Dr. Dow. "Geographers on Film are a collection of recorded interviews conducted with hundreds of geographers from August 1970 until the mid-1980s." The National Gallery of the Spoken Word at Michigan State University has a copy, at least some of which is available online. As a complement to Geographers on Film, "sixteen thematic video presentations have evolved" which include compilations from the larger oeuvre. == 25 Archival Gems == Short clips from 25 of the interviews are available as a 35-minute, streaming video via the AAG website and YouTube. Geographers featured in this video include, in order of appearance: == References == === Notes === === Citations === === Bibliography === Boyle, Mark (March 29, 2021). Human Geography: An Essential Introduction (ebook). Wiley. pp. xxxiii, 55, 130, 166, 102, 176. ISBN 9781119374725. DeVivo, Michael S. (November 14, 2014). Leadership in American Academic Geography: The Twentieth Century (ebook). Lexington Books. p. 196. ISBN 9780739199138. Johnston, Ron; Sidaway, James D. (December 22, 2015). Geography and Geographers: Anglo-American Human Geography Since 1945 (ebook). Taylor & Francis. pp. XXI, 456, 511. ISBN 9781134065875. Mannix, Mary K.; Burchsted, Fred (January 14, 2015). Guide to Reference in Genealogy and Biography. American Library Association. p. 123. ISBN 978-0-8389-1295-9. == External links == Association of American Geographers website Archived 2015-03-06 at the Wayback Machine "25 Archival Gems of the First 25 Years of Geographers on Film," YouTube.com
Wikipedia/Geographers_on_Film
Geographic information science (GIScience, GISc) or geoinformation science is a scientific discipline at the crossroads of computational science, social science, and natural science that studies geographic information, including how it represents phenomena in the real world, how it represents the way humans understand the world, and how it can be captured, organized, and analyzed. It is a sub-field of geography, specifically part of technical geography. It has applications to both physical geography and human geography, although its techniques can be applied to many other fields of study as well as many different industries. As a field of study or profession, it can be contrasted with geographic information systems (GIS), which are the actual repositories of geospatial data, the software tools for carrying out relevant tasks, and the profession of GIS users. That said, one of the major goals of GIScience is to find practical ways to improve GIS data, software, and professional practice; it is more focused on how GIS is applied in real life, as opposed to the geographic information system tool in and of itself. The field is also sometimes called geographical information science. British geographer Michael Goodchild defined this area in the 1990s and summarized its core interests, including spatial analysis, visualization, and the representation of uncertainty. GIScience is conceptually related to geomatics, information science, computer science, and data science, but it claims the status of an independent scientific discipline. Recent developments in the field have expanded its focus to include studies on human dynamics in hybrid physical-virtual worlds, quantum GIScience, the development of smart cities, and the social and environmental impacts of technological innovations. These advancements indicate a growing intersection of GIScience with contemporary societal and technological issues. Overlapping disciplines are: geocomputation, geoinformatics, geomatics and geovisualization. Other related terms are geographic data science (after data science) and geographic information science and technology (GISci&T), with job titles geospatial information scientists and technologists. == Definitions == Since its inception in the 1990s, the boundaries between GIScience and cognate disciplines have been contested, and different communities might disagree on what GIScience is and what it studies. In particular, Goodchild stated that "information science can be defined as the systematic study according to scientific principles of the nature and properties of information. Geographic information science is the subset of information science that is about geographic information." Another influential definition is that by geographic information scientist (GIScientist) David Mark, which states:Geographic Information Science (GIScience) is the basic research field that seeks to redefine geographic concepts and their use in the context of geographic information systems. GIScience also examines the impacts of GIS on individuals and society, and the influences of society on GIS. GIScience re-examines some of the most fundamental themes in traditional spatially oriented fields such as geography, cartography, and geodesy, while incorporating more recent developments in cognitive and information science. It also overlaps with and draws from more specialized research fields such as computer science, statistics, mathematics, and psychology, and contributes to progress in those fields.
It supports research in political science and anthropology, and draws on those fields in studies of geographic information and society. In 2009, Goodchild summarized the history of GIScience and its achievements and open challenges. == See also == Category:Geographic information scientists Geographic Information Science and Technology Body of Knowledge Geostatistics Organizations Association of Geographic Information Laboratories for Europe National Center for Geographic Information and Analysis UCSB Center for Spatial Studies University Consortium for Geographic Information Science United States Geospatial Intelligence Foundation Journals GIScience & Remote Sensing International Journal of Applied Earth Observation and Geoinformation International Journal of Geographical Information Science Journal of Spatial Information Science == References == == External links == Official website of GIScience List of GIScience Conferences Archived 2023-05-30 at the Wayback Machine Conference on Spatial Information Theory (COSIT)
Wikipedia/GIScience
Integrated Geo Systems (IGS) is a computational architecture system developed for managing geoscientific data through systems and data integration. Geosciences often involve large volumes of diverse data which have to be processed by computer- and graphics-intensive applications. The processing of these large datasets is often so complex that no single software application can perform all the required tasks. Specialized applications have emerged for specific tasks. To get the required results, it is necessary that all software applications involved in the various stages of data processing, analysis, and interpretation effectively communicate with each other by sharing data. IGS provides a framework for maintaining an electronic workflow between various geoscience software applications through data connectivity. The main components of IGS are: Geographic information systems as a front end. A format engine for data connectivity, linking various geoscience software applications. The format engine uses Output Input Language (OIL), an interpreted language, to define various data formats. An array of geoscience relational databases for data integration. Data highways as internal data formats for each data type. Specialized geoscience applications software as processing modules. Geoscientific processing libraries. == External links == Geological Society Books American Association of Petroleum Geologists Book Store Integrated Geo Systems Research Paper
Wikipedia/Integrated_Geo_Systems
The concept of imagined geographies (or imaginative geographies) originated with Edward Said, particularly his critique of Orientalism. Imagined geographies refers to the perception of a space created through certain imagery, texts, and/or discourses. For Said, imagined does not mean false or made-up, but rather is used synonymously with perceived. Despite often being constructed on a national level, imagined geographies also occur domestically within nations and locally within regions, cities, etc. Imagined geographies can be seen as a form of social constructionism on par with Benedict Anderson's concept of imagined communities. Edward Said's notion of Orientalism is tied to the tumultuous dynamics of contemporary history. Orientalism refers to the West's patronizing perceptions and depictions of the East, particularly of Islamic and Confucian states. Orientalism has also been labeled the cornerstone of postcolonial studies. The theory has also been used to critique geographies constructed both historically and contemporarily; an example is Maria Todorova's work Imagining the Balkans. Samuel P. Huntington's Clash of Civilizations has also been criticized as resting on a whole set of imagined geographies. Halford Mackinder's theories have also been argued by scholars to be an imagined geography that emphasised the importance of Europe over non-European countries, and asserted the view of the geographical "expert" with the "God's eye view". == Orientalism == In his book Orientalism, Edward Said argued that Western culture had produced a view of the "Orient" based on a particular imagination, popularized through academic Oriental studies, travel writing, anthropology, and a colonial view of the Orient. This imagination included painting the Orient as feminine; however, Said's view on the gendered nature of Orientalism has been criticized by other scholars due to his limited exploration of the construct. In a 1993 lecture at York University, Toronto, Canada, Said stressed the role culture plays in Orientalism-based imperialism and colonialism. By differentiating and elevating a national culture over another, a validating process of "othering" is undertaken. This process underlies imagined geographies such as Orientalism, as it creates a set of preconceived notions for self-serving purposes. In constructing itself as superior, the imperial force or colonizing agent is able to justify its actions as somehow necessary or beneficial to the "other". Despite the broad scope and effect of Orientalism as an imagined geography, it and the underlying process of "othering" are discursive and thereby normalized within dominant, Western societies. It is in this sense that Orientalism may be reinforced in cultural texts such as art, film, literature, music, etc., where one-dimensional and often backwards constructions prevail. A prime source of cinematic examples is the documentary film Reel Bad Arabs: How Hollywood Vilifies a People. The film demonstrates the process of Orientalism-centric "othering" within Western films from the silent era to modern classics such as Disney's Aladdin. Inferior, backwards, and culturally stagnant constructions of Oriental "others" become normalized in the minds of Western consumers of cultural texts, reinforcing racist or insensitive beliefs and assumptions.
In Orientalism, Said says that Orientalism is an imagined geography because a) Europeans created one culture for the entirety of the 'Orient', and b) the 'Orient' was defined by text and not by the 'Orient' itself. == Theory == Said was heavily influenced by French philosopher Michel Foucault, and those who have developed the theory of imagined geographies have linked the two together. Foucault states that power and knowledge are always intertwined. Said then developed an idea of a relationship between power and descriptions. Imagined geographies are thus seen as a tool of power, a means of controlling and subordinating areas. Power is seen as being in the hands of those who have the right to objectify those that they are imagining. Imagined geographies were historically often based on myth and legend, frequently depicting monstrous "others". Edward Said elaborates that: "Europe is powerful and articulate; Asia is defeated and distant." Further writers heavily influenced by the concept of imagined geographies include Derek Gregory and Gearóid Ó Tuathail. Gregory argues that the War on Terror shows a continuation of the same imagined geographies that Said uncovered. He claims that the Islamic world is portrayed as uncivilized; it is labeled as backward and failing. This justifies, in the view of those imagining, the military intervention that has been seen in Afghanistan and Iraq. Edward Said mentions that when Islam appeared in Europe in the Middle Ages, the response was conservative and defensive. Ó Tuathail has argued that geopolitical knowledges are forms of imagined geography. Using the example of Halford Mackinder's heartland theory, he has shown how the presentation of Eastern Europe / Western Russia as a key geopolitical region after the First World War influenced actions such as the recreation of Poland and the Polish Corridor in the 1919 Treaty of Versailles. == See also == Lila Abu-Lughod Imagined communities India (Herodotus) Padaei == References == == Further reading == Huntington, Samuel, 1991, Clash of Civilizations Gregory, Derek, 2004, The Colonial Present, Blackwell Marx, Karl, [1853] "The British Rule In India" in Macfie, A. L. (ed.), 2000, Orientalism: A Reader, Edinburgh University Press Ó Tuathail, Gearóid, 1996, Critical Geopolitics: The Writing of Global Space, Routledge Said, Edward, [1978] 1995, Orientalism, Penguin Books Mohnike, Thomas, 2007, Imaginierte Geographien, Ergon-Verlag Said, Edward. [1979] "Imaginative Geography and Its Representations: Orientalizing the Oriental." Orientalism. New York: Vintage. Sharp, Joanne P. [2009]. "Geographies of Postcolonialism." Sage Publications: London.
Wikipedia/Imagined_geographies
A GIS software program is a computer program to support the use of a geographic information system, providing the ability to create, store, manage, query, analyze, and visualize geographic data, that is, data representing phenomena for which location is important. The GIS software industry encompasses a broad range of commercial and open-source products that provide some or all of these capabilities within various information technology architectures. == History == The earliest geographic information systems, such as the Canadian Geographic Information System started in 1963, were bespoke programs developed specifically for a single installation (usually a government agency), based on custom-designed data models. During the 1950s and 1960s, academic researchers during the quantitative revolution of geography began writing computer programs to perform spatial analysis, especially at the University of Washington and the University of Michigan, but these were also custom programs that were rarely available to other potential users. Perhaps the first general-purpose software that provided a range of GIS functionality was the Synagraphic Mapping Package (SYMAP), developed by Howard T. Fisher and others at the nascent Harvard Laboratory for Computer Graphics and Spatial Analysis starting in 1965. While not a true full-range GIS program, it included some basic mapping and analysis functions, and was freely available to other users. Through the 1970s, the Harvard Lab continued to develop and publish other packages focused on automating specific operations, such as SYMVU (3-D surface visualization), CALFORM (choropleth maps), POLYVRT (topological vector data management), WHIRLPOOL (vector overlay), GRID and IMGRID (raster data management), and others. During the late 1970s, several of these modules were brought together into Odyssey, one of the first commercial complete GIS programs, released in 1980. During the late 1970s and early 1980s, GIS was emerging in many large government agencies that were responsible for managing land and facilities. In particular, federal agencies of the United States government developed software that was by definition in the public domain as works of the federal government, and was thus released to the public. Notable examples included the Map Overlay and Statistical System (MOSS), developed by the Fish & Wildlife Service and Bureau of Land Management (BLM) starting in 1976; the PROJ library, developed at the United States Geological Survey (USGS) as one of the first programming libraries available; and GRASS GIS, originally developed by the Army Corps of Engineers starting in 1982. These formed the foundation of the open source GIS software community. The 1980s also saw the beginnings of most commercial GIS software, including Esri ARC/INFO in 1982, Intergraph IGDS in 1985, and the Mapping Display and Analysis System (MIDAS), the first GIS product for MS-DOS personal computers, which later became MapInfo. These would proliferate in the 1990s with the advent of more powerful personal computers, Microsoft Windows, and the 1990 U.S. Census, which raised awareness of the usefulness of geographic data to businesses and other new users. Several trends emerged in the late 1990s that have significantly changed the GIS software ecosystem, moving in directions beyond the traditional full-featured desktop GIS application.
First, the emergence of object-oriented programming languages facilitated the release of component libraries and application programming interfaces, both commercial and open-source, which encapsulated specific GIS functions, allowing programmers to build spatial capabilities into their own programs. Second, the development of spatial extensions to object-relational database management systems (also both open-source and commercial) created new opportunities for data storage for traditional GIS, but also enabled spatial capabilities to be integrated into enterprise information systems, including business processes such as human resources. Third, as the World Wide Web emerged, web mapping quickly became one of its most popular applications; this led to the development of server-based GIS software that could perform the same functions as a traditional GIS, but at a location remote from a client who only needed a web browser installed. All of these have combined to enable emerging trends in GIS software, such as the use of cloud computing, software as a service (SaaS), and smartphones to broaden the availability of spatial data, processing, and visualization. == Types of software == The software component of a traditional geographic information system is expected to provide a wide range of functions for handling spatial data: Data management, including the creation, editing, and storage of geographic data, as well as transformations such as changing coordinate systems and converting between raster and vector models. Spatial analysis, including a range of processing tools from basic queries to advanced algorithms such as network analysis and vector overlay. Output, especially cartographic design. The modern GIS software ecosystem includes a variety of products that may include more or less of these capabilities, collect them in a single program, or distribute them over the Internet. These products can be grouped into the following broad classes: Desktop GIS application The traditional form of GIS software, first developed for mainframes and minicomputers, then Unix workstations, and now personal computers. A desktop GIS program provides a full suite of capabilities, although some programs are modularized with extensions that can be purchased separately. Server GIS application A program which runs on a remote server (usually in concert with an HTTP server), handling many or all of the above functions, taking in requests and delivering results via the World Wide Web. Thus, the client typically accesses server capabilities using a normal web browser. Early server software was focused specifically on web mapping, only including the output phase, but current server GIS provides the full suite of functions. This server software is at the core of modern cloud-based platforms such as ArcGIS Online. Geospatial library A software component that provides a focused set of documented functions, which software developers can incorporate into their own programs. In modern object-oriented programming languages such as C#, JavaScript and Python, these are typically encapsulated as classes with a documented application programming interface (API). Spatial database An extension to an existing database software program (most commonly, an object-relational database management system) that creates a geometry datatype, enabling spatial data to be stored in a column in a table, but also provides new functions to query languages such as SQL that include many of the management and analysis functions of GIS.
This enables database managers and programmers to perform GIS functions without traditional GIS software. The current software industry consists of many competing products of each of these types, in both open-source and commercial forms. Many of these are listed below; for a direct comparison of the characteristics of some of them, see Comparison of geographic information systems software. == Open source software == The development of open source GIS software has, in terms of software history, a long tradition, with the first system appearing in 1978. Numerous systems are available which cover all sectors of geospatial data handling. === Desktop GIS === The following open-source desktop GIS projects are reviewed in Steiniger and Bocher (2008/9): GRASS GIS – Geospatial data management, vector and raster manipulation, developed by the U.S. Army Corps of Engineers gvSIG – Mapping and geoprocessing with a 3D rendering plugin ILWIS (Integrated Land and Water Information System) – Integrates image, vector and thematic data. JUMP GIS / OpenJUMP ((Open) Java Unified Mapping Platform) – The desktop GISs OpenJUMP, SkyJUMP, deeJUMP and Kosmo all emerged from JUMP. MapWindow GIS – Free desktop application with plugins and a programmer library QGIS (previously known as Quantum GIS) – Powerful cartographic and geospatial data processing tools with extensive plug-in support SAGA GIS (System for Automated Geoscientific Analysis) – Tools for environmental modeling, terrain analysis, and 3D mapping uDig – API and source code (Java) available. Besides these, there are other open source GIS tools: Generic Mapping Tools – A collection of command-line tools for manipulating geographic and Cartesian data sets and producing PostScript illustrations. FalconView – A mapping system created by the Georgia Tech Research Institute for Windows. A free, open source version is available. Kalypso – Uses Java and GML3. Focuses mainly on numerical simulations in water management. TerraView – Handles vector and raster data stored in a relational or geo-relational database, i.e. a frontend for TerraLib. Whitebox GAT – Cross-platform, free and open-source GIS software. === Other geospatial tools === Apart from desktop GIS, many other types of GIS software exist. ==== Web map servers ==== GeoServer – Written in Java and relies on GeoTools. Allows users to share and edit geospatial data. MapGuide Open Source – Runs on Linux or Windows, supports Apache and IIS web servers, and has APIs (PHP, .NET, Java, and JavaScript) for application development. Mapnik – C++/Python library for rendering, used by OpenStreetMap. MapServer – Written in C. Developed by the University of Minnesota. ==== Spatial database management systems ==== PostGIS – Spatial extensions for the open source PostgreSQL database, allowing geospatial queries. ArangoDB – Built-in features for spatial data management, allowing geospatial queries. SpatiaLite – Spatial extensions for the open source SQLite database, allowing geospatial queries. TerraLib – Provides advanced functions for GIS analysis. OrientDB – Built-in features for spatial data management, allowing geospatial queries. ==== Software development frameworks and libraries (for web applications) ==== GeoBase (Telogis GIS software) – Geospatial mapping software available as a software development kit. OpenLayers – Open source AJAX library for accessing geographic data layers of all kinds, originally developed and sponsored by MetaCarta.
Leaflet – Open source JavaScript library for mobile-friendly interactive maps ==== Software development frameworks and libraries (non-web) ==== GeoTools – Open source GIS toolkit written in Java, using Open Geospatial Consortium specifications. GDAL/OGR – Low-level open source library for reading and writing raster (GDAL) and vector (OGR) geospatial data formats; a usage sketch appears after the desktop GIS listings below. Orfeo Toolbox – Open source library for remote sensing image processing. ==== Cataloging application for spatially referenced resources ==== GeoNetwork opensource – A catalog application to manage spatially referenced resources pycsw – An OGC CSW server implementation written in Python == Commercial or proprietary GIS software == === Desktop GIS === Note: Almost all of the companies below offer Desktop GIS and WebMap Server products. Some, such as Manifold Systems and Esri, offer Spatial DBMS products as well. ==== Companies with high market share ==== Autodesk – Products that interface with its AutoCAD software package include Map 3D, Topobase, and MapGuide. Bentley Systems – Products that interface with its MicroStation software package include Bentley Map and Bentley Map View. ENVI – Utilized for image analysis, exploitation, and hyperspectral analysis. ERDAS IMAGINE – Products include Leica Photogrammetry Suite, ERDAS ER Mapper, ERDAS ECW/JP2 SDK (ECW (file format)) and ERDAS APOLLO. Esri – Products include ArcMap, ArcGIS, ArcSDE, ArcIMS, ArcWeb services and ArcGIS Server. Intergraph – Products include G/Technology, GeoMedia, GeoMedia Professional, GeoMedia WebMap, and add-on products for industry sectors, as well as photogrammetry. MapInfo – Desktop GIS MapInfo Professional. Smallworld ==== Companies with minor but notable market share ==== Cadcorp – Products include Cadcorp SIS, GeognoSIS, mSIS and developer kits. Caliper – Products include Maptitude, TransModeler and TransCAD. Conform by GameSim – Software for fusing and visualizing elevation, imagery, vectors, and LiDAR. The fused environment can be exported into 3D formats for gaming, simulation, and urban planning. Dragon/ips – Remote sensing software with GIS capabilities. Geosoft – GIS and data processing software used in natural resource exploration. GeoTime – Software for 3D visual analysis and reporting of location data over time; an ArcGIS extension is also available. Global Mapper – GIS software package currently developed by Blue Marble Geographics; originally based on USGS dlgv32 source code. Golden Software – GIS and scientific software. Products include Surfer for gridding and contouring, MapViewer for thematic mapping and spatial analysis, Strater for well or borehole logging and cross sections, Voxler for true 3D well and component mapping, Didger for digitizing and coordinate conversion, and Grapher for 2D and 3D graphing. Kongsberg Gallium Ltd. – Products include InterMAPhics and InterView. MapDotNet – Framework written in C#/.NET for building WPF, Silverlight, and HTML5 applications. Manifold System – GIS software package. RegioGraph by GfK GeoMarketing – GIS software for business planning and analyses; the company also provides compatible maps and market data. RemoteView SuperMap Inc. – A GIS software provider that offers Desktop, Component, Web, and Mobile GIS. TerrSet (formerly IDRISI) – GIS and image processing product developed by Clark Labs at Clark University. TNTmips by MicroImages – A system integrating desktop GIS, advanced image processing, 2D-3D-stereo visualization, desktop cartography, geospatial database management, and webmap publishing.
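To illustrate how a geospatial library such as GDAL/OGR (listed above) is used from an ordinary program, here is a minimal, hedged sketch using GDAL's Python bindings; the file name parcels.shp and the NAME attribute are hypothetical placeholders rather than anything from the packages above, and error handling is omitted for brevity.

```python
# A minimal sketch of reading vector data with the GDAL/OGR Python
# bindings (the 'osgeo' package). File and attribute names are
# hypothetical; any OGR-supported vector format would work the same way.
from osgeo import ogr

dataset = ogr.Open("parcels.shp")      # hypothetical shapefile
layer = dataset.GetLayer(0)            # first layer in the data source
print("Feature count:", layer.GetFeatureCount())

for feature in layer:
    geom = feature.GetGeometryRef()
    # Read an attribute value and derive a geometry property.
    print(feature.GetField("NAME"), "centroid:",
          geom.Centroid().ExportToWkt())
```

The same open-a-source, get-a-layer, iterate-features pattern underlies much of the desktop and server software listed in this section.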
=== GIS as a service === Many suppliers are now starting to offer Internet-based services as well as, or instead of, downloadable software and/or data. These can be free, funded by advertising or paid for on subscription; they split into three areas: SaaS – Software as a Service: Software available as a service on the Internet ArcGIS Online – Esri's cloud-based version of ArcGIS CartoDB – Online mapping platform that offers an open source, cloud-based SaaS model Google Earth Engine – Provides algorithms and a large catalog of public data for global-scale spatial computation. Mapbox – Provider of custom online maps for websites MapTiler – Provider of customizable maps for applications and websites. PaaS – Platform as a Service: geocoding or analysis/processing services ArcGIS Online FME Cloud Google Maps JavaScript API version 3 Here Maps JavaScript API Microsoft Bing Geocode Dataflow API US Census Geocoder DaaS – Data as a Service: data or content services ArcGIS Online Apple Maps Google Maps Here Maps OpenStreetMap Microsoft Bing Maps === Spatial DBMS === Boeing's Spatial Query Server – Spatially enables Sybase ASE. IBM Db2 – Allows spatial querying and storing of most spatial data types. Informix – Allows spatial querying and storing of most spatial data types. MySQL – Allows spatial querying and storing of most spatial data types. Microsoft SQL Server (2008 and later) – GIS products such as MapInfo and Cadcorp SIS can read and edit this data, while Esri and others are expected to add support in the future. Oracle Spatial – Product allows users to perform geographic operations and store spatial data types in an Oracle environment. Most commercial GIS packages can read and edit spatial data stored in this way. SAP HANA – Allows users to store common spatial data types, load spatial data files in well-known text (WKT) and well-known binary (WKB) formats, and perform spatial processing using SQL. Open Geospatial Consortium (OGC) certification allows third-party GIS software providers to store and process spatial data. GIS products such as ArcGIS from Esri work with HANA. Teradata – Allows storage and spatial analysis of location-based data stored using native geospatial data types within the Teradata database. VMDS – Version managed data store from Smallworld. === Geospatial Internet of Things === SensorUp – Provides cloud hosting and SDKs based on the Open Geospatial Consortium SensorThings API standard. == See also == Comparison of geographic information systems software GIS Live DVD Open Source Geospatial Foundation (OSGeo) == References ==
Wikipedia/Geographic_information_system_software
A mashup (computer industry jargon), in web development, is a web page or web application that uses content from more than one source to create a single new service displayed in a single graphical interface. For example, a user could combine the addresses and photographs of their library branches with a Google map to create a map mashup. The term implies easy, fast integration, frequently using open application programming interfaces (open APIs) and data sources to produce enriched results that were not necessarily the original reason for producing the raw source data. The term mashup originally comes from creating something by combining elements from two or more sources. The main characteristics of a mashup are combination, visualization, and aggregation. A mashup aims to make existing data more useful, for both personal and professional use. To be able to permanently access the data of other services, mashups are generally client applications or hosted online. In recent years, more and more Web applications have published APIs that enable software developers to easily integrate data and functions in the SOA way, instead of building them themselves. Mashups can be considered to have an active role in the evolution of social software and Web 2.0. Mashup composition tools are usually simple enough to be used by end-users. They generally do not require programming skills and instead support the visual wiring of GUI widgets, services and components together. These tools therefore contribute to a new vision of the Web, where users are able to contribute. The term "mashup" is not formally defined by any standard-setting body. == History == The broader context of the history of the Web provides a background for the development of mashups. Under the Web 1.0 model, organizations stored consumer data on portals and updated them regularly. They controlled all the consumer data, and the consumer had to use their products and services to get the information. The advent of Web 2.0 introduced Web standards that were commonly and widely adopted across traditional competitors and which unlocked the consumer data. At the same time, mashups emerged, allowing mixing and matching of competitors' APIs to develop new services. The first mashups used mapping services or photo services to combine these services with data of any kind and thereby produce visualizations of data. In the beginning, most mashups were consumer-based, but the mashup has more recently come to be seen as a concept that is also useful to enterprises. Business mashups can combine existing internal data with external services to generate new views on the data. The free Yahoo! Pipes service also allowed mashups to be built using the Yahoo! Query Language. == Types of mashup == There are many types of mashup, such as business mashups, consumer mashups, and data mashups. The most common type of mashup is the consumer mashup, aimed at the general public. Business (or enterprise) mashups define applications that combine their own resources, applications and data with other external Web services. They focus data into a single presentation and allow for collaborative action among businesses and developers. This works well for an agile development project, which requires collaboration between the developers and the customer (or a customer proxy, typically a product manager) for defining and implementing the business requirements.
Enterprise mashups are secure, visually rich Web applications that expose actionable information from diverse internal and external information sources. Consumer mashups combine data from multiple public sources in the browser and organize it through a simple browser user interface (e.g., Wikipediavision, which combines Google Maps and a Wikipedia API). Data mashups, in contrast to consumer mashups, combine similar types of media and information from multiple sources into a single representation. The combination of all these resources creates a new and distinct Web service that was not originally provided by either source. === By API type === Mashups can also be categorized by the basic API type they use, but any of these can be combined with each other or embedded into other applications. ==== Data types ==== Indexed data (documents, weblogs, images, videos, shopping articles, jobs ...) used by metasearch engines Cartographic and geographic data: geolocation software, geovisualization Feeds, podcasts: news aggregators ==== Functions ==== Data converters: language translators, speech processing, URL shorteners... Communication: email, instant messaging, notification... Visual data rendering: information visualization, diagrams Security related: electronic payment systems, ID identification... Editors == Mashup enabler == In technology, a mashup enabler is a tool for transforming incompatible IT resources into a form that allows them to be easily combined, in order to create a mashup. Mashup enablers allow powerful techniques and tools (such as mashup platforms) for combining data and services to be applied to new kinds of resources. An example of a mashup enabler is a tool for creating an RSS feed from a spreadsheet (which cannot easily be used to create a mashup). Many mashup editors include mashup enablers, for example, Presto Mashup Connectors, Convertigo Web Integrator or Caspio Bridge. Mashup enablers have also been described as "the service and tool providers, [sic] that make mashups possible". === History === Early mashups were developed manually by enthusiastic programmers. However, as mashups became more popular, companies began creating platforms for building mashups, which allow designers to visually construct mashups by connecting together mashup components. Mashup editors have greatly simplified the creation of mashups, significantly increasing the productivity of mashup developers and even opening mashup development to end-users and non-IT experts. Standard components and connectors enable designers to combine mashup resources in all sorts of complex ways with ease. Mashup platforms, however, have done little to broaden the scope of resources accessible by mashups and have not freed mashups from their reliance on well-structured data and open libraries (RSS feeds and public APIs). Mashup enablers evolved to address this problem, providing the ability to convert other kinds of data and services into mashable resources. === Web resources === Not all valuable data is located within organizations, however. In fact, the most valuable information for business intelligence and decision support is often external to the organization. With the emergence of rich web applications and online Web portals, a wide range of business-critical processes (such as ordering) are becoming available online. Unfortunately, very few of these data sources syndicate content in RSS format and very few of these services provide publicly accessible APIs.
Mashup editors therefore solve this problem by providing enablers or connectors. == Mashups versus portals == Mashups and portals are both content aggregation technologies. Portals are an older technology designed as an extension to traditional dynamic Web applications, in which the process of converting data content into marked-up Web pages is split into two phases: generation of markup "fragments" and aggregation of the fragments into pages. Each markup fragment is generated by a "portlet", and the portal combines them into a single Web page. Portlets may be hosted locally on the portal server or remotely on a separate server. Portal technology defines a complete event model covering reads and updates. A request for an aggregate page on a portal is translated into individual read operations on all the portlets that form the page ("render" operations on local, JSR 168 portlets or "getMarkup" operations on remote, WSRP portlets). If a submit button is pressed on any portlet on a portal page, it is translated into an update operation on that portlet alone (processAction on a local portlet or performBlockingInteraction on a remote, WSRP portlet). The update is then immediately followed by a read on all portlets on the page. Portal technology is about server-side, presentation-tier aggregation. It cannot be used to drive more robust forms of application integration such as two-phase commit. Mashups differ from portals in the following respects: The portal model has been around longer and has had greater investment and product research. Portal technology is therefore more standardized and mature. Over time, increasing maturity and standardization of mashup technology will likely make it more popular than portal technology, because it is more closely associated with Web 2.0 and lately service-oriented architectures (SOA). New versions of portal products are expected to eventually add mashup support while still supporting legacy portlet applications. Mashup technologies, in contrast, are not expected to provide support for portal standards. == Business mashups == Mashup uses are expanding in the business environment. Business mashups are useful for integrating business and data services, as business mashup technologies provide the ability to develop new integrated services quickly, to combine internal services with external or personalized information, and to make these services tangible to the business user through user-friendly Web browser interfaces. Business mashups differ from consumer mashups in the level of integration with business computing environments, security and access control features, governance, and the sophistication of the programming tools (mashup editors) used. Another difference between business mashups and consumer mashups is a growing trend of using business mashups in commercial software-as-a-service (SaaS) offerings. Many of the providers of business mashup technologies have added SOA features. == Architectural aspects of mashups == The architecture of a mashup is divided into three layers: Presentation / user interaction: this is the user interface of mashups. The technologies used are HTML/XHTML, CSS, JavaScript, Asynchronous JavaScript and XML (Ajax). Web services: the product's functionality can be accessed using API services. The technologies used are XMLHTTPRequest, XML-RPC, JSON-RPC, SOAP, REST. Data: handling the data, such as sending, storing and receiving. The technologies used are XML, JSON, KML.
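As a concrete illustration of how these layers cooperate, the following minimal Python sketch implements the server-based style discussed next: the server fetches JSON from two Web services and merges the results into a single structure for the presentation layer. Both endpoint URLs and all field names are hypothetical placeholders, and the third-party requests package is assumed to be available.

```python
# A minimal sketch of a server-based mashup: the server, not the
# browser, fetches JSON from two sources and merges them into a new
# representation ready for display. All URLs and fields are invented.
import requests

LIBRARIES_API = "https://example.org/api/libraries"  # hypothetical
GEOCODER_API = "https://example.org/api/geocode"     # hypothetical

def build_library_map_data():
    """Combine a library directory with geocoded coordinates."""
    libraries = requests.get(LIBRARIES_API, timeout=10).json()
    merged = []
    for lib in libraries:
        # Enrich each record with data from the second service.
        coords = requests.get(
            GEOCODER_API, params={"q": lib["address"]}, timeout=10
        ).json()
        merged.append({
            "name": lib["name"],
            "lat": coords["lat"],
            "lon": coords["lon"],
        })
    return merged  # e.g., rendered as map markers in the browser
```

A Web-based mashup would instead perform the same two fetches with XMLHTTPRequest or Ajax calls in the browser and combine the results client-side.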
Architecturally, there are two styles of mashups: Web-based and server-based. Whereas Web-based mashups typically use the user's web browser to combine and reformat the data, server-based mashups analyze and reformat the data on a remote server and transmit the data to the user's browser in its final form. Mashups appear to be a variation of the façade pattern. That is, a software engineering design pattern that provides a simplified interface to a larger body of code (in this case the code to aggregate the different feeds with different APIs). Mashups can be used with software provided as a service (SaaS). After several years of standards development, mainstream businesses are starting to adopt service-oriented architectures (SOA) to integrate disparate data by making them available as discrete Web services. Web services provide open, standardized protocols to provide a unified means of accessing information from a diverse set of platforms (operating systems, programming languages, applications). These Web services can be reused to provide completely new services and applications within and across organizations, providing business flexibility. == See also == Mashup (culture) Mashup (music) Open Mashup Alliance Open API Yahoo! Pipes Webhook Web portal Web scraping == References == == Further reading == Soylu, Ahmet; Mödritscher, Felix; Wild, Fridolin; De Causmaecker, Patrick; Desmet, Piet. 2012. "Mashups by Orchestration and Widget-based Personal Environments: Key Challenges, Solution Strategies, and an Application." Program: Electronic Library and Information Systems 46 (4): 383–428. Endres-Niggemeyer, Brigitte, ed. 2013. Semantic Mashups. Intelligent Reuse of Web Resources. Springer. ISBN 978-3-642-36402-0 (Print)
Wikipedia/Mashup_(web_application_hybrid)
Animal geography is a subfield of the nature–society/human–environment branch of geography as well as a part of the larger, interdisciplinary umbrella of human–animal studies (HAS). Animal geography is defined as the study of "the complex entanglings of human–animal relations with space, place, location, environment and landscape" or "the study of where, when, why and how nonhuman animals intersect with human societies". Recent work advances these perspectives to argue for an ecology of relations in which humans and animals are enmeshed, taking seriously the lived spaces of animals themselves and their sentient interactions with not just human but other nonhuman bodies as well. The Animal Geography Specialty Group of the Association of American Geographers was founded in 2009 by Monica Ogra and Julie Urbanik, and the Animal Geography Research Network was founded in 2011 by Daniel Allen. == Overview == === First wave === The first wave of animal geography, known as zoogeography, came to prominence as a geographic subfield from the late 1800s through the early part of the 20th century. During this time the study of animals was seen as a key part of the discipline and the goal was "the scientific study of animal life with reference to the distribution of animals on the earth and the mutual influence of environment and animals upon each other". The animals that were the focus of studies were almost exclusively wild animals, and zoogeographers were building on the new theories of evolution and natural selection. They mapped the evolution and movement of species across time and space and also sought to understand how animals adapted to different ecosystems. "The ambition was to establish general laws of how animals arranged themselves across the earth's surface or, at smaller scales, to establish patterns of spatial co-variation between animals and other environmental factors." Key works include Newbigin's Animal Geography, Bartholomew, Clarke, and Grimshaw's Atlas of Zoogeography, and Allee and Schmidt's Ecological Animal Geography. By the middle of the 20th century, emerging disciplines such as biology and zoology began taking on the traditional zoogeographic cataloging of species, their distributions, and ecologies. In geography, zoogeography exists today as the vibrant subfield of biogeography. === Second wave === The middle of the 20th century saw a turn away from zoogeography (while never fully relinquishing it) towards questions about and interest in the impact of humans on wildlife and in human relations with livestock. Two key geographers shaping this wave of animal geography were Carl Sauer and Charles Bennett. Sauer's interest in the cultural landscape – or cultural ecology (how human cultures shape and are shaped by their environment) – necessarily involved addressing the topic of animal domestication. Sauer's research focused on the history of domestication and on how human uses of livestock shaped the landscape (via fencing, grazing, and shelters). Bennett called for a 'cultural animal geography' that focused on the interactions of animals and human cultures, such as subsistence hunting and fishing. The shift from the first wave to the second wave of animal geography had to do with the species being studied. Second wave animal geography brought domesticated livestock into view instead of just focusing on wildlife.
For the next several decades animal geography, as cultural ecology, was dominated by research into the origins of domestication, cultural rituals around domestication, and different cultures' livestock relations (sedentary versus nomadic herding). Key works include Simoons and Simoons' A Ceremonial Ox of India, Gades' work on the guinea pig, and Cansdale's Animals and Man. Baldwin provides an excellent overview of second wave animal geography research. === Third wave === In the early 1990s several things happened to cause geographers with an interest in animals and human–animal studies to rethink what was possible within animal geography. The 1980s and early 1990s saw the rise of the worldwide animal advocacy movement addressing everything from pet overpopulation to saving endangered species, exposing cruelty to animals in industrial farming (factory farms or concentrated animal feeding operations), and protesting circuses, the use of fur, and hunting – all in an effort to raise the visibility of how humans treat non-human others amongst the general public. In the academy, biologists and ethologists were studying animal behavior and species loss and discovery, raising awareness about the experiential lives of animals as well as their perilous existence alongside humans. Social scientists were reassessing what it means to be a subject and breaking into the black box of nature to explore new understandings of the relations between humans and the rest of the planet. Animal geographers realized there was a whole spectrum of human–animal relations that should be addressed from a geographic perspective. At the forefront of this third wave of animal geography were Tuan's work on pets in Dominance and Affection and a special topics issue of the journal Environment and Planning D: Society and Space edited by Wolch and Emel. The two key features of the third wave of animal geography that distinguish it from the earlier waves are (1) an expanded notion of human–animal relations to include all time periods and locations of human–animal encounters (rather than just wildlife or livestock), and (2) attempts to bring in the animals themselves as subjects. Since the 1995 publication there has been an explosion of case studies and theorizing. Key works that bring together third wave animal geography are Wolch and Emel's Animal Geographies: Place, Politics and Identity in the Nature–Culture Borderlands, Philo and Wilbert's Animal Spaces, Beastly Places: New Geographies of Human–Animal Relations, Urbanik's Placing Animals: An Introduction to the Geography of Human–Animal Relations, Gillespie and Collard's Critical Animal Geographies: Politics, Intersections and Hierarchies in a Multispecies World, and Wilcox and Rutherford's Historical Animal Geographies. == Areas of focus == There are presently nine areas of focus within animal geography: Theorizing animal geography. Major works addressing how to think about human–animal relations as a whole include Whatmore's Hybrid Geographies, Hobson's work on political animals through the practice of bear bile farming, and new scholarship that looks at animals' relations with the material world. Urban animal geography. Researchers in this area seek to understand that cities are, historically and today, multi-species spaces. Theoretical work comes from Wolch et al. on what constitutes a transspecies urban theory and Wolch on manifesting a multi-species city, along with Philo's work on the historical context for the removal of livestock from the city. Ethics and animal geography.
How space, place, and time shape what practices on other species are right or wrong is the concern of this area. Articles by Lynn on what he terms geoethics and Jones on what he terms an ethics of encounter are a good place to start. Human identities and animals. How humans use animals to identify themselves as humans or to distinguish between human groups has a fascinating geographical history. Brown and Rasmussen examine the issue of bestiality, Elder et al. explore how animals are used to discriminate against human groups, and Neo studies how ethnicity comes into play with pig production in Malaysia. Others such as Barua argue that the identities of animals may be cosmopolitan, constituted by the circulation of animals and their contact with divergent cultures. These are all excellent case studies. Animals as subjects. One of the most difficult aspects of studying animals is the fact that they cannot talk back to us in human language. Animal geographers have been tackling how, exactly, to address the fact that individuals of other species are experiential entities. Examples include work by Barua on elephants, Bear on fish, Hinchliffe et al. on water voles, and Lorimer on nonhuman charisma. Geographers are also contending with how to reconstruct the lives of animal subjects in the past, how these lives may be resurrected from the historical record, and how spatially situated human–animal relations have changed through time. Pets. One of the most intimate relationships that people have with other species is often through the animals living in their homes. How we have shaped these animals to fit human lifestyles and what this means for negotiating a more-than-human existence is the concern here. Key articles include Fox on dogs, Lulka on the American Kennel Club, and Nast on critical pet studies. Working animals. Human uses of other species as labor are quite extensive both historically and today. From logging elephants to laboratory mice and zoo animals to military dogs and draft animals, the spaces and places of how animals work for us make fascinating geographies. For insight see Anderson's work on zoos, Davies' work on virtual zoos and laboratory mice, and Urbanik's work on the politics of animal biotechnology. Farmed animals. How we raise and farm animals, both as food and for their parts (e.g., fur), is the largest category of actual use of animals. Research in this area has focused on the development of industrial farming systems, the ethics of consuming animals, and how livestock relations impact notions of place. Buller and Morris discuss farm animal welfare, Holloway examines technological advances in dairy production, Hovorka looks at urban livestock in Africa, and Yarwood et al. explore the livestock landscape. Wild animals. To date, animal geographers have done the most work with this category of human–animal relations. From theoretical explorations of wildlife classification to case studies of human–wildlife conflict, wildlife tourism, and particular human–wild animal geographies, this has proven a dynamic avenue. Key articles include Emel's work on wolves, work on wildlife and mobility, Vaccaro and Beltran's work on reintroductions, Whatmore and Thorne's work on relational typologies of wildlife, and further extensions of the latter's work through explorations of animals and conservation in historical and contemporary trans-national contexts.
== Animals of focus == Despite an entire menagerie of animals being the subjects of the animal geographies project, certain species have received more attention than others. These creatures have been ideal 'model' organisms for asking questions about animals in geographical thought. === Elephants === Elephants have featured most prominently in animal geography, beginning with the work of Whatmore and Thorne on the spatial configurations of wildlife. They ask questions about how the African elephant Duchess is configured by different practices in zoos, contrasting her with counterparts in the wild. Whatmore and Thorne's exploration of becoming-elephant was a milestone in animal and more-than-human geographies. Asian elephants have also been the feature of historical animal geographies, the subjects of animal geography methods, and of interdisciplinary biogeographies. They have been the mainstay of new work on cosmopolitan ecologies, and in thinking about the links between political ecology and nonrepresentational theory. === Wild cats === Wild cat species have also been featured in recent scholarship in animal geography, including Gullo, Lassiter and Wolch's and Collard's work on place-specific relational geographies, the use of shared landscapes, and interactions between cougars and people. Doubleday's work on tigers in India and Wilcox's work on jaguars in the Americas also explore socially constructed affective logics and their impacts on conservation priorities across a range of geographies and time periods. === Dogs === Dogs have featured heavily in animal geography scholarship in recent years. Notable work includes that of Haraway, and of Instone and Sweeney, on human–dog hybridity, and work by Srinivasan on street dogs in India. More recent work examines how walking dogs that display unwanted behaviour (animals falling short of our collective expectations of what a 'pet' should be) can extract a potentially considerable "social and emotional toll" from dog owners. == See also == Biogeography Fauna Phytogeography Zoology == References == == Further reading == Barua, M. (2013). "Volatile ecologies: towards a material politics of human–animal relations". Environment and Planning A. 46 (6): 1462–1478. doi:10.1068/a46138. S2CID 144550925. Retrieved 21 December 2013. Barua, M. (2013). "Circulating elephants: unpacking the geographies of a cosmopolitan animal". Transactions of the Institute of British Geographers. Retrieved 21 December 2013. == External links == Animal Geography Specialty Group of the Association of American Geographers
Wikipedia/Animal_geographies
In information science, an ontology encompasses a representation, formal naming, and definitions of the categories, properties, and relations between the concepts, data, or entities that pertain to one, many, or all domains of discourse. More simply, an ontology is a way of showing the properties of a subject area and how they are related, by defining a set of terms and relational expressions that represent the entities in that subject area. The field which studies ontologies so conceived is sometimes referred to as applied ontology. Every academic discipline or field, in creating its terminology, thereby lays the groundwork for an ontology. Each uses ontological assumptions to frame explicit theories, research and applications. Improved ontologies may improve problem solving within that domain, interoperability of data systems, and discoverability of data. Translating research papers within every field is a problem made easier when experts from different countries maintain a controlled vocabulary of jargon between each of their languages. For instance, the definition and ontology of economics is a primary concern in Marxist economics, but also in other subfields of economics. An example of economics relying on information science occurs in cases where a simulation or model is intended to enable economic decisions, such as determining what capital assets are at risk and by how much (see risk management). What ontologies in both information science and philosophy have in common is the attempt to represent entities, including both objects and events, with all their interdependent properties and relations, according to a system of categories. In both fields, there is considerable work on problems of ontology engineering (e.g., Quine and Kripke in philosophy, Sowa and Guarino in information science), and debates concerning to what extent normative ontology is possible (e.g., foundationalism and coherentism in philosophy, BFO and Cyc in artificial intelligence). Applied ontology is considered by some as a successor to prior work in philosophy. However, many current efforts are more concerned with establishing controlled vocabularies of narrow domains than with philosophical first principles, or with questions such as the mode of existence of fixed essences or whether enduring objects (e.g., perdurantism and endurantism) may be ontologically more primary than processes. Applied ontology has received considerable attention within artificial intelligence, in subfields such as natural language processing, machine translation, and knowledge representation, but ontology editors are now also used in a range of other fields, including biomedical informatics and industry. Such efforts often use ontology editing tools such as Protégé. == Ontology in philosophy == Ontology is a branch of philosophy and intersects areas such as metaphysics, epistemology, and philosophy of language, as it considers how knowledge, language, and perception relate to the nature of reality. Metaphysics deals with questions like "what exists?" and "what is the nature of reality?". One of five traditional branches of philosophy, metaphysics is concerned with exploring existence through properties, entities and relations such as those between particulars and universals, intrinsic and extrinsic properties, or essence and existence. Metaphysics has been an ongoing topic of discussion since the beginning of recorded history. == Etymology == The compound word ontology combines onto-, from the Greek ὄν, on (gen. ὄντος, ontos), i.e.
"being; that which is", which is the present participle of the verb εἰμί, eimí, i.e. "to be, I am", and -λογία, -logia, i.e. "logical discourse", see classical compounds for this type of word formation. While the etymology is Greek, the oldest extant record of the word itself, the Neo-Latin form ontologia, appeared in 1606 in the work Ogdoas Scholastica by Jacob Lorhard (Lorhardus) and in 1613 in the Lexicon philosophicum by Rudolf Göckel (Goclenius). The first occurrence in English of ontology as recorded by the OED (Oxford English Dictionary, online edition, 2008) came in Archeologia Philosophica Nova or New Principles of Philosophy by Gideon Harvey. == Formal Ontology == Since the mid-1970s, researchers in the field of artificial intelligence (AI) have recognized that knowledge engineering is the key to building large and powerful AI systems. AI researchers argued that they could create new ontologies as computational models that enable certain kinds of automated reasoning, which was only marginally successful. In the 1980s, the AI community began to use the term ontology to refer to both a theory of a modeled world and a component of knowledge-based systems. In particular, David Powers introduced the word ontology to AI to refer to real world or robotic grounding, publishing in 1990 literature reviews emphasizing grounded ontology in association with the call for papers for a AAAI Summer Symposium Machine Learning of Natural Language and Ontology, with an expanded version published in SIGART Bulletin and included as a preface to the proceedings. Some researchers, drawing inspiration from philosophical ontologies, viewed computational ontology as a kind of applied philosophy. In 1993, the widely cited web page and paper "Toward Principles for the Design of Ontologies Used for Knowledge Sharing" by Tom Gruber used ontology as a technical term in computer science closely related to earlier idea of semantic networks and taxonomies. Gruber introduced the term as a specification of a conceptualization: An ontology is a description (like a formal specification of a program) of the concepts and relationships that can formally exist for an agent or a community of agents. This definition is consistent with the usage of ontology as set of concept definitions, but more general. And it is a different sense of the word than its use in philosophy. Attempting to distance ontologies from taxonomies and similar efforts in knowledge modeling that rely on classes and inheritance, Gruber stated (1993): Ontologies are often equated with taxonomic hierarchies of classes, class definitions, and the subsumption relation, but ontologies need not be limited to these forms. Ontologies are also not limited to conservative definitions, that is, definitions in the traditional logic sense that only introduce terminology and do not add any knowledge about the world (Enderton, 1972). To specify a conceptualization, one needs to state axioms that do constrain the possible interpretations for the defined terms. Recent experimental ontology frameworks have also explored resonance-based AI-human co-evolution structures, such as IAMF (Illumination AI Matrix Framework). Though not yet widely adopted in academic discourse, such models propose phased approaches to ethical harmonization and structural emergence. 
As a refinement of Gruber's definition, Feilmayr and Wöß (2016) stated: "An ontology is a formal, explicit specification of a shared conceptualization that is characterized by high semantic expressiveness required for increased complexity." == Formal ontology components == Contemporary ontologies share many structural similarities, regardless of the language in which they are expressed. Most ontologies describe individuals (instances), classes (concepts), attributes, and relations. === Types === ==== Domain ontology ==== A domain ontology (or domain-specific ontology) represents concepts which belong to a realm of the world, such as biology or politics. Each domain ontology typically models domain-specific definitions of terms. For example, the word card has many different meanings. An ontology about the domain of poker would model the "playing card" meaning of the word, while an ontology about the domain of computer hardware would model the "punched card" and "video card" meanings. Since domain ontologies are written by different people, they represent concepts in very specific and unique ways, and are often incompatible within the same project. As systems that rely on domain ontologies expand, they often need to merge domain ontologies by hand-tuning each entity or by using a combination of software merging and hand-tuning. This presents a challenge to the ontology designer. Different ontologies in the same domain arise due to different languages, different intended usage of the ontologies, and different perceptions of the domain (based on cultural background, education, ideology, etc.). At present, merging ontologies that are not developed from a common upper ontology is a largely manual process and therefore time-consuming and expensive. Domain ontologies that use the same upper ontology to provide a set of basic elements with which to specify the meanings of the domain ontology entities can be merged with less effort. There are studies on generalized techniques for merging ontologies, but this area of research is still ongoing, and only recently have projects such as the OBO Foundry sidestepped the issue by having multiple domain ontologies share the same upper ontology. ==== Upper ontology ==== An upper ontology (or foundation ontology) is a model of the commonly shared relations and objects that are generally applicable across a wide range of domain ontologies. It usually employs a core glossary that overarches the terms and associated object descriptions as they are used in various relevant domain ontologies. Standardized upper ontologies available for use include BFO, BORO method, Dublin Core, GFO, Cyc, SUMO, UMBEL, and DOLCE. WordNet has been considered an upper ontology by some and has been used as a linguistic tool for learning domain ontologies. ==== Hybrid ontology ==== The Gellish ontology is an example of a combination of an upper and a domain ontology. == Visualization == A survey of ontology visualization methods is presented by Katifori et al. An updated survey of ontology visualization methods and tools was published by Dudás et al. The most established ontology visualization methods, namely indented tree and graph visualization, are evaluated by Fu et al. A visual language for ontologies represented in OWL is specified by the Visual Notation for OWL Ontologies (VOWL). == Engineering == Ontology engineering (also called ontology building) is a set of tasks related to the development of ontologies for a particular domain.
It is a subfield of knowledge engineering that studies the ontology development process, the ontology life cycle, the methods and methodologies for building ontologies, and the tools and languages that support them. Ontology engineering aims to make explicit the knowledge contained in software applications and in organizational procedures for a particular domain. Ontology engineering offers a direction for overcoming semantic obstacles, such as those related to the definitions of business terms and software classes. Known challenges with ontology engineering include: Ensuring the ontology is current with domain knowledge and term use Providing sufficient specificity and concept coverage for the domain of interest, thus minimizing the content completeness problem Ensuring the ontology can support its use cases === Editors === Ontology editors are applications designed to assist in the creation or manipulation of ontologies. It is common for ontology editors to use one or more ontology languages. Aspects of ontology editors include: visual navigation possibilities within the knowledge model, inference engines and information extraction; support for modules; the import and export of foreign knowledge representation languages for ontology matching; and the support of meta-ontologies such as OWL-S, Dublin Core, etc. === Learning === Ontology learning is the automatic or semi-automatic creation of ontologies, including extracting a domain's terms from natural language text. As building ontologies manually is extremely labor-intensive and time-consuming, there is great motivation to automate the process. Information extraction and text mining have been explored to automatically link ontologies to documents, for example in the context of the BioCreative challenges. === Research === Epistemological assumptions, which in research ask "What do you know?" or "How do you know it?", create the foundation researchers use when approaching a certain topic or area for potential research. As epistemology is directly linked to knowledge and how we come about accepting certain truths, individuals conducting academic research must understand what allows them to begin theory building. Put simply, epistemological assumptions force researchers to question how they arrive at the knowledge they have. == Languages == An ontology language is a formal language used to encode an ontology. There are a number of such languages for ontologies, both proprietary and standards-based: Common Algebraic Specification Language is a general logic-based specification language developed within the IFIP working group 1.3 "Foundations of System Specifications" and is a de facto standard language for software specifications. It is now being applied to ontology specifications in order to provide modularity and structuring mechanisms. Common Logic is ISO standard 24707, a specification of a family of ontology languages that can be accurately translated into each other. The Cyc project has its own ontology language called CycL, based on first-order predicate calculus with some higher-order extensions. DOGMA (Developing Ontology-Grounded Methods and Applications) adopts the fact-oriented modeling approach to provide a higher level of semantic stability. The Gellish language includes rules for its own extension and thus integrates an ontology with an ontology language. IDEF5 is a software engineering method to develop and maintain usable, accurate, domain ontologies. KIF is a syntax for first-order logic that is based on S-expressions.
SUO-KIF is a derivative version supporting the Suggested Upper Merged Ontology. MOF and UML are standards of the OMG. Olog is a category-theoretic approach to ontologies, emphasizing translations between ontologies using functors. OBO is a language used for biological and biomedical ontologies. OntoUML is an ontologically well-founded profile of UML for conceptual modeling of domain ontologies. OWL is a language for making ontological statements, developed as a follow-on from RDF and RDFS, as well as earlier ontology language projects including OIL, DAML, and DAML+OIL. OWL is intended to be used over the World Wide Web, and all its elements (classes, properties and individuals) are defined as RDF resources, and identified by URIs. Rule Interchange Format (RIF) and F-Logic combine ontologies and rules. Semantic Application Design Language (SADL) captures a subset of the expressiveness of OWL, using an English-like language entered via an Eclipse plug-in. SBVR (Semantics of Business Vocabularies and Rules) is an OMG standard adopted in industry to build ontologies. TOVE Project, TOronto Virtual Enterprise project == Published examples == Arabic Ontology, a linguistic ontology for Arabic, which can be used as an Arabic WordNet but with ontologically-clean content. AURUM – Information Security Ontology, an ontology for information security knowledge sharing, enabling users to collaboratively understand and extend the domain knowledge body. It may serve as a basis for automated information security risk and compliance management. BabelNet, a very large multilingual semantic network and ontology, lexicalized in many languages Basic Formal Ontology, a formal upper ontology designed to support scientific research BioPAX, an ontology for the exchange and interoperability of biological pathway (cellular processes) data BMO, an e-Business Model Ontology based on a review of enterprise ontologies and business model literature SSBMO, a Strongly Sustainable Business Model Ontology based on a review of the systems-based natural and social science literature (including business). Includes critique of and significant extensions to the Business Model Ontology (BMO). CCO and GexKB, Application Ontologies (APO) that integrate diverse types of knowledge with the Cell Cycle Ontology (CCO) and the Gene Expression Knowledge Base (GexKB) CContology (Customer Complaint Ontology), an e-business ontology to support online customer complaint management CIDOC Conceptual Reference Model, an ontology for cultural heritage COSMO, a Foundation Ontology (current version in OWL) that is designed to contain representations of all of the primitive concepts needed to logically specify the meanings of any domain entity. It is intended to serve as a basic ontology that can be used to translate among the representations in other ontologies or databases. It started as a merger of the basic elements of the OpenCyc and SUMO ontologies, and has been supplemented with other ontology elements (types, relations) so as to include representations of all of the words in the Longman dictionary defining vocabulary.
Computer Science Ontology, an automatically generated ontology of research topics in the field of computer science Cyc, a large Foundation Ontology for formal representation of the universe of discourse Disease Ontology, designed to facilitate the mapping of diseases and associated conditions to particular medical codes DOLCE, a Descriptive Ontology for Linguistic and Cognitive Engineering Drammar, ontology of drama Dublin Core, a simple ontology for documents and publishing Financial Industry Business Ontology (FIBO), a business conceptual ontology for the financial industry Foundational, Core and Linguistic Ontologies Foundational Model of Anatomy, an ontology for human anatomy Friend of a Friend, an ontology for describing persons, their activities and their relations to other people and objects Gene Ontology for genomics Gellish English dictionary, an ontology that includes a dictionary and taxonomy that includes an upper ontology and a lower ontology that focuses on industrial and business applications in engineering, technology and procurement. Geopolitical ontology, an ontology describing geopolitical information created by the Food and Agriculture Organization (FAO). The geopolitical ontology includes names in multiple languages (English, French, Spanish, Arabic, Chinese, Russian and Italian); maps standard coding systems (UN, ISO, FAOSTAT, AGROVOC, etc.); provides relations among territories (land borders, group membership, etc.); and tracks historical changes. In addition, FAO provides web services of the geopolitical ontology and a module maker to download modules of the geopolitical ontology in different formats (RDF, XML, and Excel). See more information at FAO Country Profiles. GAO (General Automotive Ontology), an ontology for the automotive industry that includes 'car' extensions GOLD, General Ontology for Linguistic Description GUM (Generalized Upper Model), a linguistically motivated ontology for mediating between client systems and natural language technology IDEAS Group, a formal ontology for enterprise architecture being developed by the Australian, Canadian, UK and U.S. Defence Depts. Linkbase, a formal representation of the biomedical domain, founded upon Basic Formal Ontology. LPL, Landmark Pattern Language NCBO Bioportal, biological and biomedical ontologies and associated tools to search, browse and visualise NIFSTD Ontologies from the Neuroscience Information Framework: a modular set of ontologies for the neuroscience domain. OBO-Edit, an ontology browser for most of the Open Biological and Biomedical Ontologies OBO Foundry, a suite of interoperable reference ontologies in biology and biomedicine OMNIBUS Ontology, an ontology of learning, instruction, and instructional design Ontology for Biomedical Investigations, an open-access, integrated ontology of biological and clinical investigations ONSTR, Ontology for Newborn Screening Follow-up and Translational Research, Newborn Screening Follow-up Data Integration Collaborative, Emory University, Atlanta. Plant Ontology for plant structures and growth/development stages, etc. POPE, Purdue Ontology for Pharmaceutical Engineering PRO, the Protein Ontology of the Protein Information Resource, Georgetown University ProbOnto, knowledge base and ontology of probability distributions.
Program abstraction taxonomy Protein Ontology for proteomics RXNO Ontology, for name reactions in chemistry SCDO, the Sickle Cell Disease Ontology, facilitates data sharing and collaborations within the SCD community, amongst other applications (see list on the SCDO website). Schema.org, for embedding structured data into web pages, primarily for the benefit of search engines Sequence Ontology, for representing genomic feature types found on biological sequences SNOMED CT (Systematized Nomenclature of Medicine – Clinical Terms) Suggested Upper Merged Ontology, a formal upper ontology Systems Biology Ontology (SBO), for computational models in biology SWEET, Semantic Web for Earth and Environmental Terminology SSN/SOSA, the Semantic Sensor Network Ontology (SSN) and the Sensor, Observation, Sample, and Actuator Ontology (SOSA), W3C Recommendations and OGC standards for describing sensors and their observations ThoughtTreasure ontology TIME-ITEM, Topics for Indexing Medical Education Uberon, representing animal anatomical structures UMBEL, a lightweight reference structure of 20,000 subject concept classes and their relationships derived from OpenCyc WordNet, a lexical reference system YAMATO, Yet Another More Advanced Top-level Ontology YSO – General Finnish Ontology The W3C Linking Open Data community project coordinates attempts to converge different ontologies into a worldwide Semantic Web. == Libraries == The development of ontologies has led to the emergence of services providing lists or directories of ontologies called ontology libraries. The following are libraries of human-selected ontologies. COLORE is an open repository of first-order ontologies in Common Logic with formal links between ontologies in the repository. DAML Ontology Library maintains a legacy of ontologies in DAML. Ontology Design Patterns portal is a wiki repository of reusable components and practices for ontology design, and also maintains a list of exemplary ontologies. Protégé Ontology Library contains a set of OWL, Frame-based and other format ontologies. SchemaWeb is a directory of RDF schemata expressed in RDFS, OWL and DAML+OIL. The following are both directories and search engines. OBO Foundry is a suite of interoperable reference ontologies in biology and biomedicine. Bioportal (ontology repository of NCBO) Linked Open Vocabularies OntoSelect Ontology Library offers similar services for RDF/S, DAML and OWL ontologies. Ontaria is a "searchable and browsable directory of semantic web data" with a focus on RDF vocabularies with OWL ontologies. (NB: project "on hold" since 2004.) Swoogle is a directory and search engine for all RDF resources available on the Web, including ontologies. Open Ontology Repository initiative ROMULUS is a foundational ontology repository aimed at improving semantic interoperability. Currently there are three foundational ontologies in the repository: DOLCE, BFO and GFO. == Examples of applications == In general, ontologies can be used beneficially in several fields, such as enterprise applications. A more concrete example is SAPPHIRE (Situational Awareness and Preparedness for Public Health Incidences and Reasoning Engines), a semantics-based health information system capable of tracking and evaluating situations and occurrences that may affect public health. Geographic information systems bring together data from different sources and therefore benefit from ontological metadata, which helps to connect the semantics of the data.
Domain-specific ontologies are extremely important in biomedical research, which requires named entity disambiguation of various biomedical terms and abbreviations that have the same string of characters but represent different biomedical concepts. For example, CSF can represent Colony Stimulating Factor or Cerebrospinal Fluid, both of which are represented by the same term, CSF, in biomedical literature. This is why a large number of public ontologies are related to the life sciences. Life science data science tools that fail to implement these types of biomedical ontologies will not be able to accurately determine causal relationships between concepts. == See also == Related philosophical concepts Alphabet of human thought Characteristica universalis Interoperability Level of measurement Metalanguage Natural semantic metalanguage == References == == Further reading == Oberle, D.; Guarino, N.; Staab, S. (2009). "What is an Ontology?" (PDF). Handbook on Ontologies. pp. 1–17. doi:10.1007/978-3-540-92673-3_0. ISBN 978-3-540-70999-2. S2CID 8522608. Fensel, D.; van Harmelen, F.; Horrocks, I.; McGuinness, D.L.; Patel-Schneider, P.F. (2001). "OIL: an ontology infrastructure for the Semantic Web". IEEE Intelligent Systems. 16 (2): 38–45. doi:10.1109/5254.920598. Gangemi, A.; Presutti, V. "Ontology Design Patterns" (PDF). Staab & Studer 2009. Golemati, M.; Katifori, A.; Vassilakis, C.; Lepouras, G.; Halatsis, C. (2007). "Creating an Ontology for the User Profile: Method and Applications" (PDF). Proceedings of the First IEEE International Conference on Research Challenges in Information Science (RCIS), Morocco 2007. CiteSeerX 10.1.1.74.9399. Archived from the original (PDF) on 2008-12-17. Mizoguchi, R. (2004). "Tutorial on ontological engineering: Part 3: Advanced course of ontological engineering" (PDF). New Gener Comput. 22: 193–220. doi:10.1007/BF03040960. S2CID 23747079. Archived from the original (PDF) on 2013-03-09. Retrieved 2009-06-08. Gruber, T. R. (1993). "A translation approach to portable ontology specifications" (PDF). Knowledge Acquisition. 5 (2): 199–220. CiteSeerX 10.1.1.101.7493. doi:10.1006/knac.1993.1008. S2CID 15709015. Maedche, A.; Staab, S. (2001). "Ontology learning for the Semantic Web". IEEE Intelligent Systems. 16 (2): 72–79. doi:10.1109/5254.920602. S2CID 1411149. Noy, Natalya F.; McGuinness, Deborah L. (March 2001). "Ontology Development 101: A Guide to Creating Your First Ontology". Stanford Knowledge Systems Laboratory Technical Report KSL-01-05, Stanford Medical Informatics Technical Report SMI-2001-0880. Archived from the original on 2010-07-14. Chaminda Abeysiriwardana, Prabath; Kodituwakku, Saluka R (2012). "Ontology Based Information Extraction for Disease Intelligence". International Journal of Research in Computer Science. 2 (6): 7–19. arXiv:1211.3497. Bibcode:2012arXiv1211.3497C. doi:10.7815/ijorcs.26.2012.051. S2CID 11297019. Razmerita, L.; Angehrn, A.; Maedche, A. (2003). "Ontology-Based User Modeling for Knowledge Management Systems". User Modeling 2003. Lecture Notes in Computer Science. Vol. 2702. Springer. pp. 213–7. CiteSeerX 10.1.1.102.4591. doi:10.1007/3-540-44963-9_29. ISBN 3-540-44963-9. Soylu, A.; De Causmaecker, Patrick (2009). "Merging model driven and ontology driven system development approaches pervasive computing perspective". Proceedings of the 24th International Symposium on Computer and Information Sciences. pp. 730–5.
doi:10.1109/ISCIS.2009.5291915. ISBN 978-1-4244-5021-3. S2CID 2267593. Smith, B. (2008). "Ontology (Science)". In Eschenbach, C.; Gruninger, M. (eds.). Formal Ontology in Information Systems, Proceedings of FOIS 2008. ISO Press. pp. 21–35. CiteSeerX 10.1.1.681.2599. Staab, S.; Studer, R., eds. (2009). "What is an Ontology?". Handbook on Ontologies (2nd ed.). Springer. pp. 1–17. doi:10.1007/978-3-540-92673-3_0. ISBN 978-3-540-92673-3. S2CID 8522608. Uschold, Mike; Gruninger, M. (1996). "Ontologies: Principles, Methods and Applications". Knowledge Engineering Review. 11 (2): 93–136. CiteSeerX 10.1.1.111.5903. doi:10.1017/S0269888900007797. S2CID 2618234. Pidcock, W. "What are the differences between a vocabulary, a taxonomy, a thesaurus, an ontology, and a meta-model?". Archived from the original on 2009-10-14. Yudelson, M.; Gavrilova, T.; Brusilovsky, P. (2005). "Towards User Modeling Meta-ontology". User Modeling 2005. Lecture Notes in Computer Science. Vol. 3538. Springer. pp. 448–452. CiteSeerX 10.1.1.86.7079. doi:10.1007/11527886_62. ISBN 978-3-540-31878-1. Movshovitz-Attias, Dana; Cohen, William W. (2012). "Bootstrapping Biomedical Ontologies for Scientific Text using NELL" (PDF). Proceedings of the 2012 Workshop on Biomedical Natural Language Processing. Association for Computational Linguistics. pp. 11–19. CiteSeerX 10.1.1.376.2874. == External links == Knowledge Representation at Open Directory Project Library of ontologies (Archive, Unmaintained) GoPubMed using Ontologies for searching ONTOLOG (a.k.a. "Ontolog Forum") - an Open, International, Virtual Community of Practice on Ontology, Ontological Engineering and Semantic Technology Use of Ontologies in Natural Language Processing Ontology Summit - an annual series of events (first started in 2006) that involves the ontology community and communities related to each year's theme chosen for the summit. Standardization of Ontologies
Wikipedia/Ontology_(computer_science)
Oceanography (from Ancient Greek ὠκεανός (ōkeanós) 'ocean' and γραφή (graphḗ) 'writing'), also known as oceanology, sea science, ocean science, and marine science, is the scientific study of the ocean, including its physics, chemistry, biology, and geology. It is an Earth science, which covers a wide range of topics, including ocean currents, waves, and geophysical fluid dynamics; fluxes of various chemical substances and physical properties within the ocean and across its boundaries; ecosystem dynamics; and plate tectonics and seabed geology. Oceanographers draw upon a wide range of disciplines to deepen their understanding of the world's oceans, incorporating insights from astronomy, biology, chemistry, geography, geology, hydrology, meteorology and physics. == History == === Early history === Humans first acquired knowledge of the waves and currents of the seas and oceans in pre-historic times. Observations on tides were recorded by Aristotle (384–322 BC) and later by Strabo. Early exploration of the oceans was primarily for cartography and mainly limited to the surface and to the animals that fishermen brought up in nets, though depth soundings by lead line were taken. The Portuguese campaign of Atlantic navigation is the earliest example of a large, systematic scientific project, sustained over many decades, studying the currents and winds of the Atlantic. The work of Pedro Nunes (1502–1578) is remembered in the navigation context for the determination of the loxodromic curve: the line of constant course (rhumb line), which crosses all meridians at the same angle and so can be drawn as a straight line on a suitable two-dimensional map. When he published his "Treatise of the Sphere" (1537), mostly a commentated translation of earlier work by others, he included a treatise on geometrical and astronomic methods of navigation. There he states clearly that Portuguese navigations were not an adventurous endeavour: "nam se fezeram indo a acertar: mas partiam os nossos mareantes muy ensinados e prouidos de estromentos e regras de astrologia e geometria que sam as cousas que os cosmographos ham dadar apercebidas (...) e leuaua cartas muy particularmente rumadas e na ja as de que os antigos vsauam" (were not done by chance: but our seafarers departed well taught and provided with instruments and rules of astrology (astronomy) and geometry which were matters the cosmographers would provide (...) and they took charts with exact routes and no longer those used by the ancient). His credibility rests on being personally involved in the instruction of pilots and senior seafarers from 1527 onwards by Royal appointment, along with his recognized competence as mathematician and astronomer. The main problem in navigating back from the south of the Canary Islands (or south of Boujdour) by sail alone is the change in the regime of winds and currents: the North Atlantic gyre and the Equatorial counter-current push south along the northwest bulge of Africa, while the uncertain winds where the Northeast trades meet the Southeast trades (the doldrums) leave a sailing ship to the mercy of the currents. Together, the prevalent current and wind make northwards progress very difficult or impossible. It was to overcome this problem, and to clear the passage to India around Africa as a viable maritime trade route, that a systematic plan of exploration was devised by the Portuguese. The return route from regions south of the Canaries became the 'volta do largo' or 'volta do mar'.
The 'rediscovery' of the Azores islands in 1427 is merely a reflection of the heightened strategic importance of the islands, now sitting on the return route from the western coast of Africa (sequentially called 'volta de Guiné' and 'volta da Mina'); and the references to the Sargasso Sea (also called at the time 'Mar da Baga'), to the west of the Azores, in 1436, reveal the western extent of the return route. This westward swing is necessary, under sail, to make use of the southeasterly and northeasterly winds away from the western coast of Africa, up to the northern latitudes where the westerly winds will bring the seafarers towards the western coasts of Europe. The secrecy involving the Portuguese navigations, with the death penalty for the leaking of maps and routes, concentrated all sensitive records in the Royal Archives, completely destroyed by the Lisbon earthquake of 1755. However, the systematic nature of the Portuguese campaign, mapping the currents and winds of the Atlantic, is demonstrated by the understanding of the seasonal variations, with expeditions setting sail at different times of the year taking different routes to take account of predominant seasonal winds. This happened from as early as the late 15th century and early 16th: Bartolomeu Dias followed the African coast on his way south in August 1487, while Vasco da Gama would take an open-sea route from the latitude of Sierra Leone, spending three months in the open sea of the South Atlantic to profit from the southwards deflection of the southwesterly on the Brazilian side (and the Brazil current going southward; Gama departed in July 1497); and Pedro Álvares Cabral (departing March 1500) took an even larger arc to the west, from the latitude of Cape Verde, thus avoiding the summer monsoon (which would have blocked the route taken by Gama at the time he set sail). Furthermore, there were systematic expeditions pushing into the western Northern Atlantic (Teive, 1454; Vogado, 1462; Teles, 1474; Ulmo, 1486). The documents relating to the supplying of ships, and the ordering of sun declination tables for the southern Atlantic for as early as 1493–1496, all suggest a well-planned and systematic activity happening during the decade-long period between Bartolomeu Dias finding the southern tip of Africa and Gama's departure; additionally, there are indications of further travels by Bartolomeu Dias in the area. The most significant consequence of this systematic knowledge was the negotiation of the Treaty of Tordesillas in 1494, moving the line of demarcation 270 leagues to the west (from 100 to 370 leagues west of the Azores), bringing what is now Brazil into the Portuguese area of domination. The knowledge gathered from open-sea exploration allowed for the well-documented extended periods of sail without sight of land, not by accident but as a pre-determined planned route; for example, 30 days for Bartolomeu Dias culminating at Mossel Bay, the three months Gama spent in the South Atlantic to use the Brazil current (southward), or the 29 days Cabral took from Cape Verde up to landing in Monte Pascoal, Brazil. The Danish expedition to Arabia 1761–67 can be said to be the world's first oceanographic expedition, as the ship Grønland had on board a group of scientists, including naturalist Peter Forsskål, who was assigned an explicit task by the king, Frederik V, to study and describe the marine life in the open sea, including finding the cause of mareel, or milky seas.
For this purpose, the expedition was equipped with nets and scrapers, specifically designed to collect samples from the open waters and the bottom at great depth. Although Juan Ponce de León in 1513 first identified the Gulf Stream, and the current was well known to mariners, Benjamin Franklin made the first scientific study of it and gave it its name. Franklin measured water temperatures during several Atlantic crossings and correctly explained the Gulf Stream's cause. Franklin and Timothy Folger printed the first map of the Gulf Stream in 1769–1770. Information on the currents of the Pacific Ocean was gathered by explorers of the late 18th century, including James Cook and Louis Antoine de Bougainville. James Rennell wrote the first scientific textbooks on oceanography, detailing the current flows of the Atlantic and Indian oceans. During a voyage around the Cape of Good Hope in 1777, he mapped "the banks and currents at the Lagullas". He was also the first to understand the nature of the intermittent current near the Isles of Scilly (now known as Rennell's Current). The tides and currents of the ocean are distinct. Tides are the rise and fall of sea level created by the gravitational forces of the Moon and, to a much lesser extent, the Sun, together with the Earth and Moon orbiting each other. An ocean current is a continuous, directed movement of seawater generated by a number of forces acting upon the water, including wind, the Coriolis effect, breaking waves, cabbeling, and temperature and salinity differences. Sir James Clark Ross took the first modern sounding in the deep sea in 1840, and Charles Darwin published a paper on reefs and the formation of atolls as a result of the second voyage of HMS Beagle in 1831–1836. Robert FitzRoy published a four-volume report of Beagle's three voyages. In 1841–1842 Edward Forbes undertook dredging in the Aegean Sea that founded marine ecology. The first superintendent of the United States Naval Observatory (1842–1861), Matthew Fontaine Maury, devoted his time to the study of marine meteorology, navigation, and charting prevailing winds and currents. His 1855 textbook Physical Geography of the Sea was one of the first comprehensive oceanography studies. Many nations sent oceanographic observations to Maury at the Naval Observatory, where he and his colleagues evaluated the information and distributed the results worldwide. === Modern oceanography === Knowledge of the oceans remained confined to the topmost few fathoms of the water and a small amount of the bottom, mainly in shallow areas. Almost nothing was known of the ocean depths. The British Royal Navy's efforts to chart all of the world's coastlines in the mid-19th century reinforced the vague idea that most of the ocean was very deep, although little more was known. As exploration ignited both popular and scientific interest in the polar regions and Africa, so too did the mysteries of the unexplored oceans. The seminal event in the founding of the modern science of oceanography was the 1872–1876 Challenger expedition. As the first true oceanographic cruise, this expedition laid the groundwork for an entire academic and research discipline. In response to a recommendation from the Royal Society, the British Government announced in 1871 an expedition to explore the world's oceans and conduct appropriate scientific investigation. Charles Wyville Thomson and Sir John Murray launched the Challenger expedition.
Challenger, leased from the Royal Navy, was modified for scientific work and equipped with separate laboratories for natural history and chemistry. Under the scientific supervision of Thomson, Challenger travelled nearly 70,000 nautical miles (130,000 km) surveying and exploring. On her journey circumnavigating the globe, 492 deep-sea soundings, 133 bottom dredges, 151 open-water trawls and 263 serial water temperature observations were taken. Around 4,700 new species of marine life were discovered. The result was the Report Of The Scientific Results of the Exploring Voyage of H.M.S. Challenger during the years 1873–76. Murray, who supervised the publication, described the report as "the greatest advance in the knowledge of our planet since the celebrated discoveries of the fifteenth and sixteenth centuries". He went on to found the academic discipline of oceanography at the University of Edinburgh, which remained the centre for oceanographic research well into the 20th century. Murray was the first to study marine trenches and in particular the Mid-Atlantic Ridge, and to map the sedimentary deposits in the oceans. He tried to map out the world's ocean currents based on salinity and temperature observations, and was the first to correctly understand the nature of coral reef development. In the late 19th century, other Western nations also sent out scientific expeditions (as did private individuals and institutions). The first purpose-built oceanographic ship, Albatross, was built in 1882. In 1893, Fridtjof Nansen allowed his ship, Fram, to be frozen in the Arctic ice. This enabled him to obtain oceanographic, meteorological and astronomical data at a stationary spot over an extended period. In 1881 the geographer John Francon Williams published a seminal book, Geography of the Oceans. Between 1907 and 1911 Otto Krümmel published the Handbuch der Ozeanographie, which became influential in awakening public interest in oceanography. The four-month 1910 North Atlantic expedition headed by John Murray and Johan Hjort was the most ambitious research oceanographic and marine zoological project ever mounted until then, and led to the classic 1912 book The Depths of the Ocean. The first acoustic measurement of sea depth was made in 1914. Between 1925 and 1927 the "Meteor" expedition gathered 70,000 ocean depth measurements using an echo sounder, surveying the Mid-Atlantic Ridge. In 1934, Easter Ellen Cupp, the first woman in the United States to have earned a PhD in oceanography (at Scripps), completed a major work on diatoms that remained the standard taxonomy in the field until well after her death in 1999. In 1940, Cupp was let go from her position at Scripps. Sverdrup specifically commended Cupp as a conscientious and industrious worker and commented that his decision was no reflection on her ability as a scientist. Sverdrup used the instructor billet vacated by Cupp to employ Marston Sargent, a biologist studying marine algae, which was not a new research program at Scripps. Financial pressures did not prevent Sverdrup from retaining the services of two other young post-doctoral students, Walter Munk and Roger Revelle. Cupp's partner, Dorothy Rosenbury, found her a position teaching high school, where she remained for the rest of her career. (Russell, 2000) Sverdrup, Johnson and Fleming published The Oceans in 1942, which was a major landmark. The Sea (in three volumes, covering physical oceanography, seawater and geology) edited by M.N.
Hill was published in 1962, while Rhodes Fairbridge's Encyclopedia of Oceanography was published in 1966. The Great Global Rift, running along the Mid-Atlantic Ridge, was discovered by Maurice Ewing and Bruce Heezen in 1953 and mapped by Heezen and Marie Tharp using bathymetric data; in 1954 a mountain range under the Arctic Ocean was found by the Arctic Institute of the USSR. The theory of seafloor spreading was developed in 1960 by Harry Hammond Hess. The Ocean Drilling Program started in 1966. Deep-sea vents were discovered in 1977 by Jack Corliss and Robert Ballard in the submersible DSV Alvin. In the 1950s, Auguste Piccard invented the bathyscaphe and used the bathyscaphe Trieste to investigate the ocean's depths. The United States nuclear submarine Nautilus made the first journey under the ice to the North Pole in 1958. In 1962 the FLIP (Floating Instrument Platform), a 355-foot (108 m) spar buoy, was first deployed. In 1968, Tanya Atwater led the first all-woman oceanographic expedition. Until that time, gender policies restricted women oceanographers from participating in voyages to a significant extent. From the 1970s, there has been much emphasis on the application of large-scale computers to oceanography, to allow numerical predictions of ocean conditions and as a part of overall environmental change prediction. Early techniques included analog computers (such as the Ishiguro Storm Surge Computer), generally now replaced by numerical methods (e.g., SLOSH). An oceanographic buoy array was established in the Pacific to allow prediction of El Niño events. 1990 saw the start of the World Ocean Circulation Experiment (WOCE), which continued until 2002. Geosat seafloor mapping data became available in 1995. Study of the oceans is critical to understanding shifts in Earth's energy balance along with related global and regional changes in climate, the biosphere and biogeochemistry. The atmosphere and ocean are linked because of evaporation and precipitation as well as thermal flux (and solar insolation). Recent studies have advanced knowledge on ocean acidification, ocean heat content, ocean currents, sea level rise, the oceanic carbon cycle, the water cycle, Arctic sea ice decline, coral bleaching, marine heatwaves, extreme weather, coastal erosion and many other phenomena in regard to ongoing climate change and climate feedbacks. In general, understanding the world ocean through further scientific study enables better stewardship and sustainable utilization of Earth's resources. The Intergovernmental Oceanographic Commission reports that 1.7% of the total national research expenditure of its members is focused on ocean science. == Branches == The study of oceanography is divided into these five branches: === Biological oceanography === Biological oceanography investigates the ecology and biology of marine organisms in the context of the physical, chemical and geological characteristics of their ocean environment. === Chemical oceanography === Chemical oceanography is the study of the chemistry of the ocean. Whereas chemical oceanography is primarily occupied with the study and understanding of seawater properties and its changes, ocean chemistry focuses primarily on the geochemical cycles. The following is a central topic investigated by chemical oceanography. ==== Ocean acidification ==== Ocean acidification describes the decrease in ocean pH that is caused by anthropogenic carbon dioxide (CO2) emissions into the atmosphere.
Seawater is slightly alkaline and had a preindustrial pH of about 8.2. More recently, anthropogenic activities have steadily increased the carbon dioxide content of the atmosphere; about 30–40% of the added CO2 is absorbed by the oceans, forming carbonic acid and lowering the pH (now below 8.1) through ocean acidification. The pH is expected to reach 7.7 by the year 2100. An important element for the skeletons of marine animals is calcium, but calcium carbonate becomes more soluble with pressure, so carbonate shells and skeletons dissolve below the carbonate compensation depth. Calcium carbonate also becomes more soluble at lower pH, so ocean acidification is likely to affect marine organisms with calcareous shells, such as oysters, clams, sea urchins and corals, and the carbonate compensation depth will rise closer to the sea surface. Affected planktonic organisms will include pteropods, coccolithophorids and foraminifera, all important in the food chain. In tropical regions, corals are likely to be severely affected as they become less able to build their calcium carbonate skeletons, in turn adversely impacting other reef dwellers. The current rate of ocean chemistry change seems to be unprecedented in Earth's geological history, making it unclear how well marine ecosystems will adapt to the shifting conditions of the near future. Of particular concern is the manner in which the combination of acidification with the expected additional stressors of higher ocean temperatures and lower oxygen levels will impact the seas. === Geological oceanography === Geological oceanography is the study of the geology of the ocean floor including plate tectonics and paleoceanography. === Physical oceanography === Physical oceanography studies the ocean's physical attributes including temperature-salinity structure, mixing, surface waves, internal waves, surface tides, internal tides, and currents. The following are central topics investigated by physical oceanography. ==== Seismic Oceanography ==== ==== Ocean currents ==== Since the early ocean expeditions, a major interest in oceanography has been the study of ocean currents and temperature measurements. The tides, the Coriolis effect, changes in direction and strength of wind, salinity, and temperature are the main factors determining ocean currents. The thermohaline circulation (THC) (thermo- referring to temperature and -haline referring to salt content) connects the ocean basins and is primarily dependent on the density of sea water. It is becoming more common to refer to this system as the 'meridional overturning circulation' because it more accurately accounts for other driving factors beyond temperature and salinity. Examples of sustained currents are the Gulf Stream and the Kuroshio Current, which are wind-driven western boundary currents. ==== Ocean heat content ==== Oceanic heat content (OHC) refers to the extra heat stored in the ocean from changes in Earth's energy balance. The increase in ocean heat plays an important role in sea level rise, because of thermal expansion. Ocean warming accounts for 90% of the energy accumulation associated with global warming since 1971. === Paleoceanography === Paleoceanography is the study of the history of the oceans in the geologic past with regard to circulation, chemistry, biology, geology and patterns of sedimentation and biological productivity.
Paleoceanographic studies using environment models and different proxies enable the scientific community to assess the role of the oceanic processes in the global climate by the reconstruction of past climate at various intervals. Paleoceanographic research is also intimately tied to palaeoclimatology. == Oceanographic institutions == The earliest international organizations of oceanography were founded at the turn of the 20th century, starting with the International Council for the Exploration of the Sea created in 1902, followed in 1919 by the Mediterranean Science Commission. Marine research institutes were already in existence, starting with the Stazione Zoologica Anton Dohrn in Naples, Italy (1872), the Biological Station of Roscoff, France (1876), the Arago Laboratory in Banyuls-sur-mer, France (1882), the Laboratory of the Marine Biological Association in Plymouth, UK (1884), the Norwegian Institute for Marine Research in Bergen, Norway (1900), and the Laboratorium für internationale Meeresforschung in Kiel, Germany (1902). On the other side of the Atlantic, the Scripps Institution of Oceanography was founded in 1903, followed by the Woods Hole Oceanographic Institution in 1930, the Virginia Institute of Marine Science in 1938, the Lamont–Doherty Earth Observatory at Columbia University in 1949, and later the School of Oceanography at the University of Washington. In Australia, the Australian Institute of Marine Science (AIMS), established in 1972, soon became a key player in marine tropical research. In 1921 the International Hydrographic Bureau, known since 1970 as the International Hydrographic Organization, was established to develop hydrographic and nautical charting standards. == Related disciplines == == See also == == References == === Sources and further reading === Boling Guo, Daiwen Huang. Infinite-Dimensional Dynamical Systems in Atmospheric and Oceanic Science, 2014, World Scientific Publishing, ISBN 978-981-4590-37-2. Sample Chapter Hamblin, Jacob Darwin (2005) Oceanographers and the Cold War: Disciples of Marine Science. University of Washington Press. ISBN 978-0-295-98482-7 Lang, Michael A., Ian G. Macintyre, and Klaus Rützler, eds. Proceedings of the Smithsonian Marine Science Symposium. Smithsonian Contributions to the Marine Sciences, no. 38. Washington, D.C.: Smithsonian Institution Scholarly Press (2009) Roorda, Eric Paul, ed. The Ocean Reader: History, Culture, Politics (Duke University Press, 2020) 523 pp. online review Steele, J., K. Turekian and S. Thorpe. (2001). Encyclopedia of Ocean Sciences. San Diego: Academic Press. (6 vols.) ISBN 0-12-227430-X Sverdrup, Keith A., Duxbury, Alyn C., Duxbury, Alison B. (2006). Fundamentals of Oceanography, McGraw-Hill, ISBN 0-07-282678-9 Russell, Joellen Louise. Easter Ellen Cupp, 2000, Regents of the University of California. == External links == NASA Jet Propulsion Laboratory – Physical Oceanography Distributed Active Archive Center (PO.DAAC). A data centre responsible for archiving and distributing data about the physical state of the ocean. Scripps Institution of Oceanography. One of the world's oldest, largest, and most important centres for ocean and Earth science research, education, and public service. Woods Hole Oceanographic Institution (WHOI). One of the world's largest private, non-profit ocean research, engineering and education organizations. British Oceanographic Data Centre. A source of oceanographic data and information. NOAA Ocean and Weather Data Navigator. Plot and download ocean data.
Freeview Video 'Voyage to the Bottom of the Deep Deep Sea' Oceanography Programme by the Vega Science Trust and the BBC/Open University. Atlas of Spanish Oceanography by InvestigAdHoc. Glossary of Physical Oceanography and Related Disciplines by Steven K. Baum, Department of Oceanography, Texas A&M University Barcelona-Ocean.com. Inspiring Education in Marine Sciences CFOO: Sea Atlas. A source of oceanographic live data (buoy monitoring) and education for South African coasts. Oceanography on In Our Time at the BBC Memorial website for USNS Bowditch, USNS Dutton, USNS Michelson and USNS H. H. Hess
Wikipedia/Oceanography
Geographic information science (GIScience, GISc) or geoinformation science is a scientific discipline at the crossroads of computational science, social science, and natural science that studies geographic information, including how it represents phenomena in the real world, how it represents the way humans understand the world, and how it can be captured, organized, and analyzed. It is a sub-field of geography, specifically part of technical geography. It has applications to both physical geography and human geography, although its techniques can be applied to many other fields of study as well as many different industries. As a field of study or profession, it can be contrasted with geographic information systems (GIS), which are the actual repositories of geospatial data, the software tools for carrying out relevant tasks, and the profession of GIS users. That said, one of the major goals of GIScience is to find practical ways to improve GIS data, software, and professional practice; it is more focused on how GIS is applied in practice than on the geographic information system as a tool in and of itself. The field is also sometimes called geographical information science. British geographer Michael Goodchild defined this area in the 1990s and summarized its core interests, including spatial analysis, visualization, and the representation of uncertainty. GIScience is conceptually related to geomatics, information science, computer science, and data science, but it claims the status of an independent scientific discipline. Recent developments in the field have expanded its focus to include studies on human dynamics in hybrid physical-virtual worlds, quantum GIScience, the development of smart cities, and the social and environmental impacts of technological innovations. These advancements indicate a growing intersection of GIScience with contemporary societal and technological issues. Overlapping disciplines are: geocomputation, geoinformatics, geomatics and geovisualization. Other related terms are geographic data science (after data science) and geographic information science and technology (GISci&T), with job titles geospatial information scientists and technologists. == Definitions == Since its inception in the 1990s, the boundaries between GIScience and cognate disciplines have been contested, and different communities might disagree on what GIScience is and what it studies. In particular, Goodchild stated that "information science can be defined as the systematic study according to scientific principles of the nature and properties of information. Geographic information science is the subset of information science that is about geographic information." Another influential definition is that by geographic information scientist (GIScientist) David Mark, which states: Geographic Information Science (GIScience) is the basic research field that seeks to redefine geographic concepts and their use in the context of geographic information systems. GIScience also examines the impacts of GIS on individuals and society, and the influences of society on GIS. GIScience re-examines some of the most fundamental themes in traditional spatially oriented fields such as geography, cartography, and geodesy, while incorporating more recent developments in cognitive and information science. It also overlaps with and draws from more specialized research fields such as computer science, statistics, mathematics, and psychology, and contributes to progress in those fields.
It supports research in political science and anthropology, and draws on those fields in studies of geographic information and society. In 2009, Goodchild summarized the history of GIScience and its achievements and open challenges. == See also == Category:Geographic information scientists Geographic Information Science and Technology Body of Knowledge Geostatistics Organizations Association of Geographic Information Laboratories for Europe National Center for Geographic Information and Analysis UCSB Center for Spatial Studies University Consortium for Geographic Information Science United States Geospatial Intelligence Foundation Journals GIScience & Remote Sensing International Journal of Applied Earth Observation and Geoinformation International Journal of Geographical Information Science Journal of Spatial Information Science == References == == External links == Official website of GIScience List of GIScience Conferences Archived 2023-05-30 at the Wayback Machine Conference on Spatial Information Theory (COSIT)
Wikipedia/Geographic_information_science
In geodesy, conversion among different geographic coordinate systems is made necessary by the many different coordinate systems in use across the world and over time. Coordinate conversion is composed of a number of different types of conversion: format change of geographic coordinates, conversion of coordinate systems, or transformation to different geodetic datums. Geographic coordinate conversion has applications in cartography, surveying, navigation and geographic information systems. In geodesy, geographic coordinate conversion is defined as translation among different coordinate formats or map projections, all referenced to the same geodetic datum. A geographic coordinate transformation is a translation among different geodetic datums. Both geographic coordinate conversion and transformation will be considered in this article. This article assumes readers are already familiar with the content in the articles geographic coordinate system and geodetic datum.

== Change of units and format ==
Informally, specifying a geographic location usually means giving the location's latitude and longitude. The numerical values for latitude and longitude can occur in a number of different units or formats: sexagesimal degrees, minutes, and seconds: 40° 26′ 46″ N 79° 58′ 56″ W; degrees and decimal minutes: 40° 26.767′ N 79° 58.933′ W; decimal degrees: +40.446 -79.982. There are 60 minutes in a degree and 60 seconds in a minute. Therefore, to convert from a degrees minutes seconds format to a decimal degrees format, one may use the formula
{\displaystyle {\rm {decimal\ degrees}}={\rm {degrees}}+{\frac {\rm {minutes}}{60}}+{\frac {\rm {seconds}}{3600}}.}
To convert back from decimal degree format to degrees minutes seconds format,
{\displaystyle {\begin{aligned}{\rm {absDegrees}}&=|{\rm {decimal\ degrees}}|\\{\rm {floorAbsDegrees}}&=\lfloor {\rm {absDegrees}}\rfloor \\{\rm {degrees}}&=\operatorname {sgn}({\rm {decimal\ degrees}})\times {\rm {floorAbsDegrees}}\\{\rm {minutes}}&=\lfloor 60\times ({\rm {absDegrees}}-{\rm {floorAbsDegrees}})\rfloor \\{\rm {seconds}}&=3600\times ({\rm {absDegrees}}-{\rm {floorAbsDegrees}})-60\times {\rm {minutes}}\end{aligned}}}
where {\displaystyle {\rm {absDegrees}}} and {\displaystyle {\rm {floorAbsDegrees}}} are just temporary variables to handle both positive and negative values properly.
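For concreteness, the two unit conversions above can be written as a short Python sketch (the function names are illustrative; the sign convention, negative degrees for south or west, follows the decimal-degrees example above):

```python
import math

def dms_to_decimal(degrees, minutes, seconds):
    """Degrees/minutes/seconds -> decimal degrees.
    The sign of `degrees` carries the hemisphere (negative for S or W)."""
    sign = -1.0 if degrees < 0 else 1.0
    return sign * (abs(degrees) + minutes / 60.0 + seconds / 3600.0)

def decimal_to_dms(decimal_degrees):
    """Decimal degrees -> (degrees, minutes, seconds), mirroring the
    absDegrees/floorAbsDegrees formulation above."""
    abs_degrees = abs(decimal_degrees)
    floor_abs = math.floor(abs_degrees)
    degrees = int(math.copysign(floor_abs, decimal_degrees))
    minutes = math.floor(60 * (abs_degrees - floor_abs))
    seconds = 3600 * (abs_degrees - floor_abs) - 60 * minutes
    return degrees, minutes, seconds

# 40° 26' 46" N, 79° 58' 56" W
lat = dms_to_decimal(40, 26, 46)    # ~ +40.446
lon = dms_to_decimal(-79, 58, 56)   # ~ -79.982
```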
== Coordinate system conversion ==
A coordinate system conversion is a conversion from one coordinate system to another, with both coordinate systems based on the same geodetic datum. Common conversion tasks include conversion between geodetic and earth-centered, earth-fixed (ECEF) coordinates and conversion from one type of map projection to another.

=== From geodetic to ECEF coordinates ===
Geodetic coordinates (latitude {\displaystyle \phi }, longitude {\displaystyle \lambda }, height {\displaystyle h}) can be converted into ECEF coordinates using the following equation:
{\displaystyle {\begin{aligned}X&=\left(N(\phi )+h\right)\cos {\phi }\cos {\lambda }\\Y&=\left(N(\phi )+h\right)\cos {\phi }\sin {\lambda }\\Z&=\left({\frac {b^{2}}{a^{2}}}N(\phi )+h\right)\sin {\phi }=\left((1-e^{2})N(\phi )+h\right)\sin {\phi }=\left((1-f)^{2}N(\phi )+h\right)\sin {\phi }\end{aligned}}}
where
{\displaystyle N(\phi )={\frac {a^{2}}{\sqrt {a^{2}\cos ^{2}\phi +b^{2}\sin ^{2}\phi }}}={\frac {a}{\sqrt {1-e^{2}\sin ^{2}\phi }}}={\frac {a}{\sqrt {1-{\frac {e^{2}}{1+\cot ^{2}\phi }}}}},}
and {\displaystyle a} and {\displaystyle b} are the equatorial radius (semi-major axis) and the polar radius (semi-minor axis), respectively. {\displaystyle e^{2}=1-{\frac {b^{2}}{a^{2}}}} is the square of the first numerical eccentricity of the ellipsoid, and {\displaystyle f=1-{\frac {b}{a}}} is the flattening of the ellipsoid. The prime vertical radius of curvature {\displaystyle N(\phi )} is the distance from the surface to the Z-axis along the ellipsoid normal.

==== Properties ====
The following condition holds for the longitude, in the same way as in the geocentric coordinate system:
{\displaystyle {\frac {X}{\cos \lambda }}-{\frac {Y}{\sin \lambda }}=0.}
And the following holds for the latitude:
{\displaystyle {\frac {p}{\cos \phi }}-{\frac {Z}{\sin \phi }}-e^{2}N(\phi )=0,}
where {\displaystyle p={\sqrt {X^{2}+Y^{2}}}}, as the parameter {\displaystyle h} is eliminated by subtracting
{\displaystyle {\frac {p}{\cos \phi }}=N+h} and {\displaystyle {\frac {Z}{\sin \phi }}={\frac {b^{2}}{a^{2}}}N+h.}
Dividing the two equations above furthermore yields:
{\displaystyle {\frac {Z}{p}}\cot \phi =1-{\frac {e^{2}N}{N+h}}.}

==== Orthogonality ====
The orthogonality of the coordinates is confirmed via differentiation:
{\displaystyle {\begin{aligned}{\begin{pmatrix}dX\\dY\\dZ\end{pmatrix}}&={\begin{pmatrix}-\sin \lambda &-\sin \phi \cos \lambda &\cos \phi \cos \lambda \\\cos \lambda &-\sin \phi \sin \lambda &\cos \phi \sin \lambda \\0&\cos \phi &\sin \phi \end{pmatrix}}{\begin{pmatrix}dE\\dN\\dU\end{pmatrix}},\\[3pt]{\begin{pmatrix}dE\\dN\\dU\end{pmatrix}}&={\begin{pmatrix}\left(N(\phi )+h\right)\cos \phi &0&0\\0&M(\phi )+h&0\\0&0&1\end{pmatrix}}{\begin{pmatrix}d\lambda \\d\phi \\dh\end{pmatrix}},\end{aligned}}}
where
{\displaystyle M(\phi )={\frac {a\left(1-e^{2}\right)}{\left(1-e^{2}\sin ^{2}\phi \right)^{\frac {3}{2}}}}=N(\phi ){\frac {1-e^{2}}{1-e^{2}\sin ^{2}\phi }}}
(see also "Meridian arc on the ellipsoid").
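As an illustration, the forward conversion is only a few lines of code. A minimal Python sketch, assuming the WGS 84 ellipsoid constants (a = 6378137 m, 1/f = 298.257223563) and angles in radians:

```python
import math

# WGS 84 ellipsoid parameters (assumed here for concreteness)
A = 6378137.0                 # semi-major axis a, metres
F = 1 / 298.257223563         # flattening f
E2 = F * (2 - F)              # e^2 = 1 - b^2/a^2 = f(2 - f)

def geodetic_to_ecef(lat_rad, lon_rad, h):
    """Geodetic (phi, lambda, h) -> ECEF (X, Y, Z), per the equations above."""
    n = A / math.sqrt(1 - E2 * math.sin(lat_rad) ** 2)  # prime vertical radius N(phi)
    x = (n + h) * math.cos(lat_rad) * math.cos(lon_rad)
    y = (n + h) * math.cos(lat_rad) * math.sin(lon_rad)
    z = ((1 - E2) * n + h) * math.sin(lat_rad)
    return x, y, z
```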
=== From ECEF to geodetic coordinates ===
==== Conversion for the longitude ====
The conversion of ECEF coordinates to longitude is:
{\displaystyle \lambda =\operatorname {atan2} (Y,X),}
where atan2 is the quadrant-resolving arc-tangent function. The geocentric longitude and geodetic longitude have the same value; this is true for Earth and other similarly shaped planets because they have a large amount of rotational symmetry around their spin axis (see triaxial ellipsoidal longitude for a generalization).

==== Simple iterative conversion for latitude and height ====
The conversion for the latitude and height involves a circular relationship involving N, which is a function of latitude:
{\displaystyle {\frac {Z}{p}}\cot \phi =1-{\frac {e^{2}N}{N+h}},\qquad h={\frac {p}{\cos \phi }}-N.}
It can be solved iteratively, for example, starting with a first guess h ≈ 0 and then updating N. More elaborate methods are shown below. The procedure is, however, sensitive to loss of accuracy because {\displaystyle N} and {\displaystyle h} may differ by a factor of about 10^6.

==== Newton–Raphson method ====
The following irrational geodetic-latitude equation, due to Bowring and derived simply from the above properties, can be solved efficiently by the Newton–Raphson iteration method:
{\displaystyle \kappa -1-{\frac {e^{2}a\kappa }{\sqrt {p^{2}+\left(1-e^{2}\right)Z^{2}\kappa ^{2}}}}=0,}
where {\displaystyle \kappa ={\frac {p}{Z}}\tan \phi } and {\displaystyle p={\sqrt {X^{2}+Y^{2}}}} as before. The height is calculated as:
{\displaystyle {\begin{aligned}h&=e^{-2}\left(\kappa ^{-1}-{\kappa _{0}}^{-1}\right){\sqrt {p^{2}+Z^{2}\kappa ^{2}}},\\\kappa _{0}&\triangleq \left(1-e^{2}\right)^{-1}.\end{aligned}}}
The iteration can be transformed into the following calculation:
{\displaystyle \kappa _{i+1}={\frac {c_{i}+\left(1-e^{2}\right)Z^{2}\kappa _{i}^{3}}{c_{i}-p^{2}}}=1+{\frac {p^{2}+\left(1-e^{2}\right)Z^{2}\kappa _{i}^{3}}{c_{i}-p^{2}}},}
where
{\displaystyle c_{i}={\frac {\left(p^{2}+\left(1-e^{2}\right)Z^{2}\kappa _{i}^{2}\right)^{\frac {3}{2}}}{ae^{2}}}.}
The constant {\displaystyle \kappa _{0}} is a good starter value for the iteration when {\displaystyle h\approx 0}. Bowring showed that a single iteration produces a sufficiently accurate solution. He used extra trigonometric functions in his original formulation.
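The kappa iteration above translates directly into code. A minimal sketch, again assuming WGS 84 constants; as noted above, one or two iterations already give a sufficiently accurate solution for points near the ellipsoid surface:

```python
import math

def ecef_to_geodetic_bowring(X, Y, Z, a=6378137.0, f=1/298.257223563, iters=2):
    """ECEF -> geodetic via the kappa iteration above (WGS 84 assumed)."""
    e2 = f * (2 - f)
    p = math.hypot(X, Y)
    kappa = 1.0 / (1.0 - e2)                 # starter value kappa_0 (h ~ 0)
    for _ in range(iters):
        c = (p**2 + (1 - e2) * Z**2 * kappa**2) ** 1.5 / (a * e2)
        kappa = 1 + (p**2 + (1 - e2) * Z**2 * kappa**3) / (c - p**2)
    lat = math.atan2(kappa * Z, p)           # tan(phi) = kappa * Z / p
    # h = e^-2 (kappa^-1 - kappa_0^-1) sqrt(p^2 + Z^2 kappa^2), kappa_0^-1 = 1 - e^2
    h = (1 / kappa - (1 - e2)) * math.sqrt(p**2 + Z**2 * kappa**2) / e2
    lon = math.atan2(Y, X)
    return lat, lon, h
```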
==== Ferrari's solution ====
The quartic equation in {\displaystyle \kappa }, derived from the above, can be solved by Ferrari's solution to yield:
{\displaystyle {\begin{aligned}\zeta &=\left(1-e^{2}\right){\frac {z^{2}}{a^{2}}},\\[4pt]\rho &={\frac {1}{6}}\left({\frac {p^{2}}{a^{2}}}+\zeta -e^{4}\right),\\[4pt]s&={\frac {e^{4}\zeta p^{2}}{4\rho ^{3}a^{2}}},\\[4pt]t&={\sqrt[{3}]{1+s+{\sqrt {s(s+2)}}}},\\[4pt]u&=\rho \left(t+1+{\frac {1}{t}}\right),\\[4pt]v&={\sqrt {u^{2}+e^{4}\zeta }},\\[4pt]w&=e^{2}{\frac {u+v-\zeta }{2v}},\\[4pt]\kappa &=1+e^{2}{\frac {{\sqrt {u+v+w^{2}}}+w}{u+v}}.\end{aligned}}}

===== The application of Ferrari's solution =====
A number of techniques and algorithms are available, but the most accurate, according to Zhu, is the following procedure established by Heikkinen, as cited by Zhu. It overlaps with the solution above. It is assumed that the geodetic parameters {\displaystyle \{a,\,b,\,e\}} are known.
{\displaystyle {\begin{aligned}a&=6378137.0{\text{ m. Earth Equatorial Radius}}\\[3pt]b&=6356752.3142{\text{ m. Earth Polar Radius}}\\[3pt]e^{2}&={\frac {a^{2}-b^{2}}{a^{2}}}\\[3pt]e'^{2}&={\frac {a^{2}-b^{2}}{b^{2}}}\\[3pt]p&={\sqrt {X^{2}+Y^{2}}}\\[3pt]F&=54b^{2}Z^{2}\\[3pt]G&=p^{2}+\left(1-e^{2}\right)Z^{2}-e^{2}\left(a^{2}-b^{2}\right)\\[3pt]c&={\frac {e^{4}Fp^{2}}{G^{3}}}\\[3pt]s&={\sqrt[{3}]{1+c+{\sqrt {c^{2}+2c}}}}\\[3pt]k&=s+1+{\frac {1}{s}}\\[3pt]P&={\frac {F}{3k^{2}G^{2}}}\\[3pt]Q&={\sqrt {1+2e^{4}P}}\\[3pt]r_{0}&={\frac {-Pe^{2}p}{1+Q}}+{\sqrt {{\frac {1}{2}}a^{2}\left(1+{\frac {1}{Q}}\right)-{\frac {P\left(1-e^{2}\right)Z^{2}}{Q(1+Q)}}-{\frac {1}{2}}Pp^{2}}}\\[3pt]U&={\sqrt {\left(p-e^{2}r_{0}\right)^{2}+Z^{2}}}\\[3pt]V&={\sqrt {\left(p-e^{2}r_{0}\right)^{2}+\left(1-e^{2}\right)Z^{2}}}\\[3pt]z_{0}&={\frac {b^{2}Z}{aV}}\\[3pt]h&=U\left(1-{\frac {b^{2}}{aV}}\right)\\[3pt]\phi &=\arctan \left[{\frac {Z+e'^{2}z_{0}}{p}}\right]\\[3pt]\lambda &=\operatorname {arctan2} [Y,\,X]\end{aligned}}}
Note: arctan2[Y, X] is the four-quadrant inverse tangent function.

==== Power series ====
For small e^2 the power series
{\displaystyle \kappa =\sum _{i\geq 0}\alpha _{i}e^{2i}}
starts with
{\displaystyle {\begin{aligned}\alpha _{0}&=1;\\\alpha _{1}&={\frac {a}{\sqrt {Z^{2}+p^{2}}}};\\\alpha _{2}&={\frac {aZ^{2}{\sqrt {Z^{2}+p^{2}}}+2a^{2}p^{2}}{2\left(Z^{2}+p^{2}\right)^{2}}}.\end{aligned}}}
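For reference, Heikkinen's closed-form procedure quoted above can be transcribed almost line by line into Python (this sketch hard-codes the same ellipsoid constants used in the quoted derivation):

```python
import math

def ecef_to_geodetic_heikkinen(X, Y, Z):
    """Closed-form ECEF -> geodetic following Heikkinen's procedure above."""
    a = 6378137.0        # Earth equatorial radius, m
    b = 6356752.3142     # Earth polar radius, m
    e2 = (a**2 - b**2) / a**2
    ep2 = (a**2 - b**2) / b**2
    p = math.sqrt(X**2 + Y**2)
    F = 54 * b**2 * Z**2
    G = p**2 + (1 - e2) * Z**2 - e2 * (a**2 - b**2)
    c = e2**2 * F * p**2 / G**3
    s = (1 + c + math.sqrt(c**2 + 2 * c)) ** (1.0 / 3.0)
    k = s + 1 + 1 / s
    P = F / (3 * k**2 * G**2)
    Q = math.sqrt(1 + 2 * e2**2 * P)
    r0 = -P * e2 * p / (1 + Q) + math.sqrt(
        0.5 * a**2 * (1 + 1 / Q)
        - P * (1 - e2) * Z**2 / (Q * (1 + Q))
        - 0.5 * P * p**2
    )
    U = math.sqrt((p - e2 * r0) ** 2 + Z**2)
    V = math.sqrt((p - e2 * r0) ** 2 + (1 - e2) * Z**2)
    z0 = b**2 * Z / (a * V)
    h = U * (1 - b**2 / (a * V))
    lat = math.atan2(Z + ep2 * z0, p)   # atan2 also handles p = 0 at the poles
    lon = math.atan2(Y, X)
    return lat, lon, h
```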
=== Geodetic to/from ENU coordinates ===
Converting from geodetic coordinates to local tangent plane (ENU) coordinates is a two-stage process: convert geodetic coordinates to ECEF coordinates, then convert ECEF coordinates to local ENU coordinates.

==== From ECEF to ENU ====
To transform from ECEF coordinates to the local coordinates we need a local reference point. Typically, this might be the location of a radar. If a radar is located at {\displaystyle \left\{X_{r},\,Y_{r},\,Z_{r}\right\}} and an aircraft at {\displaystyle \left\{X_{p},\,Y_{p},\,Z_{p}\right\}}, then the vector pointing from the radar to the aircraft in the ENU frame is
{\displaystyle {\begin{bmatrix}x\\y\\z\end{bmatrix}}={\begin{bmatrix}-\sin \lambda _{r}&\cos \lambda _{r}&0\\-\sin \phi _{r}\cos \lambda _{r}&-\sin \phi _{r}\sin \lambda _{r}&\cos \phi _{r}\\\cos \phi _{r}\cos \lambda _{r}&\cos \phi _{r}\sin \lambda _{r}&\sin \phi _{r}\end{bmatrix}}{\begin{bmatrix}X_{p}-X_{r}\\Y_{p}-Y_{r}\\Z_{p}-Z_{r}\end{bmatrix}}}
Note: {\displaystyle \phi } is the geodetic latitude; the geocentric latitude is inappropriate for representing the vertical direction of the local tangent plane and must be converted if necessary.

==== From ENU to ECEF ====
This is just the inversion of the ECEF-to-ENU transformation, so
{\displaystyle {\begin{bmatrix}X_{p}\\Y_{p}\\Z_{p}\end{bmatrix}}={\begin{bmatrix}-\sin \lambda _{r}&-\sin \phi _{r}\cos \lambda _{r}&\cos \phi _{r}\cos \lambda _{r}\\\cos \lambda _{r}&-\sin \phi _{r}\sin \lambda _{r}&\cos \phi _{r}\sin \lambda _{r}\\0&\cos \phi _{r}&\sin \phi _{r}\end{bmatrix}}{\begin{bmatrix}x\\y\\z\end{bmatrix}}+{\begin{bmatrix}X_{r}\\Y_{r}\\Z_{r}\end{bmatrix}}}
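The ECEF-to-ENU rotation above is a single matrix product and can be written out component-wise without a matrix library. A minimal sketch, taking the reference point's geodetic latitude and longitude in radians:

```python
import math

def ecef_to_enu(xp, yp, zp, xr, yr, zr, lat_r, lon_r):
    """Rotate the ECEF vector from a reference point (e.g. a radar) into
    local east-north-up components, using the rotation matrix given above."""
    dx, dy, dz = xp - xr, yp - yr, zp - zr
    sl, cl = math.sin(lon_r), math.cos(lon_r)
    sp, cp = math.sin(lat_r), math.cos(lat_r)
    e = -sl * dx + cl * dy
    n = -sp * cl * dx - sp * sl * dy + cp * dz
    u = cp * cl * dx + cp * sl * dy + sp * dz
    return e, n, u
```

The inverse transform applies the transpose of the same rotation and adds the reference point back, exactly as in the ENU-to-ECEF equation above.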
=== Conversion across map projections ===
Conversion of coordinates and map positions among different map projections referenced to the same datum may be accomplished either through direct translation formulas from one projection to another, or by first converting from a projection {\displaystyle A} to an intermediate coordinate system, such as ECEF, and then converting from ECEF to projection {\displaystyle B}. The formulas involved can be complex, and in some cases, such as in the ECEF to geodetic conversion above, the conversion has no closed-form solution and approximate methods must be used. References such as the DMA Technical Manual 8358.1 and the USGS paper Map Projections: A Working Manual contain formulas for conversion of map projections. It is common to use computer programs to perform coordinate conversion tasks, such as with the DoD and NGA supported GEOTRANS program.

== Datum transformations ==
Transformations among datums can be accomplished in a number of ways. There are transformations that directly convert geodetic coordinates from one datum to another. There are more indirect transforms that convert from geodetic coordinates to ECEF coordinates, transform the ECEF coordinates from one datum to another, and then transform the ECEF coordinates of the new datum back to geodetic coordinates. There are also grid-based transformations that directly transform from one (datum, map projection) pair to another (datum, map projection) pair.

=== Helmert transformation ===
Use of the Helmert transform in the transformation from geodetic coordinates of datum {\displaystyle A} to geodetic coordinates of datum {\displaystyle B} occurs in the context of a three-step process: convert from geodetic coordinates to ECEF coordinates for datum {\displaystyle A}; apply the Helmert transform, with the appropriate {\displaystyle A\to B} transform parameters, to transform from datum {\displaystyle A} ECEF coordinates to datum {\displaystyle B} ECEF coordinates; convert from ECEF coordinates to geodetic coordinates for datum {\displaystyle B}. In terms of ECEF XYZ vectors, the Helmert transform has the form (position vector transformation convention and very-small-rotation-angle simplification):
{\displaystyle {\begin{bmatrix}X_{B}\\Y_{B}\\Z_{B}\end{bmatrix}}={\begin{bmatrix}c_{x}\\c_{y}\\c_{z}\end{bmatrix}}+\left(1+s\times 10^{-6}\right){\begin{bmatrix}1&-r_{z}&r_{y}\\r_{z}&1&-r_{x}\\-r_{y}&r_{x}&1\end{bmatrix}}{\begin{bmatrix}X_{A}\\Y_{A}\\Z_{A}\end{bmatrix}}.}
The Helmert transform is a seven-parameter transform with three translation (shift) parameters {\displaystyle c_{x},\,c_{y},\,c_{z}}, three rotation parameters {\displaystyle r_{x},\,r_{y},\,r_{z}} and one scaling (dilation) parameter {\displaystyle s}. The Helmert transform is an approximate method that is accurate when the transform parameters are small relative to the magnitudes of the ECEF vectors. Under these conditions, the transform is considered reversible. A fourteen-parameter Helmert transform, with linear time dependence for each parameter, can be used to capture the time evolution of geographic coordinates due to geomorphic processes, such as continental drift and earthquakes. This has been incorporated into software, such as the Horizontal Time Dependent Positioning (HTDP) tool from the U.S. NGS.
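Written out component-wise, the seven-parameter transform above looks as follows. This sketch keeps the position-vector convention and the small-angle form of the rotation matrix; rotations are in radians and the scale change is in parts per million:

```python
def helmert_transform(x, y, z, cx, cy, cz, s_ppm, rx, ry, rz):
    """Seven-parameter Helmert transform (position-vector convention,
    small-angle approximation), per the matrix equation above."""
    m = 1 + s_ppm * 1e-6          # scale factor (1 + s * 10^-6)
    xb = cx + m * (x - rz * y + ry * z)
    yb = cy + m * (rz * x + y - rx * z)
    zb = cz + m * (-ry * x + rx * y + z)
    return xb, yb, zb
```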
=== Molodensky-Badekas transformation ===

To eliminate the coupling between the rotations and translations of the Helmert transform, three additional parameters can be introduced to give a new XYZ center of rotation closer to the coordinates being transformed. This ten-parameter model is called the Molodensky-Badekas transformation and should not be confused with the more basic Molodensky transform.: 133–134 

Like the Helmert transform, using the Molodensky-Badekas transform is a three-step process:

Convert from geodetic coordinates to ECEF coordinates for datum A
Apply the Molodensky-Badekas transform, with the appropriate A → B transform parameters, to transform from datum A ECEF coordinates to datum B ECEF coordinates
Convert from ECEF coordinates to geodetic coordinates for datum B

The transform has the form

{\displaystyle {\begin{bmatrix}X_{B}\\Y_{B}\\Z_{B}\end{bmatrix}}={\begin{bmatrix}X_{A}\\Y_{A}\\Z_{A}\end{bmatrix}}+{\begin{bmatrix}\Delta X_{A}\\\Delta Y_{A}\\\Delta Z_{A}\end{bmatrix}}+{\begin{bmatrix}1&-r_{z}&r_{y}\\r_{z}&1&-r_{x}\\-r_{y}&r_{x}&1\end{bmatrix}}{\begin{bmatrix}X_{A}-X_{A}^{0}\\Y_{A}-Y_{A}^{0}\\Z_{A}-Z_{A}^{0}\end{bmatrix}}+\Delta S{\begin{bmatrix}X_{A}-X_{A}^{0}\\Y_{A}-Y_{A}^{0}\\Z_{A}-Z_{A}^{0}\end{bmatrix}}.}

where (X_A^0, Y_A^0, Z_A^0) is the origin for the rotation and scaling transforms and ΔS is the scaling factor.

The Molodensky-Badekas transform is used to transform local geodetic datums to a global geodetic datum, such as WGS 84. Unlike the Helmert transform, the Molodensky-Badekas transform is not reversible due to the rotational origin being associated with the original datum.: 134 

=== Molodensky transformation ===

The Molodensky transformation converts directly between geodetic coordinate systems of different datums without the intermediate step of converting to geocentric (ECEF) coordinates. It requires the three shifts between the datum centers and the differences between the reference ellipsoid semi-major axes and flattening parameters. The Molodensky transform is used by the National Geospatial-Intelligence Agency (NGA) in their standard TR8350.2 and the NGA-supported GEOTRANS program. The Molodensky method was popular before the advent of modern computers, and the method is part of many geodetic programs.

=== Grid-based method ===

Grid-based transformations directly convert map coordinates from one (map projection, geodetic datum) pair to map coordinates of another (map projection, geodetic datum) pair. An example is the NADCON method for transforming from the North American Datum (NAD) 1927 to the NAD 1983 datum. The High Accuracy Reference Network (HARN), a high-accuracy version of the NADCON transforms, has an accuracy of approximately 5 centimeters. The National Transformation version 2 (NTv2) is a Canadian version of NADCON for transforming between NAD 1927 and NAD 1983. HARNs are also known as NAD 83/91 and High Precision Grid Networks (HPGN). Subsequently, Australia and New Zealand adopted the NTv2 format to create grid-based methods for transforming among their own local datums. Like the multiple regression equation transform, grid-based methods use a low-order interpolation method for converting map coordinates, but in two dimensions instead of three. The NOAA provides a software tool (as part of the NGS Geodetic Toolkit) for performing NADCON transformations; a sketch of the underlying interpolation idea follows.
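At their core, grid-based methods amount to looking up tabulated shifts and interpolating between grid nodes. The toy sketch below uses bilinear interpolation over a completely made-up 2 × 2 grid of latitude shifts; real NADCON or NTv2 grids are read from published grid files and also carry longitude shifts and accuracy estimates.

```python
import numpy as np

# Hypothetical grid of latitude shifts (arc-seconds) at integer-degree nodes.
lat_nodes = np.array([40.0, 41.0])       # grid latitudes (degrees)
lon_nodes = np.array([-106.0, -105.0])   # grid longitudes (degrees)
shift = np.array([[0.30, 0.32],          # shift[i, j] at (lat_nodes[i], lon_nodes[j])
                  [0.28, 0.31]])

def bilinear_shift(lat, lon):
    """Bilinearly interpolate the tabulated shift at (lat, lon)."""
    t = (lat - lat_nodes[0]) / (lat_nodes[1] - lat_nodes[0])
    u = (lon - lon_nodes[0]) / (lon_nodes[1] - lon_nodes[0])
    return ((1 - t) * (1 - u) * shift[0, 0] + (1 - t) * u * shift[0, 1]
            + t * (1 - u) * shift[1, 0] + t * u * shift[1, 1])

print(bilinear_shift(40.25, -105.5))     # interpolated shift in arc-seconds
```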
=== Multiple regression equations ===

Datum transformations through the use of empirical multiple regression methods were created to achieve higher-accuracy results over small geographic regions than the standard Molodensky transformations. MRE transforms are used to transform local datums over continent-sized or smaller regions to global datums, such as WGS 84. The standard NIMA TM 8350.2, Appendix D, lists MRE transforms from several local datums to WGS 84, with accuracies of about 2 meters. The MREs are a direct transformation of geodetic coordinates with no intermediate ECEF step. Geodetic coordinates φ_B, λ_B, h_B in the new datum B are modeled as polynomials of up to the ninth degree in the geodetic coordinates φ_A, λ_A, h_A of the original datum A. For instance, the change in φ_B could be parameterized as (with only up to quadratic terms shown): 9 

{\displaystyle \Delta \phi =a_{0}+a_{1}U+a_{2}V+a_{3}U^{2}+a_{4}UV+a_{5}V^{2}+\cdots }

where

a_i are parameters fitted by multiple regression,
{\displaystyle {\begin{aligned}U&=K(\phi _{A}-\phi _{m})\\V&=K(\lambda _{A}-\lambda _{m})\end{aligned}}}
K is a scale factor, and
φ_m, λ_m is the origin of the datum A,

with similar equations for Δλ and Δh. Given a sufficient number of (A, B) coordinate pairs for landmarks in both datums for good statistics, multiple regression methods are used to fit the parameters of these polynomials. The polynomials, along with the fitted coefficients, form the multiple regression equations.

== See also ==

Gauss–Krüger coordinate system
List of map projections
Spatial reference system
Topocentric coordinate system
Universal polar stereographic coordinate system
Universal Transverse Mercator coordinate system
Geographical distance

== References ==
Wikipedia/Geographic_coordinate_conversion
Extract, transform, load (ETL) is a three-phase computing process where data is extracted from an input source, transformed (including cleaning), and loaded into an output data container. The data can be collected from one or more sources, and it can also be output to one or more destinations. ETL processing is typically executed using software applications, but it can also be done manually by system operators. ETL software typically automates the entire process and can be run manually or on recurring schedules, either as single jobs or aggregated into a batch of jobs.

A properly designed ETL system extracts data from source systems, enforces data type and data validity standards, and ensures the data conforms structurally to the requirements of the output. Some ETL systems can also deliver data in a presentation-ready format so that application developers can build applications and end users can make decisions.

The ETL process is often used in data warehousing. ETL systems commonly integrate data from multiple applications (systems), typically developed and supported by different vendors or hosted on separate computer hardware. The separate systems containing the original data are frequently managed and operated by different stakeholders. For example, a cost accounting system may combine data from payroll, sales, and purchasing.

Data extraction involves extracting data from homogeneous or heterogeneous sources; data transformation processes data by cleaning it and transforming it into a proper storage format/structure for the purposes of querying and analysis; finally, data loading describes the insertion of data into the final target database, such as an operational data store, a data mart, a data lake, or a data warehouse.

ETL and its variant ELT (extract, load, transform) are increasingly used in cloud-based data warehousing. Applications involve not only batch processing, but also real-time streaming.

== Phases ==

=== Extract ===

ETL processing involves extracting the data from the source system(s). In many cases, this represents the most important aspect of ETL, since extracting data correctly sets the stage for the success of subsequent processes. Most data-warehousing projects combine data from different source systems. Each separate system may also use a different data organization and/or format. Common data-source formats include relational databases, flat-file databases, XML, and JSON, but may also include non-relational database structures such as IBM Information Management System, or other data structures such as Virtual Storage Access Method (VSAM) or Indexed Sequential Access Method (ISAM), or even formats fetched from outside sources by means such as a web crawler or data scraping. The streaming of the extracted data source and loading on-the-fly to the destination database is another way of performing ETL when no intermediate data storage is required.

An intrinsic part of the extraction involves data validation to confirm whether the data pulled from the sources has the correct/expected values in a given domain (such as a pattern/default or list of values). If the data fails the validation rules, it is rejected entirely or in part. The rejected data is ideally reported back to the source system for further analysis to identify and rectify incorrect records or perform data wrangling.
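A minimal sketch of extraction with validation, assuming a CSV flat-file source; the column names and domain rules are hypothetical, and a production system would also report the rejected rows back to the source system as described above.

```python
import csv

def extract(path, validators):
    """Pull rows from a CSV source, splitting them into valid and rejected.

    validators maps a column name to a predicate; rows failing any
    predicate are rejected entirely.
    """
    valid, rejected = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if all(check(row.get(col, "")) for col, check in validators.items()):
                valid.append(row)
            else:
                rejected.append(row)
    return valid, rejected

# Hypothetical domain rules: an order id must be numeric, and a country
# code must come from a known list of values.
rules = {
    "order_id": str.isdigit,
    "country": {"US", "CA", "MX"}.__contains__,
}
# rows, rejects = extract("orders.csv", rules)   # "orders.csv" is illustrative
```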
=== Transform ===

In the data transformation stage, a series of rules or functions are applied to the extracted data in order to prepare it for loading into the end target. An important function of transformation is data cleansing, which aims to pass only "proper" data to the target. The challenge when different systems interact is in the relevant systems' interfacing and communicating. Character sets that may be available in one system may not be in others.

In other cases, one or more of the following transformation types may be required to meet the business and technical needs of the server or data warehouse:

Selecting only certain columns to load (or selecting null columns not to load). For example, if the source data has three columns (aka "attributes"), roll_no, age, and salary, then the selection may take only roll_no and salary. Or, the selection mechanism may ignore all those records where salary is not present (salary = null).
Translating coded values (e.g., if the source system codes male as "1" and female as "2", but the warehouse codes male as "M" and female as "F")
Encoding free-form values (e.g., mapping "Male" to "M")
Deriving a new calculated value (e.g., sale_amount = qty * unit_price)
Sorting or ordering the data based on a list of columns to improve search performance
Joining data from multiple sources (e.g., lookup, merge) and deduplicating the data
Aggregating (for example, rollup – summarizing multiple rows of data – total sales for each store, and for each region, etc.)
Generating surrogate-key values
Transposing or pivoting (turning multiple columns into multiple rows or vice versa)
Splitting a column into multiple columns (e.g., converting a comma-separated list, specified as a string in one column, into individual values in different columns)
Disaggregating repeating columns
Looking up and validating the relevant data from tables or referential files
Applying any form of data validation; failed validation may result in a full rejection of the data, partial rejection, or no rejection at all, and thus none, some, or all of the data is handed over to the next step depending on the rule design and exception handling; many of the above transformations may result in exceptions, e.g., when a code translation parses an unknown code in the extracted data
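A few of the listed transformation types are easy to illustrate. The sketch below applies column selection, code translation, and a derived calculated value to extracted rows; the column names and code table are hypothetical.

```python
def transform(rows):
    """Apply a few of the transformation types listed above to extracted rows."""
    gender_codes = {"1": "M", "2": "F"}            # translating coded values
    out = []
    for row in rows:
        if not row.get("salary"):                  # selection: skip rows without salary
            continue
        out.append({
            "roll_no": row["roll_no"],             # selecting only certain columns
            "gender": gender_codes.get(row.get("gender", ""), "U"),
            "sale_amount": float(row["qty"]) * float(row["unit_price"]),  # derived value
        })
    return out

print(transform([{"roll_no": "7", "gender": "1", "salary": "100",
                  "qty": "3", "unit_price": "9.5"}]))
```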
=== Load ===

The load phase loads the data into the end target, which can be any data store, including a simple delimited flat file or a data warehouse. Depending on the requirements of the organization, this process varies widely. Some data warehouses may overwrite existing information with cumulative information; updating extracted data is frequently done on a daily, weekly, or monthly basis. Other data warehouses (or even other parts of the same data warehouse) may add new data in a historical form at regular intervals – for example, hourly. To understand this, consider a data warehouse that is required to maintain sales records of the last year. This data warehouse overwrites any data older than a year with newer data. However, the entry of data for any one-year window is made in a historical manner. The timing and scope to replace or append are strategic design choices dependent on the time available and the business needs. More complex systems can maintain a history and audit trail of all changes to the data loaded in the data warehouse. As the load phase interacts with a database, the constraints defined in the database schema – as well as in triggers activated upon data load – apply (for example, uniqueness, referential integrity, mandatory fields), which also contribute to the overall data quality performance of the ETL process.

For example, a financial institution might have information on a customer in several departments, and each department might have that customer's information listed in a different way. The membership department might list the customer by name, whereas the accounting department might list the customer by number. ETL can bundle all of these data elements and consolidate them into a uniform presentation, such as for storing in a database or data warehouse.

Another way that companies use ETL is to move information to another application permanently. For instance, the new application might use another database vendor and most likely a very different database schema. ETL can be used to transform the data into a format suitable for the new application to use. An example would be an expense and cost recovery system such as used by accountants, consultants, and law firms. The data usually ends up in the time and billing system, although some businesses may also utilize the raw data for employee productivity reports to Human Resources (personnel dept.) or equipment usage reports to Facilities Management.

=== Additional phases ===

A real-life ETL cycle may consist of additional execution steps, for example:

Cycle initiation
Build reference data
Extract (from sources)
Validate
Transform (clean, apply business rules, check for data integrity, create aggregates or disaggregates)
Stage (load into staging tables, if used)
Audit reports (for example, on compliance with business rules; also, in case of failure, helps to diagnose/repair)
Publish (to target tables)
Archive

== Design challenges ==

ETL processes can involve considerable complexity, and significant operational problems can occur with improperly designed ETL systems.

=== Data variations ===

The range of data values or data quality in an operational system may exceed the expectations of designers at the time validation and transformation rules are specified. Data profiling of a source during data analysis can identify the data conditions that must be managed by transform rule specifications, leading to an amendment of validation rules explicitly and implicitly implemented in the ETL process.

Data warehouses are typically assembled from a variety of data sources with different formats and purposes. As such, ETL is a key process to bring all the data together in a standard, homogeneous environment.

Design analysis should establish the scalability of an ETL system across the lifetime of its usage – including understanding the volumes of data that must be processed within service level agreements. The time available to extract from source systems may change, which may mean the same amount of data may have to be processed in less time. Some ETL systems have to scale to process terabytes of data to update data warehouses with tens of terabytes of data. Increasing volumes of data may require designs that can scale from daily batch to multiple-day micro batch to integration with message queues or real-time change-data-capture for continuous transformation and update.

=== Uniqueness of keys ===

Unique keys play an important part in all relational databases, as they tie everything together. A unique key is a column that identifies a given entity, whereas a foreign key is a column in another table that refers to a primary key. Keys can comprise several columns, in which case they are composite keys.
In many cases, the primary key is an auto-generated integer that has no meaning for the business entity being represented, but solely exists for the purpose of the relational database – commonly referred to as a surrogate key. As there is usually more than one data source getting loaded into the warehouse, the keys are an important concern to be addressed. For example: customers might be represented in several data sources, with their Social Security number as the primary key in one source, their phone number in another, and a surrogate in the third. Yet a data warehouse may require the consolidation of all the customer information into one dimension.

A recommended way to deal with the concern involves adding a warehouse surrogate key, which is used as a foreign key from the fact table. Usually, updates occur to a dimension's source data, which obviously must be reflected in the data warehouse. If the primary key of the source data is required for reporting, the dimension already contains that piece of information for each row. If the source data uses a surrogate key, the warehouse must keep track of it even though it is never used in queries or reports; it is done by creating a lookup table that contains the warehouse surrogate key and the originating key. This way, the dimension is not polluted with surrogates from various source systems, while the ability to update is preserved. The lookup table is used in different ways depending on the nature of the source data. There are 5 types to consider; three are included here:

Type 1: The dimension row is simply updated to match the current state of the source system; the warehouse does not capture history; the lookup table is used to identify the dimension row to update or overwrite.
Type 2: A new dimension row is added with the new state of the source system; a new surrogate key is assigned; the source key is no longer unique in the lookup table.
Fully logged: A new dimension row is added with the new state of the source system, while the previous dimension row is updated to reflect that it is no longer active and its time of deactivation.

=== Performance ===

ETL vendors benchmark their record-systems at multiple TB (terabytes) per hour (or ~1 GB per second) using powerful servers with multiple CPUs, multiple hard drives, multiple gigabit-network connections, and much memory.

In real life, the slowest part of an ETL process usually occurs in the database load phase. Databases may perform slowly because they have to take care of concurrency, integrity maintenance, and indices. Thus, for better performance, it may make sense to employ:

Direct path extract method or bulk unload whenever possible (instead of querying the database), to reduce the load on the source system while getting a high-speed extract
Most of the transformation processing outside of the database
Bulk load operations whenever possible

Still, even using bulk operations, database access is usually the bottleneck in the ETL process. Some common methods used to increase performance are:

Partition tables (and indices): try to keep partitions similar in size (watch for null values that can skew the partitioning)
Do all validation in the ETL layer before the load: disable integrity checking (disable constraint ...) in the target database tables during the load
Disable triggers (disable trigger ...) in the target database tables during the load: simulate their effect as a separate step
Generate IDs in the ETL layer (not in the database)
Drop the indices (on a table or partition) before the load – and recreate them after the load (SQL: drop index ...; create index ...)
Use parallel bulk load when possible – works well when the table is partitioned or there are no indices (Note: attempting to do parallel loads into the same table (partition) usually causes locks – if not on the data rows, then on indices)
If a requirement exists to do insertions, updates, or deletions, find out which rows should be processed in which way in the ETL layer, and then process these three operations in the database separately; you often can do bulk load for inserts, but updates and deletes commonly go through an API (using SQL)

Whether to do certain operations in the database or outside may involve a trade-off. For example, removing duplicates using distinct may be slow in the database; thus, it makes sense to do it outside. On the other hand, if using distinct significantly (×100) decreases the number of rows to be extracted, then it makes sense to remove duplications as early as possible in the database before unloading data.

A common source of problems in ETL is a large number of dependencies among ETL jobs. For example, job "B" cannot start while job "A" is not finished. One can usually achieve better performance by visualizing all processes on a graph, and trying to reduce the graph, making maximum use of parallelism, and making "chains" of consecutive processing as short as possible. Again, partitioning of big tables and their indices can really help.

Another common issue occurs when the data are spread among several databases, and processing is done in those databases sequentially. Sometimes database replication may be involved as a method of copying data between databases – it can significantly slow down the whole process. The common solution is to reduce the processing graph to only three layers:

Sources
Central ETL layer
Targets

This approach allows processing to take maximum advantage of parallelism. For example, if you need to load data into two databases, you can run the loads in parallel (instead of loading into the first – and then replicating into the second). Sometimes processing must take place sequentially. For example, dimensional (reference) data are needed before one can get and validate the rows for the main "fact" tables.

=== Parallel computing ===

Some ETL software implementations include parallel processing. This enables a number of methods to improve overall performance of ETL when dealing with large volumes of data. ETL applications implement three main types of parallelism:

Data: by splitting a single sequential file into smaller data files to provide parallel access
Pipeline: allowing the simultaneous running of several components on the same data stream, e.g. looking up a value on record 1 at the same time as adding two fields on record 2
Component: the simultaneous running of multiple processes on different data streams in the same job, e.g. sorting one input file while removing duplicates on another file

All three types of parallelism usually operate combined in a single job or task.
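Component parallelism, for instance, can be sketched with a thread pool: two independent component steps run on different data streams of the same job at the same time. The step functions here are simple stand-ins, not a real ETL engine's API.

```python
from concurrent.futures import ThreadPoolExecutor

def sort_file(name):
    """Stand-in for a sorting component acting on one data stream."""
    return f"sorted {name}"

def deduplicate(name):
    """Stand-in for a deduplication component acting on another stream."""
    return f"deduplicated {name}"

# Component parallelism: independent steps on different streams, same job.
with ThreadPoolExecutor() as pool:
    f1 = pool.submit(sort_file, "input_a.dat")
    f2 = pool.submit(deduplicate, "input_b.dat")
    print(f1.result(), "|", f2.result())
```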
An additional difficulty comes with making sure that the data being uploaded is relatively consistent. Because multiple source databases may have different update cycles (some may be updated every few minutes, while others may take days or weeks), an ETL system may be required to hold back certain data until all sources are synchronized. Likewise, where a warehouse may have to be reconciled to the contents in a source system or with the general ledger, establishing synchronization and reconciliation points becomes necessary.

=== Failure recovery ===

Data warehousing procedures usually subdivide a big ETL process into smaller pieces running sequentially or in parallel. To keep track of data flows, it makes sense to tag each data row with "row_id", and tag each piece of the process with "run_id". In case of a failure, having these IDs helps to roll back and rerun the failed piece. Best practice also calls for checkpoints, which are states when certain phases of the process are completed. Once at a checkpoint, it is a good idea to write everything to disk, clean out some temporary files, log the state, etc.

== Implementations ==

An established ETL framework may improve connectivity and scalability. A good ETL tool must be able to communicate with the many different relational databases and read the various file formats used throughout an organization. ETL tools have started to migrate into enterprise application integration, or even enterprise service bus, systems that now cover much more than just the extraction, transformation, and loading of data. Many ETL vendors now have data profiling, data quality, and metadata capabilities. A common use case for ETL tools includes converting CSV files to formats readable by relational databases. A typical translation of millions of records is facilitated by ETL tools that enable users to input CSV-like data feeds/files and import them into a database with as little code as possible.

ETL tools are typically used by a broad range of professionals – from students in computer science looking to quickly import large data sets to database architects in charge of company account management – and have become a convenient tool that can be relied on to get maximum performance. ETL tools in most cases contain a GUI that helps users conveniently transform data, using a visual data mapper, as opposed to writing large programs to parse files and modify data types. While ETL tools have traditionally been for developers and IT staff, research firm Gartner wrote that the new trend is to provide these capabilities to business users so they can themselves create connections and data integrations when needed, rather than going to the IT staff. Gartner refers to these non-technical users as Citizen Integrators.

== Variations ==

=== In online transaction processing ===

In online transaction processing (OLTP) applications, changes from individual OLTP instances are detected and logged into a snapshot, or batch, of updates. An ETL instance can be used to periodically collect all of these batches, transform them into a common format, and load them into a data lake or warehouse.

=== Virtual ETL ===

Data virtualization can be used to advance ETL processing. The application of data virtualization to ETL allowed solving the most common ETL tasks of data migration and application integration for multiple dispersed data sources. Virtual ETL operates with the abstracted representation of the objects or entities gathered from the variety of relational, semi-structured, and unstructured data sources.
ETL tools can leverage object-oriented modeling and work with entities' representations persistently stored in a centrally located hub-and-spoke architecture. Such a collection that contains representations of the entities or objects gathered from the data sources for ETL processing is called a metadata repository, and it can reside in memory or be made persistent. By using a persistent metadata repository, ETL tools can transition from one-time projects to persistent middleware, performing data harmonization and data profiling consistently and in near-real time.

=== Extract, load, transform (ELT) ===

Extract, load, transform (ELT) is a variant of ETL where the extracted data is loaded into the target system first. The architecture for the analytics pipeline must also consider where to cleanse and enrich data, as well as how to conform dimensions. Some of the benefits of an ELT process include speed and the ability to more easily handle both unstructured and structured data. Ralph Kimball and Joe Caserta's book The Data Warehouse ETL Toolkit (Wiley, 2004), which is used as a textbook for courses teaching ETL processes in data warehousing, addressed this issue.

Cloud-based data warehouses like Amazon Redshift, Google BigQuery, Microsoft Azure Synapse Analytics and Snowflake Inc. have been able to provide highly scalable computing power. This lets businesses forgo preload transformations and replicate raw data into their data warehouses, where the data can be transformed as needed using SQL. After having used ELT, data may be processed further and stored in a data mart.

Most data integration tools skew towards ETL, while ELT is popular in database and data warehouse appliances. Similarly, it is possible to perform TEL (Transform, Extract, Load), where data is first transformed on a blockchain (as a way of recording changes to data, e.g., token burning) before being extracted and loaded into another data store.

== See also ==

Architectural pattern (EA reference architecture)
CMS Pipelines
Create, read, update and delete (CRUD)
Data cleansing
Data integration
Data mart
Data mesh, a domain-oriented data architecture
Data migration
Data transformation (computing)
Electronic data interchange (EDI)
Enterprise architecture
Legal Electronic Data Exchange Standard (LEDES)
Metadata discovery
Online analytical processing (OLAP)
Online transaction processing (OLTP)
Spatial ETL

== References ==
Wikipedia/Extract,_transform,_load
In mathematics, the Fourier transform (FT) is an integral transform that takes a function as input and outputs another function that describes the extent to which various frequencies are present in the original function. The output of the transform is a complex-valued function of frequency. The term Fourier transform refers to both this complex-valued function and the mathematical operation. When a distinction needs to be made, the output of the operation is sometimes called the frequency domain representation of the original function. The Fourier transform is analogous to decomposing the sound of a musical chord into the intensities of its constituent pitches.

Functions that are localized in the time domain have Fourier transforms that are spread out across the frequency domain and vice versa, a phenomenon known as the uncertainty principle. The critical case for this principle is the Gaussian function, of substantial importance in probability theory and statistics as well as in the study of physical phenomena exhibiting normal distribution (e.g., diffusion). The Fourier transform of a Gaussian function is another Gaussian function. Joseph Fourier introduced sine and cosine transforms (which correspond to the imaginary and real components of the modern Fourier transform) in his study of heat transfer, where Gaussian functions appear as solutions of the heat equation.

The Fourier transform can be formally defined as an improper Riemann integral, making it an integral transform, although this definition is not suitable for many applications requiring a more sophisticated integration theory. For example, many relatively simple applications use the Dirac delta function, which can be treated formally as if it were a function, but the justification requires a mathematically more sophisticated viewpoint.

The Fourier transform can also be generalized to functions of several variables on Euclidean space, sending a function of 3-dimensional 'position space' to a function of 3-dimensional momentum (or a function of space and time to a function of 4-momentum). This idea makes the spatial Fourier transform very natural in the study of waves, as well as in quantum mechanics, where it is important to be able to represent wave solutions as functions of either position or momentum and sometimes both. In general, functions to which Fourier methods are applicable are complex-valued, and possibly vector-valued. Still further generalization is possible to functions on groups, which, besides the original Fourier transform on R or Rn, notably includes the discrete-time Fourier transform (DTFT, group = Z), the discrete Fourier transform (DFT, group = Z mod N) and the Fourier series or circular Fourier transform (group = S1, the unit circle ≈ closed finite interval with endpoints identified). The latter is routinely employed to handle periodic functions. The fast Fourier transform (FFT) is an algorithm for computing the DFT.

== Definition ==

The Fourier transform of a complex-valued (Lebesgue) integrable function f(x) on the real line is the complex-valued function f̂(ξ) defined by the integral

{\displaystyle {\hat {f}}(\xi )=\int _{-\infty }^{\infty }f(x)\,e^{-i2\pi \xi x}\,dx.} (Eq.1)

Evaluating the Fourier transform for all values of ξ produces the frequency-domain function, and it converges at all frequencies to a continuous function tending to zero at infinity.
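The definition can be checked numerically. The sketch below approximates Eq.1 by a Riemann sum on a truncated grid and confirms, to within discretization error, that the Gaussian e^{−πx²} is its own Fourier transform (a fact used again below); the grid sizes are arbitrary choices.

```python
import numpy as np

def fourier_transform(f, xi, x):
    """Numerically evaluate Eq.1 on a grid: f̂(ξ) = ∫ f(x) e^{-i2πξx} dx."""
    dx = x[1] - x[0]
    kernel = np.exp(-2j * np.pi * np.outer(xi, x))
    return kernel @ f(x) * dx                     # Riemann-sum approximation

x = np.linspace(-8, 8, 4001)                      # truncated real line
xi = np.array([0.0, 0.25, 0.5, 1.0])              # sample frequencies
gauss = lambda t: np.exp(-np.pi * t**2)

approx = fourier_transform(gauss, xi, x)
print(np.max(np.abs(approx - gauss(xi))))         # ~0: the Gaussian is its own FT
```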
If f(x) decays with all derivatives, i.e.,

{\displaystyle \lim _{|x|\to \infty }f^{(n)}(x)=0,\quad \forall n\in \mathbb {N} ,}

then f̂ converges for all frequencies and, by the Riemann–Lebesgue lemma, f̂ also decays with all derivatives.

First introduced in Fourier's Analytical Theory of Heat, the corresponding inversion formula for "sufficiently nice" functions is given by the Fourier inversion theorem, i.e.,

{\displaystyle f(x)=\int _{-\infty }^{\infty }{\hat {f}}(\xi )\,e^{i2\pi \xi x}\,d\xi .} (Eq.2)

The functions f and f̂ are referred to as a Fourier transform pair. A common notation for designating transform pairs is:

{\displaystyle f(x)\ {\stackrel {\mathcal {F}}{\longleftrightarrow }}\ {\widehat {f}}(\xi ),}

for example

{\displaystyle \operatorname {rect} (x)\ {\stackrel {\mathcal {F}}{\longleftrightarrow }}\ \operatorname {sinc} (\xi ).}

By analogy, the Fourier series can be regarded as an abstract Fourier transform on the group ℤ of integers. That is, the synthesis of a sequence of complex numbers c_n is defined by the Fourier transform

{\displaystyle f(x)=\sum _{n=-\infty }^{\infty }c_{n}\,e^{i2\pi {\tfrac {n}{P}}x},}

such that the c_n are given by the inversion formula, i.e., the analysis

{\displaystyle c_{n}={\frac {1}{P}}\int _{-P/2}^{P/2}f(x)\,e^{-i2\pi {\frac {n}{P}}x}\,dx,}

for some complex-valued, P-periodic function f(x) defined on a bounded interval [−P/2, P/2] ⊂ ℝ. When P → ∞, the constituent frequencies are a continuum: n/P → ξ ∈ ℝ, and c_n → f̂(ξ) ∈ ℂ. In other words, on the finite interval [−P/2, P/2] the function f(x) has a discrete decomposition in the periodic functions e^{i2πxn/P}. On the infinite interval (−∞, ∞) the function f(x) has a continuous decomposition in periodic functions e^{i2πxξ}.
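The analysis formula for the coefficients c_n can likewise be evaluated numerically. A short sketch, with an arbitrary period P and a single-harmonic test function, recovers the expected coefficients c_{±1} = 1/2 for cos(2πx/P).

```python
import numpy as np

P = 2.0                                        # an arbitrary period
x = np.linspace(-P / 2, P / 2, 20001)

def coefficient(f, n):
    """Analysis formula: c_n = (1/P) ∫_{-P/2}^{P/2} f(x) e^{-i2πnx/P} dx."""
    integrand = f(x) * np.exp(-2j * np.pi * n * x / P)
    return np.trapz(integrand, x) / P

f = lambda t: np.cos(2 * np.pi * t / P)        # a single harmonic
print([np.round(coefficient(f, n), 6) for n in (-1, 0, 1)])  # ≈ [0.5, 0, 0.5]
```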
=== Lebesgue integrable functions ===

A measurable function f: ℝ → ℂ is called (Lebesgue) integrable if the Lebesgue integral of its absolute value is finite:

{\displaystyle \|f\|_{1}=\int _{\mathbb {R} }|f(x)|\,dx<\infty .}

If f is Lebesgue integrable then the Fourier transform, given by Eq.1, is well-defined for all ξ ∈ ℝ. Furthermore, f̂ ∈ L^∞ ∩ C(ℝ) is bounded, uniformly continuous and (by the Riemann–Lebesgue lemma) zero at infinity. The space L¹(ℝ) is the space of measurable functions for which the norm ‖f‖₁ is finite, modulo the equivalence relation of equality almost everywhere. The Fourier transform on L¹(ℝ) is one-to-one. However, there is no easy characterization of the image, and thus no easy characterization of the inverse transform. In particular, Eq.2 is no longer valid, as it was stated only under the hypothesis that f(x) decayed with all derivatives.

While Eq.1 defines the Fourier transform for (complex-valued) functions in L¹(ℝ), it is not well-defined for other integrability classes, most importantly the space of square-integrable functions L²(ℝ). For example, the function f(x) = (1 + x²)^{−1/2} is in L² but not L¹, and therefore the Lebesgue integral Eq.1 does not exist. However, the Fourier transform on the dense subspace L¹ ∩ L²(ℝ) ⊂ L²(ℝ) admits a unique continuous extension to a unitary operator on L²(ℝ). This extension is important in part because, unlike the case of L¹, the Fourier transform is an automorphism of the space L²(ℝ). In such cases, the Fourier transform can be obtained explicitly by regularizing the integral, and then passing to a limit. In practice, the integral is often regarded as an improper integral instead of a proper Lebesgue integral, but sometimes for convergence one needs to use a weak limit or principal value instead of the (pointwise) limits implicit in an improper integral. Titchmarsh (1986) and Dym & McKean (1985) each give three rigorous ways of extending the Fourier transform to square-integrable functions using this procedure.

A general principle in working with the L² Fourier transform is that Gaussians are dense in L¹ ∩ L², and the various features of the Fourier transform, such as its unitarity, are easily inferred for Gaussians. Many of the properties of the Fourier transform can then be proven from two facts about Gaussians: that e^{−πx²} is its own Fourier transform; and that the Gaussian integral

{\displaystyle \int _{-\infty }^{\infty }e^{-\pi x^{2}}\,dx=1.}

A feature of the L¹ Fourier transform is that it is a homomorphism of Banach algebras from L¹ equipped with the convolution operation to the Banach algebra of continuous functions under the L^∞ (supremum) norm. The conventions chosen in this article are those of harmonic analysis, and are characterized as the unique conventions such that the Fourier transform is both unitary on L² and an algebra homomorphism from L¹ to L^∞, without renormalizing the Lebesgue measure.

=== Angular frequency (ω) ===

When the independent variable (x) represents time (often denoted by t), the transform variable (ξ) represents frequency (often denoted by f). For example, if time is measured in seconds, then frequency is in hertz. The Fourier transform can also be written in terms of angular frequency, ω = 2πξ, whose units are radians per second.
The substitution ξ = ω/2π into Eq.1 produces this convention, where the function f̂ is relabeled f̂₁:

{\displaystyle {\begin{aligned}{\widehat {f_{3}}}(\omega )&\triangleq \int _{-\infty }^{\infty }f(x)\cdot e^{-i\omega x}\,dx={\widehat {f_{1}}}\left({\tfrac {\omega }{2\pi }}\right),\\f(x)&={\frac {1}{2\pi }}\int _{-\infty }^{\infty }{\widehat {f_{3}}}(\omega )\cdot e^{i\omega x}\,d\omega .\end{aligned}}}

Unlike the Eq.1 definition, the Fourier transform is no longer a unitary transformation, and there is less symmetry between the formulas for the transform and its inverse. Those properties are restored by splitting the 2π factor evenly between the transform and its inverse, which leads to another convention:

{\displaystyle {\begin{aligned}{\widehat {f_{2}}}(\omega )&\triangleq {\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }f(x)\cdot e^{-i\omega x}\,dx={\frac {1}{\sqrt {2\pi }}}\ {\widehat {f_{1}}}\left({\tfrac {\omega }{2\pi }}\right),\\f(x)&={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }{\widehat {f_{2}}}(\omega )\cdot e^{i\omega x}\,d\omega .\end{aligned}}}

Variations of all three conventions can be created by conjugating the complex-exponential kernel of both the forward and the reverse transform. The signs must be opposites.

== Background ==

=== History ===

In 1822, Fourier claimed (see Joseph Fourier § The Analytic Theory of Heat) that any function, whether continuous or discontinuous, can be expanded into a series of sines. That important work was corrected and expanded upon by others to provide the foundation for the various forms of the Fourier transform used since.

=== Complex sinusoids ===

In general, the coefficients f̂(ξ) are complex numbers, which have two equivalent forms (see Euler's formula):

{\displaystyle {\widehat {f}}(\xi )=\underbrace {Ae^{i\theta }} _{\text{polar coordinate form}}=\underbrace {A\cos(\theta )+iA\sin(\theta )} _{\text{rectangular coordinate form}}.}

The product with e^{i2πξx} (Eq.2) has these forms:

{\displaystyle {\begin{aligned}{\widehat {f}}(\xi )\cdot e^{i2\pi \xi x}&=Ae^{i\theta }\cdot e^{i2\pi \xi x}\\&=\underbrace {Ae^{i(2\pi \xi x+\theta )}} _{\text{polar coordinate form}}\\&=\underbrace {A\cos(2\pi \xi x+\theta )+iA\sin(2\pi \xi x+\theta )} _{\text{rectangular coordinate form}}.\end{aligned}}}

which conveys both the amplitude and the phase of frequency ξ. Likewise, the intuitive interpretation of Eq.1 is that multiplying f(x) by e^{−i2πξx} has the effect of subtracting ξ from every frequency component of the function f(x).
Only the component that was at frequency ξ can produce a non-zero value of the infinite integral, because (at least formally) all the other shifted components are oscillatory and integrate to zero (see § Example). It is noteworthy how easily the product was simplified using the polar form, and how easily the rectangular form was deduced by an application of Euler's formula.

=== Negative frequency ===

Euler's formula introduces the possibility of negative ξ, and Eq.1 is defined for all ξ ∈ ℝ. Only certain complex-valued f(x) have transforms with f̂(ξ) = 0 for all ξ < 0 (see Analytic signal; a simple example is e^{i2πξ₀x} with ξ₀ > 0). But negative frequency is necessary to characterize all other complex-valued f(x) found in signal processing, partial differential equations, radar, nonlinear optics, quantum mechanics, and others.

For a real-valued f(x), Eq.1 has the symmetry property f̂(−ξ) = f̂*(ξ) (see § Conjugation below). This redundancy enables Eq.2 to distinguish f(x) = cos(2πξ₀x) from e^{i2πξ₀x}. But of course it cannot tell us the actual sign of ξ₀, because cos(2πξ₀x) and cos(2π(−ξ₀)x) are indistinguishable on the real number line.

=== Fourier transform for periodic functions ===

The Fourier transform of a periodic function cannot be defined using the integral formula directly. In order for the integral in Eq.1 to be defined the function must be absolutely integrable. Instead it is common to use Fourier series. It is possible to extend the definition to include periodic functions by viewing them as tempered distributions. This makes it possible to see a connection between the Fourier series and the Fourier transform for periodic functions that have a convergent Fourier series. If f(x) is a periodic function, with period P, that has a convergent Fourier series, then:

{\displaystyle {\widehat {f}}(\xi )=\sum _{n=-\infty }^{\infty }c_{n}\cdot \delta \left(\xi -{\tfrac {n}{P}}\right),}

where the c_n are the Fourier series coefficients of f, and δ is the Dirac delta function. In other words, the Fourier transform is a Dirac comb function whose teeth are multiplied by the Fourier series coefficients.
=== Sampling the Fourier transform ===

The Fourier transform of an integrable function f can be sampled at regular intervals of arbitrary length 1/P. These samples can be deduced from one cycle of a periodic function f_P which has Fourier series coefficients proportional to those samples by the Poisson summation formula:

{\displaystyle f_{P}(x)\triangleq \sum _{n=-\infty }^{\infty }f(x+nP)={\frac {1}{P}}\sum _{k=-\infty }^{\infty }{\widehat {f}}\left({\tfrac {k}{P}}\right)e^{i2\pi {\frac {k}{P}}x},\quad \forall k\in \mathbb {Z} }

The integrability of f ensures the periodic summation converges. Therefore, the samples f̂(k/P) can be determined by Fourier series analysis:

{\displaystyle {\widehat {f}}\left({\tfrac {k}{P}}\right)=\int _{P}f_{P}(x)\cdot e^{-i2\pi {\frac {k}{P}}x}\,dx.}

When f(x) has compact support, f_P(x) has a finite number of terms within the interval of integration. When f(x) does not have compact support, numerical evaluation of f_P(x) requires an approximation, such as tapering f(x) or truncating the number of terms.

== Units ==

The frequency variable must have inverse units to the units of the original function's domain (typically named t or x). For example, if t is measured in seconds, ξ should be in cycles per second or hertz. If the scale of time is in units of 2π seconds, then another Greek letter ω is typically used instead to represent angular frequency (where ω = 2πξ) in units of radians per second. If using x for units of length, then ξ must be in inverse length, e.g., wavenumbers. That is to say, there are two versions of the real line: one which is the range of t and measured in units of t, and the other which is the range of ξ and measured in inverse units to the units of t. These two distinct versions of the real line cannot be equated with each other. Therefore, the Fourier transform goes from one space of functions to a different space of functions: functions which have a different domain of definition.

In general, ξ must always be taken to be a linear form on the space of its domain, which is to say that the second real line is the dual space of the first real line. See the article on linear algebra for a more formal explanation and for more details. This point of view becomes essential in generalizations of the Fourier transform to general symmetry groups, including the case of Fourier series. That there is no one preferred way (often, one says "no canonical way") to compare the two versions of the real line which are involved in the Fourier transform (fixing the units on one line does not force the scale of the units on the other line) is the reason for the plethora of rival conventions on the definition of the Fourier transform. The various definitions resulting from different choices of units differ by various constants.

In other conventions, the Fourier transform has i in the exponent instead of −i, and vice versa for the inversion formula.
This convention is common in modern physics and is the default for Wolfram Alpha, and does not mean that the frequency has become negative, since there is no canonical definition of positivity for the frequency of a complex wave. It simply means that f̂(ξ) is the amplitude of the wave e^{−i2πξx} instead of the wave e^{i2πξx} (the former, with its minus sign, is often seen in the time dependence for sinusoidal plane-wave solutions of the electromagnetic wave equation, or in the time dependence for quantum wave functions). Many of the identities involving the Fourier transform remain valid in those conventions, provided all terms that explicitly involve i have it replaced by −i. In electrical engineering the letter j is typically used for the imaginary unit instead of i because i is used for current.

When using dimensionless units, the constant factors might not be written in the transform definition. For instance, in probability theory, the characteristic function φ of the probability density function f of a random variable X of continuous type is defined without a negative sign in the exponential, and since the units of x are ignored, there is no 2π either:

{\displaystyle \phi (\lambda )=\int _{-\infty }^{\infty }f(x)e^{i\lambda x}\,dx.}

In probability theory and mathematical statistics, the use of the Fourier–Stieltjes transform is preferred, because many random variables are not of continuous type, and do not possess a density function, and one must treat not functions but distributions, i.e., measures which possess "atoms".

From the higher point of view of group characters, which is much more abstract, all these arbitrary choices disappear, as will be explained in the later section of this article, which treats the notion of the Fourier transform of a function on a locally compact Abelian group.

== Properties ==

Let f(x) and h(x) represent integrable functions Lebesgue-measurable on the real line satisfying:

{\displaystyle \int _{-\infty }^{\infty }|f(x)|\,dx<\infty .}

We denote the Fourier transforms of these functions as f̂(ξ) and ĥ(ξ) respectively.
=== Basic properties ===

The Fourier transform has the following basic properties:

==== Linearity ====

{\displaystyle a\ f(x)+b\ h(x)\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ a\ {\widehat {f}}(\xi )+b\ {\widehat {h}}(\xi );\quad \ a,b\in \mathbb {C} }

==== Time shifting ====

{\displaystyle f(x-x_{0})\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ e^{-i2\pi x_{0}\xi }\ {\widehat {f}}(\xi );\quad \ x_{0}\in \mathbb {R} }

==== Frequency shifting ====

{\displaystyle e^{i2\pi \xi _{0}x}f(x)\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\widehat {f}}(\xi -\xi _{0});\quad \ \xi _{0}\in \mathbb {R} }

==== Time scaling ====

{\displaystyle f(ax)\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\frac {1}{|a|}}{\widehat {f}}\left({\frac {\xi }{a}}\right);\quad \ a\neq 0}

The case a = −1 leads to the time-reversal property:

{\displaystyle f(-x)\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\widehat {f}}(-\xi )}

==== Symmetry ====

When the real and imaginary parts of a complex function are decomposed into their even and odd parts, there are four components, denoted below by the subscripts RE, RO, IE, and IO. And there is a one-to-one mapping between the four components of a complex time function and the four components of its complex frequency transform:

{\displaystyle {\begin{array}{rlcccccccc}{\mathsf {Time\ domain}}&f&=&f_{_{\text{RE}}}&+&f_{_{\text{RO}}}&+&i\ f_{_{\text{IE}}}&+&\underbrace {i\ f_{_{\text{IO}}}} \\&{\Bigg \Updownarrow }{\mathcal {F}}&&{\Bigg \Updownarrow }{\mathcal {F}}&&\ \ {\Bigg \Updownarrow }{\mathcal {F}}&&\ \ {\Bigg \Updownarrow }{\mathcal {F}}&&\ \ {\Bigg \Updownarrow }{\mathcal {F}}\\{\mathsf {Frequency\ domain}}&{\widehat {f}}&=&{\widehat {f}}_{_{\text{RE}}}&+&\overbrace {i\ {\widehat {f}}_{_{\text{IO}}}\,} &+&i\ {\widehat {f}}_{_{\text{IE}}}&+&{\widehat {f}}_{_{\text{RO}}}\end{array}}}

From this, various relationships are apparent, for example:

The transform of a real-valued function (f_RE + f_RO) is the conjugate symmetric function f̂_RE + i f̂_IO. Conversely, a conjugate symmetric transform implies a real-valued time-domain.
The transform of an imaginary-valued function (i f_IE + i f_IO) is the conjugate antisymmetric function f̂_RO + i f̂_IE, and the converse is true.
The transform of a conjugate symmetric function (f_RE + i f_IO) is the real-valued function f̂_RE + f̂_RO, and the converse is true.
The transform of a conjugate antisymmetric function (f_RO + i f_IE) is the imaginary-valued function i f̂_IE + i f̂_IO, and the converse is true.
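These properties are easy to verify numerically. Reusing the Riemann-sum approximation of Eq.1 from above, the sketch below checks the time-shifting property on a Gaussian test function; the grid sizes and the shift x₀ are arbitrary choices.

```python
import numpy as np

x = np.linspace(-8, 8, 4001)
xi = np.linspace(-2, 2, 9)
dx = x[1] - x[0]

def ft(values):
    """Numerical Eq.1 on the grid above."""
    return np.exp(-2j * np.pi * np.outer(xi, x)) @ values * dx

f = np.exp(-np.pi * x**2)                      # a convenient test function
x0 = 0.75

lhs = ft(np.exp(-np.pi * (x - x0)**2))         # transform of f(x - x0)
rhs = np.exp(-2j * np.pi * x0 * xi) * ft(f)    # e^{-i2πx0ξ} f̂(ξ)
print(np.max(np.abs(lhs - rhs)))               # ~0: time-shifting property
```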
==== Conjugation ====

{\displaystyle {\bigl (}f(x){\bigr )}^{*}\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ \left({\widehat {f}}(-\xi )\right)^{*}}

(Note: the ∗ denotes complex conjugation.)

In particular, if f is real, then f̂ is even symmetric (aka Hermitian function): f̂(−ξ) = (f̂(ξ))*. And if f is purely imaginary, then f̂ is odd symmetric: f̂(−ξ) = −(f̂(ξ))*.

==== Real and imaginary parts ====

{\displaystyle \operatorname {Re} \{f(x)\}\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\tfrac {1}{2}}\left({\widehat {f}}(\xi )+{\bigl (}{\widehat {f}}(-\xi ){\bigr )}^{*}\right)}

{\displaystyle \operatorname {Im} \{f(x)\}\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\tfrac {1}{2i}}\left({\widehat {f}}(\xi )-{\bigl (}{\widehat {f}}(-\xi ){\bigr )}^{*}\right)}

==== Zero frequency component ====

Substituting ξ = 0 in the definition, we obtain:

{\displaystyle {\widehat {f}}(0)=\int _{-\infty }^{\infty }f(x)\,dx.}

The integral of f over its domain is known as the average value or DC bias of the function.

=== Uniform continuity and the Riemann–Lebesgue lemma ===

The Fourier transform may be defined in some cases for non-integrable functions, but the Fourier transforms of integrable functions have several strong properties. The Fourier transform f̂ of any integrable function f is uniformly continuous and

{\displaystyle \left\|{\hat {f}}\right\|_{\infty }\leq \left\|f\right\|_{1}}

By the Riemann–Lebesgue lemma,

{\displaystyle {\hat {f}}(\xi )\to 0{\text{ as }}|\xi |\to \infty .}

However, f̂ need not be integrable. For example, the Fourier transform of the rectangular function, which is integrable, is the sinc function, which is not Lebesgue integrable, because its improper integrals behave analogously to the alternating harmonic series, in converging to a sum without being absolutely convergent. It is not generally possible to write the inverse transform as a Lebesgue integral. However, when both f and f̂ are integrable, the inverse equality

{\displaystyle f(x)=\int _{-\infty }^{\infty }{\hat {f}}(\xi )e^{i2\pi x\xi }\,d\xi }

holds for almost every x. As a result, the Fourier transform is injective on L¹(ℝ).

=== Plancherel theorem and Parseval's theorem ===

Let f(x) and g(x) be integrable, and let f̂(ξ) and ĝ(ξ) be their Fourier transforms. If f(x) and g(x) are also square-integrable, then the Parseval formula follows:

{\displaystyle \langle f,g\rangle _{L^{2}}=\int _{-\infty }^{\infty }f(x){\overline {g(x)}}\,dx=\int _{-\infty }^{\infty }{\hat {f}}(\xi ){\overline {{\hat {g}}(\xi )}}\,d\xi ,}

where the bar denotes complex conjugation. The Plancherel theorem, which follows from the above, states that

{\displaystyle \|f\|_{L^{2}}^{2}=\int _{-\infty }^{\infty }\left|f(x)\right|^{2}\,dx=\int _{-\infty }^{\infty }\left|{\hat {f}}(\xi )\right|^{2}\,d\xi .}
{\displaystyle \|f\|_{L^{2}}^{2}=\int _{-\infty }^{\infty }\left|f(x)\right|^{2}\,dx=\int _{-\infty }^{\infty }\left|{\hat {f}}(\xi )\right|^{2}\,d\xi .} Plancherel's theorem makes it possible to extend the Fourier transform, by a continuity argument, to a unitary operator on L2(R). On L1(R) ∩ L2(R), this extension agrees with the original Fourier transform defined on L1(R), thus enlarging the domain of the Fourier transform to L1(R) + L2(R) (and consequently to Lp(R) for 1 ≤ p ≤ 2). Plancherel's theorem has the interpretation in the sciences that the Fourier transform preserves the energy of the original quantity. The terminology of these formulas is not quite standardised. Parseval's theorem was proved only for Fourier series, and was first proved by Lyapunov. But Parseval's formula makes sense for the Fourier transform as well, and so even though in the context of the Fourier transform it was proved by Plancherel, it is still often referred to as Parseval's formula, or Parseval's relation, or even Parseval's theorem. See Pontryagin duality for a general formulation of this concept in the context of locally compact abelian groups. === Convolution theorem === The Fourier transform translates between convolution and multiplication of functions. If f(x) and g(x) are integrable functions with Fourier transforms f̂(ξ) and ĝ(ξ) respectively, then the Fourier transform of the convolution is given by the product of the Fourier transforms f̂(ξ) and ĝ(ξ) (under other conventions for the definition of the Fourier transform a constant factor may appear). This means that if: h ( x ) = ( f ∗ g ) ( x ) = ∫ − ∞ ∞ f ( y ) g ( x − y ) d y , {\displaystyle h(x)=(f*g)(x)=\int _{-\infty }^{\infty }f(y)g(x-y)\,dy,} where ∗ denotes the convolution operation, then: h ^ ( ξ ) = f ^ ( ξ ) g ^ ( ξ ) . {\displaystyle {\hat {h}}(\xi )={\hat {f}}(\xi )\,{\hat {g}}(\xi ).} In linear time invariant (LTI) system theory, it is common to interpret g(x) as the impulse response of an LTI system with input f(x) and output h(x), since substituting the unit impulse for f(x) yields h(x) = g(x). In this case, ĝ(ξ) represents the frequency response of the system. Conversely, if f(x) can be decomposed as the product of two square integrable functions p(x) and q(x), then the Fourier transform of f(x) is given by the convolution of the respective Fourier transforms p̂(ξ) and q̂(ξ). === Cross-correlation theorem === In an analogous manner, it can be shown that if h(x) is the cross-correlation of f(x) and g(x): h ( x ) = ( f ⋆ g ) ( x ) = ∫ − ∞ ∞ f ( y ) ¯ g ( x + y ) d y {\displaystyle h(x)=(f\star g)(x)=\int _{-\infty }^{\infty }{\overline {f(y)}}g(x+y)\,dy} then the Fourier transform of h(x) is: h ^ ( ξ ) = f ^ ( ξ ) ¯ g ^ ( ξ ) . {\displaystyle {\hat {h}}(\xi )={\overline {{\hat {f}}(\xi )}}\,{\hat {g}}(\xi ).} As a special case, the autocorrelation of the function f(x) is: h ( x ) = ( f ⋆ f ) ( x ) = ∫ − ∞ ∞ f ( y ) ¯ f ( x + y ) d y {\displaystyle h(x)=(f\star f)(x)=\int _{-\infty }^{\infty }{\overline {f(y)}}f(x+y)\,dy} for which h ^ ( ξ ) = f ^ ( ξ ) ¯ f ^ ( ξ ) = | f ^ ( ξ ) | 2 . {\displaystyle {\hat {h}}(\xi )={\overline {{\hat {f}}(\xi )}}{\hat {f}}(\xi )=\left|{\hat {f}}(\xi )\right|^{2}.} === Differentiation === Suppose f(x) is an absolutely continuous differentiable function, and both f and its derivative f′ are integrable. Then the Fourier transform of the derivative is given by f ′ ^ ( ξ ) = F { d d x f ( x ) } = i 2 π ξ f ^ ( ξ ) .
{\displaystyle {\widehat {f'\,}}(\xi )={\mathcal {F}}\left\{{\frac {d}{dx}}f(x)\right\}=i2\pi \xi {\hat {f}}(\xi ).} More generally, the Fourier transformation of the nth derivative f(n) is given by f ( n ) ^ ( ξ ) = F { d n d x n f ( x ) } = ( i 2 π ξ ) n f ^ ( ξ ) . {\displaystyle {\widehat {f^{(n)}}}(\xi )={\mathcal {F}}\left\{{\frac {d^{n}}{dx^{n}}}f(x)\right\}=(i2\pi \xi )^{n}{\hat {f}}(\xi ).} Analogously, F { d n d ξ n f ^ ( ξ ) } = ( i 2 π x ) n f ( x ) {\displaystyle {\mathcal {F}}\left\{{\frac {d^{n}}{d\xi ^{n}}}{\hat {f}}(\xi )\right\}=(i2\pi x)^{n}f(x)} , so F { x n f ( x ) } = ( i 2 π ) n d n d ξ n f ^ ( ξ ) . {\displaystyle {\mathcal {F}}\left\{x^{n}f(x)\right\}=\left({\frac {i}{2\pi }}\right)^{n}{\frac {d^{n}}{d\xi ^{n}}}{\hat {f}}(\xi ).} By applying the Fourier transform and using these formulas, some ordinary differential equations can be transformed into algebraic equations, which are much easier to solve. These formulas also give rise to the rule of thumb "f(x) is smooth if and only if f̂(ξ) quickly falls to 0 for |ξ| → ∞." By using the analogous rules for the inverse Fourier transform, one can also say "f(x) quickly falls to 0 for |x| → ∞ if and only if f̂(ξ) is smooth." === Eigenfunctions === The Fourier transform is a linear transform which has eigenfunctions obeying F [ ψ ] = λ ψ , {\displaystyle {\mathcal {F}}[\psi ]=\lambda \psi ,} with λ ∈ C . {\displaystyle \lambda \in \mathbb {C} .} A set of eigenfunctions is found by noting that the homogeneous differential equation [ U ( 1 2 π d d x ) + U ( x ) ] ψ ( x ) = 0 {\displaystyle \left[U\left({\frac {1}{2\pi }}{\frac {d}{dx}}\right)+U(x)\right]\psi (x)=0} leads to eigenfunctions ψ ( x ) {\displaystyle \psi (x)} of the Fourier transform F {\displaystyle {\mathcal {F}}} as long as the form of the equation remains invariant under Fourier transform. In other words, every solution ψ ( x ) {\displaystyle \psi (x)} and its Fourier transform ψ ^ ( ξ ) {\displaystyle {\hat {\psi }}(\xi )} obey the same equation. Assuming uniqueness of the solutions, every solution ψ ( x ) {\displaystyle \psi (x)} must therefore be an eigenfunction of the Fourier transform. The form of the equation remains unchanged under Fourier transform if U ( x ) {\displaystyle U(x)} can be expanded in a power series in which for all terms the same factor of either one of ± 1 , ± i {\displaystyle \pm 1,\pm i} arises from the factors i n {\displaystyle i^{n}} introduced by the differentiation rules upon Fourier transforming the homogeneous differential equation because this factor may then be cancelled. The simplest allowable U ( x ) = x {\displaystyle U(x)=x} leads to the standard normal distribution. More generally, a set of eigenfunctions is also found by noting that the differentiation rules imply that the ordinary differential equation [ W ( i 2 π d d x ) + W ( x ) ] ψ ( x ) = C ψ ( x ) {\displaystyle \left[W\left({\frac {i}{2\pi }}{\frac {d}{dx}}\right)+W(x)\right]\psi (x)=C\psi (x)} with C {\displaystyle C} constant and W ( x ) {\displaystyle W(x)} being a non-constant even function remains invariant in form when applying the Fourier transform F {\displaystyle {\mathcal {F}}} to both sides of the equation. The simplest example is provided by W ( x ) = x 2 {\displaystyle W(x)=x^{2}} which is equivalent to considering the Schrödinger equation for the quantum harmonic oscillator. The corresponding solutions provide an important choice of an orthonormal basis for L2(R) and are given by the "physicist's" Hermite functions. 
Equivalently one may use ψ n ( x ) = 2 4 n ! e − π x 2 H e n ( 2 x π ) , {\displaystyle \psi _{n}(x)={\frac {\sqrt[{4}]{2}}{\sqrt {n!}}}e^{-\pi x^{2}}\mathrm {He} _{n}\left(2x{\sqrt {\pi }}\right),} where Hen(x) are the "probabilist's" Hermite polynomials, defined as H e n ( x ) = ( − 1 ) n e 1 2 x 2 ( d d x ) n e − 1 2 x 2 . {\displaystyle \mathrm {He} _{n}(x)=(-1)^{n}e^{{\frac {1}{2}}x^{2}}\left({\frac {d}{dx}}\right)^{n}e^{-{\frac {1}{2}}x^{2}}.} Under this convention for the Fourier transform, we have that ψ ^ n ( ξ ) = ( − i ) n ψ n ( ξ ) . {\displaystyle {\hat {\psi }}_{n}(\xi )=(-i)^{n}\psi _{n}(\xi ).} In other words, the Hermite functions form a complete orthonormal system of eigenfunctions for the Fourier transform on L2(R). However, this choice of eigenfunctions is not unique. Because of F 4 = i d {\displaystyle {\mathcal {F}}^{4}=\mathrm {id} } there are only four different eigenvalues of the Fourier transform (the fourth roots of unity ±1 and ±i) and any linear combination of eigenfunctions with the same eigenvalue gives another eigenfunction. As a consequence of this, it is possible to decompose L2(R) as a direct sum of four spaces H0, H1, H2, and H3 where the Fourier transform acts on Hk simply by multiplication by ik. Since the complete set of Hermite functions ψn provides a resolution of the identity, they diagonalize the Fourier operator, i.e. the Fourier transform can be represented by such a sum of terms weighted by the above eigenvalues, and these sums can be explicitly summed: F [ f ] ( ξ ) = ∫ d x f ( x ) ∑ n ≥ 0 ( − i ) n ψ n ( x ) ψ n ( ξ ) . {\displaystyle {\mathcal {F}}[f](\xi )=\int dxf(x)\sum _{n\geq 0}(-i)^{n}\psi _{n}(x)\psi _{n}(\xi )~.} This approach to define the Fourier transform was first proposed by Norbert Wiener. Among other properties, Hermite functions decrease exponentially fast in both frequency and time domains, and they are thus used to define a generalization of the Fourier transform, namely the fractional Fourier transform used in time–frequency analysis. In physics, this transform was introduced by Edward Condon. This change of basis functions becomes possible because the Fourier transform is a unitary transform when using the right conventions. Consequently, under the proper conditions it may be expected to result from a self-adjoint generator N {\displaystyle N} via F [ ψ ] = e − i t N ψ . {\displaystyle {\mathcal {F}}[\psi ]=e^{-itN}\psi .} The operator N {\displaystyle N} is the number operator of the quantum harmonic oscillator written as N ≡ 1 2 ( x − ∂ ∂ x ) ( x + ∂ ∂ x ) = 1 2 ( − ∂ 2 ∂ x 2 + x 2 − 1 ) . {\displaystyle N\equiv {\frac {1}{2}}\left(x-{\frac {\partial }{\partial x}}\right)\left(x+{\frac {\partial }{\partial x}}\right)={\frac {1}{2}}\left(-{\frac {\partial ^{2}}{\partial x^{2}}}+x^{2}-1\right).} It can be interpreted as the generator of fractional Fourier transforms for arbitrary values of t, and of the conventional continuous Fourier transform F {\displaystyle {\mathcal {F}}} for the particular value t = π / 2 , {\displaystyle t=\pi /2,} with the Mehler kernel implementing the corresponding active transform. The eigenfunctions of N {\displaystyle N} are the Hermite functions ψ n ( x ) {\displaystyle \psi _{n}(x)} which are therefore also eigenfunctions of F . {\displaystyle {\mathcal {F}}.} Upon extending the Fourier transform to distributions the Dirac comb is also an eigenfunction of the Fourier transform.
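The eigenvalue relation ψ̂n = (−i)n ψn is easy to test numerically. The sketch below (assuming NumPy; the grid and the sample frequencies are arbitrary illustrative choices) builds ψn from the probabilist's Hermite polynomials and checks that a Riemann-sum approximation of the Fourier transform reproduces (−i)n ψn:

import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval  # evaluates probabilist's He_n

x = np.linspace(-10, 10, 8001)   # grid standing in for the real line
dx = x[1] - x[0]

def psi(n, t):
    # Hermite function psi_n(t) = (2^(1/4) / sqrt(n!)) exp(-pi t^2) He_n(2 t sqrt(pi))
    c = np.zeros(n + 1); c[n] = 1.0
    return 2**0.25 / np.sqrt(factorial(n)) * np.exp(-np.pi * t**2) * hermeval(2 * t * np.sqrt(np.pi), c)

def ft(g, xi):
    # Riemann-sum approximation of the Fourier transform at the frequencies xi
    return np.array([np.sum(g * np.exp(-2j * np.pi * s * x)) * dx for s in xi])

xi = np.linspace(-2, 2, 9)
for n in range(4):
    # each reported error is tiny: psi_n is an eigenfunction with eigenvalue (-i)^n
    print(n, np.max(np.abs(ft(psi(n, x), xi) - (-1j)**n * psi(n, xi))))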
=== Inversion and periodicity === Under suitable conditions on the function f {\displaystyle f} , it can be recovered from its Fourier transform f ^ {\displaystyle {\hat {f}}} . Indeed, denoting the Fourier transform operator by F {\displaystyle {\mathcal {F}}} , so F f := f ^ {\displaystyle {\mathcal {F}}f:={\hat {f}}} , then for suitable functions, applying the Fourier transform twice simply flips the function: ( F 2 f ) ( x ) = f ( − x ) {\displaystyle \left({\mathcal {F}}^{2}f\right)(x)=f(-x)} , which can be interpreted as "reversing time". Since reversing time is two-periodic, applying this twice yields F 4 ( f ) = f {\displaystyle {\mathcal {F}}^{4}(f)=f} , so the Fourier transform operator is four-periodic, and similarly the inverse Fourier transform can be obtained by applying the Fourier transform three times: F 3 ( f ^ ) = f {\displaystyle {\mathcal {F}}^{3}\left({\hat {f}}\right)=f} . In particular the Fourier transform is invertible (under suitable conditions). More precisely, defining the parity operator P {\displaystyle {\mathcal {P}}} such that ( P f ) ( x ) = f ( − x ) {\displaystyle ({\mathcal {P}}f)(x)=f(-x)} , we have: F 0 = i d , F 1 = F , F 2 = P , F 3 = F − 1 = P ∘ F = F ∘ P , F 4 = i d {\displaystyle {\begin{aligned}{\mathcal {F}}^{0}&=\mathrm {id} ,\\{\mathcal {F}}^{1}&={\mathcal {F}},\\{\mathcal {F}}^{2}&={\mathcal {P}},\\{\mathcal {F}}^{3}&={\mathcal {F}}^{-1}={\mathcal {P}}\circ {\mathcal {F}}={\mathcal {F}}\circ {\mathcal {P}},\\{\mathcal {F}}^{4}&=\mathrm {id} \end{aligned}}} These equalities of operators require careful definition of the space of functions in question, defining equality of functions (equality at every point? equality almost everywhere?) and defining equality of operators – that is, defining the topology on the function space and operator space in question. These are not true for all functions, but are true under various conditions, which are the content of the various forms of the Fourier inversion theorem. This fourfold periodicity of the Fourier transform is similar to a rotation of the plane by 90°, particularly as the two-fold iteration yields a reversal, and in fact this analogy can be made precise. While the Fourier transform can simply be interpreted as switching the time domain and the frequency domain, with the inverse Fourier transform switching them back, more geometrically it can be interpreted as a rotation by 90° in the time–frequency domain (considering time as the x-axis and frequency as the y-axis), and the Fourier transform can be generalized to the fractional Fourier transform, which involves rotations by other angles. This can be further generalized to linear canonical transformations, which can be visualized as the action of the special linear group SL2(R) on the time–frequency plane, with the preserved symplectic form corresponding to the uncertainty principle, below. This approach is particularly studied in signal processing, under time–frequency analysis. === Connection with the Heisenberg group === The Heisenberg group is a certain group of unitary operators on the Hilbert space L2(R) of square integrable complex valued functions f on the real line, generated by the translations (Ty f)(x) = f (x + y) and multiplication by ei2πξx, (Mξ f)(x) = ei2πξx f (x). 
These operators do not commute, as their (group) commutator is ( M ξ − 1 T y − 1 M ξ T y f ) ( x ) = e i 2 π ξ y f ( x ) {\displaystyle \left(M_{\xi }^{-1}T_{y}^{-1}M_{\xi }T_{y}f\right)(x)=e^{i2\pi \xi y}f(x)} which is multiplication by the constant (independent of x) ei2πξy ∈ U(1) (the circle group of unit modulus complex numbers). As an abstract group, the Heisenberg group is the three-dimensional Lie group of triples (x, ξ, t) ∈ R2 × U(1), with the group law ( x 1 , ξ 1 , t 1 ) ⋅ ( x 2 , ξ 2 , t 2 ) = ( x 1 + x 2 , ξ 1 + ξ 2 , t 1 t 2 e i 2 π ( x 1 ξ 1 + x 2 ξ 2 + x 1 ξ 2 ) ) . {\displaystyle \left(x_{1},\xi _{1},t_{1}\right)\cdot \left(x_{2},\xi _{2},t_{2}\right)=\left(x_{1}+x_{2},\xi _{1}+\xi _{2},t_{1}t_{2}e^{i2\pi \left(x_{1}\xi _{1}+x_{2}\xi _{2}+x_{1}\xi _{2}\right)}\right).} Denote the Heisenberg group by H1. The above procedure describes not only the group structure, but also a standard unitary representation of H1 on a Hilbert space, which we denote by ρ : H1 → B(L2(R)). Define the linear automorphism of R2 by J ( x ξ ) = ( − ξ x ) {\displaystyle J{\begin{pmatrix}x\\\xi \end{pmatrix}}={\begin{pmatrix}-\xi \\x\end{pmatrix}}} so that J2 = −I. This J can be extended to a unique automorphism of H1: j ( x , ξ , t ) = ( − ξ , x , t e − i 2 π ξ x ) . {\displaystyle j\left(x,\xi ,t\right)=\left(-\xi ,x,te^{-i2\pi \xi x}\right).} According to the Stone–von Neumann theorem, the unitary representations ρ and ρ ∘ j are unitarily equivalent, so there is a unique intertwiner W ∈ U(L2(R)) such that ρ ∘ j = W ρ W ∗ . {\displaystyle \rho \circ j=W\rho W^{*}.} This operator W is the Fourier transform. Many of the standard properties of the Fourier transform are immediate consequences of this more general framework. For example, the square of the Fourier transform, W2, is an intertwiner associated with J2 = −I, and so we have (W2f)(x) = f (−x), the reflection of the original function f. == Complex domain == The integral for the Fourier transform f ^ ( ξ ) = ∫ − ∞ ∞ e − i 2 π ξ t f ( t ) d t {\displaystyle {\hat {f}}(\xi )=\int _{-\infty }^{\infty }e^{-i2\pi \xi t}f(t)\,dt} can be studied for complex values of its argument ξ. Depending on the properties of f, this might not converge off the real axis at all, or it might converge to a complex analytic function for all values of ξ = σ + iτ, or something in between. The Paley–Wiener theorem says that f is smooth (i.e., n-times differentiable for all positive integers n) and compactly supported if and only if f̂ (σ + iτ) is a holomorphic function for which there exists a constant a > 0 such that for any integer n ≥ 0, | ξ n f ^ ( ξ ) | ≤ C e a | τ | {\displaystyle \left\vert \xi ^{n}{\hat {f}}(\xi )\right\vert \leq Ce^{a\vert \tau \vert }} for some constant C. (In this case, f is supported on [−a, a].) This can be expressed by saying that f̂ is an entire function which is rapidly decreasing in σ (for fixed τ) and of exponential growth in τ (uniformly in σ). (If f is not smooth, but only L2, the statement still holds provided n = 0.) The space of such functions of a complex variable is called the Paley–Wiener space. This theorem has been generalised to semisimple Lie groups. If f is supported on the half-line t ≥ 0, then f is said to be "causal" because the impulse response function of a physically realisable filter must have this property, as no effect can precede its cause. Paley and Wiener showed that then f̂ extends to a holomorphic function on the complex lower half-plane τ < 0 which tends to zero as τ goes to infinity.
The converse is false and it is not known how to characterise the Fourier transform of a causal function. === Laplace transform === The Fourier transform f̂(ξ) is related to the Laplace transform F(s), which is also used for the solution of differential equations and the analysis of filters. It may happen that a function f for which the Fourier integral does not converge on the real axis at all, nevertheless has a complex Fourier transform defined in some region of the complex plane. For example, if f(t) is of exponential growth, i.e., | f ( t ) | < C e a | t | {\displaystyle \vert f(t)\vert <Ce^{a\vert t\vert }} for some constants C, a ≥ 0, then f ^ ( i τ ) = ∫ − ∞ ∞ e 2 π τ t f ( t ) d t , {\displaystyle {\hat {f}}(i\tau )=\int _{-\infty }^{\infty }e^{2\pi \tau t}f(t)\,dt,} convergent for all 2πτ < −a, is the two-sided Laplace transform of f. The more usual version ("one-sided") of the Laplace transform is F ( s ) = ∫ 0 ∞ f ( t ) e − s t d t . {\displaystyle F(s)=\int _{0}^{\infty }f(t)e^{-st}\,dt.} If f is also causal, and analytic, then: f ^ ( i τ ) = F ( − 2 π τ ) . {\displaystyle {\hat {f}}(i\tau )=F(-2\pi \tau ).} Thus, extending the Fourier transform to the complex domain means it includes the Laplace transform as a special case in the case of causal functions, but with the change of variable s = i2πξ. From another, perhaps more classical viewpoint, the Laplace transform by its form involves an additional exponential regulating term which lets it converge outside of the imaginary line where the Fourier transform is defined. As such it can converge for at most exponentially divergent series and integrals, whereas the original Fourier decomposition cannot, enabling analysis of systems with divergent or critical elements. Two particular examples from linear signal processing are the construction of allpass filter networks from critical comb and mitigating filters via exact pole-zero cancellation on the unit circle. Such designs are common in audio processing, where a highly nonlinear phase response is sought, as in reverb. Furthermore, when extended pulselike impulse responses are sought for signal processing work, the easiest way to produce them is to have one circuit which produces a divergent time response, and then to cancel its divergence through a delayed opposite and compensatory response. There, only the delay circuit in-between admits a classical Fourier description, which is critical. Both the circuits to the side are unstable, and do not admit a convergent Fourier decomposition. However, they do admit a Laplace domain description, with identical half-planes of convergence in the complex plane (or in the discrete case, the Z-plane), wherein their effects cancel. In modern mathematics the Laplace transform is conventionally subsumed under the aegis of Fourier methods. Both of them are subsumed by the far more general, and more abstract, idea of harmonic analysis. === Inversion === Still with ξ = σ + i τ {\displaystyle \xi =\sigma +i\tau } , if f ^ {\displaystyle {\widehat {f}}} is complex analytic for a ≤ τ ≤ b, then ∫ − ∞ ∞ f ^ ( σ + i a ) e i 2 π ξ t d σ = ∫ − ∞ ∞ f ^ ( σ + i b ) e i 2 π ξ t d σ {\displaystyle \int _{-\infty }^{\infty }{\hat {f}}(\sigma +ia)e^{i2\pi \xi t}\,d\sigma =\int _{-\infty }^{\infty }{\hat {f}}(\sigma +ib)e^{i2\pi \xi t}\,d\sigma } by Cauchy's integral theorem. Therefore, the Fourier inversion formula can use integration along different lines, parallel to the real axis.
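Such off-axis behaviour is easy to exhibit numerically. The sketch below (assuming SciPy; the causal function f(t) = e−t for t ≥ 0 is an arbitrary illustrative choice, with closed form f̂(ξ) = 1/(1 + i2πξ) and one-sided Laplace transform F(s) = 1/(s + 1)) evaluates the defining integral on the real axis and on a line below it, and compares both against the closed form:

import numpy as np
from scipy.integrate import quad

def fhat(xi):
    # defining integral of the complex Fourier transform of f(t) = exp(-t), t >= 0;
    # xi may be complex, with convergence for Im(xi) < 1/(2*pi)
    g = lambda t: np.exp(-1j * 2 * np.pi * xi * t - t)
    re, _ = quad(lambda t: g(t).real, 0, np.inf)
    im, _ = quad(lambda t: g(t).imag, 0, np.inf)
    return re + 1j * im

for xi in (0.3, 0.3 - 0.05j):                   # on the real axis, then below it
    print(fhat(xi), 1 / (1 + 2j * np.pi * xi))  # matches the analytic continuation

The same closed form evaluated at s = i2πξ gives the Laplace transform, in line with the identification made above.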
Theorem: If f(t) = 0 for t < 0, and |f(t)| < Cea|t| for some constants C, a > 0, then f ( t ) = ∫ − ∞ ∞ f ^ ( σ + i τ ) e i 2 π ξ t d σ , {\displaystyle f(t)=\int _{-\infty }^{\infty }{\hat {f}}(\sigma +i\tau )e^{i2\pi \xi t}\,d\sigma ,} for any τ < −⁠a/2π⁠. This theorem implies the Mellin inversion formula for the Laplace transformation, f ( t ) = 1 i 2 π ∫ b − i ∞ b + i ∞ F ( s ) e s t d s {\displaystyle f(t)={\frac {1}{i2\pi }}\int _{b-i\infty }^{b+i\infty }F(s)e^{st}\,ds} for any b > a, where F(s) is the Laplace transform of f(t). The hypotheses can be weakened, as in the results of Carleson and Hunt, to f(t) e−at being L1, provided that f be of bounded variation in a closed neighborhood of t (cf. Dini test), the value of f at t be taken to be the arithmetic mean of the left and right limits, and that the integrals be taken in the sense of Cauchy principal values. L2 versions of these inversion formulas are also available. == Fourier transform on Euclidean space == The Fourier transform can be defined in any number of dimensions n. As with the one-dimensional case, there are many conventions. For an integrable function f(x), this article takes the definition: f ^ ( ξ ) = F ( f ) ( ξ ) = ∫ R n f ( x ) e − i 2 π ξ ⋅ x d x {\displaystyle {\hat {f}}({\boldsymbol {\xi }})={\mathcal {F}}(f)({\boldsymbol {\xi }})=\int _{\mathbb {R} ^{n}}f(\mathbf {x} )e^{-i2\pi {\boldsymbol {\xi }}\cdot \mathbf {x} }\,d\mathbf {x} } where x and ξ are n-dimensional vectors, and x · ξ is the dot product of the vectors. Alternatively, ξ can be viewed as belonging to the dual vector space R n ⋆ {\displaystyle \mathbb {R} ^{n\star }} , in which case the dot product becomes the contraction of x and ξ, usually written as ⟨x, ξ⟩. All of the basic properties listed above hold for the n-dimensional Fourier transform, as do Plancherel's and Parseval's theorems. When the function is integrable, the Fourier transform is still uniformly continuous and the Riemann–Lebesgue lemma holds. === Uncertainty principle === Generally speaking, the more concentrated f(x) is, the more spread out its Fourier transform f̂(ξ) must be. In particular, the scaling property of the Fourier transform may be seen as saying: if we squeeze a function in x, its Fourier transform stretches out in ξ. It is not possible to arbitrarily concentrate both a function and its Fourier transform. The trade-off between the compaction of a function and its Fourier transform can be formalized in the form of an uncertainty principle by viewing a function and its Fourier transform as conjugate variables with respect to the symplectic form on the time–frequency domain: from the point of view of the linear canonical transformation, the Fourier transform is rotation by 90° in the time–frequency domain, and preserves the symplectic form. Suppose f(x) is an integrable and square-integrable function. Without loss of generality, assume that f(x) is normalized: ∫ − ∞ ∞ | f ( x ) | 2 d x = 1. {\displaystyle \int _{-\infty }^{\infty }|f(x)|^{2}\,dx=1.} It follows from the Plancherel theorem that f̂(ξ) is also normalized. The spread around x = 0 may be measured by the dispersion about zero defined by D 0 ( f ) = ∫ − ∞ ∞ x 2 | f ( x ) | 2 d x . {\displaystyle D_{0}(f)=\int _{-\infty }^{\infty }x^{2}|f(x)|^{2}\,dx.} In probability terms, this is the second moment of |f(x)|2 about zero. The uncertainty principle states that, if f(x) is absolutely continuous and the functions x·f(x) and f′(x) are square integrable, then D 0 ( f ) D 0 ( f ^ ) ≥ 1 16 π 2 .
{\displaystyle D_{0}(f)D_{0}({\hat {f}})\geq {\frac {1}{16\pi ^{2}}}.} The equality is attained only in the case f ( x ) = C 1 e − π x 2 σ 2 ∴ f ^ ( ξ ) = σ C 1 e − π σ 2 ξ 2 {\displaystyle {\begin{aligned}f(x)&=C_{1}\,e^{-\pi {\frac {x^{2}}{\sigma ^{2}}}}\\\therefore {\hat {f}}(\xi )&=\sigma C_{1}\,e^{-\pi \sigma ^{2}\xi ^{2}}\end{aligned}}} where σ > 0 is arbitrary and C1 = ⁠4√2/√σ⁠ so that f is L2-normalized. In other words, f is a (normalized) Gaussian function with variance σ2/2π, centered at zero, and its Fourier transform is a Gaussian function with variance σ−2/2π. Gaussian functions are examples of Schwartz functions (see the discussion on tempered distributions below). In fact, this inequality implies that: ( ∫ − ∞ ∞ ( x − x 0 ) 2 | f ( x ) | 2 d x ) ( ∫ − ∞ ∞ ( ξ − ξ 0 ) 2 | f ^ ( ξ ) | 2 d ξ ) ≥ 1 16 π 2 , ∀ x 0 , ξ 0 ∈ R . {\displaystyle \left(\int _{-\infty }^{\infty }(x-x_{0})^{2}|f(x)|^{2}\,dx\right)\left(\int _{-\infty }^{\infty }(\xi -\xi _{0})^{2}\left|{\hat {f}}(\xi )\right|^{2}\,d\xi \right)\geq {\frac {1}{16\pi ^{2}}},\quad \forall x_{0},\xi _{0}\in \mathbb {R} .} In quantum mechanics, the momentum and position wave functions are Fourier transform pairs, up to a factor of the Planck constant. With this constant properly taken into account, the inequality above becomes the statement of the Heisenberg uncertainty principle. A stronger uncertainty principle is the Hirschman uncertainty principle, which is expressed as: H ( | f | 2 ) + H ( | f ^ | 2 ) ≥ log ⁡ ( e 2 ) {\displaystyle H\left(\left|f\right|^{2}\right)+H\left(\left|{\hat {f}}\right|^{2}\right)\geq \log \left({\frac {e}{2}}\right)} where H(p) is the differential entropy of the probability density function p(x): H ( p ) = − ∫ − ∞ ∞ p ( x ) log ⁡ ( p ( x ) ) d x {\displaystyle H(p)=-\int _{-\infty }^{\infty }p(x)\log {\bigl (}p(x){\bigr )}\,dx} where the logarithms may be in any base that is consistent. The equality is attained for a Gaussian, as in the previous case. === Sine and cosine transforms === Fourier's original formulation of the transform did not use complex numbers, but rather sines and cosines. Statisticians and others still use this form. An absolutely integrable function f for which Fourier inversion holds can be expanded in terms of genuine frequencies (avoiding negative frequencies, which are sometimes considered hard to interpret physically) λ by f ( t ) = ∫ 0 ∞ ( a ( λ ) cos ⁡ ( 2 π λ t ) + b ( λ ) sin ⁡ ( 2 π λ t ) ) d λ . {\displaystyle f(t)=\int _{0}^{\infty }{\bigl (}a(\lambda )\cos(2\pi \lambda t)+b(\lambda )\sin(2\pi \lambda t){\bigr )}\,d\lambda .} This is called an expansion as a trigonometric integral, or a Fourier integral expansion. The coefficient functions a and b can be found by using variants of the Fourier cosine transform and the Fourier sine transform (the normalisations are, again, not standardised): a ( λ ) = 2 ∫ − ∞ ∞ f ( t ) cos ⁡ ( 2 π λ t ) d t {\displaystyle a(\lambda )=2\int _{-\infty }^{\infty }f(t)\cos(2\pi \lambda t)\,dt} and b ( λ ) = 2 ∫ − ∞ ∞ f ( t ) sin ⁡ ( 2 π λ t ) d t . {\displaystyle b(\lambda )=2\int _{-\infty }^{\infty }f(t)\sin(2\pi \lambda t)\,dt.} Older literature refers to the two transform functions, the Fourier cosine transform, a, and the Fourier sine transform, b. The function f can be recovered from the sine and cosine transform using f ( t ) = 2 ∫ 0 ∞ ∫ − ∞ ∞ f ( τ ) cos ⁡ ( 2 π λ ( τ − t ) ) d τ d λ .
{\displaystyle f(t)=2\int _{0}^{\infty }\int _{-\infty }^{\infty }f(\tau )\cos {\bigl (}2\pi \lambda (\tau -t){\bigr )}\,d\tau \,d\lambda .} together with trigonometric identities. This is referred to as Fourier's integral formula. === Spherical harmonics === Let the set of homogeneous harmonic polynomials of degree k on Rn be denoted by Ak. The set Ak consists of the solid spherical harmonics of degree k. The solid spherical harmonics play a similar role in higher dimensions to the Hermite polynomials in dimension one. Specifically, if f(x) = e−π|x|2P(x) for some P(x) in Ak, then f̂(ξ) = i−k f(ξ). Let the set Hk be the closure in L2(Rn) of linear combinations of functions of the form f(|x|)P(x) where P(x) is in Ak. The space L2(Rn) is then a direct sum of the spaces Hk and the Fourier transform maps each space Hk to itself, and it is possible to characterize the action of the Fourier transform on each space Hk. Let f(x) = f0(|x|)P(x) (with P(x) in Ak), then f ^ ( ξ ) = F 0 ( | ξ | ) P ( ξ ) {\displaystyle {\hat {f}}(\xi )=F_{0}(|\xi |)P(\xi )} where F 0 ( r ) = 2 π i − k r − n + 2 k − 2 2 ∫ 0 ∞ f 0 ( s ) J n + 2 k − 2 2 ( 2 π r s ) s n + 2 k 2 d s . {\displaystyle F_{0}(r)=2\pi i^{-k}r^{-{\frac {n+2k-2}{2}}}\int _{0}^{\infty }f_{0}(s)J_{\frac {n+2k-2}{2}}(2\pi rs)s^{\frac {n+2k}{2}}\,ds.} Here J(n + 2k − 2)/2 denotes the Bessel function of the first kind with order ⁠n + 2k − 2/2⁠. When k = 0 this gives a useful formula for the Fourier transform of a radial function. This is essentially the Hankel transform. Moreover, there is a simple recursion relating the cases n + 2 and n allowing one to compute, e.g., the three-dimensional Fourier transform of a radial function from the one-dimensional one. === Restriction problems === In higher dimensions it becomes interesting to study restriction problems for the Fourier transform. The Fourier transform of an integrable function is continuous and the restriction of this function to any set is defined. But for a square-integrable function the Fourier transform could be a general class of square integrable functions. As such, the restriction of the Fourier transform of an L2(Rn) function cannot be defined on sets of measure 0. It is still an active area of study to understand restriction problems in Lp for 1 < p < 2. It is possible in some cases to define the restriction of a Fourier transform to a set S, provided S has non-zero curvature. The case when S is the unit sphere in Rn is of particular interest. In this case the Tomas–Stein restriction theorem states that the restriction of the Fourier transform to the unit sphere in Rn is a bounded operator on Lp provided 1 ≤ p ≤ ⁠2n + 2/n + 3⁠. One notable difference between the Fourier transform in 1 dimension versus higher dimensions concerns the partial sum operator. Consider an increasing collection of measurable sets ER indexed by R ∈ (0,∞): such as balls of radius R centered at the origin, or cubes of side 2R. For a given integrable function f, consider the function fR defined by: f R ( x ) = ∫ E R f ^ ( ξ ) e i 2 π x ⋅ ξ d ξ , x ∈ R n . {\displaystyle f_{R}(x)=\int _{E_{R}}{\hat {f}}(\xi )e^{i2\pi x\cdot \xi }\,d\xi ,\quad x\in \mathbb {R} ^{n}.} Suppose in addition that f ∈ Lp(Rn). For n = 1 and 1 < p < ∞, if one takes ER = (−R, R), then fR converges to f in Lp as R tends to infinity, by the boundedness of the Hilbert transform. Naively one may hope the same holds true for n > 1. In the case that ER is taken to be a cube with side length R, then convergence still holds.
Another natural candidate is the Euclidean ball ER = {ξ : |ξ| < R}. In order for this partial sum operator to converge, it is necessary that the multiplier for the unit ball be bounded in Lp(Rn). For n ≥ 2 it is a celebrated theorem of Charles Fefferman that the multiplier for the unit ball is never bounded unless p = 2. In fact, when p ≠ 2, this shows that not only may fR fail to converge to f in Lp, but for some functions f ∈ Lp(Rn), fR is not even an element of Lp. == Fourier transform on function spaces == The definition of the Fourier transform naturally extends from L 1 ( R ) {\displaystyle L^{1}(\mathbb {R} )} to L 1 ( R n ) {\displaystyle L^{1}(\mathbb {R} ^{n})} . That is, if f ∈ L 1 ( R n ) {\displaystyle f\in L^{1}(\mathbb {R} ^{n})} then the Fourier transform F : L 1 ( R n ) → L ∞ ( R n ) {\displaystyle {\mathcal {F}}:L^{1}(\mathbb {R} ^{n})\to L^{\infty }(\mathbb {R} ^{n})} is given by f ( x ) ↦ f ^ ( ξ ) = ∫ R n f ( x ) e − i 2 π ξ ⋅ x d x , ∀ ξ ∈ R n . {\displaystyle f(x)\mapsto {\hat {f}}(\xi )=\int _{\mathbb {R} ^{n}}f(x)e^{-i2\pi \xi \cdot x}\,dx,\quad \forall \xi \in \mathbb {R} ^{n}.} This operator is bounded as sup ξ ∈ R n | f ^ ( ξ ) | ≤ ∫ R n | f ( x ) | d x , {\displaystyle \sup _{\xi \in \mathbb {R} ^{n}}\left\vert {\hat {f}}(\xi )\right\vert \leq \int _{\mathbb {R} ^{n}}\vert f(x)\vert \,dx,} which shows that its operator norm is bounded by 1. The Riemann–Lebesgue lemma shows that if f ∈ L 1 ( R n ) {\displaystyle f\in L^{1}(\mathbb {R} ^{n})} then its Fourier transform actually belongs to the space of continuous functions which vanish at infinity, i.e., f ^ ∈ C 0 ( R n ) ⊂ L ∞ ( R n ) {\displaystyle {\hat {f}}\in C_{0}(\mathbb {R} ^{n})\subset L^{\infty }(\mathbb {R} ^{n})} . Furthermore, the image of L 1 {\displaystyle L^{1}} under F {\displaystyle {\mathcal {F}}} is a strict subset of C 0 ( R n ) {\displaystyle C_{0}(\mathbb {R} ^{n})} . Similarly to the case of one variable, the Fourier transform can be defined on L 2 ( R n ) {\displaystyle L^{2}(\mathbb {R} ^{n})} . The Fourier transform in L 2 ( R n ) {\displaystyle L^{2}(\mathbb {R} ^{n})} is no longer given by an ordinary Lebesgue integral, although it can be computed by an improper integral, i.e., f ^ ( ξ ) = lim R → ∞ ∫ | x | ≤ R f ( x ) e − i 2 π ξ ⋅ x d x {\displaystyle {\hat {f}}(\xi )=\lim _{R\to \infty }\int _{|x|\leq R}f(x)e^{-i2\pi \xi \cdot x}\,dx} where the limit is taken in the L2 sense. Furthermore, F : L 2 ( R n ) → L 2 ( R n ) {\displaystyle {\mathcal {F}}:L^{2}(\mathbb {R} ^{n})\to L^{2}(\mathbb {R} ^{n})} is a unitary operator. For an operator to be unitary it is sufficient to show that it is bijective and preserves the inner product, so in this case these follow from the Fourier inversion theorem combined with the fact that for any f, g ∈ L2(Rn) we have ∫ R n f ( x ) F g ( x ) d x = ∫ R n F f ( x ) g ( x ) d x . {\displaystyle \int _{\mathbb {R} ^{n}}f(x){\mathcal {F}}g(x)\,dx=\int _{\mathbb {R} ^{n}}{\mathcal {F}}f(x)g(x)\,dx.} In particular, the image of L2(Rn) under the Fourier transform is L2(Rn) itself. === On other Lp === For 1 < p < 2 {\displaystyle 1<p<2} , the Fourier transform can be defined on L p ( R ) {\displaystyle L^{p}(\mathbb {R} )} by Marcinkiewicz interpolation, which amounts to decomposing such functions into a fat tail part in L2 plus a fat body part in L1. In each of these spaces, the Fourier transform of a function in Lp(Rn) is in Lq(Rn), where q = ⁠p/p − 1⁠ is the Hölder conjugate of p (by the Hausdorff–Young inequality).
However, except for p = 2, the image is not easily characterized. Further extensions become more technical. The Fourier transform of functions in Lp for the range 2 < p < ∞ requires the study of distributions. In fact, it can be shown that there are functions in Lp with p > 2 so that the Fourier transform is not defined as a function. === Tempered distributions === One might consider enlarging the domain of the Fourier transform from L 1 + L 2 {\displaystyle L^{1}+L^{2}} by considering generalized functions, or distributions. A distribution on R n {\displaystyle \mathbb {R} ^{n}} is a continuous linear functional on the space C c ∞ ( R n ) {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})} of compactly supported smooth functions (i.e. bump functions), equipped with a suitable topology. Since C c ∞ ( R n ) {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})} is dense in L 2 ( R n ) {\displaystyle L^{2}(\mathbb {R} ^{n})} , the Plancherel theorem allows one to extend the definition of the Fourier transform to general functions in L 2 ( R n ) {\displaystyle L^{2}(\mathbb {R} ^{n})} by continuity arguments. The strategy is then to consider the action of the Fourier transform on C c ∞ ( R n ) {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})} and pass to distributions by duality. The obstruction to doing this is that the Fourier transform does not map C c ∞ ( R n ) {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})} to C c ∞ ( R n ) {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})} . In fact the Fourier transform of an element in C c ∞ ( R n ) {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})} can not vanish on an open set; see the above discussion on the uncertainty principle. The Fourier transform can also be defined for tempered distributions S ′ ( R n ) {\displaystyle {\mathcal {S}}'(\mathbb {R} ^{n})} , dual to the space of Schwartz functions S ( R n ) {\displaystyle {\mathcal {S}}(\mathbb {R} ^{n})} . A Schwartz function is a smooth function that decays at infinity, along with all of its derivatives, hence C c ∞ ( R n ) ⊂ S ( R n ) {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})\subset {\mathcal {S}}(\mathbb {R} ^{n})} and: F : C c ∞ ( R n ) → S ( R n ) ∖ C c ∞ ( R n ) . {\displaystyle {\mathcal {F}}:C_{c}^{\infty }(\mathbb {R} ^{n})\rightarrow S(\mathbb {R} ^{n})\setminus C_{c}^{\infty }(\mathbb {R} ^{n}).} The Fourier transform is an automorphism of the Schwartz space and, by duality, also an automorphism of the space of tempered distributions. The tempered distributions include well-behaved functions of polynomial growth, distributions of compact support as well as all the integrable functions mentioned above. For the definition of the Fourier transform of a tempered distribution, let f {\displaystyle f} and g {\displaystyle g} be integrable functions, and let f ^ {\displaystyle {\hat {f}}} and g ^ {\displaystyle {\hat {g}}} be their Fourier transforms respectively. Then the Fourier transform obeys the following multiplication formula, ∫ R n f ^ ( x ) g ( x ) d x = ∫ R n f ( x ) g ^ ( x ) d x . {\displaystyle \int _{\mathbb {R} ^{n}}{\hat {f}}(x)g(x)\,dx=\int _{\mathbb {R} ^{n}}f(x){\hat {g}}(x)\,dx.} Every integrable function f {\displaystyle f} defines (induces) a distribution T f {\displaystyle T_{f}} by the relation T f ( ϕ ) = ∫ R n f ( x ) ϕ ( x ) d x , ∀ ϕ ∈ S ( R n ) . 
{\displaystyle T_{f}(\phi )=\int _{\mathbb {R} ^{n}}f(x)\phi (x)\,dx,\quad \forall \phi \in {\mathcal {S}}(\mathbb {R} ^{n}).} So it makes sense to define the Fourier transform of a tempered distribution T f ∈ S ′ ( R n ) {\displaystyle T_{f}\in {\mathcal {S}}'(\mathbb {R} ^{n})} by the duality: ⟨ T ^ f , ϕ ⟩ = ⟨ T f , ϕ ^ ⟩ , ∀ ϕ ∈ S ( R n ) . {\displaystyle \langle {\widehat {T}}_{f},\phi \rangle =\langle T_{f},{\widehat {\phi }}\rangle ,\quad \forall \phi \in {\mathcal {S}}(\mathbb {R} ^{n}).} Extending this to all tempered distributions T {\displaystyle T} gives the general definition of the Fourier transform. Distributions can be differentiated and the above-mentioned compatibility of the Fourier transform with differentiation and convolution remains true for tempered distributions. == Generalizations == === Fourier–Stieltjes transform on measurable spaces === The Fourier transform of a finite Borel measure μ on Rn is given by the continuous function: μ ^ ( ξ ) = ∫ R n e − i 2 π x ⋅ ξ d μ , {\displaystyle {\hat {\mu }}(\xi )=\int _{\mathbb {R} ^{n}}e^{-i2\pi x\cdot \xi }\,d\mu ,} and called the Fourier–Stieltjes transform due to its connection with the Riemann–Stieltjes integral representation of (Radon) measures. If μ {\displaystyle \mu } is the probability distribution of a random variable X {\displaystyle X} then its Fourier–Stieltjes transform is, by definition, a characteristic function. If, in addition, the probability distribution has a probability density function, this definition reduces to the usual Fourier transform. Stated more generally, when μ {\displaystyle \mu } is absolutely continuous with respect to the Lebesgue measure, i.e., d μ = f ( x ) d x , {\displaystyle d\mu =f(x)dx,} then μ ^ ( ξ ) = f ^ ( ξ ) , {\displaystyle {\hat {\mu }}(\xi )={\hat {f}}(\xi ),} and the Fourier–Stieltjes transform reduces to the usual definition of the Fourier transform. A notable difference from the Fourier transform of integrable functions is that the Fourier–Stieltjes transform need not vanish at infinity, i.e., the Riemann–Lebesgue lemma fails for measures. Bochner's theorem characterizes which functions may arise as the Fourier–Stieltjes transform of a positive measure on the circle. One example of a finite Borel measure that is not a function is the Dirac measure. Its Fourier transform is a constant function (whose value depends on the form of the Fourier transform used). === Locally compact abelian groups === The Fourier transform may be generalized to any locally compact abelian group, i.e., an abelian group that is also a locally compact Hausdorff space such that the group operation is continuous. If G is a locally compact abelian group, it has a translation invariant measure μ, called Haar measure. For a locally compact abelian group G, the irreducible, i.e. one-dimensional, unitary representations are called its characters. With its natural group structure and the topology of uniform convergence on compact sets (that is, the topology induced by the compact-open topology on the space of all continuous functions from G {\displaystyle G} to the circle group), the set of characters Ĝ is itself a locally compact abelian group, called the Pontryagin dual of G. For a function f in L1(G), its Fourier transform is defined by f ^ ( ξ ) = ∫ G ξ ( x ) f ( x ) d μ for any ξ ∈ G ^ .
{\displaystyle {\hat {f}}(\xi )=\int _{G}\xi (x)f(x)\,d\mu \quad {\text{for any }}\xi \in {\hat {G}}.} The Riemann–Lebesgue lemma holds in this case; f̂(ξ) is a function vanishing at infinity on Ĝ. The Fourier transform on T = R/Z is an example; here T is a locally compact abelian group, and the Haar measure μ on T can be thought of as the Lebesgue measure on [0,1). Consider the representation of T on the complex plane C that is a 1-dimensional complex vector space. There is a family of representations (which are irreducible since C is 1-dim) { e k : T → G L 1 ( C ) = C ∗ ∣ k ∈ Z } {\displaystyle \{e_{k}:T\rightarrow GL_{1}(C)=C^{*}\mid k\in Z\}} where e k ( x ) = e i 2 π k x {\displaystyle e_{k}(x)=e^{i2\pi kx}} for x ∈ T {\displaystyle x\in T} . The character of such a representation, that is the trace of e k ( x ) {\displaystyle e_{k}(x)} for each x ∈ T {\displaystyle x\in T} and k ∈ Z {\displaystyle k\in Z} , is e i 2 π k x {\displaystyle e^{i2\pi kx}} itself. In the case of a finite group, the character table of the group G consists of rows of vectors such that each row is the character of one irreducible representation of G, and these vectors form an orthonormal basis of the space of class functions that map from G to C by Schur's lemma. Now the group T is no longer finite but still compact, and it preserves the orthonormality of the character table. Each row of the table is the function e k ( x ) {\displaystyle e_{k}(x)} of x ∈ T , {\displaystyle x\in T,} and the inner product between two class functions (all functions being class functions since T is abelian) f , g ∈ L 2 ( T , d μ ) {\displaystyle f,g\in L^{2}(T,d\mu )} is defined as ⟨ f , g ⟩ = 1 | T | ∫ [ 0 , 1 ) f ( y ) g ¯ ( y ) d μ ( y ) {\textstyle \langle f,g\rangle ={\frac {1}{|T|}}\int _{[0,1)}f(y){\overline {g}}(y)d\mu (y)} with the normalizing factor | T | = 1 {\displaystyle |T|=1} . The sequence { e k ∣ k ∈ Z } {\displaystyle \{e_{k}\mid k\in Z\}} is an orthonormal basis of the space of class functions L 2 ( T , d μ ) {\displaystyle L^{2}(T,d\mu )} . For any representation V of a finite group G, χ v {\displaystyle \chi _{v}} can be expressed as the sum ∑ i ⟨ χ v , χ v i ⟩ χ v i {\textstyle \sum _{i}\left\langle \chi _{v},\chi _{v_{i}}\right\rangle \chi _{v_{i}}} ( V i {\displaystyle V_{i}} are the irreps of G), such that ⟨ χ v , χ v i ⟩ = 1 | G | ∑ g ∈ G χ v ( g ) χ ¯ v i ( g ) {\textstyle \left\langle \chi _{v},\chi _{v_{i}}\right\rangle ={\frac {1}{|G|}}\sum _{g\in G}\chi _{v}(g){\overline {\chi }}_{v_{i}}(g)} . Similarly for G = T {\displaystyle G=T} and f ∈ L 2 ( T , d μ ) {\displaystyle f\in L^{2}(T,d\mu )} , f ( x ) = ∑ k ∈ Z f ^ ( k ) e k {\textstyle f(x)=\sum _{k\in Z}{\hat {f}}(k)e_{k}} . The Pontryagin dual T ^ {\displaystyle {\hat {T}}} is { e k } ( k ∈ Z ) {\displaystyle \{e_{k}\}(k\in Z)} and for f ∈ L 2 ( T , d μ ) {\displaystyle f\in L^{2}(T,d\mu )} , f ^ ( k ) = 1 | T | ∫ [ 0 , 1 ) f ( y ) e − i 2 π k y d y {\textstyle {\hat {f}}(k)={\frac {1}{|T|}}\int _{[0,1)}f(y)e^{-i2\pi ky}dy} is its Fourier transform for e k ∈ T ^ {\displaystyle e_{k}\in {\hat {T}}} . === Gelfand transform === The Fourier transform is also a special case of Gelfand transform. In this particular context, it is closely related to the Pontryagin duality map defined above. Given an abelian locally compact Hausdorff topological group G, as before we consider space L1(G), defined using a Haar measure. With convolution as multiplication, L1(G) is an abelian Banach algebra.
It also has an involution * given by f ∗ ( g ) = f ( g − 1 ) ¯ . {\displaystyle f^{*}(g)={\overline {f\left(g^{-1}\right)}}.} Taking the completion with respect to the largest possible C*-norm gives its enveloping C*-algebra, called the group C*-algebra C*(G) of G. (Any C*-norm on L1(G) is bounded by the L1 norm, therefore their supremum exists.) Given any abelian C*-algebra A, the Gelfand transform gives an isomorphism between A and C0(A^), where A^ is the set of multiplicative linear functionals, i.e. one-dimensional representations, on A with the weak-* topology. The map is simply given by a ↦ ( φ ↦ φ ( a ) ) {\displaystyle a\mapsto {\bigl (}\varphi \mapsto \varphi (a){\bigr )}} It turns out that the multiplicative linear functionals of C*(G), after suitable identification, are exactly the characters of G, and the Gelfand transform, when restricted to the dense subset L1(G), is the Fourier–Pontryagin transform. === Compact non-abelian groups === The Fourier transform can also be defined for functions on a non-abelian group, provided that the group is compact. Removing the assumption that the underlying group is abelian, irreducible unitary representations need not always be one-dimensional. This means the Fourier transform on a non-abelian group takes values as Hilbert space operators. The Fourier transform on compact groups is a major tool in representation theory and non-commutative harmonic analysis. Let G be a compact Hausdorff topological group. Let Σ denote the collection of all isomorphism classes of finite-dimensional irreducible unitary representations, along with a definite choice of representation U(σ) on the Hilbert space Hσ of finite dimension dσ for each σ ∈ Σ. If μ is a finite Borel measure on G, then the Fourier–Stieltjes transform of μ is the operator on Hσ defined by ⟨ μ ^ ξ , η ⟩ H σ = ∫ G ⟨ U ¯ g ( σ ) ξ , η ⟩ d μ ( g ) {\displaystyle \left\langle {\hat {\mu }}\xi ,\eta \right\rangle _{H_{\sigma }}=\int _{G}\left\langle {\overline {U}}_{g}^{(\sigma )}\xi ,\eta \right\rangle \,d\mu (g)} where Ū(σ) is the complex-conjugate representation of U(σ) acting on Hσ. If μ is absolutely continuous with respect to the left-invariant probability measure λ on G, represented as d μ = f d λ {\displaystyle d\mu =f\,d\lambda } for some f ∈ L1(λ), one identifies the Fourier transform of f with the Fourier–Stieltjes transform of μ. The mapping μ ↦ μ ^ {\displaystyle \mu \mapsto {\hat {\mu }}} defines an isomorphism between the Banach space M(G) of finite Borel measures (see rca space) and a closed subspace of the Banach space C∞(Σ) consisting of all sequences E = (Eσ) indexed by Σ of (bounded) linear operators Eσ : Hσ → Hσ for which the norm ‖ E ‖ = sup σ ∈ Σ ‖ E σ ‖ {\displaystyle \|E\|=\sup _{\sigma \in \Sigma }\left\|E_{\sigma }\right\|} is finite. The "convolution theorem" asserts that, furthermore, this isomorphism of Banach spaces is in fact an isometric isomorphism of C*-algebras into a subspace of C∞(Σ). Multiplication on M(G) is given by convolution of measures and the involution * defined by f ∗ ( g ) = f ( g − 1 ) ¯ , {\displaystyle f^{*}(g)={\overline {f\left(g^{-1}\right)}},} and C∞(Σ) has a natural C*-algebra structure as Hilbert space operators.
The Peter–Weyl theorem holds, and a version of the Fourier inversion formula (Plancherel's theorem) follows: if f ∈ L2(G), then f ( g ) = ∑ σ ∈ Σ d σ tr ⁡ ( f ^ ( σ ) U g ( σ ) ) {\displaystyle f(g)=\sum _{\sigma \in \Sigma }d_{\sigma }\operatorname {tr} \left({\hat {f}}(\sigma )U_{g}^{(\sigma )}\right)} where the summation is understood as convergent in the L2 sense. The generalization of the Fourier transform to the noncommutative situation has also in part contributed to the development of noncommutative geometry. In this context, a categorical generalization of the Fourier transform to noncommutative groups is Tannaka–Krein duality, which replaces the group of characters with the category of representations. However, this loses the connection with harmonic functions. == Alternatives == In signal processing terms, a function (of time) is a representation of a signal with perfect time resolution, but no frequency information, while the Fourier transform has perfect frequency resolution, but no time information: the magnitude of the Fourier transform at a point is how much frequency content there is, but location is only given by phase (argument of the Fourier transform at a point), and standing waves are not localized in time – a sine wave continues out to infinity, without decaying. This limits the usefulness of the Fourier transform for analyzing signals that are localized in time, notably transients, or any signal of finite extent. As alternatives to the Fourier transform, in time–frequency analysis, one uses time–frequency transforms or time–frequency distributions to represent signals in a form that has some time information and some frequency information – by the uncertainty principle, there is a trade-off between these. These can be generalizations of the Fourier transform, such as the short-time Fourier transform, fractional Fourier transform, Synchrosqueezing Fourier transform, or other functions to represent signals, as in wavelet transforms and chirplet transforms, with the wavelet analog of the (continuous) Fourier transform being the continuous wavelet transform. == Example == The following figures provide a visual illustration of how the Fourier transform's integral measures whether a frequency is present in a particular function. The first image depicts the function f ( t ) = cos ⁡ ( 2 π 3 t ) e − π t 2 , {\displaystyle f(t)=\cos(2\pi \ 3t)\ e^{-\pi t^{2}},} which is a 3 Hz cosine wave (the first term) shaped by a Gaussian envelope function (the second term) that smoothly turns the wave on and off. The next 2 images show the product f ( t ) e − i 2 π 3 t , {\displaystyle f(t)e^{-i2\pi 3t},} which must be integrated to calculate the Fourier transform at +3 Hz. The real part of the integrand has a non-negative average value, because the alternating signs of f ( t ) {\displaystyle f(t)} and Re ⁡ ( e − i 2 π 3 t ) {\displaystyle \operatorname {Re} (e^{-i2\pi 3t})} oscillate at the same rate and in phase, whereas f ( t ) {\displaystyle f(t)} and Im ⁡ ( e − i 2 π 3 t ) {\displaystyle \operatorname {Im} (e^{-i2\pi 3t})} oscillate at the same rate but with orthogonal phase. The absolute value of the Fourier transform at +3 Hz is 0.5, which is relatively large. When added to the Fourier transform at -3 Hz (which is identical because we started with a real signal), we find that the amplitude of the 3 Hz frequency component is 1. 
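The numbers quoted here can be reproduced directly from the definition. A minimal sketch (assuming SciPy; truncating the integration limits to [−6, 6] is a numerical convenience justified by the Gaussian envelope) evaluates the transform of f(t) at +3 Hz and at 5 Hz:

import numpy as np
from scipy.integrate import quad

f = lambda t: np.cos(2 * np.pi * 3 * t) * np.exp(-np.pi * t**2)

def ft(xi):
    # real and imaginary parts of the Fourier transform at frequency xi
    re, _ = quad(lambda t: f(t) * np.cos(2 * np.pi * xi * t), -6, 6, limit=200)
    im, _ = quad(lambda t: -f(t) * np.sin(2 * np.pi * xi * t), -6, 6, limit=200)
    return complex(re, im)

print(abs(ft(3.0)))   # ~0.5: a large response, the 3 Hz component is present
print(abs(ft(5.0)))   # ~2e-6: essentially no 5 Hz component

The near-zero value at 5 Hz anticipates the cancellation described next.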
However, when you try to measure a frequency that is not present, both the real and imaginary components of the integral vary rapidly between positive and negative values. For instance, the red curve is looking for 5 Hz. The absolute value of its integral is nearly zero, indicating that almost no 5 Hz component was in the signal. The general situation is usually more complicated than this, but heuristically this is how the Fourier transform measures how much of an individual frequency is present in a function f ( t ) . {\displaystyle f(t).} To reinforce an earlier point, the reason for the response at ξ = − 3 {\displaystyle \xi =-3} Hz is that cos ⁡ ( 2 π 3 t ) {\displaystyle \cos(2\pi 3t)} and cos ⁡ ( 2 π ( − 3 ) t ) {\displaystyle \cos(2\pi (-3)t)} are indistinguishable. The transform of e i 2 π 3 t ⋅ e − π t 2 {\displaystyle e^{i2\pi 3t}\cdot e^{-\pi t^{2}}} would have just one response, whose amplitude is the integral of the smooth envelope: e − π t 2 , {\displaystyle e^{-\pi t^{2}},} whereas Re ⁡ ( f ( t ) ⋅ e − i 2 π 3 t ) {\displaystyle \operatorname {Re} (f(t)\cdot e^{-i2\pi 3t})} is e − π t 2 ( 1 + cos ⁡ ( 2 π 6 t ) ) / 2. {\displaystyle e^{-\pi t^{2}}(1+\cos(2\pi 6t))/2.} == Applications == Linear operations performed in one domain (time or frequency) have corresponding operations in the other domain, which are sometimes easier to perform. The operation of differentiation in the time domain corresponds to multiplication by the frequency, so some differential equations are easier to analyze in the frequency domain. Also, convolution in the time domain corresponds to ordinary multiplication in the frequency domain (see Convolution theorem). After performing the desired operations, the result can be transformed back to the time domain. Harmonic analysis is the systematic study of the relationship between the frequency and time domains, including the kinds of functions or operations that are "simpler" in one or the other, and has deep connections to many areas of modern mathematics. === Analysis of differential equations === Perhaps the most important use of the Fourier transformation is to solve partial differential equations. Many of the equations of the mathematical physics of the nineteenth century can be treated this way. Fourier studied the heat equation, which in one dimension and in dimensionless units is ∂ 2 y ( x , t ) ∂ x 2 = ∂ y ( x , t ) ∂ t . {\displaystyle {\frac {\partial ^{2}y(x,t)}{\partial x^{2}}}={\frac {\partial y(x,t)}{\partial t}}.} The example we will give, a slightly more difficult one, is the wave equation in one dimension, ∂ 2 y ( x , t ) ∂ x 2 = ∂ 2 y ( x , t ) ∂ t 2 . {\displaystyle {\frac {\partial ^{2}y(x,t)}{\partial x^{2}}}={\frac {\partial ^{2}y(x,t)}{\partial t^{2}}}.} As usual, the problem is not to find a solution: there are infinitely many. The problem is that of the so-called "boundary problem": find a solution which satisfies the "boundary conditions" y ( x , 0 ) = f ( x ) , ∂ y ( x , 0 ) ∂ t = g ( x ) . {\displaystyle y(x,0)=f(x),\qquad {\frac {\partial y(x,0)}{\partial t}}=g(x).} Here, f and g are given functions. For the heat equation, only one boundary condition can be required (usually the first one). But for the wave equation, there are still infinitely many solutions y which satisfy the first boundary condition. However, when one imposes both conditions, there is only one possible solution. It is easier to find the Fourier transform ŷ of the solution than to find the solution directly.
This is because the Fourier transformation takes differentiation into multiplication by the Fourier-dual variable, and so a partial differential equation applied to the original function is transformed into multiplication by polynomial functions of the dual variables applied to the transformed function. After ŷ is determined, we can apply the inverse Fourier transformation to find y. Fourier's method is as follows. First, note that any function of the forms cos ⁡ ( 2 π ξ ( x ± t ) ) or sin ⁡ ( 2 π ξ ( x ± t ) ) {\displaystyle \cos {\bigl (}2\pi \xi (x\pm t){\bigr )}{\text{ or }}\sin {\bigl (}2\pi \xi (x\pm t){\bigr )}} satisfies the wave equation. These are called the elementary solutions. Second, note that therefore any integral y ( x , t ) = ∫ 0 ∞ d ξ [ a + ( ξ ) cos ⁡ ( 2 π ξ ( x + t ) ) + a − ( ξ ) cos ⁡ ( 2 π ξ ( x − t ) ) + b + ( ξ ) sin ⁡ ( 2 π ξ ( x + t ) ) + b − ( ξ ) sin ⁡ ( 2 π ξ ( x − t ) ) ] {\displaystyle {\begin{aligned}y(x,t)=\int _{0}^{\infty }d\xi {\Bigl [}&a_{+}(\xi )\cos {\bigl (}2\pi \xi (x+t){\bigr )}+a_{-}(\xi )\cos {\bigl (}2\pi \xi (x-t){\bigr )}+{}\\&b_{+}(\xi )\sin {\bigl (}2\pi \xi (x+t){\bigr )}+b_{-}(\xi )\sin \left(2\pi \xi (x-t)\right){\Bigr ]}\end{aligned}}} satisfies the wave equation for arbitrary a+, a−, b+, b−. This integral may be interpreted as a continuous linear combination of solutions for the linear equation. Now this resembles the formula for the Fourier synthesis of a function. In fact, this is the real inverse Fourier transform of a± and b± in the variable x. The third step is to examine how to find the specific unknown coefficient functions a± and b± that will lead to y satisfying the boundary conditions. We are interested in the values of these solutions at t = 0. So we will set t = 0. Assuming that the conditions needed for Fourier inversion are satisfied, we can then find the Fourier sine and cosine transforms (in the variable x) of both sides and obtain 2 ∫ − ∞ ∞ y ( x , 0 ) cos ⁡ ( 2 π ξ x ) d x = a + + a − {\displaystyle 2\int _{-\infty }^{\infty }y(x,0)\cos(2\pi \xi x)\,dx=a_{+}+a_{-}} and 2 ∫ − ∞ ∞ y ( x , 0 ) sin ⁡ ( 2 π ξ x ) d x = b + + b − . {\displaystyle 2\int _{-\infty }^{\infty }y(x,0)\sin(2\pi \xi x)\,dx=b_{+}+b_{-}.} Similarly, taking the derivative of y with respect to t and then applying the Fourier sine and cosine transformations yields 2 ∫ − ∞ ∞ ∂ y ( x , 0 ) ∂ t sin ⁡ ( 2 π ξ x ) d x = ( 2 π ξ ) ( − a + + a − ) {\displaystyle 2\int _{-\infty }^{\infty }{\frac {\partial y(x,0)}{\partial t}}\sin(2\pi \xi x)\,dx=(2\pi \xi )\left(-a_{+}+a_{-}\right)} and 2 ∫ − ∞ ∞ ∂ y ( x , 0 ) ∂ t cos ⁡ ( 2 π ξ x ) d x = ( 2 π ξ ) ( b + − b − ) . {\displaystyle 2\int _{-\infty }^{\infty }{\frac {\partial y(x,0)}{\partial t}}\cos(2\pi \xi x)\,dx=(2\pi \xi )\left(b_{+}-b_{-}\right).} These are four linear equations for the four unknowns a± and b±, in terms of the Fourier sine and cosine transforms of the boundary conditions, which are easily solved by elementary algebra, provided that these transforms can be found. In summary, we chose a set of elementary solutions, parametrized by ξ, of which the general solution would be a (continuous) linear combination in the form of an integral over the parameter ξ. But this integral was in the form of a Fourier integral. The next step was to express the boundary conditions in terms of these integrals, and set them equal to the given functions f and g. But these expressions also took the form of a Fourier integral because of the properties of the Fourier transform of a derivative.
The last step was to exploit Fourier inversion by applying the Fourier transformation to both sides, thus obtaining expressions for the coefficient functions a± and b± in terms of the given boundary conditions f and g. From a higher point of view, Fourier's procedure can be reformulated more conceptually. Since there are two variables, we will use the Fourier transformation in both x and t rather than operate as Fourier did, who only transformed in the spatial variables. Note that ŷ must be considered in the sense of a distribution since y(x, t) is not going to be L1: as a wave, it will persist through time and thus is not a transient phenomenon. But it will be bounded and so its Fourier transform can be defined as a distribution. The operational properties of the Fourier transformation that are relevant to this equation are that it takes differentiation in x to multiplication by i2πξ and differentiation with respect to t to multiplication by i2πf where f is the frequency. Then the wave equation becomes an algebraic equation in ŷ: ξ 2 y ^ ( ξ , f ) = f 2 y ^ ( ξ , f ) . {\displaystyle \xi ^{2}{\hat {y}}(\xi ,f)=f^{2}{\hat {y}}(\xi ,f).} This is equivalent to requiring ŷ(ξ, f) = 0 unless ξ = ±f. Right away, this explains why the choice of elementary solutions we made earlier worked so well: obviously ŷ = δ(ξ ± f) will be solutions. Applying Fourier inversion to these delta functions, we obtain the elementary solutions we picked earlier. But from the higher point of view, one does not pick elementary solutions, but rather considers the space of all distributions which are supported on the (degenerate) conic ξ2 − f2 = 0. We may as well consider the distributions supported on the conic that are given by distributions of one variable on the line ξ = f plus distributions on the line ξ = −f as follows: if ϕ is any test function, ∬ y ^ ϕ ( ξ , f ) d ξ d f = ∫ s + ϕ ( ξ , ξ ) d ξ + ∫ s − ϕ ( ξ , − ξ ) d ξ , {\displaystyle \iint {\hat {y}}\phi (\xi ,f)\,d\xi \,df=\int s_{+}\phi (\xi ,\xi )\,d\xi +\int s_{-}\phi (\xi ,-\xi )\,d\xi ,} where s+ and s− are distributions of one variable. Then Fourier inversion gives, for the boundary conditions, something very similar to what we had more concretely above (put ϕ(ξ, f) = ei2π(xξ+tf), which is clearly of polynomial growth): y ( x , 0 ) = ∫ { s + ( ξ ) + s − ( ξ ) } e i 2 π ξ x d ξ {\displaystyle y(x,0)=\int {\bigl \{}s_{+}(\xi )+s_{-}(\xi ){\bigr \}}e^{i2\pi \xi x}\,d\xi } and ∂ y ( x , 0 ) ∂ t = ∫ { s + ( ξ ) − s − ( ξ ) } i 2 π ξ e i 2 π ξ x d ξ . {\displaystyle {\frac {\partial y(x,0)}{\partial t}}=\int {\bigl \{}s_{+}(\xi )-s_{-}(\xi ){\bigr \}}i2\pi \xi e^{i2\pi \xi x}\,d\xi .} Now, as before, applying the one-variable Fourier transformation in the variable x to these functions of x yields two equations in the two unknown distributions s± (which can be taken to be ordinary functions if the boundary conditions are L1 or L2). From a calculational point of view, the drawback of course is that one must first calculate the Fourier transforms of the boundary conditions, then assemble the solution from these, and then calculate an inverse Fourier transform. Closed form formulas are rare, except when there is some geometric symmetry that can be exploited, and the numerical calculations are difficult because of the oscillatory nature of the integrals, which makes convergence slow and hard to estimate. For practical calculations, other methods are often used.
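For readers who want to experiment, the following is a minimal numerical sketch of Fourier's procedure, under the simplifying assumption of a periodic domain so that the FFT can stand in for the Fourier integral. Each mode of the wave equation satisfies the ordinary differential equation ŷtt = −(2πξ)²ŷ, which is solved exactly in the frequency domain and then inverted; all names and grid parameters are illustrative choices, not part of the original treatment.

```python
import numpy as np

# Periodic grid; the FFT stands in for the Fourier integral on a bounded domain.
N, L = 1024, 20.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
xi = np.fft.fftfreq(N, d=L / N)          # ordinary frequencies (cycles per unit length)

f = np.exp(-np.pi * x**2)                # boundary condition y(x, 0)
g = np.zeros_like(x)                     # boundary condition dy/dt(x, 0)

f_hat = np.fft.fft(f)
g_hat = np.fft.fft(g)

def y(t):
    # Each Fourier mode obeys y_hat'' = -(2 pi xi)^2 y_hat, solved exactly
    # from its initial value f_hat and initial derivative g_hat.
    w = 2 * np.pi * xi
    with np.errstate(divide="ignore", invalid="ignore"):
        y_hat = f_hat * np.cos(w * t) + np.where(w != 0, g_hat * np.sin(w * t) / w, g_hat * t)
    return np.fft.ifft(y_hat).real

# With g = 0 the result should match d'Alembert's formula (f(x+t) + f(x-t)) / 2.
t = 3.0
exact = 0.5 * (np.exp(-np.pi * (x + t) ** 2) + np.exp(-np.pi * (x - t) ** 2))
print(f"max deviation from d'Alembert at t = {t}: {np.max(np.abs(y(t) - exact)):.2e}")
```

With a nonzero g the same two lines of frequency-domain algebra implement the sine- and cosine-transform bookkeeping described above, one Fourier mode at a time.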
The twentieth century has seen the extension of these methods to all linear partial differential equations with polynomial coefficients, and by extending the notion of Fourier transformation to include Fourier integral operators, some non-linear equations as well.
=== Fourier-transform spectroscopy ===
The Fourier transform is also used in nuclear magnetic resonance (NMR) and in other kinds of spectroscopy, e.g. infrared (FTIR). In NMR an exponentially shaped free induction decay (FID) signal is acquired in the time domain and Fourier-transformed to a Lorentzian line-shape in the frequency domain. The Fourier transform is also used in magnetic resonance imaging (MRI) and mass spectrometry.
=== Quantum mechanics ===
The Fourier transform is useful in quantum mechanics in at least two different ways. To begin with, the basic conceptual structure of quantum mechanics postulates the existence of pairs of complementary variables, connected by the Heisenberg uncertainty principle. For example, in one dimension, the spatial variable q of, say, a particle, can only be measured by the quantum mechanical "position operator" at the cost of losing information about the momentum p of the particle. Therefore, the physical state of the particle can either be described by a function, called "the wave function", of q or by a function of p but not by a function of both variables. The variable p is called the conjugate variable to q. In classical mechanics, the physical state of a particle (existing in one dimension, for simplicity of exposition) would be given by assigning definite values to both p and q simultaneously. Thus, the set of all possible physical states is the two-dimensional real vector space with a p-axis and a q-axis called the phase space. In contrast, quantum mechanics chooses a polarisation of this space in the sense that it picks a subspace of one-half the dimension, for example, the q-axis alone, but instead of considering only points, takes the set of all complex-valued "wave functions" on this axis. Nevertheless, choosing the p-axis is an equally valid polarisation, yielding a different representation of the set of possible physical states of the particle. Both representations of the wavefunction are related by a Fourier transform, such that ϕ ( p ) = ∫ d q ψ ( q ) e − i p q / h , {\displaystyle \phi (p)=\int dq\,\psi (q)e^{-ipq/h},} or, equivalently, ψ ( q ) = ∫ d p ϕ ( p ) e i p q / h . {\displaystyle \psi (q)=\int dp\,\phi (p)e^{ipq/h}.} Physically realisable states are L2, and so by the Plancherel theorem, their Fourier transforms are also L2. (Note that since q is in units of distance and p is in units of momentum, the presence of the Planck constant in the exponent makes the exponent dimensionless, as it should be.) Therefore, the Fourier transform can be used to pass from one way of representing the state of the particle, by a wave function of position, to another way of representing the state of the particle: by a wave function of momentum. Infinitely many different polarisations are possible, and all are equally valid. Being able to transform states from one representation to another by the Fourier transform is not only convenient but also the underlying reason for the Heisenberg uncertainty principle. The other use of the Fourier transform in both quantum mechanics and quantum field theory is to solve the applicable wave equation.
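Before turning to wave equations, here is a numerical sketch of the position–momentum transform pair just described, in natural units with ħ = 1 and assuming the unitary normalization (2πħ)^(−1/2), which the displayed formulas leave implicit. A Gaussian wave function saturates the Heisenberg bound:

```python
import numpy as np

hbar = 1.0      # natural units; the text's exponent ipq/h corresponds to h = 2*pi*hbar
sigma = 0.7     # position-space width, an arbitrary choice for the demo

q = np.linspace(-20, 20, 2001)
dq = q[1] - q[0]
psi = (2 * np.pi * sigma**2) ** -0.25 * np.exp(-q**2 / (4 * sigma**2))

# phi(p) = (2 pi hbar)^(-1/2) * integral psi(q) exp(-i p q / hbar) dq,
# approximated by a Riemann sum at each momentum value.
p = np.linspace(-10, 10, 1001)
phi = (np.exp(-1j * np.outer(p, q) / hbar) @ psi) * dq / np.sqrt(2 * np.pi * hbar)

def spread(x, density):
    # Standard deviation of a (possibly unnormalized) probability density.
    dx = x[1] - x[0]
    norm = np.sum(density) * dx
    mean = np.sum(x * density) * dx / norm
    return np.sqrt(np.sum((x - mean) ** 2 * density) * dx / norm)

dq_u = spread(q, np.abs(psi) ** 2)   # position uncertainty
dp_u = spread(p, np.abs(phi) ** 2)   # momentum uncertainty
print(f"dq*dp = {dq_u * dp_u:.4f}   (Heisenberg bound hbar/2 = {hbar / 2})")
```

Any other L2 wave function gives a product strictly larger than ħ/2.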
In non-relativistic quantum mechanics, the Schrödinger equation for a time-varying wave function in one dimension, not subject to external forces, is − ∂ 2 ∂ x 2 ψ ( x , t ) = i h 2 π ∂ ∂ t ψ ( x , t ) . {\displaystyle -{\frac {\partial ^{2}}{\partial x^{2}}}\psi (x,t)=i{\frac {h}{2\pi }}{\frac {\partial }{\partial t}}\psi (x,t).} This is the same as the heat equation except for the presence of the imaginary unit i. Fourier methods can be used to solve this equation. In the presence of a potential, given by the potential energy function V(x), the equation becomes − ∂ 2 ∂ x 2 ψ ( x , t ) + V ( x ) ψ ( x , t ) = i h 2 π ∂ ∂ t ψ ( x , t ) . {\displaystyle -{\frac {\partial ^{2}}{\partial x^{2}}}\psi (x,t)+V(x)\psi (x,t)=i{\frac {h}{2\pi }}{\frac {\partial }{\partial t}}\psi (x,t).} The "elementary solutions", as we referred to them above, are the so-called "stationary states" of the particle, and Fourier's algorithm, as described above, can still be used to solve the boundary value problem of the future evolution of ψ given its values for t = 0. Neither of these approaches is of much practical use in quantum mechanics. Boundary value problems and the time-evolution of the wave function are not of much practical interest: it is the stationary states that are most important. In relativistic quantum mechanics, the Schrödinger equation becomes a wave equation as was usual in classical physics, except that complex-valued waves are considered. A simple example, in the absence of interactions with other particles or fields, is the free one-dimensional Klein–Gordon–Schrödinger–Fock equation, this time in dimensionless units, ( ∂ 2 ∂ x 2 + 1 ) ψ ( x , t ) = ∂ 2 ∂ t 2 ψ ( x , t ) . {\displaystyle \left({\frac {\partial ^{2}}{\partial x^{2}}}+1\right)\psi (x,t)={\frac {\partial ^{2}}{\partial t^{2}}}\psi (x,t).} This is, from the mathematical point of view, the same as the wave equation of classical physics solved above (but with a complex-valued wave, which makes no difference in the methods). This is of great use in quantum field theory: each separate Fourier component of a wave can be treated as a separate harmonic oscillator and then quantized, a procedure known as "second quantization". Fourier methods have been adapted to also deal with non-trivial interactions. Finally, the number operator of the quantum harmonic oscillator can be interpreted, for example via the Mehler kernel, as the generator of the Fourier transform F {\displaystyle {\mathcal {F}}} .
=== Signal processing ===
The Fourier transform is used for the spectral analysis of time-series. The subject of statistical signal processing does not, however, usually apply the Fourier transformation to the signal itself. Even if a real signal is indeed transient, it has been found in practice advisable to model a signal by a function (or, alternatively, a stochastic process) which is stationary in the sense that its characteristic properties are constant over all time. The Fourier transform of such a function does not exist in the usual sense, and it has been found more useful for the analysis of signals to instead take the Fourier transform of its autocorrelation function. The autocorrelation function R of a function f is defined by R f ( τ ) = lim T → ∞ 1 2 T ∫ − T T f ( t ) f ( t + τ ) d t . {\displaystyle R_{f}(\tau )=\lim _{T\rightarrow \infty }{\frac {1}{2T}}\int _{-T}^{T}f(t)f(t+\tau )\,dt.} This function is a function of the time-lag τ elapsing between the values of f to be correlated.
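A back-of-the-envelope sketch of this definition, with the infinite-time limit replaced by a finite time average; the test signal and record length are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# A stationary test signal: unit white noise smoothed by a 4-point moving average.
f = np.convolve(rng.standard_normal(n), np.ones(4) / 4, mode="same")

def autocorr(f, lag):
    # R_f(tau) approximated by a time average over the available record.
    lag = abs(lag)
    return np.mean(f[: len(f) - lag] * f[lag:])

for tau in range(6):
    # Exact value for this signal: (4 - tau) / 16 for tau <= 3, and 0 beyond.
    print(f"tau = {tau}: R = {autocorr(f, tau):+.4f}")
```

Fourier-transforming these estimates over a grid of lags yields the power spectral density introduced below.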
For most functions f that occur in practice, R is a bounded even function of the time-lag τ and for typical noisy signals it turns out to be uniformly continuous with a maximum at τ = 0. The autocorrelation function, more properly called the autocovariance function unless it is normalized in some appropriate fashion, measures the strength of the correlation between the values of f separated by a time lag. This is a way of searching for the correlation of f with its own past. It is useful even for other statistical tasks besides the analysis of signals. For example, if f(t) represents the temperature at time t, one expects a strong correlation with the temperature at a time lag of 24 hours. It possesses a Fourier transform, P f ( ξ ) = ∫ − ∞ ∞ R f ( τ ) e − i 2 π ξ τ d τ . {\displaystyle P_{f}(\xi )=\int _{-\infty }^{\infty }R_{f}(\tau )e^{-i2\pi \xi \tau }\,d\tau .} This Fourier transform is called the power spectral density function of f. (Unless all periodic components are first filtered out from f, this integral will diverge, but it is easy to filter out such periodicities.) The power spectrum, as indicated by this density function P, measures the amount of variance contributed to the data by the frequency ξ. In electrical signals, the variance is proportional to the average power (energy per unit time), and so the power spectrum describes how much the different frequencies contribute to the average power of the signal. This process is called the spectral analysis of time-series and is analogous to the usual analysis of variance (ANOVA) of data that is not a time-series. Knowledge of which frequencies are "important" in this sense is crucial for the proper design of filters and for the proper evaluation of measuring apparatuses. It can also be useful for the scientific analysis of the phenomena responsible for producing the data. The power spectrum of a signal can also be approximately measured directly by measuring the average power that remains in a signal after all the frequencies outside a narrow band have been filtered out. Spectral analysis is carried out for visual signals as well. The power spectrum ignores all phase relations, which is good enough for many purposes, but for video signals other types of spectral analysis must also be employed, still using the Fourier transform as a tool.
== Other notations ==
Other common notations for f ^ ( ξ ) {\displaystyle {\hat {f}}(\xi )} include: f ~ ( ξ ) , F ( ξ ) , F ( f ) ( ξ ) , ( F f ) ( ξ ) , F ( f ) , F { f } , F ( f ( t ) ) , F { f ( t ) } . {\displaystyle {\tilde {f}}(\xi ),\ F(\xi ),\ {\mathcal {F}}\left(f\right)(\xi ),\ \left({\mathcal {F}}f\right)(\xi ),\ {\mathcal {F}}(f),\ {\mathcal {F}}\{f\},\ {\mathcal {F}}{\bigl (}f(t){\bigr )},\ {\mathcal {F}}{\bigl \{}f(t){\bigr \}}.} In the sciences and engineering it is also common to make substitutions like these: ξ → f , x → t , f → x , f ^ → X . {\displaystyle \xi \rightarrow f,\quad x\rightarrow t,\quad f\rightarrow x,\quad {\hat {f}}\rightarrow X.} So the transform pair f ( x ) ⟺ F f ^ ( ξ ) {\displaystyle f(x)\ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ {\hat {f}}(\xi )} can become x ( t ) ⟺ F X ( f ) {\displaystyle x(t)\ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ X(f)}. A disadvantage of the capital letter notation is when expressing a transform such as f ⋅ g ^ {\displaystyle {\widehat {f\cdot g}}} or f ′ ^ , {\displaystyle {\widehat {f'}},} which become the more awkward F { f ⋅ g } {\displaystyle {\mathcal {F}}\{f\cdot g\}} and F { f ′ } .
{\displaystyle {\mathcal {F}}\{f'\}.} In some contexts such as particle physics, the same symbol f {\displaystyle f} may be used both for a function and for its Fourier transform, with the two distinguished only by their argument: f ( k 1 + k 2 ) {\displaystyle f(k_{1}+k_{2})} would refer to the Fourier transform because of the momentum argument, while f ( x 0 + π r → ) {\displaystyle f(x_{0}+\pi {\vec {r}})} would refer to the original function because of the positional argument. Although tildes may be used as in f ~ {\displaystyle {\tilde {f}}} to indicate Fourier transforms, tildes may also be used to indicate a modification of a quantity with a more Lorentz invariant form, such as d k ~ = d k ( 2 π ) 3 2 ω {\displaystyle {\tilde {dk}}={\frac {dk}{(2\pi )^{3}2\omega }}} , so care must be taken. Similarly, f ^ {\displaystyle {\hat {f}}} often denotes the Hilbert transform of f {\displaystyle f} . The interpretation of the complex function f̂(ξ) may be aided by expressing it in polar coordinate form f ^ ( ξ ) = A ( ξ ) e i φ ( ξ ) {\displaystyle {\hat {f}}(\xi )=A(\xi )e^{i\varphi (\xi )}} in terms of the two real functions A(ξ) and φ(ξ) where: A ( ξ ) = | f ^ ( ξ ) | , {\displaystyle A(\xi )=\left|{\hat {f}}(\xi )\right|,} is the amplitude and φ ( ξ ) = arg ⁡ ( f ^ ( ξ ) ) , {\displaystyle \varphi (\xi )=\arg \left({\hat {f}}(\xi )\right),} is the phase (see arg function). Then the inverse transform can be written: f ( x ) = ∫ − ∞ ∞ A ( ξ ) e i ( 2 π ξ x + φ ( ξ ) ) d ξ , {\displaystyle f(x)=\int _{-\infty }^{\infty }A(\xi )\ e^{i{\bigl (}2\pi \xi x+\varphi (\xi ){\bigr )}}\,d\xi ,} which is a recombination of all the frequency components of f(x). Each component is a complex sinusoid of the form e2πixξ whose amplitude is A(ξ) and whose initial phase angle (at x = 0) is φ(ξ). The Fourier transform may be thought of as a mapping on function spaces. This mapping is here denoted F and F(f) is used to denote the Fourier transform of the function f. This mapping is linear, which means that F can also be seen as a linear transformation on the function space and implies that the standard notation in linear algebra of applying a linear transformation to a vector (here the function f) can be used to write F f instead of F(f). Since the result of applying the Fourier transform is again a function, we can be interested in the value of this function evaluated at the value ξ for its variable, and this is denoted either as F f(ξ) or as (F f)(ξ). Notice that in the former case, it is implicitly understood that F is applied first to f and then the resulting function is evaluated at ξ, not the other way around. In mathematics and various applied sciences, it is often necessary to distinguish between a function f and the value of f when its variable equals x, denoted f(x). This means that a notation like F(f(x)) can formally be interpreted as the Fourier transform of the values of f at x. Despite this flaw, the previous notation appears frequently, often when a particular function or a function of a particular variable is to be transformed.
For example, F ( rect ⁡ ( x ) ) = sinc ⁡ ( ξ ) {\displaystyle {\mathcal {F}}{\bigl (}\operatorname {rect} (x){\bigr )}=\operatorname {sinc} (\xi )} is sometimes used to express that the Fourier transform of a rectangular function is a sinc function, or F ( f ( x + x 0 ) ) = F ( f ( x ) ) e i 2 π x 0 ξ {\displaystyle {\mathcal {F}}{\bigl (}f(x+x_{0}){\bigr )}={\mathcal {F}}{\bigl (}f(x){\bigr )}\,e^{i2\pi x_{0}\xi }} is used to express the shift property of the Fourier transform. Notice that the last example is only correct under the assumption that the transformed function is a function of x, not of x0. As discussed above, the characteristic function of a random variable is the same as the Fourier–Stieltjes transform of its distribution measure, but in this context it is typical to take a different convention for the constants. Typically the characteristic function is defined E ( e i t ⋅ X ) = ∫ e i t ⋅ x d μ X ( x ) . {\displaystyle E\left(e^{it\cdot X}\right)=\int e^{it\cdot x}\,d\mu _{X}(x).} As in the case of the "non-unitary angular frequency" convention above, the factor of 2π appears in neither the normalizing constant nor the exponent. Unlike any of the conventions appearing above, this convention takes the opposite sign in the exponent.
== Computation methods ==
The appropriate computation method largely depends on how the original mathematical function is represented and the desired form of the output function. In this section we consider both functions of a continuous variable, f ( x ) , {\displaystyle f(x),} and functions of a discrete variable (i.e. ordered pairs of x {\displaystyle x} and f {\displaystyle f} values). For discrete-valued x , {\displaystyle x,} the transform integral becomes a summation of sinusoids, which is still a continuous function of frequency ( ξ {\displaystyle \xi } or ω {\displaystyle \omega } ). When the sinusoids are harmonically related (i.e. when the x {\displaystyle x} -values are spaced at integer multiples of an interval), the transform is called the discrete-time Fourier transform (DTFT).
=== Discrete Fourier transforms and fast Fourier transforms ===
Sampling the DTFT at equally-spaced values of frequency is the most common modern method of computation. Efficient procedures, depending on the frequency resolution needed, are described at Discrete-time Fourier transform § Sampling the DTFT. The discrete Fourier transform (DFT), used there, is usually computed by a fast Fourier transform (FFT) algorithm.
=== Analytic integration of closed-form functions ===
Tables of closed-form Fourier transforms, such as § Square-integrable functions, one-dimensional and § Table of discrete-time Fourier transforms, are created by mathematically evaluating the Fourier analysis integral (or summation) into another closed-form function of frequency ( ξ {\displaystyle \xi } or ω {\displaystyle \omega } ). When mathematically possible, this provides a transform for a continuum of frequency values. Many computer algebra systems such as Matlab and Mathematica that are capable of symbolic integration are capable of computing Fourier transforms analytically. For example, to compute the Fourier transform of cos(6πt) e−πt2 one might enter the command integrate cos(6*pi*t) exp(-pi*t^2) exp(-i*2*pi*f*t) from -inf to inf into Wolfram Alpha.
=== Numerical integration of closed-form continuous functions ===
Discrete sampling of the Fourier transform can also be done by numerical integration of the definition at each value of frequency for which the transform is desired.
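A hedged sketch of this approach, applied to the same integrand as the Wolfram Alpha command above; for cos(6πt) e−πt2 the transform has the closed form ½(e−π(ξ−3)² + e−π(ξ+3)²), which provides a convenient check. scipy's general-purpose quad routine is used here; production code would typically use quadrature tuned to oscillatory integrands.

```python
import numpy as np
from scipy.integrate import quad

def f(t):
    return np.cos(6 * np.pi * t) * np.exp(-np.pi * t**2)

def ft_at(xi):
    # Integrate the real and imaginary parts of f(t) exp(-i 2 pi xi t) separately.
    re, _ = quad(lambda t: f(t) * np.cos(2 * np.pi * xi * t), -np.inf, np.inf)
    im, _ = quad(lambda t: -f(t) * np.sin(2 * np.pi * xi * t), -np.inf, np.inf)
    return complex(re, im)

for xi in (0.0, 3.0, 5.0):
    exact = 0.5 * (np.exp(-np.pi * (xi - 3) ** 2) + np.exp(-np.pi * (xi + 3) ** 2))
    print(f"xi = {xi}: numeric = {ft_at(xi).real:+.6f}, exact = {exact:+.6f}")
```

The imaginary part vanishes here because the integrand is even; the large response at ξ = 3 and the near-zero response at ξ = 5 mirror the frequency-measurement heuristic discussed earlier.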
The numerical integration approach works on a much broader class of functions than the analytic approach.
=== Numerical integration of a series of ordered pairs ===
If the input function is a series of ordered pairs, numerical integration reduces to just a summation over the set of data pairs. The DTFT is a common subcase of this more general situation.
== Tables of important Fourier transforms ==
The following tables record some closed-form Fourier transforms. For functions f(x) and g(x), denote their Fourier transforms by f̂ and ĝ. Only the three most common conventions are included. It may be useful to notice that entry 105 gives a relationship between the Fourier transform of a function and the original function, which can be seen as relating the Fourier transform and its inverse.
=== Functional relationships, one-dimensional ===
The Fourier transforms in this table may be found in Erdélyi (1954) or Kammler (2000, appendix).
=== Square-integrable functions, one-dimensional ===
The Fourier transforms in this table may be found in Campbell & Foster (1948), Erdélyi (1954), or Kammler (2000, appendix).
=== Distributions, one-dimensional ===
The Fourier transforms in this table may be found in Erdélyi (1954) or Kammler (2000, appendix).
=== Two-dimensional functions ===
=== Formulas for general n-dimensional functions ===
== See also ==
== Notes ==
== Citations ==
== References ==
== External links ==
Media related to Fourier transformation at Wikimedia Commons Encyclopedia of Mathematics Weisstein, Eric W. "Fourier Transform". MathWorld. Fourier Transform in Crystallography
Wikipedia/Fourier_transforms
Land change models (LCMs) describe, project, and explain changes in and the dynamics of land use and land-cover. LCMs are a means of understanding ways that humans change the Earth's surface in the past, present, and future. Land change models are valuable in development policy, helping guide more appropriate decisions for resource management and the natural environment at a variety of scales ranging from a small piece of land to the entire spatial extent. Moreover, developments within land-cover, environmental and socio-economic data (as well as within technological infrastructures) have increased opportunities for land change modeling to help support and influence decisions that affect human-environment systems, as national and international attention increasingly focuses on issues of global climate change and sustainability.
== Importance ==
Changes in land systems have consequences for climate and environmental change on every scale. Therefore, decisions and policies in relation to land systems are very important for responding to these changes and working towards a more sustainable society and planet. Land change models are significant in their ability to help guide land systems to positive societal and environmental outcomes at a time when attention to changes across land systems is increasing. Numerous science and practitioner communities have been able to advance the amount and quality of data in land change modeling in the past few decades. That has influenced the development of methods and technologies for modeling land change. The multitude of land change models that have been developed are significant in their ability to address land system change and useful in various science and practitioner communities. For the science community, land change models are important in their ability to test theories and concepts of land change and its connections to human-environment relationships, as well as to explore how these dynamics will change future land systems without real-world observation. Land change modeling is useful for exploring spatial land systems, uses, and covers. Land change modeling can account for complexity within dynamics of land use and land cover by linking with climatic, ecological, biogeochemical, biogeophysical and socioeconomic models. Additionally, LCMs are able to produce spatially explicit outcomes according to the type and complexity within the land system dynamics within the spatial extent. Many biophysical and socioeconomic variables influence and produce a variety of outcomes in land change modeling.
== Model uncertainty ==
A notable property of all land change models is that they have some irreducible level of uncertainty in the model structure, parameter values, and/or input data. For instance, one uncertainty within land change models results from temporal non-stationarity that exists in land change processes, so the further into the future the model is applied, the more uncertain it is. Other uncertainties within land change models are data and parameter uncertainties within physical principles (i.e., surface typology), which lead to uncertainties in being able to understand and predict physical processes. Furthermore, land change model design is a product of both decision-making and physical processes. Human-induced impact on the socio-economic and ecological environment is important to take into account, as it constantly changes land cover and can add to model uncertainty.
To account for model uncertainty and interpret model outputs more accurately, model diagnosis is used to understand more about the connections between land change models and the actual land system of the spatial extent. The overall importance of model diagnosis with model uncertainty issues is its ability to assess how interacting processes and the landscape are represented, as well as the uncertainty within the landscape and its processes.
== Approaches ==
=== Machine learning and statistical models ===
A machine-learning approach uses land-cover data from the past to try to assess how land will change in the future, and works best with large datasets. There are multiple types of machine-learning and statistical models - a study in western Mexico from 2011 found that results from two outwardly similar models were considerably different, as one used a neural network and the other used a simple weights-of-evidence model.
=== Cellular models ===
A cellular land change model uses maps of suitability for various types of land use, and compares areas that are immediately adjacent to one another to project changes into the future. Variations in the scale of cells in a cellular model can have significant impacts on model outputs.
=== Sector-based and spatially disaggregated economic models ===
Economic models are built on principles of supply and demand. They use mathematical parameters in order to predict what land types will be desired and which will be discarded. These are frequently built for urban areas, such as a 2003 study of the highly dense Pearl River Delta in southern China.
=== Agent-based models ===
Agent-based models try to simulate the behavior of many individuals making independent choices, and then see how those choices affect the landscape as a whole. Agent-based modeling can be complex - for instance, a 2005 study combined an agent-based model with computer-based genetic programming to explore land change in the Yucatan peninsula of Mexico.
=== Hybrid approaches ===
Many models do not limit themselves to one of the approaches above - they may combine several in order to develop a fully comprehensive and accurate model.
== Evaluation ==
=== Purpose ===
Land change models are evaluated to appraise and quantify the performance of a model's predictive power in terms of spatial allocation and quantity of change. Evaluation allows the modeler to appraise a model's performance and to adjust the "model's output, data measurement, and the mapping and modeling of data" for future applications. The purpose of model evaluation is not to develop a singular metric or method to maximize a "correct" outcome, but to develop tools to evaluate and learn from model outputs to produce better models for their specific applications.
=== Methods ===
There are two types of validation in land change modeling: process validation and pattern validation. Process validation compares the match between "the process in the model and the process operating in the real world". Process validation is most commonly used in agent-based modeling whereby the modeler is using the behaviors and decisions to inform the process determining land change in the model. Pattern validation compares model outputs (i.e. predicted change) and observed outputs (i.e. reference change). Three-map analysis is a commonly used method for pattern validation, in which three maps (a reference map at time 1, a reference map at time 2, and a simulated map of time 2) are compared.
This generates a cross-comparison of the three maps where the pixels are classified as one of these five categories:
Hits: reference change is correctly simulated as change
Misses: reference change is simulated incorrectly as persistence
False alarms: persistence in the reference data is simulated incorrectly as change
Correct rejections: persistence in the reference data is correctly simulated as persistence
Wrong hits: reference change is correctly simulated as change, but to the wrong gaining category
Because three-map comparisons include both errors and correctly simulated pixels, the result is a visual expression of both allocation and quantity errors. Single-summary metrics are also used to evaluate LCMs. There are many single summary metrics that modelers have used to evaluate their models and are often utilized to compare models to each other. One such metric is the Figure of Merit (FoM), which uses the hit, miss, and false alarm values generated from a three-map comparison to generate a percentage value that expresses the intersection between reference and simulated change (see the worked sketch below). Single summary metrics can obfuscate important information, but the FoM can be useful especially when the hit, miss and false alarm values are reported as well.
=== Improvements ===
The separation of calibration from validation has been identified as a modeling challenge that should be addressed. This is commonly caused by modelers' use of information from after the first time period. This can cause a map to appear to have a level of accuracy that is much higher than a model's actual predictive power. Additional improvements that have been discussed within the field include characterizing the difference between allocation errors and quantity errors, which can be done through three-map comparisons, as well as including both observed and predicted change in the analysis of land change models. Single summary metrics have been overly relied on in the past, and have varying levels of usefulness when evaluating LCMs. Even the best single summary metrics often leave out important information, and reporting metrics like FoM along with the maps and values that are used to generate them can communicate necessary information that would otherwise be obfuscated.
== Implementation opportunities ==
Scientists use LCMs to build and test theories in land change modeling for a variety of human and environmental dynamics. Land change modeling has a variety of implementation opportunities in many science and practice disciplines, such as in decision-making, policy, and in real-world application in public and private domains. Land change modeling is a key component of land change science, which uses LCMs to assess long-term outcomes for land cover and climate. The science disciplines use LCMs to formalize and test land change theory, and to explore and experiment with different scenarios of land change modeling. The practical disciplines use LCMs to analyze current land change trends and explore future outcomes from policies or actions in order to set appropriate guidelines, limits and principles for policy and action. Research and practitioner communities may study land change to address topics related to land-climate interactions, water quantity and quality, food and fiber production, and urbanization, infrastructure, and the built environment.
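A minimal numpy sketch of the three-map comparison and Figure of Merit referenced above; the tiny rasters are invented for illustration, and the FoM is computed under one common definition, hits divided by the union of reference and simulated change:

```python
import numpy as np

# Three-map comparison: reference map at t1, reference map at t2, and the
# simulated map at t2, all as integer category rasters of the same shape.
ref_t1 = np.array([[1, 1, 2], [1, 2, 2], [1, 1, 1]])
ref_t2 = np.array([[1, 2, 2], [1, 2, 2], [3, 1, 1]])
sim_t2 = np.array([[1, 2, 2], [2, 2, 2], [1, 1, 1]])

ref_change = ref_t1 != ref_t2
sim_change = ref_t1 != sim_t2

hits = ref_change & sim_change & (ref_t2 == sim_t2)
wrong_hits = ref_change & sim_change & (ref_t2 != sim_t2)
misses = ref_change & ~sim_change
false_alarms = ~ref_change & sim_change
correct_rejections = ~ref_change & ~sim_change

# Figure of Merit: hits as a share of the union of reference and simulated change.
fom = hits.sum() / (hits.sum() + wrong_hits.sum() + misses.sum() + false_alarms.sum())
print(f"hits={hits.sum()} misses={misses.sum()} false alarms={false_alarms.sum()} "
      f"wrong hits={wrong_hits.sum()} correct rejections={correct_rejections.sum()} "
      f"FoM={fom:.2%}")
```

Reporting the five counts alongside the ratio, as recommended above, preserves the information a single summary metric would otherwise hide.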
== Improvement and advancement ==
=== Improved land observational strategies ===
One improvement for land change modeling can be made through better data and integration with available data and models. Improved observational data can influence modeling quality. Finer spatial and temporal resolution data that can integrate with socioeconomic and biogeophysical data can help land change modeling couple the socioeconomic and biogeological modeling types. Land change modelers should value data at finer scales. Fine data can give a better conceptual understanding of underlying constructs of the model and capture additional dimensions of land use. It is important to maintain the temporal and spatial continuity of data from airborne-based and survey-based observation through constellations of smaller satellite coverage, image processing algorithms, and other new data to link satellite-based land use information and land management information. It is also important to have better information on land change actors and their beliefs, preferences, and behaviors to improve the predictive ability of models and evaluate the consequences of alternative policies.
=== Aligning model choices with model goals ===
One important improvement for land change modeling can be made through better aligning model choices with model goals. It is important to choose the appropriate modeling approach based on the scientific and application contexts of the specific study of interest. For example, when someone needs to design a model with policy and policy actors in mind, they may choose an agent-based model. Here, structural economic or agent-based approaches are useful, but approaches centered on specific patterns and trends in land change, as with many ecological systems, may not be as useful. When one needs to grasp the early stages of problem identification, and thus needs to understand the scientific patterns and trends of land change, machine learning and cellular approaches are useful.
=== Integrating positive and normative approaches ===
Land change modeling should also better integrate positive and normative approaches to explanation and prediction based on evidence-based accounts of land systems. It should also integrate optimization approaches to explore the outcomes that are the most beneficial and the processes that might produce those outcomes.
=== Integrating across scales ===
It is important to integrate data across scales. A model's design is based on the dominant processes and data from a specific scale of application and spatial extent. Cross-scale dynamics and feedbacks between temporal and spatial scales influence the patterns and processes of the model. Processes like telecoupling, indirect land use change, and adaptation to climate change at multiple scales require better representation by cross-scale dynamics. Implementing these processes will require a better understanding of feedback mechanisms across scales.
=== Opportunities in research infrastructure and cyberinfrastructure support ===
As there is continuous reinvention of modeling environments, frameworks, and platforms, land change modeling can improve from better research infrastructure support. For example, model and software infrastructure development can help avoid duplication of initiatives by land change modeling community members, co-learn about land change modeling, and integrate models to evaluate impacts of land change. Better data infrastructure can provide more data resources to support compilation, curation, and comparison of heterogeneous data sources.
Better community modeling and governance can advance decision-making and modeling capabilities within a community with specific and achievable goals. Community modeling and governance would provide a step towards reaching community agreement on specific goals to move modeling and data capabilities forward. A number of modern challenges in land change modeling can potentially be addressed through contemporary advances in cyberinfrastructure such as crowdsourcing, "mining" of distributed data, and improved high-performance computing. Because it is important for modelers to find more data to better construct, calibrate, and validate structural models, the ability to analyze large amounts of data on individual behaviors is helpful. For example, modelers can find point-of-sale data on individual purchases by consumers and internet activities that reveal social networks. However, some issues of privacy and propriety for crowdsourcing improvements have not yet been resolved. The land change modeling community can also benefit from Global Positioning System and Internet-enabled mobile device data distribution. Combining various structural-based data-collecting methods can improve the availability of microdata and the diversity of people that see the findings and outcomes of land change modeling projects. For example, citizen-contributed data supported the implementation of Ushahidi in Haiti after the 2010 earthquake, helping to document at least 4,000 disaster events. Universities, non-profit agencies, and volunteers are needed to collect information on events like this to make positive outcomes and improvements in land change modeling and land change modeling applications. Tools such as mobile devices are available to make it easier for participants to collect micro-data on agents. Google Maps uses cloud-based mapping technologies with datasets that are co-produced by the public and scientists. Examples in agriculture, such as coffee farmers using Avaaj Otalo, showed the use of mobile phones both for collecting information and as an interactive voice forum. Cyberinfrastructure developments may also increase the ability of land change modeling to meet computational demands of various modeling approaches given increasing data volumes and certain expected model interactions. Examples include improved processors, data storage, and network bandwidth, and the coupling of land change and environmental process models at high resolution.
=== Model evaluation ===
An additional way to improve land change modeling is through improvement of model evaluation approaches. Improvements in sensitivity analysis are needed to gain a better understanding of the variation in model output in response to model elements like input data, model parameters, initial conditions, boundary conditions, and model structure. Improvements in pattern validation can help land change modelers make comparisons between model outputs parameterized for some historic case, like maps, and observations for that case. Better treatment of uncertainty sources is needed to improve forecasting of future states that are non-stationary in processes, input variables, and boundary conditions. One can explicitly recognize stationarity assumptions and explore data for evidence of non-stationarity to better acknowledge and understand model uncertainty.
Improvements in structural validation, through a combination of qualitative and quantitative measures, can help improve understanding of how the processes in the model correspond to the processes operating in the real world.
== See also ==
Land change science – Interdisciplinary study of changes in climate, land use, and land cover Land use – Classification of land resources based on what can be built and on its use and land use planning Land-use forecasting – Projecting the distribution and intensity of trip generating activities in the urban area Land Use Evolution and Impact Assessment Model (LEAM) TerrSet – Windows GIS and remote sensing software GeoMod – Software to model changes of land use
== References ==
Wikipedia/Land_change_modeling
A geographic coordinate system (GCS) is a spherical or geodetic coordinate system for measuring and communicating positions directly on Earth as latitude and longitude. It is the simplest, oldest, and most widely used type of the various spatial reference systems that are in use, and forms the basis for most others. Although latitude and longitude form a coordinate tuple like a Cartesian coordinate system, the geographic coordinate system is not Cartesian because the measurements are angles and are not on a planar surface. A full GCS specification, such as those listed in the EPSG and ISO 19111 standards, also includes a choice of geodetic datum (including an Earth ellipsoid), as different datums will yield different latitude and longitude values for the same location.
== History ==
The invention of a geographic coordinate system is generally credited to Eratosthenes of Cyrene, who composed his now-lost Geography at the Library of Alexandria in the 3rd century BC. A century later, Hipparchus of Nicaea improved on this system by determining latitude from stellar measurements rather than solar altitude and determining longitude by timings of lunar eclipses, rather than dead reckoning. In the 1st or 2nd century, Marinus of Tyre compiled an extensive gazetteer and mathematically plotted world map using coordinates measured east from a prime meridian at the westernmost known land, designated the Fortunate Isles, off the coast of western Africa around the Canary or Cape Verde Islands, and measured north or south of the island of Rhodes off Asia Minor. Ptolemy credited him with the full adoption of longitude and latitude, rather than measuring latitude in terms of the length of the midsummer day. Ptolemy's 2nd-century Geography used the same prime meridian but measured latitude from the Equator instead. After their work was translated into Arabic in the 9th century, Al-Khwārizmī's Book of the Description of the Earth corrected Marinus' and Ptolemy's errors regarding the length of the Mediterranean Sea, causing medieval Arabic cartography to use a prime meridian around 10° east of Ptolemy's line. Mathematical cartography resumed in Europe following Maximus Planudes' recovery of Ptolemy's text a little before 1300; the text was translated into Latin at Florence by Jacopo d'Angelo around 1407. In 1884, the United States hosted the International Meridian Conference, attended by representatives from twenty-five nations. Twenty-two of them agreed to adopt the longitude of the Royal Observatory in Greenwich, England as the zero-reference line. The Dominican Republic voted against the motion, while France and Brazil abstained. France adopted Greenwich Mean Time in place of local determinations by the Paris Observatory in 1911.
== Latitude and longitude ==
The latitude φ of a point on Earth's surface is defined in one of three ways, depending on the type of coordinate system. In each case, the latitude is the angle formed by the plane of the equator and a line formed by the point on the surface and a second point on the equatorial plane. What varies between the types of coordinate systems is how the point on the equatorial plane is determined: In an astronomical coordinate system, the second point is found where the extension of the plumb bob vertical from the surface point intersects the equatorial plane. In a geodetic coordinate system, the second point is found where the normal vector from the surface of the ellipsoid at the surface point intersects the equatorial plane.
In a geocentric coordinate system, the second point is the center of Earth. The path that joins all points of the same latitude traces a circle on the surface of Earth, as viewed from above the north or south pole, called parallels, as they are parallel to the equator and to each other. The north pole is 90° N; the south pole is 90° S. The 0° parallel of latitude is defined to be the equator, the fundamental plane of a geographic coordinate system. The equator divides the globe into Northern and Southern Hemispheres. The longitude λ of a point on Earth's surface is the angle east or west of a reference meridian to another meridian that passes through that point. All meridians are halves of great ellipses, which converge at the North and South Poles. The meridian of the British Royal Observatory in Greenwich, in southeast London, England, is the international prime meridian, although some organizations—such as the French Institut national de l'information géographique et forestière—continue to use other meridians for internal purposes. The antipodal meridian of Greenwich is both 180°W and 180°E. This is not to be conflated with the International Date Line, which diverges from it in several places for political and convenience reasons, including between far eastern Russia and the far western Aleutian Islands. The combination of these two components specifies the position of any location on the surface of Earth, without consideration of altitude or depth. The visual grid on a map formed by lines of latitude and longitude is known as a graticule. The origin/zero point of this system is located in the Gulf of Guinea about 625 km (390 mi) south of Tema, Ghana, a location often facetiously called Null Island.
== Geodetic datum ==
In order to use the theoretical definitions of latitude, longitude, and height to precisely measure actual locations on the physical earth, a geodetic datum must be used. A horizontal datum is used to precisely measure latitude and longitude, while a vertical datum is used to measure elevation or altitude. Both types of datum bind a mathematical model of the shape of the earth (usually a reference ellipsoid for a horizontal datum, and a more precise geoid for a vertical datum) to the earth. Traditionally, this binding was created by a network of control points, surveyed locations at which monuments are installed, and were only accurate for a region of the surface of the Earth. Newer datums are based on global networks of satellite measurements (GNSS, VLBI, SLR and DORIS). This combination of a mathematical model and physical binding ensures that users of the same datum obtain identical coordinates for a given physical point. However, different datums typically produce different coordinates for the same location (sometimes deviating several hundred meters) not due to actual movement, but because the reference system itself is shifted. Because any spatial reference system or map projection is ultimately calculated from latitude and longitude, it is crucial that they clearly state the datum on which they are based. For example, a UTM coordinate based on a WGS84 realisation will be different from a UTM coordinate based on NAD27 for the same location. Transforming coordinates from one datum to another requires a datum transformation method such as a Helmert transformation, although in certain situations a simple translation may be sufficient.
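As an illustration of the kind of computation involved, the sketch below applies a seven-parameter Helmert transformation to geocentric (Earth-centred, Earth-fixed) coordinates using the small-angle form. The point and the parameter values are placeholders of realistic magnitude, not an official datum transformation, and real conversions also require converting between geodetic and geocentric coordinates:

```python
import numpy as np

def helmert(xyz, tx, ty, tz, rx, ry, rz, s_ppm):
    """Seven-parameter Helmert transform on geocentric (ECEF) coordinates.

    Translations are in metres, rotations in arc-seconds (small-angle,
    position-vector convention), scale in parts per million.
    """
    arcsec = np.pi / (180 * 3600)
    rx, ry, rz = rx * arcsec, ry * arcsec, rz * arcsec
    R = np.array([[1.0, -rz,  ry],
                  [ rz, 1.0, -rx],
                  [-ry,  rx, 1.0]])
    return np.array([tx, ty, tz]) + (1.0 + s_ppm * 1e-6) * (R @ xyz)

# Placeholder point (metres) and placeholder parameters:
p = np.array([3_980_000.0, -10_000.0, 4_970_000.0])
print(helmert(p, tx=-446.4, ty=125.2, tz=-542.1,
              rx=-0.15, ry=-0.25, rz=-0.84, s_ppm=20.5))
```

The "simple translation" mentioned above is the special case with zero rotations and zero scale change.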
Datums may be global, meaning that they represent the whole Earth, or they may be regional, meaning that they represent an ellipsoid best-fit to only a portion of the Earth. Examples of global datums include the several realizations of WGS 84 (with the 2D datum ensemble EPSG:4326 with 2 meter accuracy as identifier) used for the Global Positioning System, and the several realizations of the International Terrestrial Reference System and Frame (such as ITRF2020 with subcentimeter accuracy), which takes into account continental drift and crustal deformation. Datums with a regional fit of the ellipsoid that are chosen by a national cartographical organization include the North American Datums, the European ED50, and the British OSGB36. Given a location, the datum provides the latitude ϕ {\displaystyle \phi } and longitude λ {\displaystyle \lambda } . In the United Kingdom there are three common latitude, longitude, and height systems in use. WGS 84 differs at Greenwich from OSGB36, the datum used on published maps, by approximately 112 m. ED50 differs by about 120 m to 180 m. Points on the Earth's surface move relative to each other due to continental plate motion, subsidence, and diurnal Earth tidal movement caused by the Moon and the Sun. This daily movement can be as much as a meter. Continental movement can be up to 10 cm a year, or 10 m in a century. A weather system high-pressure area can cause a sinking of 5 mm. Scandinavia is rising by 1 cm a year as a result of the melting of the ice sheets of the last ice age, but neighboring Scotland is rising by only 0.2 cm. These changes are insignificant if a regional datum is used, but are statistically significant if a global datum is used.
== Length of a degree ==
On the GRS 80 or WGS 84 spheroid at sea level at the Equator, one latitudinal second measures 30.715 m, one latitudinal minute is 1843 m and one latitudinal degree is 110.6 km. The circles of longitude, meridians, meet at the geographical poles, with the west–east width of a second naturally decreasing as latitude increases. On the Equator at sea level, one longitudinal second measures 30.92 m, a longitudinal minute is 1855 m and a longitudinal degree is 111.3 km. At 30° a longitudinal second is 26.76 m, at Greenwich (51°28′38″N) 19.22 m, and at 60° it is 15.42 m. On the WGS 84 spheroid, the length in meters of a degree of latitude at latitude ϕ (that is, the number of meters you would have to travel along a north–south line to move 1 degree in latitude, when at latitude ϕ), is about 111132.92 − 559.82 cos ⁡ 2 ϕ + 1.175 cos ⁡ 4 ϕ − 0.0023 cos ⁡ 6 ϕ {\displaystyle 111132.92-559.82\cos 2\phi +1.175\cos 4\phi -0.0023\cos 6\phi } The returned measure of meters per degree latitude varies continuously with latitude. Similarly, the length in meters of a degree of longitude can be calculated as 111412.84 cos ⁡ ϕ − 93.5 cos ⁡ 3 ϕ + 0.118 cos ⁡ 5 ϕ {\displaystyle 111412.84\cos \phi -93.5\cos 3\phi +0.118\cos 5\phi } (Those coefficients can be improved, but as they stand the distance they give is correct within a centimeter.) The formulae both return units of meters per degree. An alternative method to estimate the length of a longitudinal degree at latitude ϕ {\displaystyle \phi } is to assume a spherical Earth (to get the width per minute and second, divide by 60 and 3600, respectively): π 180 M r cos ⁡ ϕ {\displaystyle {\frac {\pi }{180}}M_{r}\cos \phi } where Earth's average meridional radius M r {\displaystyle \textstyle {M_{r}}\,\!} is 6,367,449 m.
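The series quoted above are easy to check with a short script; the printed values can be compared with the figures given earlier in this section (small differences in the last digits reflect the approximate coefficients). Dividing the per-degree lengths by 60 and 3600 gives the per-minute and per-second widths:

```python
import numpy as np

def meters_per_degree_latitude(phi_deg):
    # Series expansion quoted above (WGS 84), correct to about a centimetre.
    p = np.radians(phi_deg)
    return 111132.92 - 559.82 * np.cos(2 * p) + 1.175 * np.cos(4 * p) - 0.0023 * np.cos(6 * p)

def meters_per_degree_longitude(phi_deg):
    p = np.radians(phi_deg)
    return 111412.84 * np.cos(p) - 93.5 * np.cos(3 * p) + 0.118 * np.cos(5 * p)

for lat in (0.0, 30.0, 51.4772, 60.0):   # equator, 30 degrees, Greenwich, 60 degrees
    print(f"{lat:8.4f} deg: 1 deg lat = {meters_per_degree_latitude(lat) / 1000:.3f} km, "
          f"1 deg lon = {meters_per_degree_longitude(lat) / 1000:.3f} km, "
          f"1 sec lon = {meters_per_degree_longitude(lat) / 3600:.2f} m")
```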
Since the Earth is an oblate spheroid, not spherical, the spherical estimate above can be off by several tenths of a percent; a better approximation of a longitudinal degree at latitude ϕ {\displaystyle \phi } is π 180 a cos ⁡ β {\displaystyle {\frac {\pi }{180}}a\cos \beta } where Earth's equatorial radius a {\displaystyle a} equals 6,378,137 m and tan ⁡ β = b a tan ⁡ ϕ {\displaystyle \textstyle {\tan \beta ={\frac {b}{a}}\tan \phi }\,\!} ; for the GRS 80 and WGS 84 spheroids, b a = 0.99664719 {\textstyle {\tfrac {b}{a}}=0.99664719} . ( β {\displaystyle \textstyle {\beta }\,\!} is known as the reduced (or parametric) latitude). Aside from rounding, this is the exact distance along a parallel of latitude; getting the distance along the shortest route will be more work, but those two distances are always within 0.6 m of each other if the two points are one degree of longitude apart.
== Alternative encodings ==
Like any series of multiple-digit numbers, latitude-longitude pairs can be challenging to communicate and remember. Therefore, alternative schemes have been developed for encoding GCS coordinates into alphanumeric strings or words:
the Maidenhead Locator System, popular with radio operators.
the World Geographic Reference System (GEOREF), developed for global military operations, replaced by the current Global Area Reference System (GARS).
Open Location Code or "Plus Codes", developed by Google and released into the public domain.
Geohash, a public domain system based on the Morton Z-order curve.
Mapcode, an open-source system originally developed at TomTom.
What3words, a proprietary system that encodes GCS coordinates as pseudorandom sets of words by dividing the coordinates into three numbers and looking up words in an indexed dictionary.
These are not distinct coordinate systems, only alternative methods for expressing latitude and longitude measurements.
== See also ==
Decimal degrees – Angular measurements, typically for latitude and longitude Geographical distance – Distance measured along the surface of the Earth Geographic information system – System to capture, manage, and present geographic data Geo URI scheme – System of geographic location identifiers ISO 6709, standard representation of geographic point location by coordinates Linear referencing – method of spatial referencing Primary direction – Celestial coordinate system used to specify the positions of celestial objects Planetary coordinate system Selenographic coordinate system Spatial reference system – System to specify locations on Earth
== Notes ==
== References ==
=== Sources ===
== Further reading ==
Jan Smits (2015). Mathematical data for bibliographic descriptions of cartographic materials and spatial data. Geographical co-ordinates. ICA Commission on Map Projections.
== External links ==
Media related to Geographic coordinate system at Wikimedia Commons
Wikipedia/Geographic_coordinate_system
Geographic information systems (GIS) play a constantly evolving role in geospatial intelligence (GEOINT) and United States national security. These technologies allow a user to efficiently manage, analyze, and produce geospatial data, to combine GEOINT with other forms of intelligence collection, and to perform highly developed analysis and visual production of geospatial data. Therefore, GIS produces up-to-date and more reliable GEOINT to reduce uncertainty for a decisionmaker. Since GIS programs are Web-enabled, a user can constantly work with a decisionmaker to solve their GEOINT- and national security-related problems from anywhere in the world. There are many types of GIS software used in GEOINT and national security, such as Google Earth, ERDAS IMAGINE, GeoNetwork opensource, and Esri ArcGIS.
== Background ==
=== Geographic information systems (GIS) ===
A GIS is a system that incorporates software, hardware, and data for collecting, managing, analyzing, and portraying geographically referenced information. It allows the user to view, understand, manipulate, and visualize data to reveal relationships and patterns that solve problems. The user can then present the data in easily understood and disseminated forms, such as maps, reports, or charts. A user can enter different kinds of data in map form into a GIS to begin their analysis, such as United States Geological Survey (USGS) digital line graph data, contour lines, elevation maps, topographic maps, geological maps, and satellite imagery. A user can also convert digital information into forms that a GIS can identify and utilize, such as census tabular data or Microsoft Excel files. Users can easily capture digital data in a GIS. If the data is not digital, then users will need to employ various techniques to capture the data, such as digitizing maps by hand-tracing with a computer mouse, utilizing a digitizing tablet to collect feature coordinates, using electronic scanners, or uploading Global Positioning System (GPS) coordinates. GIS applies to the geographical facets of various aspects of everyday life, such as transportation, logistics, medicine, marketing, sociology, ecology, pure and applied sciences, emergency management, and criminology. GIS is also utilized in all three areas of intelligence: national security intelligence, law enforcement intelligence, and competitive intelligence.
=== Geospatial intelligence (GEOINT) ===
GEOINT, known previously as imagery intelligence (IMINT), is an intelligence collection discipline that applies to national security intelligence, law enforcement intelligence, and competitive intelligence. For example, an analyst can use GEOINT to identify the route of least resistance for a military force in a hostile country, to discover a pattern in the locations of reported burglaries in a neighborhood, or to generate a map and comparison of failing businesses that a company is likely to purchase. GEOINT is also the geospatial product of a process that is focused externally, designed to reduce the level of uncertainty for a decisionmaker, and that uses information derived from all sources. The National Geospatial-Intelligence Agency (NGA), which has overall responsibility for GEOINT in the U.S. Intelligence Community (IC), defines GEOINT as "information about any object—natural or man-made—that can be observed or referenced to the Earth, and has national security implications."
Some of the sources of collected imagery information for GEOINT are imagery satellites, cameras on airplanes, unmanned aerial vehicles (UAVs) and drones, handheld cameras, maps, or GPS coordinates. Recently the NGA and IC have increased the use of commercial satellite imagery for intelligence support, such as the use of the IKONOS, Landsat, or SPOT satellites. These sources produce digital imagery via electro-optical systems, radar, infrared, visible light, multispectral, or hyperspectral imagery. The advantages of GEOINT are that imagery is easily consumable and understood by a decisionmaker, has low human life risk, displays the capabilities of a target and its geographical relationship to other objects, and that analysts can use imagery worldwide in a short time. On the other hand, the disadvantages of GEOINT are that imagery is only a snapshot of a moment in time, can be too compelling and lead to ill-informed decisions that ignore other intelligence, is static and vulnerable to deception and decoys, does not depict the intentions of a target, and is expensive and subject to environmental problems.
== GIS use in GEOINT and national security intelligence ==
=== Overview ===
A majority of national security intelligence decisions involve geography and GEOINT. GIS allows the user to capture, manage, exploit, analyze, and visualize geographically referenced information, physical features, and other geospatial data. GIS is thus a critical infrastructure for the GEOINT and national security community in manipulating and interpreting spatial knowledge in an information system. GIS extracts real world geographic or other information into datasets, maps, metadata, data models, and workflow models within a geodatabase that is used to solve GEOINT-related problems. GIS provides a structure for map and data production that allows a user to add other data sources, such as satellite or UAV imagery, as new layers to a geodatabase. The geodatabase can be disseminated and operated across any network of associated users (i.e. from the GEOINT analyst to the warfighter) and engenders a common spatial capability for all defense and intelligence domains. The map and chart production agency and imagery intelligence agency, the two principal agencies of GEOINT, use GIS to efficiently work together to solve decisionmakers' geospatial questions, to communicate effectively between their unique departments, and to provide constantly updated, accurate GEOINT to their national security and warfighter domains. Another important aspect of GIS is its ability to fuse geospatial data with other forms of intelligence collection, such as signals intelligence (SIGINT), measurement and signature intelligence (MASINT), human intelligence (HUMINT), or open source intelligence (OSINT). A GIS user can incorporate and fuse all of these types of intelligence into applications that provide corroborated GEOINT throughout an organization's information system. GIS enables efficient management of geospatial data, the fusion of geospatial data with other forms of intelligence collection, and advanced analysis and visual production of geospatial data. This produces faster, corroborated, and more reliable GEOINT that aims to reduce uncertainty for a decisionmaker.
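As a toy illustration of this kind of layered fusion, the sketch below uses the open-source geopandas library (which this article does not name; it stands in for any GIS software). The layer names, coordinates, and report sources are all invented:

```python
import geopandas as gpd
from shapely.geometry import Point, Polygon

# Hypothetical layers: an area of interest and georeferenced intelligence reports.
aoi = gpd.GeoDataFrame(
    {"name": ["sector_a"]},
    geometry=[Polygon([(0, 0), (0, 10), (10, 10), (10, 0)])],
    crs="EPSG:4326",
)
reports = gpd.GeoDataFrame(
    {"source": ["HUMINT", "SIGINT", "OSINT"]},
    geometry=[Point(2, 3), Point(8, 8), Point(15, 5)],
    crs="EPSG:4326",
)

# Fuse the layers with a spatial join: which reports fall inside the area of interest?
fused = gpd.sjoin(reports, aoi, predicate="within")
print(fused[["source", "name"]])
```

In a real GIS workflow the same join would run against imagery footprints, terrain layers, or order of battle databases rather than toy polygons.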
=== Roles ===
GIS fills a number of roles in GEOINT and national security intelligence:
* Data and map production
* Data fusion, data discovery through metadata catalogs, and data dissemination through Web portals and browsers
* Analysis and exploitation of collected imagery or intelligence
* SIGINT, GEOINT, MASINT, and other sensor analysis
* Fusion of multiple forms of intelligence collection
* Collaborative planning and efficient workflow management between decisionmakers, analysts, consumers, and warfighters
* Suitability and temporal analysis
* Stewardship of geospatial intelligence
=== Related Esri Products ===
==== Distributed Geospatial Intelligence Network (DGInet) ====
The DGInet technology allows military and national security intelligence customers to access large multi-terabyte databases through a common Web-based interface. This gives users the capability to quickly and easily identify, overlay, and fuse georeferenced data from various sources to create maps or support geospatial analysis. Esri designed the technology for inexperienced GIS users in national security intelligence and defense organizations in order to provide a Web-based enterprise solution for publishing, distributing, and exploiting GEOINT data among designated organizations. According to Esri, the DGInet technology "uses thin clients to search massive amounts of geospatial and intelligence data using low-bandwidth Web services for data discovery, dissemination, and horizontal fusion of data and products."
==== PLTS for ArcGIS Specialized Solutions ====
PLTS for ArcGIS Specialized Solutions is a group of software applications that extends ArcGIS to facilitate database-driven cartographic production for geospatial and mapping agencies, nautical and aeronautical chart production, foundation mapping, and defense mapping requirements. The collection includes the Esri Production Mapping, Esri Nautical Solution, Esri Aeronautical Solution, and Esri Defense Mapping programs, which provide quality control, easier and more consistent map production, database sharing, and efficient workflow management for each program's specific type of mapping or charting.
==== Geoprocessing ====
Geoprocessing is based on a framework of data transformation in GIS and comprises hundreds of GIS tools that manipulate geospatial or other data. A geoprocessing tool performs an operation (often the name of the tool, such as "Clip") on an existing GIS dataset and produces a new dataset as a result. GIS users combine these tools into workflow models that quickly and easily transform raw data into the desired product. In GEOINT, users employ geoprocessing in similar ways: they can chain geoprocessing tools into analytic workflows that transform large amounts of data into actionable information. In national security intelligence and defense organizations, geoprocessing alerts users to events occurring in specific areas of interest and enables domain-specific analysis applications, such as radio frequency analysis, terrain analysis, and network analysis.
==== Tracking Analyst and Tracking Server ====
The ArcGIS Tracking Analyst extension enables the user to create time-series visualizations to analyze time- and location-sensitive information. It creates a visible path from incorporated data that shows movement through space and time. The program allows the national security intelligence or defense user to track assets (such as vehicles or personnel), monitor sensors, visualize change over time, play back events, and analyze historical or real-time temporal data.
The Tracking Server program is an enterprise technology that integrates real-time data with GIS to disseminate information quickly and easily to decisionmakers. This program enables the user to obtain data in any format and transmit it to the necessary consumer or ArcGIS Tracking Analyst user, to filter or trigger alerts on specific attributes of incoming data or global positions, and to log data into ArcGIS Server for efficient project management and information sharing. When Tracking Server and ArcGIS Tracking Analyst are used together, a user can monitor changes in data as they occur in real time. A national security intelligence or defense user can subscribe to real-time data over the Internet from GPS and custom data feeds to support GEOINT requirements, such as fleet management or target tracking.
==== ArcGIS Military Analyst ====
The ArcGIS Military Analyst extension incorporates display and analysis tools that allow the use and production of vector and raster products, line-of-sight analysis, hillshade analysis, terrain analysis, and Military Grid Reference System (MGRS) conversion. This program also provides a basis for command, control, and intelligence (C2I) systems. National security intelligence and defense organizations use the ArcGIS Military Analyst extension to integrate geospatial data with other defense data, analyze digital terrain, and prepare for battle. This program also enables such users to manage and analyze geospatial data and the relationships between mission planning, logistics, and C2I.
==== Military Overlay Editor (MOLE) ====
MOLE is a set of software components that enables national security intelligence and defense users to easily create, display, and edit U.S. Department of Defense MIL-STD-2525B and North Atlantic Treaty Organization APP-6A military symbology in a map. This allows for easier and faster identification, understanding, and movement of allied and hostile forces on a map by combining GIS spatial analysis techniques with common military symbols. MOLE provides a clearer visualization of mission planning and goals for the decisionmaker, and allows a user to import, locate, and display order of battle databases.
==== Grid Manager ====
Grid Manager enables the national security intelligence or defense user to create accurate, realistic grids that contain geographic location indicators based on specified shapes, scales, coordinate systems, and units. This program allows the user to create multiple grids, graticules, and borders for such map products as MGRS coordinate sheets and tourist, topographic, parcel, street, nautical, and aeronautical maps.
== GIS use in the National Geospatial-Intelligence Agency (NGA) ==
The NGA uses GIS products to create digital nautical, aeronautical, and topographic charts and maps, to perform geotechnical and coordinate system analysis, and to help solve a large variety of national security and military problems. Since the NGA is a U.S. Department of Defense combat support agency and a member of the IC, it uses GIS to produce precise, up-to-date GEOINT for members of the U.S. Armed Forces, the IC, and other government agencies. Web-enabled GIS applications allow for fast, efficient sharing and dissemination of geospatial data, products, and intelligence from the NGA to its allies, warfighters, partners, and other agencies across the World Wide Web. The NGA and Esri have successfully collaborated on providing timely, accurate, and relevant GEOINT in support of U.S. national security for the past 20 years.
The NGA has created a grouping of Web-based capabilities called GEOINT Online. This program allows a user to search and access all NGA GEOINT documents from wherever they are stored and from wherever the user is. GEOINT Online provides quick, easy, and reliable access to current NGA intelligence products, changes in activities or regions, information from analysts' blogs and Intellipedia, geospatial imagery, maps and charts, major commercial GIS software packages, and GIS combinations of these products. A user can also edit and format existing NGA/GIS products and maps to create, print, and download new products that fulfill current decisionmaker requirements. Ultimately, this results in the faster production of timely and relevant GEOINT data. This program allowed the NGA to change its focus from simply generating cartographic products to providing updated, accurate GEOINT to support the national security and military requirements of its customers.
== See also ==
* ArcGIS
* ERDAS IMAGINE
* Esri
* Geographic information system
* Geospatial intelligence
* GeoTime
* Google Earth
* Imagery intelligence
* National Geospatial-Intelligence Agency
* National security
* Richard Petron
== References ==
== External links ==
National Geospatial-Intelligence Agency
Wikipedia/Geographic_information_systems_in_geospatial_intelligence
A hydrologic model is a simplification of a real-world system (e.g., surface water, soil water, wetland, groundwater, estuary) that aids in understanding, predicting, and managing water resources. Both the flow and quality of water are commonly studied using hydrologic models.
== Analog models ==
Prior to the advent of computer models, hydrologic modeling used analog models to simulate flow and transport systems. Unlike mathematical models that use equations to describe, predict, and manage hydrologic systems, analog models use non-mathematical approaches to simulate hydrology. Two general categories of analog models are common: scale analogs, which use miniaturized versions of the physical system, and process analogs, which use comparable physics (e.g., electricity, heat, diffusion) to mimic the system of interest.
=== Scale analogs ===
Scale models offer a useful approximation of physical or chemical processes at a size that allows for greater ease of visualization. The model may be created in one (core, column), two (plan, profile), or three dimensions, and can be designed to represent a variety of specific initial and boundary conditions as needed to answer a question. Scale models commonly use physical properties that are similar to their natural counterparts (e.g., gravity, temperature). Yet maintaining some properties at their natural values can lead to erroneous predictions: properties such as viscosity, friction, and surface area must be adjusted to maintain appropriate flow and transport behavior. This usually involves matching dimensionless ratios (e.g., Reynolds number, Froude number). Groundwater flow can be visualized using a scale model built of acrylic and filled with sand, silt, and clay. Water and tracer dye may be pumped through this system to represent the flow of the simulated groundwater. Some physical aquifer models are between two and three dimensions, with simplified boundary conditions simulated using pumps and barriers.
=== Process analogs ===
Process analogs are used in hydrology to represent fluid flow using the similarity between Darcy's law, Ohm's law, Fourier's law, and Fick's law. The analogs to fluid flow are the flux of electricity, heat, and solutes, respectively. The corresponding analogs to fluid potential are voltage, temperature, and solute concentration (or chemical potential). The analogs to hydraulic conductivity are electrical conductivity, thermal conductivity, and the solute diffusion coefficient. An early process analog model was an electrical network model of an aquifer composed of resistors in a grid. Voltages were assigned along the outer boundary and then measured within the domain. Electrical conductivity paper can also be used instead of resistors.
== Statistical models ==
Statistical models are a type of mathematical model commonly used in hydrology to describe data, as well as relationships between data. Using statistical methods, hydrologists develop empirical relationships between observed variables, find trends in historical data, or forecast probable storm or drought events.
=== Moments ===
Statistical moments (e.g., mean, standard deviation, skewness, kurtosis) are used to describe the information content of data. These moments can then be used to determine an appropriate frequency distribution, which can then be used as a probability model. Two common techniques include L-moment ratios and moment-ratio diagrams.
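As a minimal illustration of sample moments, the following sketch computes the four moments named above for a made-up series of annual peak discharges (the data are invented for the example):

```python
import numpy as np
from scipy import stats

# Invented annual peak discharges (m^3/s), for illustration only
peaks = np.array([812.0, 1150.0, 640.0, 975.0, 1310.0, 720.0, 890.0, 1480.0, 560.0, 1020.0])

mean = peaks.mean()                        # first moment: central tendency
std = peaks.std(ddof=1)                    # second moment: spread (sample standard deviation)
skew = stats.skew(peaks, bias=False)       # third moment: asymmetry
kurt = stats.kurtosis(peaks, bias=False)   # fourth moment: tail weight (excess kurtosis)

print(f"mean={mean:.1f}  std={std:.1f}  skew={skew:.2f}  kurtosis={kurt:.2f}")
```

Such moments (or their L-moment counterparts, which are less sensitive to outliers) are then matched against candidate frequency distributions.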
The frequency of extreme events, such as severe droughts and storms, often requires the use of distributions that focus on the tail of the distribution rather than the data nearest the mean. These techniques, collectively known as extreme value analysis, provide a methodology for identifying the likelihood and uncertainty of extreme events. Examples of extreme value distributions include the Gumbel, Pearson, and generalized extreme value distributions. The standard method for determining peak discharge uses the log-Pearson Type III (log-gamma) distribution and observed annual flow peaks.
=== Correlation analysis ===
The degree and nature of correlation may be quantified using a method such as the Pearson correlation coefficient, autocorrelation, or the t-test. The degree of randomness or uncertainty in the model may also be estimated using stochastic methods or residual analysis. These techniques may be used in the identification of flood dynamics, storm characterization, and groundwater flow in karst systems. Regression analysis is used in hydrology to determine whether a relationship may exist between independent and dependent variables. Bivariate diagrams are the most commonly used statistical regression model in the physical sciences, but there are a variety of models available, from simplistic to complex. In a bivariate diagram, a linear or higher-order model may be fitted to the data. Factor analysis and principal component analysis are multivariate statistical procedures used to identify relationships between hydrologic variables. Convolution is a mathematical operation on two different functions that produces a third function. With respect to hydrologic modeling, convolution can be used to analyze the relationship of stream discharge to precipitation, and to predict discharge downstream after a precipitation event. This type of model would be considered a “lag convolution”, because it predicts the “lag time” as water moves through the watershed. Time-series analysis is used to characterize temporal correlation within a data series as well as between different time series. Many hydrologic phenomena are studied within the context of historical probability. Within a temporal dataset, event frequencies, trends, and comparisons may be made using the statistical techniques of time-series analysis. The questions answered through these techniques are often important for municipal planning, civil engineering, and risk assessments. Markov chains are a mathematical technique for determining the probability of a state or event based on a previous state or event. The events must be dependent on one another, as successive days of rainy weather are. Markov chains were first used to model the length of rainfall events in days in 1976, and continue to be used for flood risk assessment and dam management.
== Data-driven models ==
Data-driven models in hydrology emerged as an alternative to traditional statistical models, offering a more flexible and adaptable methodology for analysing and predicting various aspects of hydrological processes. While statistical models rely on rigorous assumptions about probability distributions, data-driven models leverage techniques from artificial intelligence, machine learning, and statistical analysis, including correlation analysis, time-series analysis, and statistical moments, to learn complex patterns and dependencies from historical data. This allows them to make more accurate predictions and provide insights into the underlying processes.
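A minimal sketch of such a data-driven model, assuming synthetic data and a deliberately simple learner (real studies use far richer features and algorithms), predicts runoff from current and lagged rainfall:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Synthetic daily rainfall (mm) and runoff (mm) for one year, invented for illustration
rain = rng.gamma(shape=0.5, scale=6.0, size=365)
runoff = (0.30 * rain + 0.20 * np.roll(rain, 1) + 0.10 * np.roll(rain, 2)
          + rng.normal(0.0, 0.2, size=365))

# Features: today's rainfall plus the two previous days (a crude memory of the catchment);
# the first two days are dropped because np.roll wraps around the array ends
X = np.column_stack([rain, np.roll(rain, 1), np.roll(rain, 2)])[2:]
y = runoff[2:]

model = LinearRegression().fit(X, y)
print("learned lag weights:", model.coef_)   # should approach 0.30, 0.20, 0.10
print("R^2 on training data:", model.score(X, y))
```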
Since their inception in the latter half of the 20th century, data-driven models have gained popularity in the water domain, as they help improve forecasting, decision-making, and management of water resources. Notable publications that use data-driven models in hydrology include "Application of machine learning techniques for rainfall-runoff modelling" by Solomatine and Siek (2004) and "Data-driven modelling approaches for hydrological forecasting and prediction" by Valipour et al. (2021). These models are commonly used for predicting rainfall, runoff, groundwater levels, and water quality, and have proven to be valuable tools for optimizing water resource management strategies.
== Conceptual models ==
Conceptual models represent hydrologic systems using physical concepts. The conceptual model is used as the starting point for defining the important model components. The relationships between model components are then specified using algebraic equations, ordinary or partial differential equations, or integral equations. The model is then solved using analytical or numerical procedures. Conceptual models are commonly used to represent the important components (e.g., features, events, and processes) that relate hydrologic inputs to outputs. These components describe the important functions of the system of interest, and are often constructed using entities (stores of water) and relationships between these entities (flows or fluxes between stores). The conceptual model is coupled with scenarios to describe specific events (either input or outcome scenarios). For example, a watershed model could be represented using tributaries as boxes with arrows pointing toward a box that represents the main river. The conceptual model would then specify the important watershed features (e.g., land use, land cover, soils, subsoils, geology, wetlands, lakes), atmospheric exchanges (e.g., precipitation, evapotranspiration), human uses (e.g., agricultural, municipal, industrial, navigation, thermo- and hydro-electric power generation), flow processes (e.g., overland, interflow, baseflow, channel flow), transport processes (e.g., sediments, nutrients, pathogens), and events (e.g., low-, flood-, and mean-flow conditions). Model scope and complexity depend on modeling objectives, with greater detail required if human or environmental systems are subject to greater risk. Systems modeling can be used for building conceptual models that are then populated using mathematical relationships.
Example 1: The linear-reservoir model (or Nash model) is widely used for rainfall-runoff analysis. The model uses a cascade of linear reservoirs along with a constant first-order storage coefficient, K, to predict the outflow from each reservoir (which is then used as the input to the next in the series). The model combines continuity and storage-discharge equations, which yields an ordinary differential equation that describes outflow from each reservoir. The continuity equation for tank models is: d S ( t ) d t = i ( t ) − q ( t ) {\displaystyle {dS(t) \over dt}=i(t)-q(t)} which indicates that the change in storage over time is the difference between inflows and outflows. The storage-discharge relationship is: q ( t ) = S ( t ) / K {\displaystyle q(t)=S(t)/K} where K is a constant that indicates how quickly the reservoir drains; a smaller value indicates more rapid outflow.
Combining these two equations yields K d q d t = i − q {\displaystyle K{dq \over dt}=i-q} which, for a constant inflow i, has the solution: q = i ( 1 − e − t / K ) {\displaystyle q=i(1-e^{-t/K})}
Example 2: Instead of a series of linear reservoirs, a non-linear reservoir can be used. In such a model the constant K in the above equation, which may also be called a reaction factor, must be replaced by another symbol, say α (alpha), to indicate the dependence of this factor on storage (S) and discharge (q). In one worked example the relation is quadratic: α = 0.0123 q² + 0.138 q − 0.112.
=== Governing equations ===
Governing equations are used to mathematically define the behavior of the system. Algebraic equations are often used for simple systems, while ordinary and partial differential equations are often used for problems that change in space and time. Examples of governing equations include:
Manning's equation is an algebraic equation that predicts stream velocity as a function of channel roughness, the hydraulic radius, and the channel slope: v = k n R 2 / 3 S 1 / 2 {\displaystyle v={k \over n}R^{2/3}S^{1/2}}
Darcy's law describes steady, one-dimensional groundwater flow using the hydraulic conductivity and the hydraulic gradient: q → = − K ∇ h {\displaystyle {\vec {q}}=-K\nabla h}
The groundwater flow equation describes time-varying, multidimensional groundwater flow using the aquifer transmissivity and storativity: T ∇ 2 h = S ∂ h ∂ t {\displaystyle T\nabla ^{2}h=S{\partial h \over \partial t}}
The advection-dispersion equation describes solute movement in steady, one-dimensional flow using the solute dispersion coefficient and the groundwater velocity: D ∂ 2 C ∂ x 2 − v ∂ C ∂ x = ∂ C ∂ t {\displaystyle D{\partial ^{2}C \over \partial x^{2}}-v{\partial C \over \partial x}={\partial C \over \partial t}}
Poiseuille's law describes laminar, steady, one-dimensional fluid flow using the shear stress: ∂ P ∂ x = − μ ∂ τ ∂ y {\displaystyle {\partial P \over \partial x}=-\mu {\partial \tau \over \partial y}}
Cauchy's integral formula provides an integral method for solving boundary value problems: f ( a ) = 1 2 π i ∮ γ f ( z ) z − a d z {\displaystyle f(a)={\frac {1}{2\pi i}}\oint _{\gamma }{\frac {f(z)}{z-a}}\,dz}
=== Solution algorithms ===
==== Analytic methods ====
Exact solutions for algebraic, differential, and integral equations can often be found using specified boundary conditions and simplifying assumptions. Laplace and Fourier transform methods are widely used to find analytic solutions to differential and integral equations.
==== Numeric methods ====
Many real-world mathematical models are too complex to meet the simplifying assumptions required for an analytic solution. In these cases, the modeler develops a numerical solution that approximates the exact solution. Solution techniques include the finite-difference and finite-element methods, among many others. Specialized software may also be used to solve sets of equations using a graphical user interface and complex code, such that solutions are obtained relatively rapidly and the program may be operated by a layperson or an end user without a deep knowledge of the system. There are model software packages for hundreds of hydrologic purposes, such as surface water flow, nutrient transport and fate, and groundwater flow. Commonly used numerical models include SWAT, MODFLOW, FEFLOW, PORFLOW, MIKE SHE, and WEAP.
=== Model calibration and evaluation ===
Physical models use parameters to characterize the unique aspects of the system being studied.
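To make the calibration idea concrete before turning to the details, the following minimal sketch (all data synthetic and invented for the example) solves the linear-reservoir equation from Example 1 with an explicit finite-difference scheme, then estimates the storage coefficient K by matching simulated to "observed" discharge, scoring the fit with the Nash–Sutcliffe efficiency discussed below:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def simulate(K, inflow, dt=1.0):
    """Explicit-Euler solution of dS/dt = i - q with q = S/K (Example 1)."""
    S = 0.0
    q = np.zeros_like(inflow)
    for t, i in enumerate(inflow):
        q[t] = S / K
        S += dt * (i - q[t])
    return q

rng = np.random.default_rng(1)
inflow = rng.gamma(0.4, 8.0, size=200)                        # synthetic inflow series
observed = simulate(3.0, inflow) + rng.normal(0, 0.05, 200)   # "observed" flow, true K = 3

# Calibrate K by minimizing squared error; the lower bound keeps the
# explicit scheme stable (it requires dt < 2K)
result = minimize_scalar(lambda K: np.sum((simulate(K, inflow) - observed) ** 2),
                         bounds=(0.6, 20.0), method="bounded")
K_fit = result.x

# Nash-Sutcliffe efficiency of the calibrated model (1.0 would be a perfect fit)
sim = simulate(K_fit, inflow)
nse = 1.0 - np.sum((observed - sim) ** 2) / np.sum((observed - observed.mean()) ** 2)
print(f"fitted K = {K_fit:.2f}, NSE = {nse:.3f}")
```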
Model parameters can be obtained from laboratory and field studies, or estimated by finding the best correspondence between observed and modelled behavior, as in the sketch above. Between neighbouring catchments that are physically and hydrologically similar, model parameters tend to vary smoothly, suggesting that parameters can be transferred spatially. Model evaluation is used to determine the ability of the calibrated model to meet the needs of the modeler. A commonly used measure of hydrologic model fit is the Nash–Sutcliffe efficiency coefficient.
== See also ==
* Hydrological optimization
* Scientific modelling
* Soil and Water Assessment Tool
== References ==
== External links ==
http://drought.unl.edu/MonitoringTools/DownloadableSPIProgram.aspx
Wikipedia/Hydrological_model
Geographic data and information is defined in the ISO/TC 211 series of standards as data and information having an implicit or explicit association with a location relative to Earth (a geographic location or geographic position). It is also called geospatial data and information, georeferenced data and information, as well as geodata and geoinformation. Location information (known by the many names mentioned here) is stored in a geographic information system (GIS). There are also many different types of geodata, including vector files, raster files, geographic databases, web files, and multi-temporal data. Spatial data or spatial information is a broader class of data whose geometry is relevant but not necessarily georeferenced, as in computer-aided design (CAD); see geometric modeling.
== Fields of study ==
Geographic data and information are the subject of a number of overlapping fields of study, mainly:
* Geocomputation
* Geographic information science
* Geographic information science and technology
* Geoinformatics
* Geomatics
* Geovisualization
* Technical geography
"Geospatial technology" may refer to any of "geomatics", "geoinformatics", or "geographic information technology." The above is in addition to other related fields, such as:
* Cartography
* Geodesy
* Geography
* Geostatistics
* Photogrammetry
* Remote sensing
* Spatial data analysis
* Surveying
* Topography
== See also ==
* Geomatics engineering
* Earth observation data
* Geographic feature
* Georeferencing
* Geospatial intelligence
* Ubiquitous geographic information
== References ==
== Further reading ==
Roger A. Longhorn; Michael Blakemore (2007). Geographic Information: Value, Pricing, Production, and Consumption. CRC Press.
== External links ==
Media related to Geographic data and information at Wikimedia Commons
Wikipedia/Geographic_data_and_information
A historical geographic information system (also written as historical GIS or HGIS) is a geographic information system that may display, store and analyze data of past geographies and track changes in time. It can be regarded as a tool for historical geography.
== Techniques ==
Historical geographic information systems are built from a variety of sources and techniques. An especially prominent method is the digitization and georeferencing of historical maps. Old maps may contain valuable information about the past. By adding coordinates to such maps, they may be added as a feature layer to modern GIS data. This facilitates comparison of different map layers showing the geography at different times. The maps may be further enhanced by techniques such as rubbersheeting, which spatially warps the data to fit with more accurate modern maps. Large map collections, such as the David Rumsey Historical Map Collection, have digitized and georeferenced their maps and published them on the Internet, making them accessible for a variety of projects. Georeferencing historical microdata, such as census or parish records, allows researchers to conduct spatial analysis of historical data. Comparisons between statistical areas can require reconstructing former political boundaries and other types of borders and tracking their evolution.
== Notable historical GIS projects ==
* The Atlas of Historical County Boundaries tracks the evolution of U.S. counties.
* China Historical Geographic Information System is a project on Imperial China developed by Harvard University and Fudan University.
* Electronic Cultural Atlas Initiative (ECAI) is a clearinghouse for the exchange of metadata of historical GIS. Maintained by the University of California, Berkeley.
* Euratlas History Maps is a historical atlas of Europe from year 1 to the present day, with one map per century. The maps depict sovereign states as well as administrative divisions and dependent territories.
* Great Britain Historical GIS is a GIS-enabled database holding diverse geo-referenced maps, statistics, gazetteers and travel writing, especially for the period 1801–2001 covered by British censuses. Created and maintained by Portsmouth University.
* HistoAtlas is an open historical geographical information system that aims to build a free historical atlas of the world.
* National Historical Geographic Information System (NHGIS) is a system for displaying and analyzing Census tracts and tract changes in the United States.
* OpenHistoricalMap is an open-source historical world atlas based on OpenStreetMap technology and practices.
== Software or web services developed for Historical GIS ==
* Google Earth added a timeline feature in version 4 (2006) that enables simple temporal browsing of spatial data.
* TimeMap is a Java open-source applet (or program) for browsing spatial-temporal data and ECAI data sets. Developed by the Department of Archaeology at the University of Sydney.
== See also ==
* Digital history
* Geographic information system § Adding the dimension of time
* GIS in archaeology
* Landscape history
* Rephotography
* Spatiotemporal database
* Time geography
== References ==
== Further reading ==
* Ian N. Gregory, Don Debats, Don Lafreniere, eds.: The Routledge Companion to Spatial History. Routledge, 2018. ISBN 9781138860148
* Joachim Laczny: Friedrich III. (1440–1493) auf Reisen.
Die Erstellung des Itinerars eines spätmittelalterlichen Herrschers unter Anwendung eines Historical Geographic Information System (Historical GIS). In: Joachim Laczny, Jürgen Sarnowsky, eds.: Perzeption und Rezeption. Wahrnehmung und Deutung im Mittelalter und in der Moderne (Nova Mediaevalia. Quellen und Studien zum europäischen Mittelalter, 12). Göttingen: V&R unipress, 2014, pp. 33–65. ISBN 978-3-8471-0248-9, doi:10.14220/9783737002486.33
* Ian N. Gregory, Paul Ell: Historical GIS: Technologies, Methodologies, and Scholarship (Cambridge Studies in Historical Geography). 2008. ISBN 978-0-521-67170-5
* Anne Kelly Knowles: Past Time, Past Place: GIS for History. A collection of twelve case studies on the use of GIS in historical research and education. ESRI Press, 2002. ISBN 1-58948-032-5
* Anne Kelly Knowles, Amy Hillier, eds.: Placing History: How Maps, Spatial Data, and GIS Are Changing Historical Scholarship. 2008. ISBN 978-1-58948-013-1
* Ian N. Gregory: A Place in History. A short introduction to HGIS by the lead developers of GBHGIS. ISSN 1463-5194
* Ott, T. and Swiaczny, F.: Time-Integrative GIS. Management and Analysis of Spatio-Temporal Data. Berlin / Heidelberg / New York: Springer, 2001. ISBN 3-540-41016-3
* Feature edition on historical GIS in the journal Social Science History 24 (2000), with an introduction by Anne Kelly Knowles.
Wikipedia/Historical_geographic_information_system
Cartographic design or map design is the process of crafting the appearance of a map, applying the principles of design and knowledge of how maps are used to create a map that has both aesthetic appeal and practical function. It shares this dual goal with almost all forms of design; it also shares with other design, especially graphic design, the three skill sets of artistic talent, scientific reasoning, and technology. As a discipline, it integrates design, geography, and geographic information science. Arthur H. Robinson, considered the father of cartography as an academic research discipline in the United States, stated that a map not properly designed "will be a cartographic failure." He also claimed, when considering all aspects of cartography, that "map design is perhaps the most complex."
== History ==
From ancient times to the 20th century, cartography was a craft or trade. Most map makers served several years as an apprentice, learning the skills of the master, with little room for innovation other than adapting to changing production technology. That said, there were notable exceptions, such as the occasional introduction of a novel map projection, and the advent of thematic mapping in the 19th century, highlighted by the work of Charles Dupin and Charles Joseph Minard in France. As late as 1948, Erwin Raisz's General Cartography, the standard English textbook on the subject, reads as a set of instructions for constructing maps in keeping with tradition, with very little reflection on why it is done that way. This was despite the fact that Raisz himself was a very creative designer, developing techniques as varied as cartograms and a style of terrain depiction on physiographic maps that few have been able to replicate. Advances in cartographic production technology in the 20th century, especially the advent and widespread availability of color offset printing, then a multitude of advances spurred on by World War II, such as photolithography, gave cartographers a larger palette of design options and made it easier to innovate creatively. This coincided with the widespread expansion of higher education, during which most cartography training transitioned from an apprenticeship to a college degree (typically using Raisz's textbook in America). The new generation of cartography professionals and professors began to reflect on why some maps seemed to be better (in beauty and function) than others, and to think of ways to improve design. Perhaps chief among them was Arthur H. Robinson, whose short but seminal work The Look of Maps (1952) set the stage for the future of cartographic design, both for his early theorizing about map design and for his honest acknowledgment of what was not yet known, soon spawning dozens of PhD dissertations. His subsequent textbook, Elements of Cartography (1953), was a marked departure from the past, with a major focus on design, claiming to "present cartography as an intellectual art and science rather than as a sterile system of drafting and drawing procedures." Since the 1950s, a significant focus of cartography as an academic discipline has been the cartographic communication school of thought, which seeks to improve design standards through increased scientific understanding of how maps are perceived and used, typically drawing on cognate fields such as psychology (especially perception, Gestalt psychology, and psychophysical experimentation), human vision, and geography.
This focus began to be challenged towards the end of the 1980s by critical cartography, which drew attention to the influence of social and political forces on map design. A second major research track has been the investigation of the design opportunities offered by changing technology, especially computer graphics starting in the 1960s, geographic information systems starting in the 1970s, and the Internet starting in the 1990s. However, as much or more of the recent innovation in cartographic design has been at the hands of professional cartographers and their sharing of resources and ideas through organisations such as the International Cartographic Association and through national mapping societies such as the North American Cartographic Information Society and the British Cartographic Society.
== Map types ==
A wide variety of map types have been developed and are available for different purposes. In addition to the general principles of cartographic design, some types of visualization have their own design needs, constraints, and best practices.
Terrain/Relief/Topography. Several methods have been developed for visualizing elevation and the shape of the Earth's surface. Some techniques date back hundreds or thousands of years and are difficult to replicate digitally, such as hill profiles and hachures; others, such as shaded relief and contour lines, are much easier to produce in GIS than with manual tools. Some of these methods are designed for analytical use, such as measuring slope on contours, but most are intended to produce an intuitive visual representation of the terrain.
A choropleth map visualizes statistical data that has been aggregated into a priori districts (such as countries or counties) using area symbols based on the visual variables of color and/or pattern. Choropleth maps are by far the most popular kind of thematic map due to the widespread availability of aggregated statistical data (such as census data), but the nature of aggregate data can result in significant misinterpretation issues, such as the ecological fallacy and the modifiable areal unit problem, which can be somewhat mitigated by careful design.
A dasymetric map is a hybrid type that uses additional data sources to refine the boundaries of a choropleth map (especially by excluding uninhabited areas), thereby mitigating some of the sources of misinterpretation.
A proportional symbol map visualizes statistical data with point symbols, often circles, using the visual variable of size. The underlying data may be point features, or it may be the same aggregate data used in choropleth maps. In the latter case, the two map types are often complementary, as variables that are inappropriate to represent in one type are well suited for the other.
A cartogram purposefully distorts the size of areal features proportional to a chosen variable, such as total population, and thus may be thought of as a hybrid between choropleth and proportional symbol maps. Several automated and manual techniques have been developed to construct cartograms, each having advantages and disadvantages. Frequently, the resultant shapes are filled as a choropleth map representing a variable thought to relate in some way to the area variable.
An isarithmic map (or isometric, isopleth, or contour map) represents a continuous field by interpolating lines along which the field variable has equal value (isolines). The lines themselves and/or the intervening regions may be symbolized.
Some choropleth maps may be thought of as rough approximations of isarithmic maps, and dasymetric maps as slightly better approximations.
A continuous-tone map represents a continuous field as smoothly transitioning color (hue, value, and/or saturation), usually based on a raster grid. Some have considered this to be a special type of unclassified isarithmic map, while others consider it to be something fundamentally different.
A chorochromatic map (or area-class map) visualizes a discrete/nominal field as a set of regions of homogeneous value.
A dot distribution map (or dot density map) visualizes the density of an aggregate group as representative dots (each of which may represent a single individual or a constant number of individuals). The source data may be the actual point locations of the individuals, or choropleth-type aggregate district statistics.
A flow map focuses on lines of movement. A wide variety of flow maps exist, depending on whether flow volume is represented (usually using visual variables such as stroke weight or color value), and whether the route of flow is shown accurately (such as a navigation route on a road map) or schematically (such as a transit map or airline route map).
Although these are called separate "maps," they should be thought of as single map layers, which may be combined with other thematic or feature layers in a single map composition. A bivariate map uses one or more of the methods above to represent two variables simultaneously; three or more variables produce a multivariate map.
== Design process ==
As map production and reproduction technology has advanced, the process of designing and producing maps has changed considerably. Most notably, GIS and graphics software not only makes it easier and faster to create a map, but it facilitates a non-linear editing process that is more flexible than in the days of manual cartography. There is still a general procedure that cartographers generally follow:
Planning: The iterative nature of modern cartography makes this step somewhat less involved than before, but it is still crucial to have some form of plan. Typically, this involves answering several questions:
What is the purpose of the map? Maps serve a wide variety of purposes; they may be descriptive (showing the accurate location of geographic features to be used in a variety of ways, like a street map), exploratory (showing the distribution of phenomena and their properties, to look for underlying patterns and processes, like many thematic maps), explanatory (educating the audience about a specific topic), or even rhetorical (trying to convince the audience to believe or do something).
Who is the audience? Maps will be more useful if they cater to the intended audience. This audience could range from the cartographer herself (desiring to learn about a topic by mapping it), to focused individuals or groups, to the general public. Several characteristics of the audience can aid this process, if they can be determined, such as: their level of knowledge about the subject matter and the region being covered; their skill in map reading and understanding of geographic principles (e.g., do they know what 1:100,000 means?); and their needs, motivations, and biases.
Is a map the best solution? There are times when a map could be made, but a chart, photograph, text, or other tool may better serve the purpose.
What datasets are needed?
The typical map will require data to serve several roles, including information about the primary purpose, as well as supporting background information.
What medium should be used? Different mapping media, such as posters, brochures, folded maps, page maps, screen displays, and web maps, have advantages and disadvantages for different purposes, audiences, and usage contexts.
Data Collection: In the era of geographic information systems, it seems as though vast amounts of data are available for every conceivable topic, but they must be found and obtained. Frequently, available datasets are not perfect matches for the needs of the project at hand, and must be augmented or edited. Also, it is still common for there to be no available data on the specific topic, requiring the cartographer to create them, or derive them from existing data using GIS tools.
Design and Implementation: This step involves making decisions about all of the aspects of map design, as listed below, and implementing them using computer software. In the manual drafting era, this was a very linear process of careful decision making, in which some aspects needed to be implemented before others (often, projection first). However, current GIS and graphics software enables interactive editing of all of these aspects interchangeably, leading to a non-linear, iterative process of experimentation, evaluation, and refinement.
Production and Distribution: The last step is to produce the map in the chosen medium and distribute it to the audience. This could be as simple as using a desktop printer, or sending the map to a press, or developing an interactive Web mapping site.
=== The cartographic process ===
Cartographic design is one part of a larger process in which maps play a central role. This cartographic process begins with a real or imagined environment or setting. As map makers gather data on the subject they are mapping (usually through technology and/or remote sensing), they begin to recognize and detect patterns that can be used to classify and arrange the data for map creation (i.e., they think about the data and its patterns as well as how to best visualize them on a map). After this, the cartographer compiles the data and experiments with the many different methods of map design and production (including generalization, symbolization, and other production methods) in an attempt to encode and portray the data on a map that will allow the map user to decode and interpret the map in the way that matches the intended purpose of the map maker. Next, the user of the map reads and analyzes the map by recognizing and interpreting the symbols and patterns that are found on the map. This leads the user to take action and draw conclusions based on the information that they find on the map. In this way, maps help shape how we view the world based on the spatial perspectives and viewpoints that they help create in our mind.
=== Goals ===
While maps serve a variety of purposes and come in a variety of styles, most designs share common goals. Some of the most commonly stated include:
Accuracy, the degree to which the information on the map corresponds to the nature of the real world. Traditionally, this was the primary determinant of quality cartography. It is now accepted, due largely to studies in critical cartography, that no dataset or map is a perfect reproduction of reality, and that the subjective biases and motivations of the cartographer are virtually impossible to circumvent.
That said, maps can still be crafted to be as accurate as possible, to be honest about their shortcomings, and to leverage their subjectivity.
Functionality, the usefulness of the map for achieving its purpose. During much of the latter 20th century, this was the primary goal of academic cartography, especially the cartographic communication school of thought: to determine how to make the most efficient maps as conduits of information.
Clarity, the degree to which the map makes its purpose obvious and its information easy to access. Clarity can be achieved by removing all but the most important information, but this comes at the expense of other goals.
Richness, the volume and diversity of information the reader can glean from the map. Even maps with a narrowly defined purpose often require the reader to see patterns in large amounts of data.
Aesthetic appeal, a positive emotional reaction to the overall appearance of the map. Maps may be appreciated as "beautiful," but other positive reactions include "interesting," "engaging," "convincing," and "motivating." Aesthetic reactions can be negative as well, such as "ugly," "cluttered," "confusing," "complicated," "annoying," or "off-putting."
These goals often seem to be in conflict, and it may be tempting to prioritize one over the others. However, quality design in cartography, as in any other design field, is about finding creative and innovative solutions to achieve multiple goals. According to Edward Tufte: What is to be sought in designs for the display of information is the clear portrayal of complexity. Not the complication of the simple; rather the task of the designer is to give visual access to the subtle and the difficult--that is, the revelation of the complex. In fact, good design can produce synergistic results. Even aesthetics can have practical value: potential map users are more likely to pick up, and more likely to spend time with, a beautiful map than one that is difficult to look at. In turn, the practical value of maps has gained aesthetic appeal, favoring those that exude a feeling of being "professional," "authoritative," "well-crafted," "clear," or "informative." In 1942, cartographer John K. Wright said: An ugly map, with crude colors, careless line work, and disagreeable, poorly arranged lettering may be intrinsically as accurate as a beautiful map, but it is less likely to inspire confidence. Rudolf Arnheim, an art theorist, said this about the relationship between maps and aesthetics in 1976: The aesthetic or artistic qualities of maps are sometimes thought to be simply matters of so-called good taste, of harmonious color schemes and sensory appeal. In my opinion, those are secondary concerns. The principal task of the artist, be he a painter or a map designer, consists of translating the relevant aspects of the message into the expressive qualities of the medium in such a way that the information comes across as a direct impact of perceptual forces. This distinguishes the mere transmission of facts from the arousal of meaningful experience. More recently, cartographers have recognised the central role of aesthetics in cartographic design and called for greater focus on how this role functions over time and space.
For example, in 2005, Dr Alex Kent (former President of the British Cartographic Society) recommended: It will thus be more useful to cartographers and the development of cartography in general to undertake further research towards understanding the role of aesthetics in cartography than to pursue universal principles. Some possible topics for investigation include: 1. A history of the development of aesthetics in cartography; 2. An exploration of geographical variations in cartographic aesthetics; and 3. A critical examination of the factors influencing aesthetic decisions in contemporary mapmaking.
== Map purpose and selection of information ==
Robinson codified the mapmaker's understanding that a map must be designed foremost with consideration to the audience and its needs, stating that from the very beginning of mapmaking, maps "have been made for some particular purpose or set of purposes". The intent of the map should be illustrated in a manner in which the percipient (the map reader) acknowledges its purpose in a timely fashion. The principle of figure-ground refers to this notion of engaging the user through a clear presentation, leaving no confusion concerning the purpose of the map. This will enhance the user's experience and keep their attention. If the user is unable to identify what is being demonstrated in a reasonable fashion, the map may be regarded as useless. Making a meaningful map is the ultimate goal. Alan MacEachren explains that a well-designed map "is convincing because it implies authenticity". An interesting map will no doubt engage a reader. Information richness, or a multivariate map, shows relationships within the map. Showing several variables allows comparison, which adds to the meaningfulness of the map. This also generates hypotheses and stimulates ideas and perhaps further research. In order to convey the message of the map, the creator must design it in a manner which will aid the reader in the overall understanding of its purpose. The title of a map may provide the "needed link" necessary for communicating that message, but the overall design of the map fosters the manner in which the reader interprets it. In the 21st century it is possible to find a map of virtually anything, from the inner workings of the human body to the virtual worlds of cyberspace. Therefore, there is now a huge variety of styles and types of map. For example, one area that has evolved a specific and recognisable variation is the maps used by public transport organisations to guide passengers, namely urban rail and metro maps, many of which are loosely based on 45-degree angles as originally perfected by Harry Beck and George Dow.
== Aspects of design ==
Unlike cognate disciplines such as graphic design, cartography is constrained by the fact that geographic phenomena are where and what they are. However, within that framework the cartographer has a great deal of control over many aspects of the map.
=== Cartographic data and generalization ===
The widespread availability of data from geographic information systems, especially free data such as OpenStreetMap, has greatly shortened the time and cost of creating most maps. However, this part of the design process is still not trivial. Existing GIS data, often created for management or research purposes, is not always in a form that is most suited to a particular map purpose, and data frequently need to be augmented, edited, or updated to be useful.
Some sources, especially in Europe, refer to the former as a Digital Landscape Model, and to spatial data that are fine-tuned for map design as a Digital Cartographic Model. A significant part of this transformation is generalization, a set of procedures for adjusting the amount of detail (geometry and attributes) in datasets to be appropriate for a given map. All maps portray a small, strategic sample of the infinite amount of potential information in the real world; the strategy for that sample is largely driven by the scale, purpose, and audience of the map. The cartographer is thus constantly making judgements about what to include, what to leave out, and what to show in a slightly incorrect place. Most often, generalization starts with detailed data created for a larger scale and strategically removes information deemed unnecessary for a smaller-scale map. This issue assumes more importance as the scale of the map gets smaller (i.e., the map shows a larger area), because the information shown on the map takes up more space on the ground. For example, a 2 mm thick highway symbol on a map at a scale of 1:1,000,000 occupies a space 2 km wide, leaving no room for roadside features. In the late 1980s, the Ordnance Survey's first digital maps moved the absolute positions of major roads by as much as hundreds of meters from their true locations at scales of 1:250,000 and 1:625,000 (the generalization technique of displacement), because of the overriding need to annotate the features.
=== Projections ===
Because the Earth is (nearly) spherical, any planar representation (a map) requires it to be flattened in some way, known as a projection. Most map projections are implemented using mathematical formulas and computer algorithms based on geographic coordinates (latitude, longitude). All projections generate distortions such that shapes and areas cannot both be conserved simultaneously, and distances can never all be preserved. The mapmaker must choose a suitable map projection according to the space to be mapped and the purpose of the map; this decision becomes increasingly important as the scope of the map increases: while a variety of projections would be indistinguishable on a city street map, there are dozens of drastically different ways of projecting the entire world, with extreme variations in the type, degree, and location of distortion.
=== Interruptions and arrangements ===
World maps are often designed by cutting the globe into smaller pieces, using a different projection for each piece, and then arranging all of those small maps into a single map on one piece of paper, with discontinuities between them. Perhaps the earliest types of such interrupted arrangements are the various maps composed of two disks showing two hemispheres of Earth, one disk centered on some point selected by the cartographer and the other centered on its antipode. More recently, cartographers have experimented with a wide variety of interrupted arrangements of projections, including homolosine and polyhedral maps.
=== Symbology ===
Cartographic symbology encodes information on the map in ways intended to convey information to the map reader efficiently, taking into consideration the limited space on the map, models of human understanding through visual means, and the likely cultural background and education of the map reader. Symbology may be implicit, using universal elements of design, or may be more specific to cartography or even to the particular map.
National topographic map series, for example, adopt a standardised symbology, which varies from country to country. Jacques Bertin, in Sémiologie Graphique (1967), introduced a system of codifying graphical elements (including map symbols) that has been a part of the canon of cartographic knowledge ever since. He analyzed graphical objects in terms of three aspects (here using current terminology):
Dimension: the basic type of geometric shape used to represent a geographic phenomenon, commonly points (marker symbols), lines (stroke symbols), or areas (fill symbols), as well as fields.
Level of measurement: the basic type of property being visualized, generally using the classification of Stanley Smith Stevens (nominal, ordinal, interval, ratio), or some extension thereof.
Visual variable: the graphical components of a symbol, including shape, size, color, orientation, pattern, transparency, and so on.
Thus, a map symbol consists of a number of visual variables, graphically representing the location and spatial form of a geographic phenomenon, as well as zero or more of its properties. For example, an icon of crossed pickaxes might represent the point location of a facility, with shape being used to indicate that the facility type is "mine" (a nominal property). Such a symbol would be intuitively understood by many users without any explanation. On a choropleth map of median income, a dark green fill might represent an area location of a county, with hue and value being used to represent that the income is US$50,000 (a ratio property). This is an example of an ad hoc symbol with no intrinsic meaning, requiring a legend for users to discover the intended meaning.
=== Labeling and typography ===
Text serves a variety of purposes on maps. Most directly, it identifies features on the map by name; in addition, it helps to classify features (as in "Jones Park"); it can explain information; it can help to locate features, in some cases on its own without a geometric map symbol (especially natural features); it plays a role in the gestalt of the map, especially the visual hierarchy; and it contributes to the aesthetic aspects of the map, including its "look and feel" and its attractiveness. While the cartographer has a great deal of freedom in choosing the style and size of type to accomplish these purposes, two basic goals are seen as crucial:
Legibility, the ease with which map users can read a particular piece of text. Map labels introduce unique challenges to legibility, due to their tendency to be small, unfamiliar, irregularly spaced, and placed on top of map symbols.
Association, the ease with which map users can recognize which feature a particular piece of text is labeling. This can be especially challenging on general-purpose maps containing a large number of varied features and their labels.
Most of the elements of labeling design are intended to achieve these two goals, including: the choice of typefaces, type style, size, color, and other visual variables; halos, masks, leader lines, and other additional symbols; decisions about what to label and what not to label; label text content; and label placement. While many of these decisions are specific to the particular map, functional label placement tends to follow a number of rules developed through cartographic research, which has led to algorithms that place labels automatically to a reasonable degree of quality.
==== Placenames ====
One challenge for map labeling is dealing with varying preferences for place names.
Although maps are often made in one specific language, place names often differ between languages. So a map made in English may use the name Germany for that country, while a German map would use Deutschland and a French map Allemagne. A non-native term for a place is referred to as an exonym. Sometimes a name may be disputed, such as Myanmar vs. Burma. Further difficulties arise when transliteration or transcription between writing systems is required. Some well-known places have well-established names in other languages and writing systems, such as Russia or Rußland for Росси́я, but in other cases a system of transliteration or transcription is required. Sometimes multiple transliteration systems exist; for example, the Yemeni city of المخا is written variously in English as Mocha, Al Mukha, al-Makhā, al-Makha, Mocca and Moka. Some transliteration systems produce such different place names as to cause confusion, such as the transition of Chinese–English transliteration from Wade–Giles (Peking, Kwangchow) to Pinyin (Beijing, Guangzhou). === Composition === The term map composition is sometimes used to refer to the composition of the symbols within the map itself, and sometimes to the composition of the map and other elements on the page. Some of the same principles apply to both processes, while others are unique to each. In the former sense of the symbols on the map, as all of the symbols and thematic layers on the map are brought together, their interactions have major effects on map reading. A number of composition principles have been studied in cartography. While some of these ideas were posited by Arthur H. Robinson in The Look of Maps (1952), Borden Dent was likely the first to approach them in a systematic way in 1972, firmly within the cartographic communication school of thought. Dent's model drew heavily on psychology, especially Gestalt psychology and the study of perception, to evaluate what made some maps difficult to read as a whole even when individual symbols were designed well, creating a model that included most of the list below. Later, artistic composition principles were adopted from graphic design, many of which are similar, having come from similar sources. They all share the same goal: to combine all of the individual symbols into a single whole that achieves the goals above. Contrast is the degree of visual difference between graphic elements (e.g., map symbols). Robinson saw contrast as the fundamental principle of composition, supporting everything else. As suggested by Robinson, and further developed by Jacques Bertin, contrast is created by manipulating the visual variables of map symbols, such as size, shape, and color. Figure-ground is the ease with which each individual symbol or feature (the figure) can be mentally isolated from the rest of the map (the ground). The rules for establishing figure-ground are largely drawn from the gestalt principle of Prägnanz. Visual hierarchy is the apparent order of items, from those that look most important (i.e., attract the most attention) to those that look least important. Typically, the intent is for the visual hierarchy to match the intellectual hierarchy of what is intended to be more or less important. Bertin suggested that some of the visual variables, especially size and value, naturally contributed to visual hierarchy (which he termed dissociative), while others had differences that were more easily ignored.
Grouping (Dent) or selectivity (Bertin) is the ease with which a reader can isolate all of the symbols of a particular appearance, while ignoring the rest of the map, allowing the reader to identify patterns in that type of feature (e.g., "where are all the blue dots?"). In Bertin's model, size, value, and hue were particularly selective, while others, such as shape, require significant contrast to be useful. Harmony is how well all of the individual elements (map symbols) "look good" together. This generally follows from the above principles, as well as the careful selection of harmonious colors, textures, and typefaces. === Layout === A typical map, whether on paper or on a web page, consists of not only the map image, but also other elements that support the map. A title tells the reader what the map is about, including the purpose or theme, and perhaps the region covered. A legend or key explains the meaning of the symbols on the map. A neatline may frame the entire map image, although many maps use negative space to set the map apart. A compass rose or north arrow provides orientation. Inset maps may serve several purposes, such as showing the context of the main map in a larger area, showing more detail for a subset of the main map, showing a separated but related area, or showing related themes for the same region. A bar scale or other indication of scale translates between map measurements and real distances. Illustrations may be included to help explain the map subject or add aesthetic appeal. Explanatory text may discuss the subject further. Metadata declares the sources, date, authorship, projection, or other information about the construction of the map. Composing and arranging all of the elements on the page involves just as much design skill and knowledge of how readers will use the map as designing the map image itself. Page composition serves several purposes, including directing the reader's attention, establishing a particular aesthetic feel, clearly stating the purpose of the map, and making the map easier to understand and use. Therefore, page layout follows many of the same principles as the composition of the map image, including figure-ground and visual hierarchy, as well as aesthetic principles adopted from graphic design, such as balance and the use of white space. In fact, this aspect of cartographic design has more in common with graphic design than any other part of the craft. === Reproduction and distribution === At one time, the process of getting a map printed was a major part of the time and effort spent in cartography. While less of a concern with modern technology, it is not insignificant. Professional cartographers are asked to produce maps that will be distributed by a variety of media, and understanding the various reproduction and distribution technologies helps them tailor a design to work best for the intended medium. These media include inkjet printing, laser printing, offset printing (including prepress preparation), animated mapping, and web mapping. == See also == Map coloring Geographic information systems == References ==
Wikipedia/Cartographic_design
Map algebra is an algebra for manipulating geographic data, primarily fields. Developed by Dr. Dana Tomlin and others in the late 1970s, it is a set of primitive operations in a geographic information system (GIS) which combine one or more raster layers ("maps") of similar dimensions to produce a new raster layer (map) using mathematical or other operations such as addition and subtraction. == History == Prior to the advent of GIS, the overlay principle had developed as a method of literally superimposing different thematic maps (typically an isarithmic map or a chorochromatic map) drawn on transparent film (e.g., cellulose acetate) to see the interactions and find locations with specific combinations of characteristics. The technique was largely developed by landscape architects and city planners, starting with Warren Manning and further refined and popularized by Jaqueline Tyrwhitt, Ian McHarg and others during the 1950s and 1960s. In the mid-1970s, landscape architecture student C. Dana Tomlin developed some of the first tools for overlay analysis in raster as part of the IMGRID project at the Harvard Laboratory for Computer Graphics and Spatial Analysis, which he eventually transformed into the Map Analysis Package (MAP), a popular raster GIS during the 1980s. While Tomlin was a graduate student at Yale University, he and Joseph K. Berry re-conceptualized these tools as a mathematical model, which by 1983 they were calling "map algebra." This effort was part of Tomlin's development of cartographic modeling, a technique for using these raster operations to implement the manual overlay procedures of McHarg. Although the basic operations were defined in his 1983 PhD dissertation, Tomlin had refined the principles of map algebra and cartographic modeling into their current form by 1990. Although the term cartographic modeling has not gained as wide an acceptance as synonyms such as suitability analysis, suitability modeling and multi-criteria decision making, "map algebra" became a core part of GIS. Because Tomlin released the source code to MAP, its algorithms were implemented (with varying degrees of modification) as the analysis toolkit of almost every raster GIS software package starting in the 1980s, including GRASS, IDRISI (now TerrSet), and the GRID module of ARC/INFO (later incorporated into the Spatial Analyst module of ArcGIS). This widespread implementation further led to the development of many extensions to map algebra, following efforts to extend the raster data model, such as adding new functionality for analyzing spatiotemporal and three-dimensional grids. == Map algebra operations == Like other algebraic structures, map algebra consists of a set of objects (the domain) and a set of operations that manipulate those objects with closure (i.e., the result of an operation is itself in the domain, not something completely different). In this case, the domain is the set of all possible "maps," which are generally implemented as raster grids. A raster grid is a two-dimensional array of cells (Tomlin called them locations or points), each cell occupying a square area of geographic space and being coded with a value representing the measured property of a given geographic phenomenon (usually a field) at that location.
Each operation 1) takes one or more raster grids as inputs, 2) creates an output grid with matching cell geometry, 3) scans through each cell of the input grid (or spatially matching cells of multiple inputs), 4) performs the operation on the cell value(s), and 5) writes the result to the corresponding cell in the output grid. Originally, the input and output grids were required to have identical cell geometry (i.e., covering the same spatial extent with the same cell arrangement, so that each cell corresponds between inputs and outputs), but many modern GIS implementations do not require this, performing interpolation as needed to derive values at corresponding locations. Tomlin classified the many possible map algebra operations into three types, to which some systems add a fourth. Local operators: operations that operate on one cell location at a time during the scan phase. A simple example would be an arithmetic operator such as addition: to compute MAP3 = MAP1 + MAP2, the software scans through each matching cell of the input grids, adds the numeric values in each using normal arithmetic, and puts the result in the matching cell of the output grid. Due to this decomposition of operations on maps into operations on individual cell values, any operation that can be performed on numbers (e.g., arithmetic, statistics, trigonometry, logic) can be performed in map algebra. For example, a LocalMean operator would take in two or more grids and compute the arithmetic mean of each set of spatially corresponding cells. In addition, a range of GIS-specific operations has been defined, such as reclassifying a large range of values to a smaller range of values (e.g., 45 land cover categories to 3 levels of habitat suitability), which dates to the original IMGRID implementation of 1975. A common use of local functions is for implementing mathematical models, such as an index, that are designed to compute a resultant value at a location from a set of input variables. Focal operators: functions that operate on a geometric neighborhood around each cell. A common example is calculating slope from a grid of elevation values. Looking at a single cell, with a single elevation, it is impossible to judge a trend such as slope. Thus, the slope of each cell is computed from the value of the corresponding cell in the input elevation grid and the values of its immediate neighbors. Other functions allow for the size and shape of the neighborhood (e.g. a circle or square of arbitrary size) to be specified. For example, a FocalMean operator could be used to compute the mean value of all the cells within 1000 meters (a circle) of each cell. Zonal operators: functions that operate on regions of identical value. These are commonly used with discrete fields (also known as categorical coverages), where space is partitioned into regions of homogeneous nominal or categorical value of a property such as land cover, land use, soil type, or surface geologic formation. Unlike local and focal operators, zonal operators do not operate on each cell individually; instead, all of the cells of a given value are taken as input to a single computation, with identical output being written to all of the corresponding cells. For example, a ZonalMean operator would take in two layers, one with values representing the regions (e.g., dominant vegetation species) and another of a related quantitative property (e.g., percent canopy cover). For each unique value found in the former grid, the software collects all of the corresponding cells in the latter grid, computes the arithmetic mean, and writes this value to all of the corresponding cells in the output grid. A minimal sketch of these three operator types is shown below.
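The following is an illustrative sketch (invented for this article, not taken from any particular GIS) of local, focal, and zonal operators in Python, using NumPy arrays to stand in for raster grids and SciPy for the neighborhood operation; the grid values and zone layout are placeholders:
import numpy as np
from scipy import ndimage  # used here for the focal (neighborhood) operation
# Two input grids with identical cell geometry; values are arbitrary examples.
map1 = np.array([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0],
                 [7.0, 8.0, 9.0]])
map2 = np.ones_like(map1)
# Local operator: cell-by-cell arithmetic, e.g. MAP3 = MAP1 + MAP2.
map3 = map1 + map2
# Focal operator: a FocalMean over the 3x3 neighborhood of each cell.
focal_mean = ndimage.uniform_filter(map1, size=3, mode="nearest")
# Zonal operator: a ZonalMean of map1 within regions defined by a zone grid;
# the same mean is written to every cell of a zone.
zones = np.array([[1, 1, 2],
                  [1, 2, 2],
                  [3, 3, 3]])
zonal_mean = np.zeros_like(map1)
for z in np.unique(zones):
    zonal_mean[zones == z] = map1[zones == z].mean()
Note that each result is itself a grid with the same cell geometry as the inputs, which is the closure property described above.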
Global operators: functions that summarize the entire grid. These were not included in Tomlin's work, and are not technically part of map algebra, because the result of the operation is not a raster grid (i.e., the algebra is not closed), but a single value or summary table. However, they are useful to include in the general toolkit of operations. For example, a GlobalMean operator would compute the arithmetic mean of all of the cells in the input grid and return a single mean value. Some also consider operators that generate a new grid by evaluating patterns across the entire input grid as global; these could be considered part of the algebra. An example is the set of operators for evaluating cost distance. == Implementation == Several GIS software packages implement map algebra concepts, including PostGIS, ERDAS Imagine, QGIS, GRASS GIS, TerrSet, PCRaster, and ArcGIS. In Tomlin's original formulation of cartographic modeling in the Map Analysis Package, he designed a simple procedural language around the algebra operators to allow them to be combined into a complete procedure with additional structures such as conditional branching and looping. However, in most modern implementations, map algebra operations are typically one component of a general procedural processing system, such as a visual modeling tool or a scripting language. For example, ArcGIS implements Map Algebra in both its visual ModelBuilder tool and in Python. Here, Python's operator overloading capability allows simple operators and functions to be used for raster grids; for example, rasters can be multiplied using the same "*" arithmetic operator used for multiplying numbers. A modern implementation of map algebra embeds its expressions into SQL, such as the ST_MapAlgebra() function of PostGIS. Here are some examples in MapBasic, the scripting language for MapInfo Professional:
# demo for Brown's Pond data set
# Given layers:
#   altitude
#   development – 0: vacant, 1: major, 2: minor, 3: houses, 4: buildings, 5: cement
#   water – 0: dry, 2: wet, 3: pond
# calculate the slope at each location based on altitude
slope = IncrementalGradient of altitude
# identify the areas that are too steep
toosteep = LocalRating of slope where 1 replaces 4 5 6 where VOID replaces ...
# create layer unifying water and development
occupied = LocalRating of development where water replaces VOID
notbad = LocalRating of occupied and toosteep where 1 replaces VOID and VOID where VOID replaces ... and ...
roads = LocalRating of development where 1 replaces 1 2 where VOID replaces ...
nearroad = FocalNeighbor of roads at 0 ... 10
aspect = IncrementalAspect of altitude
southface = LocalRating of aspect where 1 replaces 135 ... 225 where VOID replaces ...
sites = LocalMinimum of nearroad and southface and notbad
sitenums = FocalInsularity of sites at 0 ... 1
sitesize = ZonalSum of 1 within sitenums
bestsites = LocalRating of sitesize where sitesize replaces 100 ... 300 where VOID replaces ...
== See also == Mathematical morphology Field (geography) == External links == osGeo-RFC-39 about Layer Algebra == References ==
Wikipedia/Map_algebra
The Advanced Very-High-Resolution Radiometer (AVHRR) instrument is a space-borne sensor that measures the reflectance of the Earth in five spectral bands that are relatively wide by today's standards. AVHRR instruments are or have been carried by the National Oceanic and Atmospheric Administration (NOAA) family of polar orbiting platforms (POES) and European MetOp satellites. The instrument scans several channels; two are centered on the red (0.6 micrometres) and near-infrared (0.9 micrometres) regions, a third one is located around 3.5 micrometres, and another two measure the thermal radiation emitted by the planet, around 11 and 12 micrometres. The first AVHRR instrument was a four-channel radiometer. The final version, AVHRR/3, first carried on NOAA-15 launched in May 1998, acquires data in six channels. The AVHRR has been succeeded by the Visible Infrared Imaging Radiometer Suite, carried on the Joint Polar Satellite System spacecraft. == Operation == NOAA has at least two polar-orbiting meteorological satellites in orbit at all times, with one satellite crossing the equator in the early morning and early evening and the other crossing the equator in the afternoon and late evening. The primary sensor on board both satellites is the AVHRR instrument. Morning-satellite data are most commonly used for land studies, while data from both satellites are used for atmosphere and ocean studies. Together they provide twice-daily global coverage, and ensure that data for any region of the earth are no more than six hours old. The swath width, the width of the area on the Earth's surface that the satellite can "see", is approximately 2,500 kilometers (~1,540 mi). The satellites orbit at altitudes of 833 or 870 kilometers (±19 kilometers, 516–541 miles) above the surface of the Earth. The highest ground resolution that can be obtained from the current AVHRR instruments is 1.1 kilometers (0.68 mi) per pixel at the nadir. Data from AVHRR (in its three evolutions) has been collected continuously since 1981. The primary purpose of these instruments is to monitor clouds and to measure the thermal emission of the Earth. These sensors have proven useful for a number of other applications, however, including the surveillance of land surfaces, ocean state, aerosols, etc. AVHRR data are particularly relevant to study climate change and environmental degradation because of the comparatively long records of data already accumulated (over 20 years). The main difficulty associated with these investigations is to properly deal with the many limitations of these instruments, especially in the early period (sensor calibration, orbital drift, limited spectral and directional sampling, etc.). The AVHRR instrument also flies on the MetOp series of satellites. The three MetOp satellites are part of the EUMETSAT Polar System (EPS) run by EUMETSAT, which will be succeeded by MetOp-SG. == Calibration and validation == Remote sensing applications of the AVHRR sensor are based on validation (matchup) techniques of co-located ground observations and satellite observations. Alternatively, radiative transfer calculations are performed. There are specialized codes which allow simulation of the AVHRR observable brightness temperatures and radiances in near infrared and infrared channels. === Pre-launch calibration of visible channels (Ch. 1 and 2) === Prior to launch, the visible channels (Ch. 1 and 2) of AVHRR sensors are calibrated by the instrument manufacturer, ITT Aerospace/Communications Division, and are traceable to NIST standards.
The calibration relationship between the electronic digital count response (C) of the sensor and the albedo (A) of the calibration target is established by linear regression: A = S * C + I where S and I are the slope and intercept (respectively) of the calibration regression [NOAA KLM]. However, the highly accurate prelaunch calibration will degrade during launch and transit to orbit as well as during the operational life of the instrument [Molling et al., 2010]. Halthore et al. [2008] note that sensor degradation is mainly caused by thermal cycling, outgassing in the filters, damage from higher energy radiation (such as ultraviolet (UV)), and condensation of outgassed gases onto sensitive surfaces. One major design constraint of AVHRR instruments is that they lack the capability to perform accurate, onboard calibrations once on orbit [NOAA KLM]. Thus, post-launch on-orbit calibration activities (known as vicarious calibration methods) must be performed to update and ensure the accuracy of retrieved radiances and the subsequent products derived from these values [Xiong et al., 2010]. Numerous studies have been performed to update the calibration coefficients and provide more accurate retrievals than the pre-launch calibration. === On-orbit individual/few sensor absolute calibration === ==== Rao and Chen ==== Rao and Chen [1995] use the Libyan Desert as a radiometrically stable calibration target to derive relative annual degradation rates for Channels 1 and 2 for the AVHRR sensors on board the NOAA-7, -9, and -11 satellites. Additionally, with an aircraft field campaign over the White Sands desert site in New Mexico, USA [see Smith et al., 1988], an absolute calibration for NOAA-9 was transferred from a well calibrated spectrometer on board a U-2 aircraft flying at an altitude of ~18 km in a congruent path with the NOAA-9 satellite above. After being corrected for the relative degradation, the absolute calibration of NOAA-9 is then passed onto NOAA-7 and -11 via a linear relationship using Libyan Desert observations that are restricted to similar viewing geometries as well as dates in the same calendar month [Rao and Chen, 1995], and any sensor degradation is corrected for by adjusting the slope (as a function of days after launch) between the albedo and the digital count signal recorded [Rao and Chen, 1999]. ==== Loeb ==== In another similar method using surface targets, Loeb [1997] uses spatiotemporally uniform ice surfaces in Greenland and Antarctica to produce second-order polynomial reflectance calibration curves as a function of solar zenith angle; calibrated NOAA-9 near-nadir reflectances are used to generate the curves, which can then derive the calibrations for other AVHRRs in orbit (e.g. NOAA-11, -12, and -14). It was found that the ratio of calibration coefficients derived by Loeb [1997] and Rao and Chen [1995] is independent of solar zenith angle, thus implying that the NOAA-9-derived calibration curves provide an accurate relation between the solar zenith angle and observed reflectance over Greenland and Antarctica. ==== Iwabuchi ==== Iwabuchi [2003] employed a method to calibrate NOAA-11 and -14 that uses clear-sky ocean and stratus cloud reflectance observations in a region of the NW Pacific Ocean and radiative transfer calculations of a theoretical molecular atmosphere to calibrate AVHRR Ch. 1. Using a month of clear-sky observations over the ocean, an initial minimum estimate of the calibration slope is made. An iterative method is then used to achieve the optimal slope values for Ch.
1 with slope corrections adjusting for uncertainties in ocean reflectance, water vapor, ozone, and noise. Ch. 2 is then calibrated under the condition that the stratus cloud optical thickness in both channels must be the same (spectrally uniform in the visible) if their calibrations are correct [Iwabuchi, 2003]. ==== Vermote and Saleous ==== A more contemporary calibration method for AVHRR uses the on-orbit calibration capabilities of the VIS/IR channels of MODIS. Vermote and Saleous [2006] present a methodology that uses MODIS to characterize the BRDF of an invariant desert site. Due to differences in the spectral bands used for the instruments' channels, spectral translation equations were derived to accurately transfer the calibration accounting for these differences. Finally, the ratio of the AVHRR observation to that modeled from the MODIS observation is used to determine the sensor degradation and adjust the calibration accordingly. ==== Others ==== Methods for extending the calibration and record continuity also make use of similar calibration activities [Heidinger et al., 2010]. === Long-term calibration and record continuity === In the discussion thus far, methods have been presented that calibrate individual sensors or are limited to a few AVHRR sensors. However, one major challenge from a climate point of view is the need for record continuity spanning 30+ years of three generations of AVHRR instruments as well as more contemporary sensors such as MODIS and VIIRS. Several artifacts may exist in the nominal AVHRR calibration, and even in updated calibrations, that cause a discontinuity in the long-term radiance record constructed from multiple satellites [Cao et al., 2008]. ==== International Satellite Cloud Climatology Project (ISCCP) method ==== Brest and Rossow [1992], and the updated methodology [Brest et al., 1997], put forth a robust method for calibration monitoring of individual sensors and normalization of all sensors to a common standard. The International Satellite Cloud Climatology Project (ISCCP) method begins with the detection of clouds and corrections for ozone, Rayleigh scatter, and seasonal variations in irradiance to produce surface reflectances. Monthly histograms of surface reflectance are then produced for various surface types, and various histogram limits are applied as a filter to the original sensor observations and ultimately aggregated to produce a global, cloud-free surface reflectance. After filtering, the global maps are segregated into a monthly mean SURFACE map, two bi-weekly SURFACE maps, and a mean TOTAL reflectance map. The monthly mean SURFACE reflectance maps are used to detect long-term trends in calibration. The bi-weekly SURFACE maps are compared to each other and are used to detect short-term changes in calibration. Finally, the TOTAL maps are used to detect and assess bias in the processing methodology. The target histograms are also examined, as changes in mode reflectances and in population are likely the result of changes in calibration. ==== Long-term record continuity ==== Long-term record continuity is achieved by normalization between two sensors. First, observations from the period when the two sensors' operational lifetimes overlap are processed. Next, the two global SURFACE maps are compared via a scatter plot. Additionally, observations are corrected for changes in solar zenith angle caused by orbital drift.
Ultimately, a line is fit to determine the overall long-term drift in calibration, and, after a sensor is corrected for drift, normalization is performed on observations that occur during the same operational period [Brest et al., 1997]. ==== Calibration using the moderate-resolution imaging spectroradiometer ==== Another recent method for the absolute calibration of the AVHRR record makes use of the contemporary MODIS sensor onboard NASA's Terra and Aqua satellites. The MODIS instrument has high calibration accuracy and can track its own radiometric changes due to the inclusion of an onboard calibration system for the VIS/NIR spectral region [MCST]. The following method utilizes the high accuracy of MODIS to absolutely calibrate AVHRRs via simultaneous nadir overpasses (SNOs) of both MODIS/AVHRR and AVHRR/AVHRR satellite pairs as well as MODIS-characterized surface reflectances for a Libyan Desert target and Dome C in Antarctica [Heidinger et al., 2010]. Ultimately, each individual calibration event available (MODIS/AVHRR SNO, Dome C, Libyan Desert, or AVHRR/AVHRR SNO) is used to provide a calibration slope time series for a given AVHRR sensor. Heidinger et al. [2010] use a second-order polynomial from a least-squares fit to determine the time series. The first step involves using a radiative transfer model that converts observed MODIS scenes into those that a perfectly calibrated AVHRR would see. For MODIS/AVHRR SNO occurrences, it was determined that the ratio of AVHRR to MODIS radiances in both Ch. 1 and Ch. 2 is modeled well by a second-order polynomial of the ratio of MODIS reflectances in channels 17 and 18. Channels 17 and 18 are located in a spectral region (0.94 μm) sensitive to atmospheric water vapor, a quantity that affects the accurate calibration of AVHRR Ch. 2. Using the Ch. 17 to Ch. 18 ratio, an accurate estimate of the total precipitable water (TPW) is obtained to further increase the accuracy of MODIS to AVHRR SNO calibrations. The Libyan Desert and Dome C calibration sites are used when MODIS/AVHRR SNOs do not occur. Here, the AVHRR to MODIS ratio of reflectances is modeled as a third-order polynomial using the natural logarithm of TPW from the NCEP reanalysis. Using these two methods, monthly calibration slopes are generated with a linear fit forced through the origin of the adjusted MODIS reflectances versus AVHRR counts. To extend the MODIS reference back to AVHRRs prior to the MODIS era (pre-2000), Heidinger et al. [2010] use the stable Earth targets of Dome C in Antarctica and the Libyan Desert. MODIS mean nadir reflectances over the target are determined and are plotted versus the solar zenith angle. The counts for AVHRR observations at a given solar zenith angle and the corresponding MODIS reflectance, corrected for TPW, are then used to determine what value the AVHRR would provide if it had the MODIS calibration. The calibration slope is then calculated. ==== Calibration using direct AVHRR/AVHRR SNOs ==== One final method used by Heidinger et al. [2010] for extending the MODIS calibration back to AVHRRs that operated outside of the MODIS era is through direct AVHRR/AVHRR SNOs. Here, the counts from the two AVHRRs are plotted against each other and a regression forced through the origin is calculated. This regression is used to transfer the accurate calibration of one AVHRR's reflectances to the counts of an un-calibrated AVHRR and produce appropriate calibration slopes. These AVHRR/AVHRR SNOs do not provide an absolute calibration point themselves; rather, they act as anchors for the relative calibration between AVHRRs that can be used to transfer the ultimate MODIS calibration. A minimal numeric sketch of such a through-origin fit is given below.
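The following Python sketch illustrates only the regression-forced-through-origin step described above; the count and reflectance values are synthetic placeholders invented for the example, not real AVHRR data:
import numpy as np
# Matched observations at simultaneous nadir overpasses (synthetic values):
# digital counts from the un-calibrated AVHRR, and reflectances implied by
# the calibrated (MODIS-tied) AVHRR at the same locations.
counts = np.array([120.0, 240.0, 355.0, 480.0, 610.0])
ref = np.array([0.061, 0.119, 0.178, 0.243, 0.304])
# Least-squares line forced through the origin, ref ~ slope * counts,
# which has the closed-form solution slope = sum(x*y) / sum(x*x).
slope = np.sum(counts * ref) / np.sum(counts ** 2)
# The slope transfers the calibration to the un-calibrated sensor's counts.
calibrated_ref = slope * counts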
== Next-generation system == Operational experience with the MODIS sensor onboard NASA's Terra and Aqua led to the development of AVHRR's follow-on, VIIRS. VIIRS is currently operating on board the Suomi NPP and NOAA-20 satellites. The EUMETSAT MetOp satellites with AVHRR instruments will in turn be succeeded by MetOp-SG satellites carrying the European MetImage instrument. == Launch and service dates == == See also == Ocean temperature == References == == Further reading == Frey, C.; Kuenzer, C.; Dech, S. (2012). "Quantitative comparison of the operational NOAA AVHRR LST product of DLR and the MODIS LST product V005". International Journal of Remote Sensing. 33 (22): 7165–7183. Bibcode:2012IJRS...33.7165F. doi:10.1080/01431161.2012.699693. S2CID 128981116. Brest, C.L. and W.B. Rossow. 1992. Radiometric calibration and monitoring of NOAA AVHRR data for ISCCP. International Journal of Remote Sensing. Vol. 13. pp. 235–273. Brest, C.L. et al. 1997. Update of Radiance Calibrations for ISCCP. Journal of Atmospheric and Oceanic Technology. Vol. 14. pp. 1091–1109. Cao, C. et al. 2008. Assessing the consistency of AVHRR and MODIS L1B reflectance for generating Fundamental Climate Data Records. Journal of Geophysical Research. Vol. 113. D09114. doi:10.1029/2007JD009363. Halthore, R. et al. 2008. Role of Aerosol Absorption in Satellite Sensor Calibration. IEEE Geoscience and Remote Sensing Letters. Vol. 5. pp. 157–161. Heidinger, A.K. et al. 2002. Using Moderate Resolution Imaging Spectrometer (MODIS) to calibrate Advanced Very High Resolution Radiometer reflectance channels. Journal of Geophysical Research. Vol. 107. doi:10.1029/2001JD002035. Heidinger, A.K. et al. 2010. Deriving an inter-sensor consistent calibration for the AVHRR solar reflectance data record. International Journal of Remote Sensing. Vol. 31. pp. 6493–6517. Iwabuchi, H. 2003. Calibration of the visible and near-infrared channels of NOAA-11 and NOAA-14 AVHRRs by using reflections from molecular atmosphere and stratus cloud. International Journal of Remote Sensing. Vol. 24. pp. 5367–5378. Loeb, N.G. 1997. In-flight calibration of NOAA AVHRR visible and near-IR bands over Greenland and Antarctica. International Journal of Remote Sensing. Vol. 18. pp. 477–490. MCST. MODIS Level 1B Algorithm Theoretical Basis Document, Version 3. Goddard Space Flight Center. Greenbelt, MD. December 2005. Molling, C.C. et al. 2010. Calibrations for AVHRR channels 1 and 2: review and path towards consensus. International Journal of Remote Sensing. Vol. 31. pp. 6519–6540. NOAA KLM User's Guide with NOAA-N, -N' Supplement. NOAA NESDIS NCDC. Asheville, NC. February 2009. Rao, C.R.N. and J. Chen. 1995. Inter-satellite calibration linkages for the visible and near-infrared channels of the Advanced Very High Resolution Radiometer on the NOAA-7, −9, and −11 spacecraft. International Journal of Remote Sensing. Vol. 16. pp. 1931–1942. Rao, C.R.N. and J. Chen. 1999. Revised post-launch calibration of the visible and near-infrared channels of the Advanced Very High Resolution Radiometer on the NOAA-14 spacecraft. International Journal of Remote Sensing. Vol. 20. pp. 3485–3491. Smith, G.R. et al. 1988. Calibration of the Solar Channels of the NOAA-9 AVHRR Using High Altitude Aircraft Measurements. Journal of Atmospheric and Oceanic Technology. Vol. 5. pp. 631–639.
Vermote, E.F. and N.Z. Saleous. 2006. Calibration of NOAA16 AVHRR over a desert site using MODIS data. Remote Sensing of Environment. Vol. 105. pp. 214–220. Xiong, X. et al. 2010. On-Orbit Calibration and Performance of Aqua MODIS Reflective Solar Bands. IEEE Transactions on Geoscience and Remote Sensing. Vol. 48. pp. 535–546. == External links == What is AVHRR? at National Atlas Advanced Very High Resolution Radiometer at NOAA Advanced Very High Resolution Radiometer at USGS
Wikipedia/Advanced_very-high-resolution_radiometer
The Helmert transformation (named after Friedrich Robert Helmert, 1843–1917) is a geometric transformation method within a three-dimensional space. It is frequently used in geodesy to produce transformations between datums. The Helmert transformation is also called a seven-parameter transformation and is a similarity transformation. == Definition == It can be expressed as: X T = C + μ R X {\displaystyle X_{T}=C+\mu RX\,} where XT is the transformed vector X is the initial vector The parameters are: C – translation vector. Contains the three translations along the coordinate axes. μ – scale factor, which is unitless; if it is given in ppm, it must be divided by 1,000,000 and added to 1. R – rotation matrix. Consists of three small rotation angles rx, ry, rz, one about each of the three coordinate axes. The rotation matrix is an orthogonal matrix. The angles are given in either degrees or radians. === Variations === A special case is the two-dimensional Helmert transformation. Here, only four parameters are needed (two translations, one scaling, one rotation). These can be determined from two known points; if more points are available then checks can be made. Sometimes it is sufficient to use the five-parameter transformation, composed of three translations, only one rotation about the Z-axis, and one change of scale. === Restrictions === The Helmert transformation only uses one scale factor, so it is not suitable for: The manipulation of measured drawings and photographs The comparison of paper deformations while scanning old plans and maps. In these cases, a more general affine transformation is preferable. == Application == The Helmert transformation is used, among other things, in geodesy to transform the coordinates of a point from one coordinate system into another. Using it, it becomes possible to convert regional surveying points into the WGS84 locations used by GPS. For example, starting with the Gauss–Krüger coordinates x and y, plus the height h, these are converted into 3D values in steps: Undo the map projection: calculation of the ellipsoidal latitude, longitude and height (W, L, H) Convert from geodetic coordinates to geocentric coordinates: calculation of x, y and z relative to the reference ellipsoid of surveying 7-parameter transformation (where x, y and z almost always change by a few hundred metres at most, and distances by a few mm per km). Because of this, terrestrially measured positions can be compared with GPS data; these can then be brought into the surveying as new points – transformed in the opposite order. The third step consists of the application of a rotation matrix, multiplication with the scale factor μ = 1 + s {\displaystyle \mu =1+s} (with a value near 1) and the addition of the three translations, cx, cy, cz; the formula is given in the following paragraphs, and a minimal code sketch of this step follows immediately below.
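The following is an illustrative sketch in Python with NumPy, assuming the position vector convention and the small-angle linearized rotation matrix of the formula below; the rotation angles must already be converted to radians, and the numeric parameter values are placeholders rather than any published datum set:
import numpy as np
ARCSEC = np.pi / (180.0 * 3600.0)  # arcseconds to radians
def helmert(X, c, s_ppm, rx, ry, rz):
    # Seven-parameter Helmert transformation (position vector convention).
    # X: coordinates in system A (metres); c: translation vector (metres);
    # s_ppm: scale change in parts per million; rx, ry, rz: small rotation
    # angles in radians, giving the linearized rotation matrix below.
    R = np.array([[1.0, -rz,  ry],
                  [ rz, 1.0, -rx],
                  [-ry,  rx, 1.0]])
    return np.asarray(c) + (1.0 + s_ppm * 1e-6) * (R @ np.asarray(X))
# Placeholder parameters, for illustration only:
XB = helmert([4000000.0, 100000.0, 4900000.0],
             c=[100.0, -50.0, 75.0], s_ppm=2.5,
             rx=0.10 * ARCSEC, ry=-0.25 * ARCSEC, rz=0.40 * ARCSEC)
For the reverse direction one would, to the same small-parameter approximation, negate all seven parameters, as noted after the formula below.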
The coordinates of a reference system B are derived from reference system A by the following formula (position vector transformation convention and very small rotation angles simplification): [ X Y Z ] B = [ c x c y c z ] + ( 1 + s × 10 − 6 ) ⋅ [ 1 − r z r y r z 1 − r x − r y r x 1 ] ⋅ [ X Y Z ] A {\displaystyle {\begin{bmatrix}X\\Y\\Z\end{bmatrix}}^{B}={\begin{bmatrix}c_{x}\\c_{y}\\c_{z}\end{bmatrix}}+(1+s\times 10^{-6})\cdot {\begin{bmatrix}1&-r_{z}&r_{y}\\r_{z}&1&-r_{x}\\-r_{y}&r_{x}&1\end{bmatrix}}\cdot {\begin{bmatrix}X\\Y\\Z\end{bmatrix}}^{A}} or for each single coordinate: X B = c x + ( 1 + s × 10 − 6 ) ⋅ ( X A − r z ⋅ Y A + r y ⋅ Z A ) Y B = c y + ( 1 + s × 10 − 6 ) ⋅ ( r z ⋅ X A + Y A − r x ⋅ Z A ) Z B = c z + ( 1 + s × 10 − 6 ) ⋅ ( − r y ⋅ X A + r x ⋅ Y A + Z A ) . {\displaystyle {\begin{aligned}X_{B}&=c_{x}+(1+s\times 10^{-6})\cdot (X_{A}-r_{z}\cdot Y_{A}+r_{y}\cdot Z_{A})\\Y_{B}&=c_{y}+(1+s\times 10^{-6})\cdot (r_{z}\cdot X_{A}+Y_{A}-r_{x}\cdot Z_{A})\\Z_{B}&=c_{z}+(1+s\times 10^{-6})\cdot (-r_{y}\cdot X_{A}+r_{x}\cdot Y_{A}+Z_{A}).\end{aligned}}} For the reverse transformation, each parameter is multiplied by −1. The seven parameters are determined for each region with three or more "identical points" of both systems. To bring them into agreement, the small inconsistencies (usually only a few cm) are adjusted using the method of least squares – that is, eliminated in a statistically plausible manner. === Standard parameters === Note: the rotation angles given in the table are in arcseconds and must be converted to radians before use in the calculation. These are standard parameter sets for the 7-parameter transformation (or datum transformation) between two datums. For a transformation in the opposite direction, inverse transformation parameters should be calculated or the inverse transformation should be applied (as described in the paper "On geodetic transformations"). The translations cx, cy, cz are sometimes described as tx, ty, tz, or dx, dy, dz. The rotations rx, ry, and rz are sometimes also described as ω {\displaystyle \omega } , ϕ {\displaystyle \phi } and κ {\displaystyle \kappa } . In the United Kingdom the prime interest is the transformation between the OSGB36 datum, used by the Ordnance Survey for grid references on its Landranger and Explorer maps, and the WGS84 implementation used by GPS technology. The Gauss–Krüger coordinate system used in Germany normally refers to the Bessel ellipsoid. A further datum of interest was ED50 (European Datum 1950), based on the Hayford ellipsoid. ED50 was part of the fundamentals of the NATO coordinates up to the 1980s, and many national Gauss–Krüger coordinate systems are defined by ED50. The Earth does not have a perfect ellipsoidal shape; its true figure is described by the geoid, and the geoid is in turn approximated by many different ellipsoids. Depending upon the actual location, the "locally best aligned ellipsoid" has been used for surveying and mapping purposes. The standard parameter set gives an accuracy of about 7 m for an OSGB36/WGS84 transformation. This is not precise enough for surveying, and the Ordnance Survey supplements these results by using a lookup table of further translations in order to reach 1 cm accuracy. == Estimating the parameters == If the transformation parameters are unknown, they can be calculated with reference points (that is, points whose coordinates are known before and after the transformation). Since a total of seven parameters (three translations, one scale, three rotations) have to be determined, at least two points and one coordinate of a third point (for example, the Z-coordinate) must be known. This gives a system with seven equations and seven unknowns, which can be solved; a minimal sketch of such an estimation by least squares is given below.
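The following Python/NumPy sketch illustrates that estimation under the same small-angle, small-scale linearization as the formula above (the cross terms between scale and rotation are dropped); the point coordinates are invented placeholders:
import numpy as np
def estimate_helmert(points_a, points_b):
    # Solve XB - XA = c + s*XA + (r x XA) for p = (cx, cy, cz, s, rx, ry, rz)
    # by linear least squares; s in absolute units (not ppm), angles in radians.
    rows, rhs = [], []
    for (x, y, z), (xb, yb, zb) in zip(points_a, points_b):
        rows += [[1, 0, 0, x,  0,  z, -y],
                 [0, 1, 0, y, -z,  0,  x],
                 [0, 0, 1, z,  y, -x,  0]]
        rhs += [xb - x, yb - y, zb - z]
    p, *_ = np.linalg.lstsq(np.array(rows, float), np.array(rhs, float), rcond=None)
    return p  # cx, cy, cz, s, rx, ry, rz
# Three placeholder reference points known in both systems:
A = [(4000000.0, 100000.0, 4900000.0),
     (3900000.0, 650000.0, 4950000.0),
     (4100000.0, -300000.0, 4850000.0)]
B = [(4000100.2, 99950.1, 4900075.3),
     (3900100.5, 649950.3, 4950074.9),
     (4100099.8, -300049.7, 4850075.2)]
params = estimate_helmert(A, B)
With exactly seven independent equations the system is determined; with more points, as recommended below, the least-squares solution also yields residuals at the reference points from which the accuracy can be judged.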
For transformations between conformal map projections near an arbitrary point, the Helmert transformation parameters can be calculated exactly from the Jacobian matrix of the transformation function. In practice, it is best to use more points. Through this redundancy, more accuracy is obtained, and a statistical assessment of the results becomes possible. In this case, the calculation is adjusted with the Gaussian least squares method. A numerical value for the accuracy of the transformation parameters is obtained by calculating the values at the reference points, and weighting the results relative to the centroid of the points. While the method is mathematically rigorous, it is entirely dependent on the accuracy of the parameters that are used. In practice, these parameters are computed from the inclusion of at least three known points in the networks. However, the accuracy of these points will affect the resulting transformation parameters, as the points will contain observation errors. Therefore, a "real-world" transformation will only be a best estimate and should contain a statistical measure of its quality. == See also == Geographic coordinate conversion Procrustes analysis Surveying == References == == External links == Helmert transform in PROJ coordinate transformation software Computing Helmert Transformations Archived 13 September 2022 at the Wayback Machine
Wikipedia/Helmert_transformation
Quaternary science is the subfield of geology which studies the Quaternary Period, commonly known as the ice age. The Quaternary Period is a time period that started around 2.58 million years ago and continues today. This period is divided into two epochs – the Pleistocene Epoch and the Holocene Epoch. The aim of Quaternary science is to understand what happened during the Pleistocene and Holocene Epochs in order to acquire fundamental knowledge about Earth's environment, ecosystems, and climate changes. Quaternary science was first studied during the nineteenth century by Georges Cuvier, a French scientist. Most Quaternary scientists have studied the history of the Quaternary to predict future changes in climate. Quaternary science plays a vital role in archaeology, providing an accurate chronological and environmental framework that helps archaeologists interpret archaeological records. == Definition == Quaternary science is the systematic study of the Quaternary Period. It is a rapidly changing field in which new research techniques, such as new dating techniques, are continually being developed. Quaternary science is a field of study which involves geography, biology, chemistry, and physics. Its focus is the Quaternary Period – a time period that started around 2.58 million years ago and which continues to the present day. Earth has been affected by the events that occurred during the Quaternary Period – a time of ice ages. One topic in Quaternary science is to understand what happened during the ice ages. Quaternary science adds an important historical perspective to the understanding of current ecosystems and climate changes. == History == The Quaternary Period is a geologic time period that can be separated into two epochs, the Pleistocene ("most recent") Epoch, generally defined as beginning about 2.58 million years ago, and the Holocene ("wholly modern") Epoch, which began about 11,700 years ago. The study of Quaternary science began in the late eighteenth century in Europe. The term 'Quaternary' was first used by Italian engineer Giovanni Arduino to describe the four most recent geologic eras. It later became clear that the term 'Quaternary', as described by Meadows and Finch (2016), was "a phase of highly variable climates, with marked periods of time when global temperatures were significantly lower than today and evidence for which was interpreted by Louis Agassiz as indications of a geologically recent 'Great Ice Age'". The study of Quaternary science was first demonstrated by the early nineteenth century French scientist Georges Cuvier. He proposed that some animals that lived in the Pleistocene Epoch were made extinct by some environmental 'revolution' (e.g. catastrophic flooding events). It was this insight that made him famous. Theory regarding the causation of ice ages also developed during this period. The first such theory, proposed by James Croll, a Scottish scientist, described how the variation of Earth's orbit affects the global climate. Croll was the first person to recognize the significance of positive feedbacks in the climate system, including the ice-albedo feedback. Furthermore, his theory was also the first to predict the cause of glaciation. It was during the twentieth century that this idea was further elaborated. Milutin Milankovitch, a Serbian mathematician and geophysicist, was best known for his theory concerning the motions of the Earth and their relationship to long-term climate changes.
One of Milankovitch's early calculations offered information about the changes in incident solar radiation (as a function of season) over millions of years. In addition, André Berger, a Belgian professor and climatologist, identified particular time periods when reconstructed insolation was higher or lower than average. Many of his analyses show that, from May to August, there has been a forward shift of the insolation maximum (higher than average) in late Quaternary insolation variation. This feature is known as an "insolation signature" and may have a relationship with changes in climate, as contemplated by Berger. During the twentieth century, important sub-disciplines of Quaternary science, such as palaeoecology, palaeontology and palaeoclimatology, revealed relationships between changes in the environment and the planet's history during its Quaternary Period. == Latest developments == Much Quaternary science research is in progress at present. As stated before, Quaternary science is a rapidly changing field, and new studies are constantly being carried out and published, providing evidence and establishing new techniques. One recent example is the study of the "Late Pleistocene–Holocene environmental and climatic history of a freshwater paramo ecosystem in the northern Andes", in which the researchers examine the palaeoclimatic history of northern South America based on the palaeolimnological reconstruction of a pond. Another recent study is "Molecular fossils as a tool for tracking Holocene sea-level change in the Loch of Stenness, Orkney" by Conti, Bates, Preece, Penkman, and Keely (2020), in which the authors examine how molecular fossils can be used to study past sea-level change. Much more research is under way: Quaternary science covers the last 2.58 million years of Earth's history, and a great deal remains to be discovered. Quaternary science has also played an important role in another area of science: archaeology. Archaeology is the field of science which uses material remains to study the human past. Archaeology is a diverse field with many branches: some archaeologists study human remains (bioarchaeology), some study ancient plants (paleoethnobotany), and some study stone tools. Furthermore, not every archaeologist specializes in the same area; some specialize in technologies which help locate and map sites, while others specialize in studying human remains underwater. Quaternary science has offered a precise and comprehensive framework for human studies which helps the global interpretation of archaeological records. For illustration, some of the commonly known frameworks which contribute to the global interpretation of the records are chronology, palaeoenvironmental background and site formation processes. One of the important focuses of Quaternary science in archaeology is the study of geochronology. Geochronology is the science concerning the ages and dates of Earth's materials (e.g. rocks and fossils) and events. This area of research is deemed very significant to the archaeology of Indigenous Australia because there are very few cultural markers that can be used for relative chronology.
Relative chronology in archaeology is normally used in places that are easily identified in the archaeological record and have a strong differentiation in cultural productions. In addition to the focus on geochronology, a key role of Quaternary science in archaeology is to help archaeologists resolve some of their major problems relating to humans' impact on the surrounding environment, past human colonization, cultural productions, and mobility. Quaternary science offers archaeologists invaluable data which assist them in further understanding the environments and landscapes in which humans evolved during the late Quaternary Period. == Socio-economic impact == Quaternary science has wide-ranging effects, studying things such as the impact of climate changes on animals and humans, the adaptation of living organisms, and human evolution. A species' adaptation to new conditions is a sign that it has been affected by change; in this case, the question is how organisms respond to climate changes. To be able to live, develop, and continue to reproduce, every species relies upon its ecological requirements, including environmental factors (climate, geology, etc.). However, not all species respond in the same way when changes happen. Adaptation allows species to evolve to be able to live in the same place despite the climate change. Some adaptations even involve genetic modification: the impact of climate shifts has caused species to modify their genomes to survive. Research has examined whether the pre-Quaternary and Quaternary Periods have had any impact on contemporary species richness. Species richness is the number of different species that exist in a certain location or landscape. The aim of the researchers here was to analyze the roles of the Quaternary climatic oscillations and pre-Quaternary legacies in influencing the worldwide distribution and diversity pattern of the palms (Arecaceae), an ecologically important and diverse group of keystone species in tropical ecosystems. In the experiment, researchers gathered lists of species from around the world and assembled related data on possible climates during the Quaternary Period, modern-day environmental drivers (such as current climate, habitat, area, etc.), and key biogeographic regions to gauge the extent to which the global distribution and the patterns of species richness in palms reflect the effect of Quaternary climatic movement and pre-Quaternary legacies. After the experiment, they discovered that Quaternary climate change has significantly affected the richness of palm species. Moreover, they found that the global constraint on the distribution of the palm family was influenced by the current climate, whereas the climate during the Quaternary Period only caused a slight constraint. Many studies indicate that climate changes during the Quaternary Period have affected the lives of many species living in the present day. The research by Silva, Antonelli, Lendel, Moraes, and Manfrin (2018) in southeastern South America suggests that there was a major impact of early Quaternary climate change on the spread and diversity of the cactus species of South America. Additionally, Quaternary changes affected not only plant and animal species but also caused ecological state shifts. An article by Barnosky, Lindsey, Villavicencio, et al.
(2016) provides evidence supporting the finding that megafaunal extinction during the late Quaternary Period had a large effect in causing several ecological state shifts in North and South America. The loss of megafauna species caused ecological change over a period of time. The purpose of the research was to examine whether the loss of megafauna species during the ice age could explain the ecological state shifts that happened as the Pleistocene Epoch gave way to the Holocene Epoch. From their findings, they learned that if large species were to go extinct as the megafauna did, our current ecosystems would be at risk of disappearing. The reason is that, in the megafauna case, those species must have been effective ecosystem engineers; in response to the extinction of the megafauna, events must have occurred that provided the ecosystem with more plant species, thus triggering a lasting ecological state shift. == Academic journals == Boreas – An International Journal of Quaternary Research Geografiska Annaler (only the title is in Swedish) Journal of Quaternary Science Quaternary Geochronology Quaternary International Quaternary Research Quaternary Science Reviews The Quaternary Times == See also == International Union for Quaternary Research Palynology 100,000-year problem Geochronology == References == == External links == UK Quaternary Research Association Irish Quaternary Association Cambridge Quaternary, formerly the Godwin Institute
Wikipedia/Quaternary_science
A geographic data model, geospatial data model, or simply data model in the context of geographic information systems (GIS), is a mathematical and digital structure for representing phenomena over the Earth. Generally, such data models represent various aspects of these phenomena by means of measured data, including locations, attributes, and change over time. For example, the vector data model represents geography as collections of points, lines, and polygons, and the raster data model represents geography as matrices of cells that store numeric values. Data models are implemented throughout the GIS ecosystem, including the software tools for data management and spatial analysis, the data stored in GIS file formats according to specifications and standards, and specific designs for GIS installations. While the unique nature of spatial information has led to its own set of model structures, much of the process of data modeling is similar to the rest of information technology, including the progression from conceptual models to logical models, and the difference between generic models and application-specific design. == History == The earliest computer systems that represented geographic phenomena were quantitative analysis models developed during the quantitative revolution in geography in the 1950s and 1960s; these could not be called geographic information systems because they did not attempt to store geographic data in a consistent permanent structure, but were usually statistical or mathematical models. The first true GIS software modeled spatial information using data models that would come to be known as raster or vector: SYMAP (by Howard Fisher, Harvard Laboratory for Computer Graphics and Spatial Analysis, developed 1963–1967) produced raster maps, although data was usually entered as vector-like region outlines or sample points then interpolated into a raster structure for output. The GRID package, developed at the lab in 1969 by David Sinton, was based on SYMAP but was more focused on the permanent storage and analysis of gridded data, thus becoming perhaps the first general purpose raster GIS software. The Canadian Geographic Information System (by Roger Tomlinson, Canada Land Inventory, developed 1963–1968) stored natural resource data as "faces" (vector polygons), although these were typically derived from raster scans of paper maps. Dual Independent Map Encoding (DIME, US Census Bureau, 1967) was perhaps the first robust vector data model, incorporating network and polygon topology and attributes sufficient to allow address geocoding. Like the CGIS, early GIS installations in the United States were often focused on inventories of land use and natural resources, including the Minnesota Land Management Information System (MLMIS, 1969), the Land Use and Natural Resources Inventory of New York (LUNR, 1970), and the Oak Ridge Regional Modelling Information System (ORRMIS, 1973). Unlike CGIS, these were all raster systems inspired by SYMAP, although the MLMIS was based on subsections of the Public Land Survey System, which is not a perfectly regular grid. Most first-generation GIS were custom-built for specific needs, with data models designed to be stored and processed most efficiently using the technology limitations of the day (especially punched cards and limited mainframe processing time). During the 1970s, the early systems had produced sufficient results to compare them and evaluate the effectiveness of their underlying data models.
This led to efforts at the Harvard Lab and elsewhere focused on developing a new generation of generic data models, such as the POLYVRT topological vector model that would form the basis for commercial software and data such as the Esri Coverage. As commercial off-the-shelf GIS software, GIS installations, and GIS data proliferated in the 1980s, scholars began to look for conceptual models of geographic phenomena that seemed to underlie the common data models, trying to discover why the raster and vector data models seemed to make common sense, and how they measured and represented the real world. This was one of the primary threads that formed the subdiscipline of geographic information science in the early 1990s. Further developments in GIS data modeling in the 1990s were driven by rapid increases in both the GIS user base and computing capability. Major trends included 1) the development of extensions to the traditional data models to handle more complex needs such as time, three-dimensional structures, uncertainty, and multimedia; and 2) the need to efficiently manage exponentially increasing volumes of spatial data with enterprise needs for multiuser access and security. These trends eventually culminated in the emergence of spatial databases incorporated into relational databases and object-relational databases. == Types of data models == Because the world is much more complex than can be represented in a computer, all geospatial data are incomplete approximations of the world. Thus, most geospatial data models encode some form of strategy for collecting a finite sample of an often infinite domain, and a structure to organize the sample in such a way as to enable interpolation of the nature of the unsampled portion. For example, a building consists of an infinite number of points in space; a vector polygon represents it with a few ordered points, which are connected by straight lines into a closed outline, with all interior points assumed to be part of the building; furthermore, a "height" attribute may be the only representation of its three-dimensional volume. The process of designing geospatial data models is similar to data modeling in general, at least in its overall pattern. For example, it can be segmented into three distinct levels of model abstraction: Conceptual data model, a high-level specification of how information is organized in the mind and in enterprise processes, without regard to the restrictions of GIS and other computer systems. It is common to develop and represent a conceptual model visually using tools such as an entity-relationship model. Logical data model, a broad strategy for how to represent the conceptual model in the computer, sometimes novel but often within the framework of existing software, hardware, and standards. The unified modeling language (UML), specifically the class diagram, is commonly used for visually developing logical and physical models. Physical data model, the detailed specification of how data will be structured in memory or in files. Each of these models can be designed in one of two situations or scopes: A generic data model is intended to be employed in a wide variety of applications, by discovering consistent patterns in the ways that society in general conceptualizes information and/or structures that work most efficiently in computers. For example, the field is a generic conceptual model of geographic phenomena, the relational database model and the raster are generic logical models, while the shapefile format is a generic physical model.
These models are typically implemented directly into software and GIS file formats. In the past, these models have been designed by academic researchers, by standards bodies such as the Open Geospatial Consortium, and by software vendors such as Esri. While academic and standard models are public (and sometimes open source), companies may choose to keep the details of their model a secret (as Esri attempted to do with the coverage and the file geodatabase) or to publish them openly (as Esri did with the shapefile). A specific data model or GIS design is a specification of the data needed for a particular enterprise or project GIS application. It is generally created within the constraints of chosen generic data models, so that existing GIS software can be used. For example, a data model for a city would include a list of data layers to be included (e.g., roads, buildings, parcels, zoning), with each being specified with the type of generic spatial data model being used (e.g., raster or vector), choices of parameters such as coordinate system, and its attribute columns. == Conceptual spatial models == Generic geospatial conceptual models attempt to capture both the physical nature of geographic phenomena and how people think about them and work with them. Contrary to the standard modeling process described above, the data models upon which GIS is built were not originally designed based on a general conceptual model of geographic phenomena, but were largely designed according to technical expediency, likely influenced by common-sense conceptualizations that had not yet been documented. That said, an early conceptual framework that was very influential in early GIS development was the recognition by Brian Berry and others that geographic information can be decomposed into the description of three very different aspects of each phenomenon: space, time, and attribute/property/theme. As a further development in 1978, David Sinton presented a framework that characterized different strategies for measurement, data, and mapping as holding one of the three aspects constant, controlling a second, and measuring the third. During the 1980s and 1990s, a body of spatial information theories gradually emerged as a major subfield of geographic information science, incorporating elements of philosophy (especially ontology), linguistics, and the sciences of spatial cognition. By the early 1990s, a basic dichotomy had emerged of two alternative ways of making sense of the world and its contents: An object (also called a feature or entity) is a distinct "thing," comprehended as a whole. It may be a visible, material object, such as a building or road, or an abstract entity such as a county or the market area of a retail store. A field is a property that varies over space, so that it potentially has a distinct measurable value at any location within its extent. It may be a physical, directly measurable characteristic of matter akin to the intensive properties of chemistry, such as temperature or density; or it may be an abstract concept defined via a mathematical model, such as the likelihood that a person living at each location will use a local park. These two conceptual models are not meant to represent different phenomena, but often are different ways of conceptualizing and describing the same phenomenon.
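The dichotomy can be sketched in a few lines of code. The toy Python below (all names and values are invented for illustration, not drawn from any GIS package) anticipates the lake example discussed next: an object is a discrete geometry carrying attributes, while a field is a function that yields a value at any coordinate.

```python
# Object view: a lake as a discrete "thing", one geometry plus attributes.
lake = {
    "geometry": [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)],  # outline vertices
    "attributes": {"name": "Example Lake", "mean_depth_m": 12.4},
}

# Field view: a property with a potentially distinct value at every location.
def water_temperature_c(x: float, y: float) -> float:
    """Toy temperature field: warmer near the shoreline at y = 0."""
    return 22.0 - 1.5 * y

# The same lake can be described both ways: as one object with attributes,
# or by sampling a field at arbitrary points within it.
print(lake["attributes"]["name"], water_temperature_c(2.0, 1.0))
```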
For example, a lake is an object, but the temperature, clarity, and proportion of pollution of the water in the lake are each fields (the water itself may be considered as a third concept of a mass, but this is not as widely accepted as objects and fields). == Vector data model == The vector logical model represents each geographic location or phenomenon by a geometric shape and a set of values for its attributes. Each geometric shape is represented using coordinate geometry, by a structured set of coordinates (x,y) in a geographic coordinate system, selected from a set of available geometric primitives, such as points, lines, and polygons. Although there are dozens of vector file formats (i.e., physical data models) used in various GIS software, most conform to the Simple Feature Access (SFA) specification from the Open Geospatial Consortium (OGC). It was developed in the 1990s by finding common ground between existing vector models, and is now enshrined as ISO 19125, the reference standard for the vector data model. OGC-SFA includes the following vector geometric primitives (see the code sketch below): Point: a single coordinate in two- or three-dimensional space; it has zero dimensions. Many vector formats allow a single feature to consist of several isolated points (a MultiPoint in OGC-SFA). Curve (alternatively called a polyline or linestring): a line includes an infinite number of points and is one-dimensional, but it is represented by a finite ordered sample of points (called vertices), allowing software to interpolate the intervening points. Traditionally, this was a linear interpolation (OGC-SFA calls this case a LineString), but some vector formats allow for curves (usually circular arcs or Bézier curves), or for a single feature to consist of multiple disjoint curves (a MultiCurve in OGC-SFA). Polygon: a region also includes an infinite number of points, so the vector model represents its boundary as a closed line (called a ring in OGC-SFA), allowing the software to interpolate the interior. GIS software distinguishes the interior and the exterior by requiring that the line be ordered counter-clockwise, so the interior is always on the left side of the boundary. In nearly every format, a polygon can have "holes" (e.g., an island in a lake) by including interior rings, each in clockwise order (so the interior is still on the left). As with lines, curved boundaries may be allowed; usually a single feature may include multiple polygons, which OGC-SFA collectively terms a surface. Text (alternatively called annotation): a minority of vector data formats, including the Esri geodatabase and Autodesk .dwg, support the storage of text in the database. An annotation is usually represented as a point or curve (the baseline) with a set of attributes giving the text content and design characteristics (font, size, spacing, etc.). The geometric shape stored in a vector data set representing a phenomenon may or may not be of the same dimension as the real-world phenomenon itself. It is common to represent a feature by a lower dimension than its real nature, based on the scale and purpose of the representation. For example, a city (a two-dimensional region) may be represented as a point, or a road (a three-dimensional structure) may be represented as a line. As long as the user is aware that the latter is a representation choice and a road is not really a line, this generalization can be useful for applications such as transport network analysis.
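As a concrete illustration of these primitives, the sketch below uses the Shapely Python library, one widely used open-source implementation of the Simple Features geometry model (its availability is assumed here, and all coordinates are invented):

```python
from shapely.geometry import Point, LineString, Polygon, MultiPoint

pt = Point(2.0, 3.0)                         # zero-dimensional primitive
line = LineString([(0, 0), (1, 2), (3, 3)])  # vertices; straight segments between them

# A polygon: one exterior ring plus one interior ring (a "hole", e.g. an island).
poly = Polygon(
    shell=[(0, 0), (10, 0), (10, 10), (0, 10)],
    holes=[[(4, 4), (4, 6), (6, 6), (6, 4)]],
)

multi = MultiPoint([(0, 0), (5, 5)])         # several isolated points as one feature

# Derived measures follow from the sampled coordinates:
print(pt.geom_type, round(line.length, 2), poly.area)  # Point 4.47 96.0
```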
Based on this basic strategy of geometric shapes and attributes, vector data models use a variety of structures to collect these into a single data set (often called a layer), usually containing a set of related features (e.g., roads). These can be categorized into several approaches: The georelational data model was the basis for most early vector GIS software. The geometric data and the attribute data are stored separately; this was originally because the geometric data required GIS-specific code to process it, but existing relational database software (RDBMS) could be used to manage the attributes. For example, Esri ARC/INFO (later ArcInfo) was originally composed of two separate programs: ARC was written by Esri for spatial management and analysis, while INFO was a licensed commercial RDBMS program. It was termed "georelational" because, in keeping with the principles of relational databases, the geometry and attributes could be joined by matching each shape with a row in the table using a key, such as the row number or an ID number (see the code sketch below). The spatial database (also called the object-based model) first appeared in the 1990s. It also leverages the maturity of relational database management systems, especially their ability to manage extremely large enterprise databases. Instead of storing geometric data separately, the spatial database defines a geometry data type, allowing the shapes to be stored in a column in the same table as the attributes, creating a single unified data set for each layer. Most RDBMS software (both commercial and open-source) has spatial extensions to enable the storage and query of geometric data, usually based on the Simple Features-SQL standard from the Open Geospatial Consortium. Some non-database data formats also integrate geometric and attribute data for each object into a single structure, such as GeoJSON. Vector data structures can also be classified by how they manage topological relationships between objects in a dataset: A topological data model incorporates topological relationships as a core part of the model design.: 46  The GBF/DIME format from the U.S. Census Bureau was probably the first topological data model; another early example was POLYVRT, developed at the Harvard Laboratory for Computer Graphics and Spatial Analysis in the 1970s, eventually evolving into the Esri ARC/INFO Coverage format. In this structure, lines are broken at all intersection points; these nodes can then store topological information about which lines connect there. Polygons are not stored separately, but are defined as a set of lines that collectively close. Each line contains information about the polygons on its right and left, thus explicitly storing topological adjacency. This structure was designed to enable composite line-polygon structures (e.g., the census block), address geocoding, and transport network analysis. It also had the benefit of increased storage efficiency and reduced error, because the shared border of each pair of adjacent polygons was only digitized once. However, it is a fairly complicated data structure. Almost all topological data models are also georelational.
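The georelational join can be sketched in plain Python (all identifiers and data are invented for illustration): geometry and attributes live in separate stores and are reunited through a shared feature ID, the role the key plays between ARC-style geometry files and an INFO-style attribute table.

```python
# The "spatial" side: feature ID -> list of vertices (e.g., parcel outlines).
geometries = {
    101: [(0, 0), (1, 0), (1, 1), (0, 1)],
    102: [(1, 0), (2, 0), (2, 1), (1, 1)],
}

# The "tabular" side: feature ID -> attribute row, managed like any relation.
attributes = {
    101: {"land_use": "residential", "zone": "R1"},
    102: {"land_use": "commercial", "zone": "C2"},
}

# Joining on the key reunites each shape with its attribute row.
for fid, shape in geometries.items():
    row = attributes[fid]
    print(fid, row["land_use"], "polygon with", len(shape), "vertices")
```

A spatial database removes this split by storing the geometry as just another column in the attribute table itself.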
A spaghetti data model does not include any information about topology (so-called because the individual strands in a bowl of spaghetti may overlap without connecting).: 215  It was common in early GIS systems such as the Map Overlay and Statistical System (MOSS), as well as most modern data formats, such as the Esri shapefile, the geography markup language (GML), and almost all spatial databases. In this model, each feature geometry is encoded separately from any others in the data set, regardless of whether they may be topologically related. For example, the shared boundary between two adjacent regions would be duplicated in each polygon shape. Despite the increased data volume and potential for error over topological data, this model has dominated GIS since 2000, largely due to its conceptual simplicity. Some GIS software has tools for validating topological integrity rules (e.g., not allowing polygons to overlap or have gaps) on spaghetti data to prevent and/or correct topological errors. A hybrid topological data model has the option of storing topological relationship information as a separate layer built on top of a spaghetti data set. An example is the network dataset within the Esri geodatabase. Vector data are commonly used to represent conceptual objects (e.g., trees, buildings, counties), but they can also represent fields. As an example of the latter, a temperature field could be represented by an irregular sample of points (e.g., weather stations), or by isotherms, a sample of lines of equal temperature.: 89  == Raster data model == The raster logical model represents a field using a tessellation of geographic space into a regularly spaced two-dimensional array of locations (each called a cell), with a single attribute value for each cell (or more than one value in a multi-band raster). Typically, each cell either represents a single central point sample (in which case the measurement model for the entire raster is called a lattice) or it represents a summary (usually the mean) of the field variable over the square area (in which case the model is called a grid).: 86  The general data model is essentially the same as that used for images and other raster graphics, with the addition of capabilities for the geographic context. A small example follows. To represent a raster grid in a computer file, it must be serialized into a single (one-dimensional) list of values. While there are various possible ordering schemes, the most commonly used is row-major, in which the cells of the first row are listed first, followed immediately by the cells of the second row, and so on, as follows: 6 7 10 9 8 6 7 8 6 8 9 10 8 7 7 7 7 8 9 10 9 8 7 6 8 8 9 11 10 9 9 7 . . . To reconstruct the original grid, a header is required with general parameters for the grid (see the code sketch below). At the very least, it requires the number of columns in each row, so the software knows where to begin each new row, and the datatype of each value (i.e., the number of bits in each value before the next value begins). While the raster model is closely tied to the field conceptual model, objects can also be represented in raster, essentially by transforming an object X into a discrete (Boolean) field of presence/absence of X. Alternatively, a layer of objects (usually polygons) could be transformed into a discrete field of object identifiers. In this case, some raster file formats allow a vector-like table of attributes to be joined to the raster by matching the ID values.
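Because the serialization rule is purely mechanical, it is easy to demonstrate. The Python sketch below reconstructs the 4 × 8 example grid above from its row-major list using NumPy (assumed available); the "header" is reduced to the three parameters the text describes.

```python
import numpy as np

# File body: the cell values in row-major order, exactly as listed above.
values = [6, 7, 10, 9, 8, 6, 7, 8,
          6, 8, 9, 10, 8, 7, 7, 7,
          7, 8, 9, 10, 9, 8, 7, 6,
          8, 8, 9, 11, 10, 9, 9, 7]

# Minimal header: grid dimensions and the datatype of each stored value.
n_rows, n_cols, dtype = 4, 8, np.uint8

# NumPy's reshape assumes row-major ("C") order by default.
grid = np.array(values, dtype=dtype).reshape(n_rows, n_cols)

print(grid[0, 2])   # first row, third column -> 10
print(grid.shape)   # (4, 8)
```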
Raster representations of objects are often temporary, only created and used as part of a modelling procedure, rather than in a permanent data store.: 135-137  To be useful in GIS, a raster file must be georeferenced to correspond to real-world locations, as a raw raster can only express locations in terms of rows and columns. This is typically done with a set of metadata parameters, either in the file header (as in the GeoTIFF format) or in a sidecar file (such as a world file). At the very least, the georeferencing metadata must include the location of at least one cell in the chosen coordinate system and the resolution or cell size (the distance between adjacent cells). An affine transformation is the most common type of georeferencing, allowing rotation and rectangular cells.: 171  More complex georeferencing schemes include polynomial and spline transformations. Raster data sets can be very large, so image compression techniques are often used. Compression algorithms identify spatial patterns in the data, then transform the data into parameterized representations of the patterns, from which the original data can be reconstructed. In most GIS applications, lossless compression algorithms (e.g., Lempel-Ziv) are preferred over lossy ones (e.g., JPEG), because the complete original data are needed, not an interpolation. == Extensions == Starting in the 1990s, as the original data models and GIS software matured, one of the primary foci of data modeling research was on developing extensions to the traditional models to handle more complex geographic information. === Spatiotemporal models === Time has always played an important role in analytical geography, dating back at least to Brian Berry's regional science matrix (1964) and the time geography of Torsten Hägerstrand (1970). At the dawn of the GIScience era of the early 1990s, the work of Gail Langran opened the doors to research into methods of explicitly representing change over time in GIS data; this led to many conceptual and data models emerging in the decades since. Some forms of temporal data began to be supported in off-the-shelf GIS software by 2010. Several common models for representing time in vector and raster GIS data include: The snapshot model (also known as time-stamped layers), in which an entire dataset is tied to a particular valid time. That is, it is a "snapshot" of the world at that time. Time-stamped features, in which the dataset includes features valid at a variety of times, with each feature stamped by the time during which it was valid (e.g., by "start date" and "end date" columns in the attribute table; see the code sketch after this list). Some GIS software, such as ArcGIS Pro, natively supports this model, with functionality including animation. Time-stamped boundaries, using the topological vector data model to decompose polygons into boundary segments, and stamping each segment by the time during which it was valid. This method was pioneered by the Great Britain Historical GIS. Time-stamped facts, in which each individual datum (including attribute values) can have its own time stamp, allowing for the attributes within a single feature to change over time, or for a single feature (with constant identity) to have different geometric shapes at different times. Time as dimension, which treats time as another (3rd or 4th) spatial dimension, and uses multidimensional vector or raster structures to create geometries incorporating time. Hägerstrand visualized his time geography this way, and some GIS models based on it use this approach.
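Of these, the time-stamped-features model is the simplest to sketch in code. The plain-Python toy below (all field names and dates are invented for illustration) stores each feature with its valid interval and filters the table for any queried moment; note that a single feature identity can appear in several rows as its attributes change.

```python
from datetime import date

OPEN_ENDED = date(9999, 12, 31)  # conventional placeholder for "still valid"

parcels = [
    {"id": 1, "owner": "Smith", "start": date(1990, 1, 1),  "end": date(2005, 6, 30)},
    {"id": 1, "owner": "Jones", "start": date(2005, 7, 1),  "end": OPEN_ENDED},
    {"id": 2, "owner": "Lee",   "start": date(1998, 3, 15), "end": OPEN_ENDED},
]

def valid_at(features, when):
    """Return the features whose valid interval contains the given date."""
    return [f for f in features if f["start"] <= when <= f["end"]]

print([f["owner"] for f in valid_at(parcels, date(2000, 1, 1))])  # ['Smith', 'Lee']
```

Production systems implement the same idea with indexed date columns rather than a linear scan.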
The NetCDF format supports managing temporal raster data as a dimension. === Three-dimensional models === There are several approaches for representing three-dimensional map information, and for managing it in the data model. Some of these were developed specifically for GIS, while others have been adopted from 3D computer graphics or computer-aided drafting (CAD). Height fields (also known as "2 1/2 dimensional surfaces") model three-dimensional phenomena by a single functional surface, in which elevation is a function of two-dimensional location, allowing it to be represented using field techniques such as isolated points, contour lines, raster (the digital elevation model), and triangulated irregular networks. A polygon mesh (related to the mathematical polyhedron) is a logical extension of the vector data model, and is probably the 3-D model type most widely supported in GIS. A volumetric object is reduced to its outer surface, which is represented by a set of polygons (often triangles) that collectively completely enclose a volume. The voxel model is the logical extension of the raster data model, tessellating three-dimensional space into cubes called voxels (a portmanteau of volume and pixel, the latter being itself a portmanteau). NetCDF is one of the most common data formats that supports 3-D cells. Vector-based stack-unit maps depict the vertical succession of geologic units down to a specified depth. This mapping approach characterizes the vertical variations of physical properties in each 3-D map unit. For example, where an alluvial deposit (unit "a") overlies glacial till (unit "t"), a stack-unit labeled "a/t" indicates that relationship, whereas a unit labeled "t" indicates that glacial till extends down to the specified depth. The stack-unit's occurrence (the map unit's outcrop), geometry (the map unit's boundaries), and descriptors (the physical properties of the geologic units included in the stack-unit) are managed as they are for a typical 2-D geologic map. Raster-based stacked surfaces depict the surface of each buried geologic unit, and can accommodate data on lateral variations of physical properties. In an example from Soller and others (1999), the upper surface of each buried geologic unit was represented in raster format as an ArcInfo Grid file; one of these grids is the uppermost surface of an economically important aquifer, the Mahomet Sand, which fills a pre- and inter-glacial valley carved into the bedrock surface. Each geologic unit in raster format can be managed in the data model, in a manner not dissimilar from the stack-unit map. The Mahomet Sand is continuous in this area, and represents one occurrence of this unit in the data model. Each raster cell, or pixel, on the Mahomet Sand surface has a set of map coordinates that are recorded in a GIS (in the data model bin that is labeled "pixel coordinates", which is the raster corollary of the "geometry" bin for vector map data). Each pixel can have a unique set of descriptive information, such as surface elevation, unit thickness, lithology, and transmissivity. == See also == ArcGIS Data structure == References == == Further reading == B.R. Johnson et al. (1998). Digital geologic map data model, v. 4.3: AASG/USGS Data Model Working Group Report, http://geology.usgs.gov/dm/. Soller, D.R., Berg, T.M., and Wahl, Ron (2000).
"Developing the National Geologic Map Database, phase 3—An online, "living" database of map information". In Soller, D.R., ed., Digital Mapping Techniques '00—Workshop Proceedings: U.S. Geological Survey Open-File Report 00-325, p. 49–52, http://pubs.usgs.gov/openfile/of00-325/soller4.html. Soller, D.R., and Lindquist, Taryn (2000). "Development and public review of the draft "Digital cartographic standard for geologic map symbolization". In Soller, D.R., ed., Digital Mapping Techniques '00—Workshop Proceedings: U.S. Geological Survey Open-File Report 00-325, p. 43–47, http://pubs.usgs.gov/openfile/of00-325/soller3.html.
Wikipedia/Data_model_(GIS)
Soil science is the study of soil as a natural resource on the surface of the Earth including soil formation, classification and mapping; physical, chemical, biological, and fertility properties of soils; and these properties in relation to the use and management of soils. The main branches of soil science are pedology ― the study of formation, chemistry, morphology, and classification of soil ― and edaphology ― the study of how soils interact with living things, especially plants. Sometimes terms which refer to those branches are used as if synonymous with soil science. The diversity of names associated with this discipline is related to the various associations concerned. Indeed, engineers, agronomists, chemists, geologists, physical geographers, ecologists, biologists, microbiologists, silviculturists, sanitarians, archaeologists, and specialists in regional planning, all contribute to further knowledge of soils and the advancement of the soil sciences. Soil scientists have raised concerns about how to preserve soil and arable land in a world with a growing population, possible future water crisis, increasing per capita food consumption, and land degradation. == Fields of study == Soil occupies the pedosphere, one of Earth's spheres that the geosciences use to organize the Earth conceptually. This is the conceptual perspective of pedology and edaphology, the two main branches of soil science. Pedology is the study of soil in its natural setting. Edaphology is the study of soil in relation to soil-dependent uses. Both branches apply a combination of soil physics, soil chemistry, and soil biology. Due to the numerous interactions between the biosphere, atmosphere and hydrosphere that are hosted within the pedosphere, more integrated, less soil-centric concepts are also valuable. Many concepts essential to understanding soil come from individuals not identifiable strictly as soil scientists. This highlights the interdisciplinary nature of soil concepts. == Research == Exploring the diversity and dynamics of soil continues to yield fresh discoveries and insights. New avenues of soil research are compelled by a need to understand soil in the context of climate change, greenhouse gases, and carbon sequestration. Interest in maintaining the planet's biodiversity and in exploring past cultures has also stimulated renewed interest in achieving a more refined understanding of soil. == Mapping == == Classification == In 1998, the World Reference Base for Soil Resources (WRB) replaced the FAO soil classification as the international soil classification system. The currently valid version of WRB is the 4th edition, 2022. The FAO soil classification, in turn, borrowed from modern soil classification concepts, including USDA soil taxonomy. WRB is based mainly on soil morphology as an expression of pedogenesis. A major difference with USDA soil taxonomy is that soil climate is not part of the system, except insofar as climate influences soil profile characteristics. Many other classification schemes exist, including vernacular systems. The structure in vernacular systems is either nominal (giving unique names to soils or landscapes) or descriptive (naming soils by their characteristics such as red, hot, fat, or sandy). Soils are distinguished by obvious characteristics, such as physical appearance (e.g., color, texture, landscape position), performance (e.g., production capability, flooding), and accompanying vegetation. 
A vernacular distinction familiar to many is classifying texture as heavy or light. Light soils, with their lower clay content and better structure, take less effort to turn and cultivate. Light soils do not necessarily weigh less than heavy soils on an air-dry basis, nor do they have more porosity. == History == The earliest known soil classification system comes from China, appearing in the book Yu Gong (5th century BCE), where the soil was divided into three categories and nine classes, depending on its color, texture and hydrology. Contemporaries Friedrich Albert Fallou (the German founder of modern soil science) and Vasily Dokuchaev (the Russian founder of modern soil science) are both credited with being among the first to identify soil as a resource whose distinctness and complexity deserved to be separated conceptually from geology and crop production and treated as a whole. As a founding father of soil science, Fallou has primacy in time. Fallou was working on the origins of soil before Dokuchaev was born; however, Dokuchaev's work was more extensive and is considered more significant to modern soil theory than Fallou's. Previously, soil had been considered a product of chemical transformations of rocks, a dead substrate from which plants derive nutritious elements. Soil and bedrock were in fact equated. Dokuchaev considered the soil as a natural body having its own genesis and its own history of development, a body with complex and multiform processes taking place within it. The soil is considered distinct from bedrock. The latter becomes soil under the influence of a series of soil-formation factors (climate, vegetation, country, relief and age). According to him, soil should be called the "daily" or outward horizons of rocks regardless of the type; they are changed naturally by the common effect of water, air and various kinds of living and dead organisms. A 1914 encyclopedic definition, "the different forms of earth on the surface of the rocks, formed by the breaking down or weathering of rocks", serves to illustrate the historic view of soil which persisted from the 19th century. Dokuchaev's late 19th-century soil concept developed in the 20th century to one of soil as earthy material that has been altered by living processes. A corollary concept is that soil without a living component is simply a part of Earth's outer layer. Further refinement of the soil concept is occurring in view of an appreciation of energy transport and transformation within soil. The term is popularly applied to the material on the surface of the Earth's moon and Mars, a usage acceptable within a portion of the scientific community. Consistent with this modern understanding of soil is Nikiforoff's 1959 definition of soil as the "excited skin of the sub aerial part of the Earth's crust". == Areas of practice == Academically, soil scientists tend to be drawn to one of five areas of specialization: microbiology, pedology, edaphology, physics, or chemistry. Yet the work specifics are very much dictated by the challenges facing our civilization's desire to sustain the land that supports it, and the distinctions between the sub-disciplines of soil science often blur in the process. Soil science professionals commonly stay current in soil chemistry, soil physics, soil microbiology, pedology, and applied soil science in related disciplines. One exciting effort drawing in soil scientists in the U.S. as of 2004 is the Soil Quality Initiative.
Central to the Soil Quality Initiative is developing indices of soil health and then monitoring them in a way that gives us long-term (decade-to-decade) feedback on our performance as stewards of the planet. The effort includes understanding the functions of soil microbiotic crusts and exploring the potential to sequester atmospheric carbon in soil organic matter. Relating the concept of agriculture to soil quality, however, has not been without its share of controversy and criticism, including critiques by Nobel Laureate Norman Borlaug and World Food Prize Winner Pedro Sanchez. A more traditional role for soil scientists has been to map soils. Almost every area in the United States now has a published soil survey, including interpretive tables on how soil properties support or limit activities and uses. An internationally accepted soil taxonomy allows uniform communication of soil characteristics and soil functions. National and international soil survey efforts have given the profession unique insights into landscape-scale functions. The landscape functions that soil scientists are called upon to address in the field seem to fall roughly into six areas: Land-based treatment of wastes Septic system Manure Municipal biosolids Food and fiber processing waste Identification and protection of environmentally critical areas Sensitive and unstable soils Wetlands Unique soil situations that support valuable habitat, and ecosystem diversity Management for optimum land productivity Silviculture Agronomy Nutrient management Water management Native vegetation Grazing Management for optimum water quality Stormwater management Sediment and erosion control Remediation and restoration of damaged lands Mine reclamation Flood and storm damage Contamination Sustainability of desired uses Soil conservation There are also practical applications of soil science that might not be apparent from looking at a published soil survey. Radiometric dating: specifically a knowledge of local pedology is used to date prior activity at the site Stratification (archeology) where soil formation processes and preservative qualities can inform the study of archaeological sites Geological phenomena Landslides Active faults Altering soils to achieve new uses Vitrification to contain radioactive wastes Enhancing soil microbial capabilities in degrading contaminants (bioremediation). Carbon sequestration Environmental soil science Pedology Soil genesis Pedometrics Soil morphology Soil micromorphology Soil classification USDA soil taxonomy World Reference Base for Soil Resources Soil biology Soil microbiology Soil chemistry Soil biochemistry Soil mineralogy Soil physics Pedotransfer function Soil mechanics and engineering Soil hydrology, hydropedology === Fields of application in soil science === Climate change Ecosystem studies Pedotransfer function Soil fertility / Nutrient management Soil management Soil survey Standard methods of analysis Watershed and wetland studies Land Suitability classification === Related disciplines === Agricultural sciences Agricultural soil science Agrophysics science Irrigation management Anthropology archaeological stratigraphy Environmental science Landscape ecology Physical geography Geomorphology Geology Biogeochemistry Geomicrobiology Hydrology Hydrogeology Waste management Wetland science == Depression storage capacity == Depression storage capacity, in soil science, is the ability of a particular area of land to retain water in its pits and depressions, thus preventing it from flowing. 
Depression storage capacity, along with infiltration capacity, is one of the main factors involved in Horton overland flow, whereby water volume surpasses both infiltration and depression storage capacity and begins to flow horizontally across land, possibly leading to flooding and soil erosion. The study of land's depression storage capacity is important in the fields of geology, ecology, and especially hydrology. == See also == == References == == External links == Media related to Soil science at Wikimedia Commons
Wikipedia/Soil_science
Aquatic science is the study of the various bodies of water that make up our planet, including oceanic and freshwater environments. Aquatic scientists study the movement of water, the chemistry of water, aquatic organisms, aquatic ecosystems, the movement of materials in and out of aquatic ecosystems, and the use of water by humans, among other things. Aquatic scientists examine current processes as well as historic processes, and the water bodies that they study can range from tiny areas measured in millimeters to full oceans. Moreover, aquatic scientists work in interdisciplinary groups. For example, a physical oceanographer might work with a biological oceanographer to understand how physical processes, such as tropical cyclones or rip currents, affect organisms in the Atlantic Ocean. Chemists and biologists, on the other hand, might work together to see how the chemical makeup of a certain body of water affects the plants and animals that reside there. Aquatic scientists can work to tackle global problems, such as global oceanic change, as well as local problems, such as trying to understand why a drinking water supply in a certain area is polluted. There are two main fields of study that fall within aquatic science: oceanography and limnology. == Oceanography == Oceanography refers to the study of the physical, chemical, and biological characteristics of oceanic environments. Oceanographers study the history, current condition, and future of the planet's oceans. They also study marine life and ecosystems, ocean circulation, plate tectonics, the geology of the seafloor, and the chemical and physical properties of the ocean. Oceanography is interdisciplinary. For example, there are biological oceanographers and marine biologists. These scientists specialize in marine organisms. They study how these organisms develop, their relationship with one another, and how they interact and adapt to their environment. Biological oceanographers and marine biologists often utilize field observations, computer models, laboratory experiments, or field experiments for their research. In the field of oceanography, there are also chemical oceanographers and marine chemists. These scientists focus on the composition of seawater. They study the processes and cycles of seawater, as well as how seawater chemically interacts with the atmosphere and seafloor. Some examples of jobs that chemical oceanographers and marine chemists perform are analyzing seawater components, exploring the effects pollutants have on seawater, and analyzing the effects that chemical processes have on marine animals. In addition, a chemical oceanographer might use chemistry to better understand how ocean currents move seawater and how the ocean affects the climate. They might also search for ocean resources that could be beneficial, such as products that have medicinal properties. The field of oceanography also includes geological oceanographers and marine geologists, who study the ocean floor and how its mountains, canyons, and valleys were formed. Geological oceanographers and marine geologists use sampling to examine the history of sea-floor spreading, plate tectonics, thermohaline circulation, and climates. In addition, they study undersea volcanoes as well as the mantle and hydrothermal circulation. Their research helps us to better understand the events that led to the creation of oceanic basins and how the ocean interacts with the seabed.
Lastly, under the field of oceanography, there are physical oceanographers. Physical oceanographers are experts on the physical conditions and processes that occur naturally in the ocean. These include waves, currents, eddies, gyres, tides, and coastal erosion. Physical oceanographers also study topics such as the transmission of light and sound through water and the effects that the ocean has on weather and climate. All of these fields are intertwined. In order for an oceanographer to succeed in their field, they need to have an adequate understanding of other related sciences, such as biology, chemistry, and physics. == Limnology == Limnology is the study of freshwater environments, such as rivers, streams, lakes, reservoirs, groundwater, and marshlands. Limnologists work to understand the various natural and man-made factors that affect our natural water bodies, such as pesticides, temperature, runoff, and aquatic life. For example, a limnologist might study the effects of pesticides on the temperature of a lake, or they might seek to understand why a certain species of fish in the Nile River is declining. In order to increase their understanding of what they are studying, limnologists employ three main study techniques. The first study technique has to do with observations. Limnologists make descriptive observations of conditions and note how those conditions have changed over time. These observations allow limnologists to form theories and hypotheses. The second study technique that limnologists use has to do with experimentation. Limnologists conduct controlled experiments under laboratory conditions in order to further their understanding of the impact of small, individual changes in the ecosystem. Lastly, limnologists make predictions. After they have conducted their experiments, they can apply what they have learned to known data about the wider ecosystem and make predictions about the natural environment. Within the field of limnology, there are more specific areas of study. One of those areas of study is ecology, particularly the ecology of water systems. The ecology of water systems focuses on the organisms that live in freshwater environments and how they are affected by changes in their habitat. For example, a limnologist specializing in ecology could study how chemical or temperature changes in a body of water inhibit or support new organic growth. Another aspect that they may examine is the effect of a nonnative species on native populations of aquatic life. Most ecological limnologists conduct their studies in laboratory settings, where their hypotheses can be tested, verified, and controlled. Another area of study under limnology is biology. Limnologists who specialize in the biology field study only the living aquatic organisms that are present in a certain freshwater environment. They aim to understand various aspects of the organisms, such as their history, their life cycles, and their populations. These scientists study living organisms in order to support the proper management of fresh bodies of water and their ecosystems. == Aquatic environments == Most aquatic environments contain both plants and animals. Aquatic plants are plants that grow in water. Examples of aquatic plants are waterlilies, floating hearts, the lattice plant, seagrass, and phytoplankton. Aquatic plants can be rooted in mud, such as the lotus flower, or they can be found floating on the surface of the water, such as the water hyacinth.
Aquatic plants provide oxygen, food, and shelter for many aquatic animals. In addition, underwater vegetation provides several species of marine animals with grounds to spawn, nurse, take refuge, and forage. Seagrass, for example, is a vital source of food for commercial and recreational fish. Seagrass stabilizes sediments, produces the organic material that small aquatic invertebrates need, and adds oxygen to the water. Phytoplankton are also an important class of aquatic plant. Phytoplankton are similar to terrestrial plants in that they require chlorophyll and sunlight to grow. Most phytoplankton are buoyant, floating in the upper part of the ocean, where sunlight penetrates the water. There are two main classes of phytoplankton: dinoflagellates and diatoms. Dinoflagellates have a whip-like tail called a flagellum, which they use to move through the water, and their bodies are covered with complex shells. Diatoms also have shells, but theirs are made of a different substance, silica. Instead of relying on flagella to travel through the water, diatoms use ocean currents. Both classes of phytoplankton provide food for a variety of sea creatures, such as shrimp, snails, and jellyfish. Both aquatic animals and plants contribute to the health of our environment and to the quality of human life. Humans depend on their ecological functions for our survival. Humans use surface waters and their inhabitants in order to process our waste products. Aquatic plants and animals provide us with necessities such as medicine, food, energy, shelter, and several raw materials. Today, more than 40% of medicines are derived from aquatic plants and animals. Moreover, aquatic wildlife is an important source of food for many people. In addition, aquatic wildlife is a major source of atmospheric oxygen and plays a significant role in protecting humans from new diseases, pests, predators, food shortages, and global climate change. == Aquatic animals == Aquatic animals are organisms that spend most of their life underwater. These animals include crustaceans, reptiles, mollusks, aquatic birds, aquatic insects, and even starfish and coral. Aquatic animals unfortunately face a lot of threats, with most of these threats resulting from human behaviors. One major threat that aquatic animals face is overfishing. Scientists have found a way to replenish the species of fish that humans have overhunted by creating marine protected areas or fish regeneration zones. These fish regeneration zones help protect their ecosystems and help rebuild their abundance. Another threat that aquatic animals face is pollution, particularly coastal pollution. This pollution is caused by industrial agriculture. These agricultural practices result in reactive nitrogen and phosphorus being poured into the rivers, which then get transported to the ocean. These chemicals have created what are known as "dead zones", areas where the water holds less oxygen. Moreover, another detrimental threat that aquatic animals face is the threat of habitat destruction. This can be exemplified by the clearing of mangrove forests for shrimp production and the scraping of underwater mountain ranges through deep-sea trawling. Other threats that aquatic animals face are global warming and acidification. Global warming is responsible for killing the algae that keep coral alive, forcing species out of their natural habitats and into new areas, and for causing sea levels to rise.
Acidification, on the other hand, is decreasing the pH level of oceans. High acidity levels in the water are preventing marine-calcifying organisms, such as coral, from forming shells. == World Aquatic Animal Day == Although there are not many formal holidays celebrating aquatic science, a new one has been created: World Aquatic Animal Day. World Aquatic Animal Day was created on April 3, 2020, as a way to raise awareness for these often forgotten animals. The holiday began as a project of the Aquatic Animal Law Initiative and the Animal Law Clinic at the Lewis & Clark Law School as part of the Center for Animal Law Studies. In addition to raising awareness for these animals, this holiday aims to increase our appreciation and understanding of them. Under this holiday, the definition of aquatic animals is not limited to fish. == Recent Research and Developments in Aquatic Science == Current studies in aquatic science have expanded immensely, identifying environmental concerns and new recovery practices. Several multidisciplinary studies have promoted an understanding of stressors and interventions affecting aquatic systems worldwide. Microplastic pollution is a notable concern. Microplastics are widespread in marine and freshwater ecosystems and have been found to affect a wide range of aquatic organisms. Microplastics have been shown to alter feeding behavior, growth rates, and reproductive health among species. Additionally, microplastics can adsorb and release toxic chemicals, increasing their ecological effects. Another critical issue is the combined effect of environmental stressors such as global warming, eutrophication, and pesticide runoff on aquatic microbial communities. Experiments have shown that protozoa, which play major roles in nutrient cycling and food web stability, are particularly vulnerable to these combined stressors. The cumulative effects can lead to alterations in community structure, loss of biodiversity, and impaired ecosystem function. Regarding restoration measures, aquatic vegetation rehabilitation has been demonstrated to be an effective approach to ecosystem recovery. Research conducted in the Caizi Lakes, China, showed that the restoration of aquatic vegetation led to improved water quality and increased phytoplankton diversity. Mathematical modeling has also become increasingly important in aquatic science. With the use of Ulam–Hyers–Rassias stability techniques, researchers have developed models that simulate the impacts of global warming on aquatic ecosystems. These models provide predictions of how long-term climatic variability may affect biodiversity and productivity. In coastal regions, artificial habitats are increasingly used to rehabilitate marine biodiversity and support fisheries. Structures such as artificial reefs provide habitats for marine organisms and can help offset habitat loss caused by coastal development and overfishing. When properly designed and managed, artificial habitats can enhance ecosystem services and support long-term ecological sustainability. Furthermore, aquatic ecosystem health is increasingly being quantified using new paradigms centered on ecosystem services. Rather than evaluating ecosystems solely on physical or biological measures, these methods analyze the capacity of ecosystems to provide services such as water filtration, habitat, and recreation.
This shift emphasizes the functional integrity of ecosystems and supports more holistic and adaptive management practices. Together, these studies reflect the future of aquatic science, which increasingly intersects with climate science, pollution research, and ecological restoration. == See also == GIS and aquatic science Pan-American Journal of Aquatic Sciences == References ==
Wikipedia/Aquatic_science