In mathematics, the Fourier transform (FT) is an integral transform that takes a function as input and outputs another function that describes the extent to which various frequencies are present in the original function. The output of the transform is a complex-valued function of frequency. The term Fourier transform refers to both this complex-valued function and the mathematical operation. When a distinction needs to be made, the output of the operation is sometimes called the frequency domain representation of the original function. The Fourier transform is analogous to decomposing the sound of a musical chord into the intensities of its constituent pitches. Functions that are localized in the time domain have Fourier transforms that are spread out across the frequency domain and vice versa, a phenomenon known as the uncertainty principle. The critical case for this principle is the Gaussian function, of substantial importance in probability theory and statistics as well as in the study of physical phenomena exhibiting normal distribution (e.g., diffusion). The Fourier transform of a Gaussian function is another Gaussian function. Joseph Fourier introduced sine and cosine transforms (which correspond to the imaginary and real components of the modern Fourier transform) in his study of heat transfer, where Gaussian functions appear as solutions of the heat equation. The Fourier transform can be formally defined as an improper Riemann integral, making it an integral transform, although this definition is not suitable for many applications requiring a more sophisticated integration theory. For example, many relatively simple applications use the Dirac delta function, which can be treated formally as if it were a function, but the justification requires a mathematically more sophisticated viewpoint. The Fourier transform can also be generalized to functions of several variables on Euclidean space, sending a function of 3-dimensional 'position space' to a function of 3-dimensional momentum (or a function of space and time to a function of 4-momentum). This idea makes the spatial Fourier transform very natural in the study of waves, as well as in quantum mechanics, where it is important to be able to represent wave solutions as functions of either position or momentum and sometimes both. In general, functions to which Fourier methods are applicable are complex-valued, and possibly vector-valued. Still further generalization is possible to functions on groups, which, besides the original Fourier transform on R or Rn, notably includes the discrete-time Fourier transform (DTFT, group = Z), the discrete Fourier transform (DFT, group = Z mod N) and the Fourier series or circular Fourier transform (group = S1, the unit circle ≈ closed finite interval with endpoints identified). The latter is routinely employed to handle periodic functions. The fast Fourier transform (FFT) is an algorithm for computing the DFT. == Definition == The Fourier transform of a complex-valued (Lebesgue) integrable function f(x) on the real line is the complex-valued function f̂(ξ) defined by the integral {\displaystyle {\hat {f}}(\xi )=\int _{-\infty }^{\infty }f(x)\,e^{-i2\pi \xi x}\,dx.} (Eq.1) Evaluating the Fourier transform for all values of ξ produces the frequency-domain function, and it converges at all frequencies to a continuous function tending to zero at infinity.
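As a quick sanity check of Eq.1, the integral can be approximated by a Riemann sum on a truncated domain. The following minimal sketch (Python with NumPy; the helper name ft is illustrative, not from any library) verifies numerically that the Gaussian e^{−πx²} is its own Fourier transform, a fact used repeatedly below.
<syntaxhighlight lang="python">
import numpy as np

# Riemann-sum approximation of Eq.1 on a truncated domain.
# f(x) = exp(-pi x^2) decays fast, so [-8, 8] is effectively the whole line.
x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
f = np.exp(-np.pi * x**2)

def ft(xi):
    """Approximate f_hat(xi) = integral of f(x) exp(-i 2 pi xi x) dx."""
    return np.sum(f * np.exp(-2j * np.pi * xi * x)) * dx

for xi in (0.0, 0.5, 1.0):
    print(xi, ft(xi).real, np.exp(-np.pi * xi**2))  # the two columns agree
</syntaxhighlight>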
If f(x) decays with all derivatives, i.e., {\displaystyle \lim _{|x|\to \infty }f^{(n)}(x)=0,\quad \forall n\in \mathbb {N} ,} then f̂ converges for all frequencies and, by the Riemann–Lebesgue lemma, f̂ also decays with all derivatives. First introduced in Fourier's Analytical Theory of Heat, the corresponding inversion formula for "sufficiently nice" functions is given by the Fourier inversion theorem, i.e., {\displaystyle f(x)=\int _{-\infty }^{\infty }{\hat {f}}(\xi )\,e^{i2\pi \xi x}\,d\xi .} (Eq.2) The functions f and f̂ are referred to as a Fourier transform pair. A common notation for designating transform pairs is: {\displaystyle f(x)\ {\stackrel {\mathcal {F}}{\longleftrightarrow }}\ {\widehat {f}}(\xi ),} for example {\displaystyle \operatorname {rect} (x)\ {\stackrel {\mathcal {F}}{\longleftrightarrow }}\ \operatorname {sinc} (\xi ).} By analogy, the Fourier series can be regarded as an abstract Fourier transform on the group Z of integers. That is, the synthesis of a sequence of complex numbers cn is defined by the Fourier transform {\displaystyle f(x)=\sum _{n=-\infty }^{\infty }c_{n}\,e^{i2\pi {\tfrac {n}{P}}x},} such that the cn are given by the inversion formula, i.e., the analysis {\displaystyle c_{n}={\frac {1}{P}}\int _{-P/2}^{P/2}f(x)\,e^{-i2\pi {\frac {n}{P}}x}\,dx,} for some complex-valued, P-periodic function f(x) defined on a bounded interval {\displaystyle [-P/2,P/2]\subset \mathbb {R} }. When P → ∞, the constituent frequencies are a continuum: n/P → ξ ∈ R, and cn → f̂(ξ) ∈ C. In other words, on the finite interval [−P/2, P/2] the function f(x) has a discrete decomposition in the periodic functions e^{i2πxn/P}. On the infinite interval (−∞, ∞) the function f(x) has a continuous decomposition in periodic functions e^{i2πxξ}. === Lebesgue integrable functions === A measurable function f : R → C is called (Lebesgue) integrable if the Lebesgue integral of its absolute value is finite: {\displaystyle \|f\|_{1}=\int _{\mathbb {R} }|f(x)|\,dx<\infty .} If f is Lebesgue integrable then the Fourier transform, given by Eq.1, is well-defined for all ξ ∈ R. Furthermore, f̂ ∈ L∞ ∩ C(R) is bounded, uniformly continuous and (by the Riemann–Lebesgue lemma) zero at infinity. The space L1(R) is the space of measurable functions for which the norm ‖f‖1 is finite, modulo the equivalence relation of equality almost everywhere. The Fourier transform on L1(R) is one-to-one.
However, there is no easy characterization of the image, and thus no easy characterization of the inverse transform. In particular, Eq.2 is no longer valid, as it was stated only under the hypothesis that f(x) decayed with all derivatives. While Eq.1 defines the Fourier transform for (complex-valued) functions in L1(R), it is not well-defined for other integrability classes, most importantly the space of square-integrable functions L2(R). For example, the function f(x) = (1 + x²)^{−1/2} is in L2 but not L1, and therefore the Lebesgue integral in Eq.1 does not exist. However, the Fourier transform on the dense subspace L1 ∩ L2(R) ⊂ L2(R) admits a unique continuous extension to a unitary operator on L2(R). This extension is important in part because, unlike the case of L1, the Fourier transform is an automorphism of the space L2(R). In such cases, the Fourier transform can be obtained explicitly by regularizing the integral, and then passing to a limit. In practice, the integral is often regarded as an improper integral instead of a proper Lebesgue integral, but sometimes for convergence one needs to use a weak limit or principal value instead of the (pointwise) limits implicit in an improper integral. Titchmarsh (1986) and Dym & McKean (1985) each give three rigorous ways of extending the Fourier transform to square-integrable functions using this procedure. A general principle in working with the L2 Fourier transform is that Gaussians are dense in L1 ∩ L2, and the various features of the Fourier transform, such as its unitarity, are easily inferred for Gaussians. Many of the properties of the Fourier transform can then be proven from two facts about Gaussians: that e^{−πx²} is its own Fourier transform, and that the Gaussian integral {\displaystyle \int _{-\infty }^{\infty }e^{-\pi x^{2}}\,dx=1.} A feature of the L1 Fourier transform is that it is a homomorphism of Banach algebras from L1 equipped with the convolution operation to the Banach algebra of continuous functions under the L∞ (supremum) norm. The conventions chosen in this article are those of harmonic analysis, and are characterized as the unique conventions such that the Fourier transform is both unitary on L2 and an algebra homomorphism from L1 to L∞, without renormalizing the Lebesgue measure. === Angular frequency (ω) === When the independent variable (x) represents time (often denoted by t), the transform variable (ξ) represents frequency (often denoted by f). For example, if time is measured in seconds, then frequency is in hertz. The Fourier transform can also be written in terms of angular frequency, ω = 2πξ, whose units are radians per second.
The substitution ξ = ω/2π into Eq.1 produces this convention, where the function f̂ of Eq.1 is relabeled f̂1: {\displaystyle {\begin{aligned}{\widehat {f_{3}}}(\omega )&\triangleq \int _{-\infty }^{\infty }f(x)\cdot e^{-i\omega x}\,dx={\widehat {f_{1}}}\left({\tfrac {\omega }{2\pi }}\right),\\f(x)&={\frac {1}{2\pi }}\int _{-\infty }^{\infty }{\widehat {f_{3}}}(\omega )\cdot e^{i\omega x}\,d\omega .\end{aligned}}} Unlike the Eq.1 definition, the Fourier transform is no longer a unitary transformation, and there is less symmetry between the formulas for the transform and its inverse. Those properties are restored by splitting the 2π factor evenly between the transform and its inverse, which leads to another convention: {\displaystyle {\begin{aligned}{\widehat {f_{2}}}(\omega )&\triangleq {\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }f(x)\cdot e^{-i\omega x}\,dx={\frac {1}{\sqrt {2\pi }}}\ \ {\widehat {f_{1}}}\left({\tfrac {\omega }{2\pi }}\right),\\f(x)&={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }{\widehat {f_{2}}}(\omega )\cdot e^{i\omega x}\,d\omega .\end{aligned}}} Variations of all three conventions can be created by conjugating the complex-exponential kernel of both the forward and the reverse transform. The signs must be opposites. == Background == === History === In 1822, Fourier claimed (see Joseph Fourier § The Analytic Theory of Heat) that any function, whether continuous or discontinuous, can be expanded into a series of sines. That important work was corrected and expanded upon by others to provide the foundation for the various forms of the Fourier transform used since. === Complex sinusoids === In general, the coefficients f̂(ξ) are complex numbers, which have two equivalent forms (see Euler's formula): {\displaystyle {\widehat {f}}(\xi )=\underbrace {Ae^{i\theta }} _{\text{polar coordinate form}}=\underbrace {A\cos(\theta )+iA\sin(\theta )} _{\text{rectangular coordinate form}}.} The product with e^{i2πξx} (Eq.2) has these forms: {\displaystyle {\begin{aligned}{\widehat {f}}(\xi )\cdot e^{i2\pi \xi x}&=Ae^{i\theta }\cdot e^{i2\pi \xi x}\\&=\underbrace {Ae^{i(2\pi \xi x+\theta )}} _{\text{polar coordinate form}}\\&=\underbrace {A\cos(2\pi \xi x+\theta )+iA\sin(2\pi \xi x+\theta )} _{\text{rectangular coordinate form}},\end{aligned}}} which conveys both amplitude and phase of frequency ξ. Likewise, the intuitive interpretation of Eq.1 is that multiplying f(x) by e^{−i2πξx} has the effect of subtracting ξ from every frequency component of function f(x). Only the component that was at frequency ξ can produce a non-zero value of the infinite integral, because (at least formally) all the other shifted components are oscillatory and integrate to zero (see § Example). It is noteworthy how easily the product was simplified using the polar form, and how easily the rectangular form was deduced by an application of Euler's formula. === Negative frequency === Euler's formula introduces the possibility of negative ξ, and Eq.1 is defined for all ξ ∈ R. Only certain complex-valued f(x) have transforms satisfying f̂(ξ) = 0 for all ξ < 0 (see Analytic signal; a simple example is e^{i2πξ0x} with ξ0 > 0). But negative frequency is necessary to characterize all other complex-valued f(x), found in signal processing, partial differential equations, radar, nonlinear optics, quantum mechanics, and others. For a real-valued f(x), Eq.1 has the symmetry property f̂(−ξ) = f̂*(ξ) (see § Conjugation below). This redundancy enables Eq.2 to distinguish f(x) = cos(2πξ0x) from e^{i2πξ0x}. But of course it cannot tell us the actual sign of ξ0, because cos(2πξ0x) and cos(2π(−ξ0)x) are indistinguishable on the real line. === Fourier transform for periodic functions === The Fourier transform of a periodic function cannot be defined using the integral formula directly. In order for the integral in Eq.1 to be defined the function must be absolutely integrable. Instead it is common to use Fourier series. It is possible to extend the definition to include periodic functions by viewing them as tempered distributions. This makes it possible to see a connection between the Fourier series and the Fourier transform for periodic functions that have a convergent Fourier series. If f(x) is a periodic function, with period P, that has a convergent Fourier series, then: {\displaystyle {\widehat {f}}(\xi )=\sum _{n=-\infty }^{\infty }c_{n}\cdot \delta \left(\xi -{\tfrac {n}{P}}\right),} where cn are the Fourier series coefficients of f, and δ is the Dirac delta function. In other words, the Fourier transform is a Dirac comb function whose teeth are multiplied by the Fourier series coefficients. === Sampling the Fourier transform === The Fourier transform of an integrable function f can be sampled at regular intervals of arbitrary length
{\displaystyle {\tfrac {1}{P}}.} These samples can be deduced from one cycle of a periodic function f P {\displaystyle f_{P}} which has Fourier series coefficients proportional to those samples by the Poisson summation formula: f P ( x ) ≜ ∑ n = − ∞ ∞ f ( x + n P ) = 1 P ∑ k = − ∞ ∞ f ^ ( k P ) e i 2 π k P x , ∀ k ∈ Z {\displaystyle f_{P}(x)\triangleq \sum _{n=-\infty }^{\infty }f(x+nP)={\frac {1}{P}}\sum _{k=-\infty }^{\infty }{\widehat {f}}\left({\tfrac {k}{P}}\right)e^{i2\pi {\frac {k}{P}}x},\quad \forall k\in \mathbb {Z} } The integrability of f {\displaystyle f} ensures the periodic summation converges. Therefore, the samples f ^ ( k P ) {\displaystyle {\widehat {f}}\left({\tfrac {k}{P}}\right)} can be determined by Fourier series analysis: f ^ ( k P ) = ∫ P f P ( x ) ⋅ e − i 2 π k P x d x . {\displaystyle {\widehat {f}}\left({\tfrac {k}{P}}\right)=\int _{P}f_{P}(x)\cdot e^{-i2\pi {\frac {k}{P}}x}\,dx.} When f ( x ) {\displaystyle f(x)} has compact support, f P ( x ) {\displaystyle f_{P}(x)} has a finite number of terms within the interval of integration. When f ( x ) {\displaystyle f(x)} does not have compact support, numerical evaluation of f P ( x ) {\displaystyle f_{P}(x)} requires an approximation, such as tapering f ( x ) {\displaystyle f(x)} or truncating the number of terms. == Units == The frequency variable must have inverse units to the units of the original function's domain (typically named t {\displaystyle t} or x {\displaystyle x} ). For example, if t {\displaystyle t} is measured in seconds, ξ {\displaystyle \xi } should be in cycles per second or hertz. If the scale of time is in units of 2 π {\displaystyle 2\pi } seconds, then another Greek letter ω {\displaystyle \omega } is typically used instead to represent angular frequency (where ω = 2 π ξ {\displaystyle \omega =2\pi \xi } ) in units of radians per second. If using x {\displaystyle x} for units of length, then ξ {\displaystyle \xi } must be in inverse length, e.g., wavenumbers. That is to say, there are two versions of the real line: one which is the range of t {\displaystyle t} and measured in units of t , {\displaystyle t,} and the other which is the range of ξ {\displaystyle \xi } and measured in inverse units to the units of t . {\displaystyle t.} These two distinct versions of the real line cannot be equated with each other. Therefore, the Fourier transform goes from one space of functions to a different space of functions: functions which have a different domain of definition. In general, ξ {\displaystyle \xi } must always be taken to be a linear form on the space of its domain, which is to say that the second real line is the dual space of the first real line. See the article on linear algebra for a more formal explanation and for more details. This point of view becomes essential in generalizations of the Fourier transform to general symmetry groups, including the case of Fourier series. That there is no one preferred way (often, one says "no canonical way") to compare the two versions of the real line which are involved in the Fourier transform—fixing the units on one line does not force the scale of the units on the other line—is the reason for the plethora of rival conventions on the definition of the Fourier transform. The various definitions resulting from different choices of units differ by various constants. In other conventions, the Fourier transform has i in the exponent instead of −i, and vice versa for the inversion formula. 
This convention is common in modern physics and is the default for Wolfram Alpha, and does not mean that the frequency has become negative, since there is no canonical definition of positivity for the frequency of a complex wave. It simply means that f̂(ξ) is the amplitude of the wave e^{−i2πξx} instead of the wave e^{i2πξx} (the former, with its minus sign, is often seen in the time dependence for sinusoidal plane-wave solutions of the electromagnetic wave equation, or in the time dependence for quantum wave functions). Many of the identities involving the Fourier transform remain valid in those conventions, provided all terms that explicitly involve i have it replaced by −i. In electrical engineering the letter j is typically used for the imaginary unit instead of i because i is used for current. When using dimensionless units, the constant factors might not be written in the transform definition. For instance, in probability theory, the characteristic function φ of the probability density function f of a random variable X of continuous type is defined without a negative sign in the exponential, and since the units of x are ignored, there is no 2π either: {\displaystyle \phi (\lambda )=\int _{-\infty }^{\infty }f(x)e^{i\lambda x}\,dx.} In probability theory and mathematical statistics, the use of the Fourier–Stieltjes transform is preferred, because many random variables are not of continuous type, and do not possess a density function, and one must treat not functions but distributions, i.e., measures which possess "atoms". From the higher point of view of group characters, which is much more abstract, all these arbitrary choices disappear, as will be explained in the later section of this article, which treats the notion of the Fourier transform of a function on a locally compact Abelian group. == Properties == Let f(x) and h(x) denote integrable (Lebesgue-measurable) functions on the real line satisfying: {\displaystyle \int _{-\infty }^{\infty }|f(x)|\,dx<\infty .} We denote the Fourier transforms of these functions as f̂(ξ) and ĥ(ξ) respectively.
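The properties listed next are easy to spot-check numerically. Here is a minimal quadrature harness (a sketch in Python with NumPy; the grid sizes and test function are arbitrary choices, not from the source) that verifies, for instance, the time-shifting rule stated below.
<syntaxhighlight lang="python">
import numpy as np

x = np.linspace(-12.0, 12.0, 12001)
dx = x[1] - x[0]

def ft(samples, xi):
    """Riemann-sum approximation of Eq.1 at one frequency xi."""
    return np.sum(samples * np.exp(-2j * np.pi * xi * x)) * dx

f = np.exp(-np.pi * (x - 1.0)**2)                 # integrable test function
x0, xi = 0.75, 0.4

shifted = np.exp(-np.pi * ((x - x0) - 1.0)**2)    # samples of f(x - x0)
lhs = ft(shifted, xi)
rhs = np.exp(-2j * np.pi * x0 * xi) * ft(f, xi)   # e^{-i 2 pi x0 xi} f_hat(xi)
print(np.allclose(lhs, rhs))                      # True: time-shifting property
</syntaxhighlight>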
=== Basic properties === The Fourier transform has the following basic properties: ==== Linearity ==== a f ( x ) + b h ( x ) ⟺ F a f ^ ( ξ ) + b h ^ ( ξ ) ; a , b ∈ C {\displaystyle a\ f(x)+b\ h(x)\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ a\ {\widehat {f}}(\xi )+b\ {\widehat {h}}(\xi );\quad \ a,b\in \mathbb {C} } ==== Time shifting ==== f ( x − x 0 ) ⟺ F e − i 2 π x 0 ξ f ^ ( ξ ) ; x 0 ∈ R {\displaystyle f(x-x_{0})\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ e^{-i2\pi x_{0}\xi }\ {\widehat {f}}(\xi );\quad \ x_{0}\in \mathbb {R} } ==== Frequency shifting ==== e i 2 π ξ 0 x f ( x ) ⟺ F f ^ ( ξ − ξ 0 ) ; ξ 0 ∈ R {\displaystyle e^{i2\pi \xi _{0}x}f(x)\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\widehat {f}}(\xi -\xi _{0});\quad \ \xi _{0}\in \mathbb {R} } ==== Time scaling ==== f ( a x ) ⟺ F 1 | a | f ^ ( ξ a ) ; a ≠ 0 {\displaystyle f(ax)\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\frac {1}{|a|}}{\widehat {f}}\left({\frac {\xi }{a}}\right);\quad \ a\neq 0} The case a = − 1 {\displaystyle a=-1} leads to the time-reversal property: f ( − x ) ⟺ F f ^ ( − ξ ) {\displaystyle f(-x)\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\widehat {f}}(-\xi )} ==== Symmetry ==== When the real and imaginary parts of a complex function are decomposed into their even and odd parts, there are four components, denoted below by the subscripts RE, RO, IE, and IO. And there is a one-to-one mapping between the four components of a complex time function and the four components of its complex frequency transform: T i m e d o m a i n f = f RE + f RO + i f IE + i f IO ⏟ ⇕ F ⇕ F ⇕ F ⇕ F ⇕ F F r e q u e n c y d o m a i n f ^ = f ^ RE + i f ^ IO ⏞ + i f ^ IE + f ^ RO {\displaystyle {\begin{array}{rlcccccccc}{\mathsf {Time\ domain}}&f&=&f_{_{\text{RE}}}&+&f_{_{\text{RO}}}&+&i\ f_{_{\text{IE}}}&+&\underbrace {i\ f_{_{\text{IO}}}} \\&{\Bigg \Updownarrow }{\mathcal {F}}&&{\Bigg \Updownarrow }{\mathcal {F}}&&\ \ {\Bigg \Updownarrow }{\mathcal {F}}&&\ \ {\Bigg \Updownarrow }{\mathcal {F}}&&\ \ {\Bigg \Updownarrow }{\mathcal {F}}\\{\mathsf {Frequency\ domain}}&{\widehat {f}}&=&{\widehat {f}}_{_{\text{RE}}}&+&\overbrace {i\ {\widehat {f}}_{_{\text{IO}}}\,} &+&i\ {\widehat {f}}_{_{\text{IE}}}&+&{\widehat {f}}_{_{\text{RO}}}\end{array}}} From this, various relationships are apparent, for example: The transform of a real-valued function ( f R E + f R O ) {\displaystyle (f_{_{RE}}+f_{_{RO}})} is the conjugate symmetric function f ^ R E + i f ^ I O . {\displaystyle {\hat {f}}_{RE}+i\ {\hat {f}}_{IO}.} Conversely, a conjugate symmetric transform implies a real-valued time-domain. The transform of an imaginary-valued function ( i f I E + i f I O ) {\displaystyle (i\ f_{_{IE}}+i\ f_{_{IO}})} is the conjugate antisymmetric function f ^ R O + i f ^ I E , {\displaystyle {\hat {f}}_{RO}+i\ {\hat {f}}_{IE},} and the converse is true. The transform of a conjugate symmetric function ( f R E + i f I O ) {\displaystyle (f_{_{RE}}+i\ f_{_{IO}})} is the real-valued function f ^ R E + f ^ R O , {\displaystyle {\hat {f}}_{RE}+{\hat {f}}_{RO},} and the converse is true. The transform of a conjugate antisymmetric function ( f R O + i f I E ) {\displaystyle (f_{_{RO}}+i\ f_{_{IE}})} is the imaginary-valued function i f ^ I E + i f ^ I O , {\displaystyle i\ {\hat {f}}_{IE}+i{\hat {f}}_{IO},} and the converse is true. 
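The symmetry relations above (and the conjugation rule in the next subsection) can be confirmed with the same quadrature idea. A sketch, again assuming NumPy and an arbitrary real-valued test function:
<syntaxhighlight lang="python">
import numpy as np

x = np.linspace(-12.0, 12.0, 12001)
dx = x[1] - x[0]
f = np.exp(-np.pi * (x - 0.5)**2)   # real-valued, neither even nor odd

def ft(xi):
    return np.sum(f * np.exp(-2j * np.pi * xi * x)) * dx

xi = 0.3
# A real-valued function has a conjugate-symmetric (Hermitian) transform.
print(np.allclose(ft(-xi), np.conj(ft(xi))))      # True
</syntaxhighlight>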
==== Conjugation ==== ( f ( x ) ) ∗ ⟺ F ( f ^ ( − ξ ) ) ∗ {\displaystyle {\bigl (}f(x){\bigr )}^{*}\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ \left({\widehat {f}}(-\xi )\right)^{*}} (Note: the ∗ denotes complex conjugation.) In particular, if f {\displaystyle f} is real, then f ^ {\displaystyle {\widehat {f}}} is even symmetric (aka Hermitian function): f ^ ( − ξ ) = ( f ^ ( ξ ) ) ∗ . {\displaystyle {\widehat {f}}(-\xi )={\bigl (}{\widehat {f}}(\xi ){\bigr )}^{*}.} And if f {\displaystyle f} is purely imaginary, then f ^ {\displaystyle {\widehat {f}}} is odd symmetric: f ^ ( − ξ ) = − ( f ^ ( ξ ) ) ∗ . {\displaystyle {\widehat {f}}(-\xi )=-({\widehat {f}}(\xi ))^{*}.} ==== Real and imaginary parts ==== Re ⁡ { f ( x ) } ⟺ F 1 2 ( f ^ ( ξ ) + ( f ^ ( − ξ ) ) ∗ ) {\displaystyle \operatorname {Re} \{f(x)\}\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\tfrac {1}{2}}\left({\widehat {f}}(\xi )+{\bigl (}{\widehat {f}}(-\xi ){\bigr )}^{*}\right)} Im ⁡ { f ( x ) } ⟺ F 1 2 i ( f ^ ( ξ ) − ( f ^ ( − ξ ) ) ∗ ) {\displaystyle \operatorname {Im} \{f(x)\}\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\tfrac {1}{2i}}\left({\widehat {f}}(\xi )-{\bigl (}{\widehat {f}}(-\xi ){\bigr )}^{*}\right)} ==== Zero frequency component ==== Substituting ξ = 0 {\displaystyle \xi =0} in the definition, we obtain: f ^ ( 0 ) = ∫ − ∞ ∞ f ( x ) d x . {\displaystyle {\widehat {f}}(0)=\int _{-\infty }^{\infty }f(x)\,dx.} The integral of f {\displaystyle f} over its domain is known as the average value or DC bias of the function. === Uniform continuity and the Riemann–Lebesgue lemma === The Fourier transform may be defined in some cases for non-integrable functions, but the Fourier transforms of integrable functions have several strong properties. The Fourier transform f ^ {\displaystyle {\hat {f}}} of any integrable function f {\displaystyle f} is uniformly continuous and ‖ f ^ ‖ ∞ ≤ ‖ f ‖ 1 {\displaystyle \left\|{\hat {f}}\right\|_{\infty }\leq \left\|f\right\|_{1}} By the Riemann–Lebesgue lemma, f ^ ( ξ ) → 0 as | ξ | → ∞ . {\displaystyle {\hat {f}}(\xi )\to 0{\text{ as }}|\xi |\to \infty .} However, f ^ {\displaystyle {\hat {f}}} need not be integrable. For example, the Fourier transform of the rectangular function, which is integrable, is the sinc function, which is not Lebesgue integrable, because its improper integrals behave analogously to the alternating harmonic series, in converging to a sum without being absolutely convergent. It is not generally possible to write the inverse transform as a Lebesgue integral. However, when both f {\displaystyle f} and f ^ {\displaystyle {\hat {f}}} are integrable, the inverse equality f ( x ) = ∫ − ∞ ∞ f ^ ( ξ ) e i 2 π x ξ d ξ {\displaystyle f(x)=\int _{-\infty }^{\infty }{\hat {f}}(\xi )e^{i2\pi x\xi }\,d\xi } holds for almost every x. As a result, the Fourier transform is injective on L1(R). === Plancherel theorem and Parseval's theorem === Let f(x) and g(x) be integrable, and let f̂(ξ) and ĝ(ξ) be their Fourier transforms. If f(x) and g(x) are also square-integrable, then the Parseval formula follows: ⟨ f , g ⟩ L 2 = ∫ − ∞ ∞ f ( x ) g ( x ) ¯ d x = ∫ − ∞ ∞ f ^ ( ξ ) g ^ ( ξ ) ¯ d ξ , {\displaystyle \langle f,g\rangle _{L^{2}}=\int _{-\infty }^{\infty }f(x){\overline {g(x)}}\,dx=\int _{-\infty }^{\infty }{\hat {f}}(\xi ){\overline {{\hat {g}}(\xi )}}\,d\xi ,} where the bar denotes complex conjugation. The Plancherel theorem, which follows from the above, states that ‖ f ‖ L 2 2 = ∫ − ∞ ∞ | f ( x ) | 2 d x = ∫ − ∞ ∞ | f ^ ( ξ ) | 2 d ξ . 
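The identity just stated can be illustrated numerically as well. A sketch (Python with NumPy; the O(N²) direct quadrature is deliberate, staying close to Eq.1 rather than using an FFT): the energy of a Gaussian pulse computed in the time domain matches the energy of its sampled transform.
<syntaxhighlight lang="python">
import numpy as np

x = np.linspace(-10.0, 10.0, 2001);  dx = x[1] - x[0]
xi = np.linspace(-6.0, 6.0, 1201);   dxi = xi[1] - xi[0]
f = np.exp(-np.pi * (x - 1.0)**2)

# f_hat on a frequency grid, by direct quadrature of Eq.1.
fhat = np.exp(-2j * np.pi * np.outer(xi, x)) @ f * dx

energy_time = np.sum(np.abs(f)**2) * dx      # integral of |f(x)|^2 (= 1/sqrt(2))
energy_freq = np.sum(np.abs(fhat)**2) * dxi  # integral of |f_hat(xi)|^2
print(energy_time, energy_freq)              # agree to quadrature accuracy
</syntaxhighlight>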
Plancherel's theorem makes it possible to extend the Fourier transform, by a continuity argument, to a unitary operator on L2(R). On L1(R) ∩ L2(R), this extension agrees with the original Fourier transform defined on L1(R), thus enlarging the domain of the Fourier transform to L1(R) + L2(R) (and consequently to Lp(R) for 1 ≤ p ≤ 2). Plancherel's theorem has the interpretation in the sciences that the Fourier transform preserves the energy of the original quantity. The terminology of these formulas is not quite standardised. Parseval's theorem was stated only for Fourier series, and was first proved by Lyapunov. But Parseval's formula makes sense for the Fourier transform as well, and so even though in the context of the Fourier transform it was proved by Plancherel, it is still often referred to as Parseval's formula, or Parseval's relation, or even Parseval's theorem. See Pontryagin duality for a general formulation of this concept in the context of locally compact abelian groups. === Convolution theorem === The Fourier transform translates between convolution and multiplication of functions. If f(x) and g(x) are integrable functions with Fourier transforms f̂(ξ) and ĝ(ξ) respectively, then the Fourier transform of the convolution is given by the product of the Fourier transforms f̂(ξ) and ĝ(ξ) (under other conventions for the definition of the Fourier transform a constant factor may appear). This means that if: {\displaystyle h(x)=(f*g)(x)=\int _{-\infty }^{\infty }f(y)g(x-y)\,dy,} where ∗ denotes the convolution operation, then: {\displaystyle {\hat {h}}(\xi )={\hat {f}}(\xi )\,{\hat {g}}(\xi ).} In linear time invariant (LTI) system theory, it is common to interpret g(x) as the impulse response of an LTI system with input f(x) and output h(x), since substituting the unit impulse for f(x) yields h(x) = g(x). In this case, ĝ(ξ) represents the frequency response of the system. Conversely, if f(x) can be decomposed as the product of two square integrable functions p(x) and q(x), then the Fourier transform of f(x) is given by the convolution of the respective Fourier transforms p̂(ξ) and q̂(ξ). === Cross-correlation theorem === In an analogous manner, it can be shown that if h(x) is the cross-correlation of f(x) and g(x): {\displaystyle h(x)=(f\star g)(x)=\int _{-\infty }^{\infty }{\overline {f(y)}}g(x+y)\,dy} then the Fourier transform of h(x) is: {\displaystyle {\hat {h}}(\xi )={\overline {{\hat {f}}(\xi )}}\,{\hat {g}}(\xi ).} As a special case, the autocorrelation of function f(x) is: {\displaystyle h(x)=(f\star f)(x)=\int _{-\infty }^{\infty }{\overline {f(y)}}f(x+y)\,dy} for which {\displaystyle {\hat {h}}(\xi )={\overline {{\hat {f}}(\xi )}}{\hat {f}}(\xi )=\left|{\hat {f}}(\xi )\right|^{2}.} === Differentiation === Suppose f(x) is an absolutely continuous differentiable function, and both f and its derivative f′ are integrable. Then the Fourier transform of the derivative is given by
{\displaystyle {\widehat {f'\,}}(\xi )={\mathcal {F}}\left\{{\frac {d}{dx}}f(x)\right\}=i2\pi \xi {\hat {f}}(\xi ).} More generally, the Fourier transformation of the nth derivative f(n) is given by {\displaystyle {\widehat {f^{(n)}}}(\xi )={\mathcal {F}}\left\{{\frac {d^{n}}{dx^{n}}}f(x)\right\}=(i2\pi \xi )^{n}{\hat {f}}(\xi ).} Analogously, {\displaystyle {\mathcal {F}}\left\{{\frac {d^{n}}{d\xi ^{n}}}{\hat {f}}(\xi )\right\}=(i2\pi x)^{n}f(-x)}, so {\displaystyle {\mathcal {F}}\left\{x^{n}f(x)\right\}=\left({\frac {i}{2\pi }}\right)^{n}{\frac {d^{n}}{d\xi ^{n}}}{\hat {f}}(\xi ).} By applying the Fourier transform and using these formulas, some ordinary differential equations can be transformed into algebraic equations, which are much easier to solve. These formulas also give rise to the rule of thumb "f(x) is smooth if and only if f̂(ξ) quickly falls to 0 for |ξ| → ∞." By using the analogous rules for the inverse Fourier transform, one can also say "f(x) quickly falls to 0 for |x| → ∞ if and only if f̂(ξ) is smooth." === Eigenfunctions === The Fourier transform is a linear transform which has eigenfunctions obeying {\displaystyle {\mathcal {F}}[\psi ]=\lambda \psi ,} with λ ∈ C. A set of eigenfunctions is found by noting that the homogeneous differential equation {\displaystyle \left[U\left({\frac {1}{2\pi }}{\frac {d}{dx}}\right)+U(x)\right]\psi (x)=0} leads to eigenfunctions ψ(x) of the Fourier transform F as long as the form of the equation remains invariant under Fourier transform. In other words, every solution ψ(x) and its Fourier transform ψ̂(ξ) obey the same equation. Assuming uniqueness of the solutions, every solution ψ(x) must therefore be an eigenfunction of the Fourier transform. The form of the equation remains unchanged under Fourier transform if U(x) can be expanded in a power series in which for all terms the same factor of either one of ±1, ±i arises from the factors i^n introduced by the differentiation rules upon Fourier transforming the homogeneous differential equation, because this factor may then be cancelled. The simplest allowable U(x) = x leads to the standard normal distribution. More generally, a set of eigenfunctions is also found by noting that the differentiation rules imply that the ordinary differential equation {\displaystyle \left[W\left({\frac {i}{2\pi }}{\frac {d}{dx}}\right)+W(x)\right]\psi (x)=C\psi (x)} with C constant and W(x) a non-constant even function remains invariant in form when applying the Fourier transform F to both sides of the equation. The simplest example is provided by W(x) = x², which is equivalent to considering the Schrödinger equation for the quantum harmonic oscillator. The corresponding solutions provide an important choice of an orthonormal basis for L2(R) and are given by the "physicist's" Hermite functions.
Equivalently one may use {\displaystyle \psi _{n}(x)={\frac {\sqrt[{4}]{2}}{\sqrt {n!}}}e^{-\pi x^{2}}\mathrm {He} _{n}\left(2x{\sqrt {\pi }}\right),} where Hen(x) are the "probabilist's" Hermite polynomials, defined as {\displaystyle \mathrm {He} _{n}(x)=(-1)^{n}e^{{\frac {1}{2}}x^{2}}\left({\frac {d}{dx}}\right)^{n}e^{-{\frac {1}{2}}x^{2}}.} Under this convention for the Fourier transform, we have that {\displaystyle {\hat {\psi }}_{n}(\xi )=(-i)^{n}\psi _{n}(\xi ).} In other words, the Hermite functions form a complete orthonormal system of eigenfunctions for the Fourier transform on L2(R). However, this choice of eigenfunctions is not unique. Because of {\displaystyle {\mathcal {F}}^{4}=\mathrm {id} } there are only four different eigenvalues of the Fourier transform (the fourth roots of unity ±1 and ±i) and any linear combination of eigenfunctions with the same eigenvalue gives another eigenfunction. As a consequence of this, it is possible to decompose L2(R) as a direct sum of four spaces H0, H1, H2, and H3, where the Fourier transform acts on Hk simply by multiplication by i^k. Since the complete set of Hermite functions ψn provides a resolution of the identity, they diagonalize the Fourier operator, i.e. the Fourier transform can be represented by such a sum of terms weighted by the above eigenvalues, and these sums can be explicitly summed: {\displaystyle {\mathcal {F}}[f](\xi )=\int dxf(x)\sum _{n\geq 0}(-i)^{n}\psi _{n}(x)\psi _{n}(\xi )~.} This approach to define the Fourier transform was first proposed by Norbert Wiener. Among other properties, Hermite functions decrease exponentially fast in both frequency and time domains, and they are thus used to define a generalization of the Fourier transform, namely the fractional Fourier transform used in time–frequency analysis. In physics, this transform was introduced by Edward Condon. This change of basis functions becomes possible because the Fourier transform is a unitary transform when using the right conventions. Consequently, under the proper conditions it may be expected to result from a self-adjoint generator N via {\displaystyle {\mathcal {F}}[\psi ]=e^{-itN}\psi .} The operator N is the number operator of the quantum harmonic oscillator written as {\displaystyle N\equiv {\frac {1}{2}}\left(x-{\frac {\partial }{\partial x}}\right)\left(x+{\frac {\partial }{\partial x}}\right)={\frac {1}{2}}\left(-{\frac {\partial ^{2}}{\partial x^{2}}}+x^{2}-1\right).} It can be interpreted as the generator of fractional Fourier transforms for arbitrary values of t, and of the conventional continuous Fourier transform F for the particular value t = π/2, with the Mehler kernel implementing the corresponding active transform. The eigenfunctions of N are the Hermite functions ψn(x), which are therefore also eigenfunctions of F. Upon extending the Fourier transform to distributions the Dirac comb is also an eigenfunction of the Fourier transform.
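The eigenrelation ψ̂n(ξ) = (−i)^n ψn(ξ) can be checked directly under the conventions of this article. A sketch (Python with NumPy, whose numpy.polynomial.hermite_e module evaluates the probabilist's Hen; the quadrature grid is an arbitrary choice):
<syntaxhighlight lang="python">
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermeval

x = np.linspace(-10.0, 10.0, 8001)
dx = x[1] - x[0]

def psi(n, t):
    """psi_n(t) = 2^(1/4)/sqrt(n!) * e^{-pi t^2} * He_n(2 t sqrt(pi))."""
    c = np.zeros(n + 1); c[n] = 1.0          # coefficient vector selecting He_n
    return 2**0.25 / np.sqrt(factorial(n)) * np.exp(-np.pi * t**2) \
        * hermeval(2.0 * t * np.sqrt(np.pi), c)

def ft(samples, xi):
    return np.sum(samples * np.exp(-2j * np.pi * xi * x)) * dx

xi = 0.7
for n in range(4):
    print(n, np.allclose(ft(psi(n, x), xi), (-1j)**n * psi(n, xi)))  # all True
</syntaxhighlight>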
=== Inversion and periodicity === Under suitable conditions on the function f {\displaystyle f} , it can be recovered from its Fourier transform f ^ {\displaystyle {\hat {f}}} . Indeed, denoting the Fourier transform operator by F {\displaystyle {\mathcal {F}}} , so F f := f ^ {\displaystyle {\mathcal {F}}f:={\hat {f}}} , then for suitable functions, applying the Fourier transform twice simply flips the function: ( F 2 f ) ( x ) = f ( − x ) {\displaystyle \left({\mathcal {F}}^{2}f\right)(x)=f(-x)} , which can be interpreted as "reversing time". Since reversing time is two-periodic, applying this twice yields F 4 ( f ) = f {\displaystyle {\mathcal {F}}^{4}(f)=f} , so the Fourier transform operator is four-periodic, and similarly the inverse Fourier transform can be obtained by applying the Fourier transform three times: F 3 ( f ^ ) = f {\displaystyle {\mathcal {F}}^{3}\left({\hat {f}}\right)=f} . In particular the Fourier transform is invertible (under suitable conditions). More precisely, defining the parity operator P {\displaystyle {\mathcal {P}}} such that ( P f ) ( x ) = f ( − x ) {\displaystyle ({\mathcal {P}}f)(x)=f(-x)} , we have: F 0 = i d , F 1 = F , F 2 = P , F 3 = F − 1 = P ∘ F = F ∘ P , F 4 = i d {\displaystyle {\begin{aligned}{\mathcal {F}}^{0}&=\mathrm {id} ,\\{\mathcal {F}}^{1}&={\mathcal {F}},\\{\mathcal {F}}^{2}&={\mathcal {P}},\\{\mathcal {F}}^{3}&={\mathcal {F}}^{-1}={\mathcal {P}}\circ {\mathcal {F}}={\mathcal {F}}\circ {\mathcal {P}},\\{\mathcal {F}}^{4}&=\mathrm {id} \end{aligned}}} These equalities of operators require careful definition of the space of functions in question, defining equality of functions (equality at every point? equality almost everywhere?) and defining equality of operators – that is, defining the topology on the function space and operator space in question. These are not true for all functions, but are true under various conditions, which are the content of the various forms of the Fourier inversion theorem. This fourfold periodicity of the Fourier transform is similar to a rotation of the plane by 90°, particularly as the two-fold iteration yields a reversal, and in fact this analogy can be made precise. While the Fourier transform can simply be interpreted as switching the time domain and the frequency domain, with the inverse Fourier transform switching them back, more geometrically it can be interpreted as a rotation by 90° in the time–frequency domain (considering time as the x-axis and frequency as the y-axis), and the Fourier transform can be generalized to the fractional Fourier transform, which involves rotations by other angles. This can be further generalized to linear canonical transformations, which can be visualized as the action of the special linear group SL2(R) on the time–frequency plane, with the preserved symplectic form corresponding to the uncertainty principle, below. This approach is particularly studied in signal processing, under time–frequency analysis. === Connection with the Heisenberg group === The Heisenberg group is a certain group of unitary operators on the Hilbert space L2(R) of square integrable complex valued functions f on the real line, generated by the translations (Ty f)(x) = f (x + y) and multiplication by ei2πξx, (Mξ f)(x) = ei2πξx f (x). 
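These two families of operators are easy to experiment with. The following sketch (Python with NumPy; translation is applied analytically to avoid grid interpolation) previews the computation carried out next, showing that swapping the order of a translation and a modulation changes the result by the constant phase e^{i2πξy}.
<syntaxhighlight lang="python">
import numpy as np

x = np.linspace(-4.0, 4.0, 9)
f = lambda t: np.exp(-t**2)                 # any test function
y, xi = 0.5, 2.0                            # translation and modulation parameters

T = lambda g: (lambda t: g(t + y))                            # (T_y g)(x) = g(x + y)
M = lambda g: (lambda t: np.exp(2j * np.pi * xi * t) * g(t))  # (M_xi g)(x) = e^{i2 pi xi x} g(x)

lhs = M(T(f))(x)   # modulate after translating: e^{i 2 pi xi x} f(x + y)
rhs = T(M(f))(x)   # translate after modulating: e^{i 2 pi xi (x + y)} f(x + y)
print(np.allclose(lhs * np.exp(2j * np.pi * xi * y), rhs))    # differ by e^{i 2 pi xi y}
</syntaxhighlight>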
These operators do not commute, as their (group) commutator is ( M ξ − 1 T y − 1 M ξ T y f ) ( x ) = e i 2 π ξ y f ( x ) {\displaystyle \left(M_{\xi }^{-1}T_{y}^{-1}M_{\xi }T_{y}f\right)(x)=e^{i2\pi \xi y}f(x)} which is multiplication by the constant (independent of x) ei2πξy ∈ U(1) (the circle group of unit modulus complex numbers). As an abstract group, the Heisenberg group is the three-dimensional Lie group of triples (x, ξ, z) ∈ R2 × U(1), with the group law ( x 1 , ξ 1 , t 1 ) ⋅ ( x 2 , ξ 2 , t 2 ) = ( x 1 + x 2 , ξ 1 + ξ 2 , t 1 t 2 e i 2 π ( x 1 ξ 1 + x 2 ξ 2 + x 1 ξ 2 ) ) . {\displaystyle \left(x_{1},\xi _{1},t_{1}\right)\cdot \left(x_{2},\xi _{2},t_{2}\right)=\left(x_{1}+x_{2},\xi _{1}+\xi _{2},t_{1}t_{2}e^{i2\pi \left(x_{1}\xi _{1}+x_{2}\xi _{2}+x_{1}\xi _{2}\right)}\right).} Denote the Heisenberg group by H1. The above procedure describes not only the group structure, but also a standard unitary representation of H1 on a Hilbert space, which we denote by ρ : H1 → B(L2(R)). Define the linear automorphism of R2 by J ( x ξ ) = ( − ξ x ) {\displaystyle J{\begin{pmatrix}x\\\xi \end{pmatrix}}={\begin{pmatrix}-\xi \\x\end{pmatrix}}} so that J2 = −I. This J can be extended to a unique automorphism of H1: j ( x , ξ , t ) = ( − ξ , x , t e − i 2 π ξ x ) . {\displaystyle j\left(x,\xi ,t\right)=\left(-\xi ,x,te^{-i2\pi \xi x}\right).} According to the Stone–von Neumann theorem, the unitary representations ρ and ρ ∘ j are unitarily equivalent, so there is a unique intertwiner W ∈ U(L2(R)) such that ρ ∘ j = W ρ W ∗ . {\displaystyle \rho \circ j=W\rho W^{*}.} This operator W is the Fourier transform. Many of the standard properties of the Fourier transform are immediate consequences of this more general framework. For example, the square of the Fourier transform, W2, is an intertwiner associated with J2 = −I, and so we have (W2f)(x) = f (−x) is the reflection of the original function f. == Complex domain == The integral for the Fourier transform f ^ ( ξ ) = ∫ − ∞ ∞ e − i 2 π ξ t f ( t ) d t {\displaystyle {\hat {f}}(\xi )=\int _{-\infty }^{\infty }e^{-i2\pi \xi t}f(t)\,dt} can be studied for complex values of its argument ξ. Depending on the properties of f, this might not converge off the real axis at all, or it might converge to a complex analytic function for all values of ξ = σ + iτ, or something in between. The Paley–Wiener theorem says that f is smooth (i.e., n-times differentiable for all positive integers n) and compactly supported if and only if f̂ (σ + iτ) is a holomorphic function for which there exists a constant a > 0 such that for any integer n ≥ 0, | ξ n f ^ ( ξ ) | ≤ C e a | τ | {\displaystyle \left\vert \xi ^{n}{\hat {f}}(\xi )\right\vert \leq Ce^{a\vert \tau \vert }} for some constant C. (In this case, f is supported on [−a, a].) This can be expressed by saying that f̂ is an entire function which is rapidly decreasing in σ (for fixed τ) and of exponential growth in τ (uniformly in σ). (If f is not smooth, but only L2, the statement still holds provided n = 0.) The space of such functions of a complex variable is called the Paley—Wiener space. This theorem has been generalised to semisimple Lie groups. If f is supported on the half-line t ≥ 0, then f is said to be "causal" because the impulse response function of a physically realisable filter must have this property, as no effect can precede its cause. Paley and Wiener showed that then f̂ extends to a holomorphic function on the complex lower half-plane τ < 0 which tends to zero as τ goes to infinity. 
The converse is false and it is not known how to characterise the Fourier transform of a causal function. === Laplace transform === The Fourier transform f̂(ξ) is related to the Laplace transform F(s), which is also used for the solution of differential equations and the analysis of filters. It may happen that a function f for which the Fourier integral does not converge on the real axis at all, nevertheless has a complex Fourier transform defined in some region of the complex plane. For example, if f(t) is of exponential growth, i.e., {\displaystyle \vert f(t)\vert <Ce^{a\vert t\vert }} for some constants C, a ≥ 0, then {\displaystyle {\hat {f}}(i\tau )=\int _{-\infty }^{\infty }e^{2\pi \tau t}f(t)\,dt,} convergent for all 2πτ < −a, is the two-sided Laplace transform of f. The more usual version ("one-sided") of the Laplace transform is {\displaystyle F(s)=\int _{0}^{\infty }f(t)e^{-st}\,dt.} If f is also causal and analytic, then: {\displaystyle {\hat {f}}(i\tau )=F(-2\pi \tau ).} Thus, extending the Fourier transform to the complex domain means it includes the Laplace transform as a special case in the case of causal functions, but with the change of variable s = i2πξ. From another, perhaps more classical viewpoint, the Laplace transform by its form involves an additional exponential regulating term which lets it converge outside of the imaginary line where the Fourier transform is defined. As such it can converge for at most exponentially divergent series and integrals, whereas the original Fourier decomposition cannot, enabling analysis of systems with divergent or critical elements. Two particular examples from linear signal processing are the construction of allpass filter networks from critical comb and mitigating filters via exact pole-zero cancellation on the unit circle. Such designs are common in audio processing, where highly nonlinear phase response is sought, as in reverb. Furthermore, when extended pulselike impulse responses are sought for signal processing work, the easiest way to produce them is to have one circuit which produces a divergent time response, and then to cancel its divergence through a delayed opposite and compensatory response. There, only the delay circuit in-between admits a classical Fourier description, which is critical. Both the circuits to the side are unstable, and do not admit a convergent Fourier decomposition. However, they do admit a Laplace domain description, with identical half-planes of convergence in the complex plane (or in the discrete case, the Z-plane), wherein their effects cancel. In modern mathematics the Laplace transform is conventionally subsumed under the aegis of Fourier methods. Both of them are subsumed by the far more general, and more abstract, idea of harmonic analysis. === Inversion === Still with ξ = σ + iτ, if f̂ is complex analytic for a ≤ τ ≤ b, then {\displaystyle \int _{-\infty }^{\infty }{\hat {f}}(\sigma +ia)e^{i2\pi \xi t}\,d\sigma =\int _{-\infty }^{\infty }{\hat {f}}(\sigma +ib)e^{i2\pi \xi t}\,d\sigma } by Cauchy's integral theorem. Therefore, the Fourier inversion formula can use integration along different lines, parallel to the real axis.
Theorem: If f(t) = 0 for t < 0, and |f(t)| < Ce^{a|t|} for some constants C, a > 0, then {\displaystyle f(t)=\int _{-\infty }^{\infty }{\hat {f}}(\sigma +i\tau )e^{i2\pi \xi t}\,d\sigma ,} for any τ < −a/2π. This theorem implies the Mellin inversion formula for the Laplace transformation, {\displaystyle f(t)={\frac {1}{i2\pi }}\int _{b-i\infty }^{b+i\infty }F(s)e^{st}\,ds} for any b > a, where F(s) is the Laplace transform of f(t). The hypotheses can be weakened, as in the results of Carleson and Hunt, to f(t)e^{−at} being L1, provided that f be of bounded variation in a closed neighborhood of t (cf. Dini test), the value of f at t be taken to be the arithmetic mean of the left and right limits, and that the integrals be taken in the sense of Cauchy principal values. L2 versions of these inversion formulas are also available. == Fourier transform on Euclidean space == The Fourier transform can be defined in any number of dimensions n. As with the one-dimensional case, there are many conventions. For an integrable function f(x), this article takes the definition: {\displaystyle {\hat {f}}({\boldsymbol {\xi }})={\mathcal {F}}(f)({\boldsymbol {\xi }})=\int _{\mathbb {R} ^{n}}f(\mathbf {x} )e^{-i2\pi {\boldsymbol {\xi }}\cdot \mathbf {x} }\,d\mathbf {x} } where x and ξ are n-dimensional vectors, and x · ξ is the dot product of the vectors. Alternatively, ξ can be viewed as belonging to the dual vector space {\displaystyle \mathbb {R} ^{n\star }}, in which case the dot product becomes the contraction of x and ξ, usually written as ⟨x, ξ⟩. All of the basic properties listed above hold for the n-dimensional Fourier transform, as do Plancherel's and Parseval's theorems. When the function is integrable, the Fourier transform is still uniformly continuous and the Riemann–Lebesgue lemma holds. === Uncertainty principle === Generally speaking, the more concentrated f(x) is, the more spread out its Fourier transform f̂(ξ) must be. In particular, the scaling property of the Fourier transform may be seen as saying: if we squeeze a function in x, its Fourier transform stretches out in ξ. It is not possible to arbitrarily concentrate both a function and its Fourier transform. The trade-off between the compaction of a function and its Fourier transform can be formalized in the form of an uncertainty principle by viewing a function and its Fourier transform as conjugate variables with respect to the symplectic form on the time–frequency domain: from the point of view of the linear canonical transformation, the Fourier transform is rotation by 90° in the time–frequency domain, and preserves the symplectic form. Suppose f(x) is an integrable and square-integrable function. Without loss of generality, assume that f(x) is normalized: {\displaystyle \int _{-\infty }^{\infty }|f(x)|^{2}\,dx=1.} It follows from the Plancherel theorem that f̂(ξ) is also normalized. The spread around x = 0 may be measured by the dispersion about zero defined by {\displaystyle D_{0}(f)=\int _{-\infty }^{\infty }x^{2}|f(x)|^{2}\,dx.} In probability terms, this is the second moment of |f(x)|² about zero. The uncertainty principle states that, if f(x) is absolutely continuous and the functions x·f(x) and f′(x) are square integrable, then
{\displaystyle D_{0}(f)D_{0}({\hat {f}})\geq {\frac {1}{16\pi ^{2}}}.} The equality is attained only in the case {\displaystyle {\begin{aligned}f(x)&=C_{1}\,e^{-\pi {\frac {x^{2}}{\sigma ^{2}}}}\\\therefore {\hat {f}}(\xi )&=\sigma C_{1}\,e^{-\pi \sigma ^{2}\xi ^{2}}\end{aligned}}} where σ > 0 is arbitrary and {\displaystyle C_{1}={\sqrt[{4}]{2}}/{\sqrt {\sigma }}} so that f is L2-normalized. In other words, f is a (normalized) Gaussian function with variance σ²/2π, centered at zero, and its Fourier transform is a Gaussian function with variance σ⁻²/2π. Gaussian functions are examples of Schwartz functions (see the discussion on tempered distributions below). In fact, this inequality implies that: {\displaystyle \left(\int _{-\infty }^{\infty }(x-x_{0})^{2}|f(x)|^{2}\,dx\right)\left(\int _{-\infty }^{\infty }(\xi -\xi _{0})^{2}\left|{\hat {f}}(\xi )\right|^{2}\,d\xi \right)\geq {\frac {1}{16\pi ^{2}}},\quad \forall x_{0},\xi _{0}\in \mathbb {R} .} In quantum mechanics, the momentum and position wave functions are Fourier transform pairs, up to a factor of the Planck constant. With this constant properly taken into account, the inequality above becomes the statement of the Heisenberg uncertainty principle. A stronger uncertainty principle is the Hirschman uncertainty principle, which is expressed as: {\displaystyle H\left(\left|f\right|^{2}\right)+H\left(\left|{\hat {f}}\right|^{2}\right)\geq \log \left({\frac {e}{2}}\right)} where H(p) is the differential entropy of the probability density function p(x): {\displaystyle H(p)=-\int _{-\infty }^{\infty }p(x)\log {\bigl (}p(x){\bigr )}\,dx} where the logarithms may be in any base that is consistent. The equality is attained for a Gaussian, as in the previous case. === Sine and cosine transforms === Fourier's original formulation of the transform did not use complex numbers, but rather sines and cosines. Statisticians and others still use this form. An absolutely integrable function f for which Fourier inversion holds can be expanded in terms of genuine frequencies (avoiding negative frequencies, which are sometimes considered hard to interpret physically) λ by {\displaystyle f(t)=\int _{0}^{\infty }{\bigl (}a(\lambda )\cos(2\pi \lambda t)+b(\lambda )\sin(2\pi \lambda t){\bigr )}\,d\lambda .} This is called an expansion as a trigonometric integral, or a Fourier integral expansion. The coefficient functions a and b can be found by using variants of the Fourier cosine transform and the Fourier sine transform (the normalisations are, again, not standardised): {\displaystyle a(\lambda )=2\int _{-\infty }^{\infty }f(t)\cos(2\pi \lambda t)\,dt} and {\displaystyle b(\lambda )=2\int _{-\infty }^{\infty }f(t)\sin(2\pi \lambda t)\,dt.} Older literature refers to the two transform functions, the Fourier cosine transform, a, and the Fourier sine transform, b. The function f can be recovered from the sine and cosine transform using
{\displaystyle f(t)=2\int _{0}^{\infty }\int _{-\infty }^{\infty }f(\tau )\cos {\bigl (}2\pi \lambda (\tau -t){\bigr )}\,d\tau \,d\lambda .} together with trigonometric identities. This is referred to as Fourier's integral formula. === Spherical harmonics === Let the set of homogeneous harmonic polynomials of degree k on Rn be denoted by Ak. The set Ak consists of the solid spherical harmonics of degree k. The solid spherical harmonics play a similar role in higher dimensions to the Hermite polynomials in dimension one. Specifically, if f(x) = e^{−π|x|²}P(x) for some P(x) in Ak, then f̂(ξ) = i^{−k} f(ξ). Let the set Hk be the closure in L2(Rn) of linear combinations of functions of the form f(|x|)P(x) where P(x) is in Ak. The space L2(Rn) is then a direct sum of the spaces Hk; the Fourier transform maps each space Hk to itself, and it is possible to characterize the action of the Fourier transform on each space Hk. Let f(x) = f0(|x|)P(x) (with P(x) in Ak), then f ^ ( ξ ) = F 0 ( | ξ | ) P ( ξ ) {\displaystyle {\hat {f}}(\xi )=F_{0}(|\xi |)P(\xi )} where F 0 ( r ) = 2 π i − k r − n + 2 k − 2 2 ∫ 0 ∞ f 0 ( s ) J n + 2 k − 2 2 ( 2 π r s ) s n + 2 k 2 d s . {\displaystyle F_{0}(r)=2\pi i^{-k}r^{-{\frac {n+2k-2}{2}}}\int _{0}^{\infty }f_{0}(s)J_{\frac {n+2k-2}{2}}(2\pi rs)s^{\frac {n+2k}{2}}\,ds.} Here J(n + 2k − 2)/2 denotes the Bessel function of the first kind with order (n + 2k − 2)/2. When k = 0 this gives a useful formula for the Fourier transform of a radial function. This is essentially the Hankel transform. Moreover, there is a simple recursion relating the cases n + 2 and n allowing one to compute, e.g., the three-dimensional Fourier transform of a radial function from the one-dimensional one. === Restriction problems === In higher dimensions it becomes interesting to study restriction problems for the Fourier transform. The Fourier transform of an integrable function is continuous and the restriction of this function to any set is defined. But the Fourier transform of a square-integrable function is, in general, just another square-integrable function, defined only almost everywhere. As such, the restriction of the Fourier transform of an L2(Rn) function cannot be defined on sets of measure 0. It is still an active area of study to understand restriction problems in Lp for 1 < p < 2. It is possible in some cases to define the restriction of a Fourier transform to a set S, provided S has non-zero curvature. The case when S is the unit sphere in Rn is of particular interest. In this case the Tomas–Stein restriction theorem states that the restriction of the Fourier transform to the unit sphere in Rn is a bounded operator on Lp provided 1 ≤ p ≤ (2n + 2)/(n + 3). One notable difference between the Fourier transform in 1 dimension versus higher dimensions concerns the partial sum operator. Consider an increasing collection of measurable sets ER indexed by R ∈ (0,∞), such as balls of radius R centered at the origin, or cubes of side 2R. For a given integrable function f, consider the function fR defined by: f R ( x ) = ∫ E R f ^ ( ξ ) e i 2 π x ⋅ ξ d ξ , x ∈ R n . {\displaystyle f_{R}(x)=\int _{E_{R}}{\hat {f}}(\xi )e^{i2\pi x\cdot \xi }\,d\xi ,\quad x\in \mathbb {R} ^{n}.} Suppose in addition that f ∈ Lp(Rn). For n = 1 and 1 < p < ∞, if one takes ER = (−R, R), then fR converges to f in Lp as R tends to infinity, by the boundedness of the Hilbert transform. Naively one may hope the same holds true for n > 1. If ER is taken to be a cube with side length R, then convergence still holds. 
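This recovery can be watched numerically before turning to the more delicate ball case. The following one-dimensional sketch (assuming NumPy is available; the Gaussian test function and the grid sizes are illustrative choices, not part of the theory) approximates the truncated inversion integral fR by a Riemann sum, using the fact that e^{−πx²} is its own transform in this article's convention, and shows the L2 error shrinking as R grows.

import numpy as np

x = np.linspace(-4, 4, 801)            # spatial grid
dx = x[1] - x[0]
f = np.exp(-np.pi * x**2)              # test function; its transform is exp(-pi xi^2)

for R in (0.5, 1.0, 2.0, 4.0):
    xi = np.linspace(-R, R, 4001)      # truncated frequency set E_R = (-R, R)
    dxi = xi[1] - xi[0]
    f_hat = np.exp(-np.pi * xi**2)     # known transform of the Gaussian
    # Riemann-sum approximation of f_R(x) = integral over E_R of f_hat(xi) e^{i 2 pi x xi} d xi
    f_R = (f_hat * np.exp(2j * np.pi * np.outer(x, xi))).sum(axis=1) * dxi
    err = np.sqrt((np.abs(f_R - f)**2).sum() * dx)   # discrete L2 error
    print(f"R = {R:3.1f}   L2 error = {err:.2e}")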
Another natural candidate is the Euclidean ball ER = {ξ : |ξ| < R}. In order for this partial sum operator to converge, it is necessary that the multiplier for the unit ball be bounded in Lp(Rn). For n ≥ 2 it is a celebrated theorem of Charles Fefferman that the multiplier for the unit ball is never bounded unless p = 2. In fact, when p ≠ 2, this shows that not only may fR fail to converge to f in Lp, but for some functions f ∈ Lp(Rn), fR is not even an element of Lp. == Fourier transform on function spaces == The definition of the Fourier transform naturally extends from L 1 ( R ) {\displaystyle L^{1}(\mathbb {R} )} to L 1 ( R n ) {\displaystyle L^{1}(\mathbb {R} ^{n})} . That is, if f ∈ L 1 ( R n ) {\displaystyle f\in L^{1}(\mathbb {R} ^{n})} then the Fourier transform F : L 1 ( R n ) → L ∞ ( R n ) {\displaystyle {\mathcal {F}}:L^{1}(\mathbb {R} ^{n})\to L^{\infty }(\mathbb {R} ^{n})} is given by f ( x ) ↦ f ^ ( ξ ) = ∫ R n f ( x ) e − i 2 π ξ ⋅ x d x , ∀ ξ ∈ R n . {\displaystyle f(x)\mapsto {\hat {f}}(\xi )=\int _{\mathbb {R} ^{n}}f(x)e^{-i2\pi \xi \cdot x}\,dx,\quad \forall \xi \in \mathbb {R} ^{n}.} This operator is bounded, since sup ξ ∈ R n | f ^ ( ξ ) | ≤ ∫ R n | f ( x ) | d x , {\displaystyle \sup _{\xi \in \mathbb {R} ^{n}}\left\vert {\hat {f}}(\xi )\right\vert \leq \int _{\mathbb {R} ^{n}}\vert f(x)\vert \,dx,} which shows that its operator norm is bounded by 1. The Riemann–Lebesgue lemma shows that if f ∈ L 1 ( R n ) {\displaystyle f\in L^{1}(\mathbb {R} ^{n})} then its Fourier transform actually belongs to the space of continuous functions which vanish at infinity, i.e., f ^ ∈ C 0 ( R n ) ⊂ L ∞ ( R n ) {\displaystyle {\hat {f}}\in C_{0}(\mathbb {R} ^{n})\subset L^{\infty }(\mathbb {R} ^{n})} . Furthermore, the image of L 1 {\displaystyle L^{1}} under F {\displaystyle {\mathcal {F}}} is a strict subset of C 0 ( R n ) {\displaystyle C_{0}(\mathbb {R} ^{n})} . Similarly to the case of one variable, the Fourier transform can be defined on L 2 ( R n ) {\displaystyle L^{2}(\mathbb {R} ^{n})} . The Fourier transform in L 2 ( R n ) {\displaystyle L^{2}(\mathbb {R} ^{n})} is no longer given by an ordinary Lebesgue integral, although it can be computed by an improper integral, i.e., f ^ ( ξ ) = lim R → ∞ ∫ | x | ≤ R f ( x ) e − i 2 π ξ ⋅ x d x {\displaystyle {\hat {f}}(\xi )=\lim _{R\to \infty }\int _{|x|\leq R}f(x)e^{-i2\pi \xi \cdot x}\,dx} where the limit is taken in the L2 sense. Furthermore, F : L 2 ( R n ) → L 2 ( R n ) {\displaystyle {\mathcal {F}}:L^{2}(\mathbb {R} ^{n})\to L^{2}(\mathbb {R} ^{n})} is a unitary operator. For an operator to be unitary it is sufficient to show that it is bijective and preserves the inner product, so in this case these follow from the Fourier inversion theorem combined with the fact that for any f, g ∈ L2(Rn) we have ∫ R n f ( x ) F g ( x ) d x = ∫ R n F f ( x ) g ( x ) d x . {\displaystyle \int _{\mathbb {R} ^{n}}f(x){\mathcal {F}}g(x)\,dx=\int _{\mathbb {R} ^{n}}{\mathcal {F}}f(x)g(x)\,dx.} In particular, the image of L2(Rn) under the Fourier transform is L2(Rn) itself. === On other Lp === For 1 < p < 2 {\displaystyle 1<p<2} , the Fourier transform can be defined on L p ( R ) {\displaystyle L^{p}(\mathbb {R} )} by Marcinkiewicz interpolation, which amounts to decomposing such functions into a fat tail part in L2 plus a fat body part in L1. In each of these spaces, the Fourier transform of a function in Lp(Rn) is in Lq(Rn), where q = p/(p − 1) is the Hölder conjugate of p (by the Hausdorff–Young inequality). 
However, except for p = 2, the image is not easily characterized. Further extensions become more technical. The Fourier transform of functions in Lp for the range 2 < p < ∞ requires the study of distributions. In fact, it can be shown that there are functions in Lp with p > 2 so that the Fourier transform is not defined as a function. === Tempered distributions === One might consider enlarging the domain of the Fourier transform from L 1 + L 2 {\displaystyle L^{1}+L^{2}} by considering generalized functions, or distributions. A distribution on R n {\displaystyle \mathbb {R} ^{n}} is a continuous linear functional on the space C c ∞ ( R n ) {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})} of compactly supported smooth functions (i.e. bump functions), equipped with a suitable topology. Since C c ∞ ( R n ) {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})} is dense in L 2 ( R n ) {\displaystyle L^{2}(\mathbb {R} ^{n})} , the Plancherel theorem allows one to extend the definition of the Fourier transform to general functions in L 2 ( R n ) {\displaystyle L^{2}(\mathbb {R} ^{n})} by continuity arguments. The strategy is then to consider the action of the Fourier transform on C c ∞ ( R n ) {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})} and pass to distributions by duality. The obstruction to doing this is that the Fourier transform does not map C c ∞ ( R n ) {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})} to C c ∞ ( R n ) {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})} . In fact the Fourier transform of a nonzero element in C c ∞ ( R n ) {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})} cannot vanish on an open set; see the above discussion on the uncertainty principle. The Fourier transform can also be defined for tempered distributions S ′ ( R n ) {\displaystyle {\mathcal {S}}'(\mathbb {R} ^{n})} , dual to the space of Schwartz functions S ( R n ) {\displaystyle {\mathcal {S}}(\mathbb {R} ^{n})} . A Schwartz function is a smooth function that, together with all of its derivatives, decays rapidly at infinity, hence C c ∞ ( R n ) ⊂ S ( R n ) {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})\subset {\mathcal {S}}(\mathbb {R} ^{n})} and: F : C c ∞ ( R n ) → S ( R n ) ∖ C c ∞ ( R n ) . {\displaystyle {\mathcal {F}}:C_{c}^{\infty }(\mathbb {R} ^{n})\rightarrow S(\mathbb {R} ^{n})\setminus C_{c}^{\infty }(\mathbb {R} ^{n}).} The Fourier transform is an automorphism of the Schwartz space and, by duality, also an automorphism of the space of tempered distributions. The tempered distributions include well-behaved functions of polynomial growth, distributions of compact support, as well as all the integrable functions mentioned above. For the definition of the Fourier transform of a tempered distribution, let f {\displaystyle f} and g {\displaystyle g} be integrable functions, and let f ^ {\displaystyle {\hat {f}}} and g ^ {\displaystyle {\hat {g}}} be their Fourier transforms respectively. Then the Fourier transform obeys the following multiplication formula, ∫ R n f ^ ( x ) g ( x ) d x = ∫ R n f ( x ) g ^ ( x ) d x . {\displaystyle \int _{\mathbb {R} ^{n}}{\hat {f}}(x)g(x)\,dx=\int _{\mathbb {R} ^{n}}f(x){\hat {g}}(x)\,dx.} Every integrable function f {\displaystyle f} defines (induces) a distribution T f {\displaystyle T_{f}} by the relation T f ( ϕ ) = ∫ R n f ( x ) ϕ ( x ) d x , ∀ ϕ ∈ S ( R n ) . 
{\displaystyle T_{f}(\phi )=\int _{\mathbb {R} ^{n}}f(x)\phi (x)\,dx,\quad \forall \phi \in {\mathcal {S}}(\mathbb {R} ^{n}).} So it makes sense to define the Fourier transform of a tempered distribution T f ∈ S ′ ( R n ) {\displaystyle T_{f}\in {\mathcal {S}}'(\mathbb {R} ^{n})} by the duality: ⟨ T ^ f , ϕ ⟩ = ⟨ T f , ϕ ^ ⟩ , ∀ ϕ ∈ S ( R n ) . {\displaystyle \langle {\widehat {T}}_{f},\phi \rangle =\langle T_{f},{\widehat {\phi }}\rangle ,\quad \forall \phi \in {\mathcal {S}}(\mathbb {R} ^{n}).} Extending this to all tempered distributions T {\displaystyle T} gives the general definition of the Fourier transform. Distributions can be differentiated and the above-mentioned compatibility of the Fourier transform with differentiation and convolution remains true for tempered distributions. == Generalizations == === Fourier–Stieltjes transform on measurable spaces === The Fourier transform of a finite Borel measure μ on Rn is given by the continuous function: μ ^ ( ξ ) = ∫ R n e − i 2 π x ⋅ ξ d μ , {\displaystyle {\hat {\mu }}(\xi )=\int _{\mathbb {R} ^{n}}e^{-i2\pi x\cdot \xi }\,d\mu ,} and called the Fourier–Stieltjes transform due to its connection with the Riemann–Stieltjes integral representation of (Radon) measures. If μ {\displaystyle \mu } is the probability distribution of a random variable X {\displaystyle X} then its Fourier–Stieltjes transform is, by definition, a characteristic function. If, in addition, the probability distribution has a probability density function, this definition reduces to the usual Fourier transform applied to the density. Stated more generally, when μ {\displaystyle \mu } is absolutely continuous with respect to the Lebesgue measure, i.e., d μ = f ( x ) d x , {\displaystyle d\mu =f(x)dx,} then μ ^ ( ξ ) = f ^ ( ξ ) , {\displaystyle {\hat {\mu }}(\xi )={\hat {f}}(\xi ),} and the Fourier–Stieltjes transform reduces to the usual definition of the Fourier transform. A notable difference from the Fourier transform of integrable functions is that the Fourier–Stieltjes transform need not vanish at infinity, i.e., the Riemann–Lebesgue lemma fails for measures. Bochner's theorem characterizes which functions may arise as the Fourier–Stieltjes transform of a positive measure. One example of a finite Borel measure that is not a function is the Dirac measure. Its Fourier transform is a constant function (whose value depends on the form of the Fourier transform used). === Locally compact abelian groups === The Fourier transform may be generalized to any locally compact abelian group, i.e., an abelian group that is also a locally compact Hausdorff space such that the group operation is continuous. If G is a locally compact abelian group, it has a translation-invariant measure μ, called the Haar measure. For a locally compact abelian group G, the irreducible, i.e. one-dimensional, unitary representations are called its characters. With its natural group structure and the topology of uniform convergence on compact sets (that is, the topology induced by the compact-open topology on the space of all continuous functions from G {\displaystyle G} to the circle group), the set of characters Ĝ is itself a locally compact abelian group, called the Pontryagin dual of G. For a function f in L1(G), its Fourier transform is defined by f ^ ( ξ ) = ∫ G ξ ( x ) f ( x ) d μ for any ξ ∈ G ^ . 
{\displaystyle {\hat {f}}(\xi )=\int _{G}\xi (x)f(x)\,d\mu \quad {\text{for any }}\xi \in {\hat {G}}.} The Riemann–Lebesgue lemma holds in this case; f̂(ξ) is a function vanishing at infinity on Ĝ. The Fourier transform on T = R/Z is an example; here T is a locally compact abelian group, and the Haar measure μ on T can be thought of as the Lebesgue measure on [0,1). Consider representations of T on the complex plane C, viewed as a 1-dimensional complex vector space. There is a family of representations (all irreducible, since C is 1-dimensional) { e k : T → G L 1 ( C ) = C ∗ ∣ k ∈ Z } {\displaystyle \{e_{k}:T\rightarrow GL_{1}(C)=C^{*}\mid k\in Z\}} where e k ( x ) = e i 2 π k x {\displaystyle e_{k}(x)=e^{i2\pi kx}} for x ∈ T {\displaystyle x\in T} . The character of such a representation, that is, the trace of e k ( x ) {\displaystyle e_{k}(x)} for each x ∈ T {\displaystyle x\in T} and k ∈ Z {\displaystyle k\in Z} , is e i 2 π k x {\displaystyle e^{i2\pi kx}} itself. In the case of a representation of a finite group, the character table of the group G consists of rows of vectors, each row being the character of one irreducible representation of G; by Schur's lemma, these vectors form an orthonormal basis of the space of class functions mapping G to C. Now the group T is no longer finite, but it is still compact, and the orthonormality of the character table is preserved. Each row of the table is the function e k ( x ) {\displaystyle e_{k}(x)} of x ∈ T , {\displaystyle x\in T,} and the inner product between two class functions (all functions being class functions since T is abelian) f , g ∈ L 2 ( T , d μ ) {\displaystyle f,g\in L^{2}(T,d\mu )} is defined as ⟨ f , g ⟩ = 1 | T | ∫ [ 0 , 1 ) f ( y ) g ¯ ( y ) d μ ( y ) {\textstyle \langle f,g\rangle ={\frac {1}{|T|}}\int _{[0,1)}f(y){\overline {g}}(y)d\mu (y)} with the normalizing factor | T | = 1 {\displaystyle |T|=1} . The sequence { e k ∣ k ∈ Z } {\displaystyle \{e_{k}\mid k\in Z\}} is an orthonormal basis of the space of class functions L 2 ( T , d μ ) {\displaystyle L^{2}(T,d\mu )} . For any representation V of a finite group G, its character χ v {\displaystyle \chi _{v}} can be expressed as the sum ∑ i ⟨ χ v , χ v i ⟩ χ v i {\textstyle \sum _{i}\left\langle \chi _{v},\chi _{v_{i}}\right\rangle \chi _{v_{i}}} (where the V i {\displaystyle V_{i}} are the irreducible representations of G), such that ⟨ χ v , χ v i ⟩ = 1 | G | ∑ g ∈ G χ v ( g ) χ ¯ v i ( g ) {\textstyle \left\langle \chi _{v},\chi _{v_{i}}\right\rangle ={\frac {1}{|G|}}\sum _{g\in G}\chi _{v}(g){\overline {\chi }}_{v_{i}}(g)} . Similarly for G = T {\displaystyle G=T} and f ∈ L 2 ( T , d μ ) {\displaystyle f\in L^{2}(T,d\mu )} , f ( x ) = ∑ k ∈ Z f ^ ( k ) e k {\textstyle f(x)=\sum _{k\in Z}{\hat {f}}(k)e_{k}} . The Pontryagin dual T ^ {\displaystyle {\hat {T}}} is { e k } ( k ∈ Z ) {\displaystyle \{e_{k}\}(k\in Z)} and for f ∈ L 2 ( T , d μ ) {\displaystyle f\in L^{2}(T,d\mu )} , f ^ ( k ) = 1 | T | ∫ [ 0 , 1 ) f ( y ) e − i 2 π k y d y {\textstyle {\hat {f}}(k)={\frac {1}{|T|}}\int _{[0,1)}f(y)e^{-i2\pi ky}dy} is its Fourier transform at e k ∈ T ^ {\displaystyle e_{k}\in {\hat {T}}} . === Gelfand transform === The Fourier transform is also a special case of the Gelfand transform. In this particular context, it is closely related to the Pontryagin duality map defined above. Given an abelian locally compact Hausdorff topological group G, as before we consider the space L1(G), defined using a Haar measure. With convolution as multiplication, L1(G) is an abelian Banach algebra. 
It also has an involution * given by f ∗ ( g ) = f ( g − 1 ) ¯ . {\displaystyle f^{*}(g)={\overline {f\left(g^{-1}\right)}}.} Taking the completion with respect to the largest possible C*-norm gives its enveloping C*-algebra, called the group C*-algebra C*(G) of G. (Any C*-norm on L1(G) is bounded by the L1 norm, therefore their supremum exists.) Given any abelian C*-algebra A, the Gelfand transform gives an isomorphism between A and C0(A^), where A^ is the space of multiplicative linear functionals, i.e. one-dimensional representations, on A with the weak-* topology. The map is simply given by a ↦ ( φ ↦ φ ( a ) ) {\displaystyle a\mapsto {\bigl (}\varphi \mapsto \varphi (a){\bigr )}} It turns out that the multiplicative linear functionals of C*(G), after suitable identification, are exactly the characters of G, and the Gelfand transform, when restricted to the dense subset L1(G), is the Fourier–Pontryagin transform. === Compact non-abelian groups === The Fourier transform can also be defined for functions on a non-abelian group, provided that the group is compact. Once the assumption that the underlying group is abelian is removed, irreducible unitary representations need not be one-dimensional. This means the Fourier transform on a non-abelian group takes values that are Hilbert space operators. The Fourier transform on compact groups is a major tool in representation theory and non-commutative harmonic analysis. Let G be a compact Hausdorff topological group. Let Σ denote the collection of all isomorphism classes of finite-dimensional irreducible unitary representations, along with a definite choice of representation U(σ) on the Hilbert space Hσ of finite dimension dσ for each σ ∈ Σ. If μ is a finite Borel measure on G, then the Fourier–Stieltjes transform of μ is the operator on Hσ defined by ⟨ μ ^ ξ , η ⟩ H σ = ∫ G ⟨ U ¯ g ( σ ) ξ , η ⟩ d μ ( g ) {\displaystyle \left\langle {\hat {\mu }}\xi ,\eta \right\rangle _{H_{\sigma }}=\int _{G}\left\langle {\overline {U}}_{g}^{(\sigma )}\xi ,\eta \right\rangle \,d\mu (g)} where U̅(σ) is the complex-conjugate representation of U(σ) acting on Hσ. If μ is absolutely continuous with respect to the left-invariant probability measure λ on G, represented as d μ = f d λ {\displaystyle d\mu =f\,d\lambda } for some f ∈ L1(λ), one identifies the Fourier transform of f with the Fourier–Stieltjes transform of μ. The mapping μ ↦ μ ^ {\displaystyle \mu \mapsto {\hat {\mu }}} defines an isomorphism between the Banach space M(G) of finite Borel measures (see rca space) and a closed subspace of the Banach space C∞(Σ) consisting of all sequences E = (Eσ) indexed by Σ of (bounded) linear operators Eσ : Hσ → Hσ for which the norm ‖ E ‖ = sup σ ∈ Σ ‖ E σ ‖ {\displaystyle \|E\|=\sup _{\sigma \in \Sigma }\left\|E_{\sigma }\right\|} is finite. The "convolution theorem" asserts that, furthermore, this isomorphism of Banach spaces is in fact an isometric isomorphism of C*-algebras into a subspace of C∞(Σ). Multiplication on M(G) is given by convolution of measures and the involution * defined by f ∗ ( g ) = f ( g − 1 ) ¯ , {\displaystyle f^{*}(g)={\overline {f\left(g^{-1}\right)}},} and C∞(Σ) has a natural C*-algebra structure as Hilbert space operators. 
The Peter–Weyl theorem holds, and a version of the Fourier inversion formula (Plancherel's theorem) follows: if f ∈ L2(G), then f ( g ) = ∑ σ ∈ Σ d σ tr ⁡ ( f ^ ( σ ) U g ( σ ) ) {\displaystyle f(g)=\sum _{\sigma \in \Sigma }d_{\sigma }\operatorname {tr} \left({\hat {f}}(\sigma )U_{g}^{(\sigma )}\right)} where the summation is understood as convergent in the L2 sense. The generalization of the Fourier transform to the noncommutative situation has also in part contributed to the development of noncommutative geometry. In this context, a categorical generalization of the Fourier transform to noncommutative groups is Tannaka–Krein duality, which replaces the group of characters with the category of representations. However, this loses the connection with harmonic functions. == Alternatives == In signal processing terms, a function (of time) is a representation of a signal with perfect time resolution, but no frequency information, while the Fourier transform has perfect frequency resolution, but no time information: the magnitude of the Fourier transform at a point is how much frequency content there is, but location is only given by phase (argument of the Fourier transform at a point), and standing waves are not localized in time – a sine wave continues out to infinity, without decaying. This limits the usefulness of the Fourier transform for analyzing signals that are localized in time, notably transients, or any signal of finite extent. As alternatives to the Fourier transform, in time–frequency analysis, one uses time–frequency transforms or time–frequency distributions to represent signals in a form that has some time information and some frequency information – by the uncertainty principle, there is a trade-off between these. These can be generalizations of the Fourier transform, such as the short-time Fourier transform, the fractional Fourier transform, the synchrosqueezing Fourier transform, or other functions to represent signals, as in wavelet transforms and chirplet transforms, with the wavelet analog of the (continuous) Fourier transform being the continuous wavelet transform. == Example == The following figures provide a visual illustration of how the Fourier transform's integral measures whether a frequency is present in a particular function. The first image depicts the function f ( t ) = cos ⁡ ( 2 π 3 t ) e − π t 2 , {\displaystyle f(t)=\cos(2\pi \ 3t)\ e^{-\pi t^{2}},} which is a 3 Hz cosine wave (the first term) shaped by a Gaussian envelope function (the second term) that smoothly turns the wave on and off. The next two images show the product f ( t ) e − i 2 π 3 t , {\displaystyle f(t)e^{-i2\pi 3t},} which must be integrated to calculate the Fourier transform at +3 Hz. The real part of the integrand has a non-negative average value, because the alternating signs of f ( t ) {\displaystyle f(t)} and Re ⁡ ( e − i 2 π 3 t ) {\displaystyle \operatorname {Re} (e^{-i2\pi 3t})} oscillate at the same rate and in phase, whereas f ( t ) {\displaystyle f(t)} and Im ⁡ ( e − i 2 π 3 t ) {\displaystyle \operatorname {Im} (e^{-i2\pi 3t})} oscillate at the same rate but with orthogonal phase. The absolute value of the Fourier transform at +3 Hz is 0.5, which is relatively large. When added to the Fourier transform at −3 Hz (which is identical because we started with a real signal), we find that the amplitude of the 3 Hz frequency component is 1. 
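That value is easy to check numerically. The short sketch below (assuming NumPy; the integration window and grid density are illustrative choices) approximates the transform integral at ±3 Hz by a Riemann sum and returns approximately 0.5 for each.

import numpy as np

t = np.linspace(-8, 8, 200_001)    # the Gaussian envelope is negligible outside |t| < 8
dt = t[1] - t[0]
f = np.cos(2 * np.pi * 3 * t) * np.exp(-np.pi * t**2)

for xi in (3.0, -3.0):
    F = (f * np.exp(-2j * np.pi * xi * t)).sum() * dt
    print(f"f_hat({xi:+.0f} Hz) = {F.real:.4f}{F.imag:+.4f}j")   # both ~ 0.5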
However, when you try to measure a frequency that is not present, both the real and imaginary components of the integral vary rapidly between positive and negative values. For instance, the red curve is looking for 5 Hz. The absolute value of its integral is nearly zero, indicating that almost no 5 Hz component was in the signal. The general situation is usually more complicated than this, but heuristically this is how the Fourier transform measures how much of an individual frequency is present in a function f ( t ) . {\displaystyle f(t).} To reinforce an earlier point, the reason for the response at ξ = − 3 {\displaystyle \xi =-3} Hz is that cos ⁡ ( 2 π 3 t ) {\displaystyle \cos(2\pi 3t)} and cos ⁡ ( 2 π ( − 3 ) t ) {\displaystyle \cos(2\pi (-3)t)} are indistinguishable. The transform of e i 2 π 3 t ⋅ e − π t 2 {\displaystyle e^{i2\pi 3t}\cdot e^{-\pi t^{2}}} would have just one response, whose amplitude is the integral of the smooth envelope: e − π t 2 , {\displaystyle e^{-\pi t^{2}},} whereas Re ⁡ ( f ( t ) ⋅ e − i 2 π 3 t ) {\displaystyle \operatorname {Re} (f(t)\cdot e^{-i2\pi 3t})} is e − π t 2 ( 1 + cos ⁡ ( 2 π 6 t ) ) / 2. {\displaystyle e^{-\pi t^{2}}(1+\cos(2\pi 6t))/2.} == Applications == Linear operations performed in one domain (time or frequency) have corresponding operations in the other domain, which are sometimes easier to perform. The operation of differentiation in the time domain corresponds to multiplication by the frequency, so some differential equations are easier to analyze in the frequency domain. Also, convolution in the time domain corresponds to ordinary multiplication in the frequency domain (see Convolution theorem). After performing the desired operations, the result can be transformed back to the time domain. Harmonic analysis is the systematic study of the relationship between the frequency and time domains, including the kinds of functions or operations that are "simpler" in one or the other, and has deep connections to many areas of modern mathematics. === Analysis of differential equations === Perhaps the most important use of the Fourier transformation is to solve partial differential equations. Many of the equations of the mathematical physics of the nineteenth century can be treated this way. Fourier studied the heat equation, which in one dimension and in dimensionless units is ∂ 2 y ( x , t ) ∂ x 2 = ∂ y ( x , t ) ∂ t . {\displaystyle {\frac {\partial ^{2}y(x,t)}{\partial x^{2}}}={\frac {\partial y(x,t)}{\partial t}}.} The example we will give, a slightly more difficult one, is the wave equation in one dimension, ∂ 2 y ( x , t ) ∂ x 2 = ∂ 2 y ( x , t ) ∂ t 2 . {\displaystyle {\frac {\partial ^{2}y(x,t)}{\partial x^{2}}}={\frac {\partial ^{2}y(x,t)}{\partial t^{2}}}.} As usual, the problem is not to find a solution: there are infinitely many. The problem is that of the so-called "boundary problem": find a solution which satisfies the "boundary conditions" y ( x , 0 ) = f ( x ) , ∂ y ( x , 0 ) ∂ t = g ( x ) . {\displaystyle y(x,0)=f(x),\qquad {\frac {\partial y(x,0)}{\partial t}}=g(x).} Here, f and g are given functions. For the heat equation, only one boundary condition can be required (usually the first one). But for the wave equation, there are still infinitely many solutions y which satisfy the first boundary condition; when one imposes both conditions, there is only one possible solution. It is easier to find the Fourier transform ŷ of the solution than to find the solution directly. 
This is because the Fourier transformation takes differentiation into multiplication by the Fourier-dual variable, and so a partial differential equation applied to the original function is transformed into multiplication by polynomial functions of the dual variables applied to the transformed function. After ŷ is determined, we can apply the inverse Fourier transformation to find y. Fourier's method is as follows. First, note that any function of the forms cos ⁡ ( 2 π ξ ( x ± t ) ) or sin ⁡ ( 2 π ξ ( x ± t ) ) {\displaystyle \cos {\bigl (}2\pi \xi (x\pm t){\bigr )}{\text{ or }}\sin {\bigl (}2\pi \xi (x\pm t){\bigr )}} satisfies the wave equation. These are called the elementary solutions. Second, note that consequently any integral y ( x , t ) = ∫ 0 ∞ d ξ [ a + ( ξ ) cos ⁡ ( 2 π ξ ( x + t ) ) + a − ( ξ ) cos ⁡ ( 2 π ξ ( x − t ) ) + b + ( ξ ) sin ⁡ ( 2 π ξ ( x + t ) ) + b − ( ξ ) sin ⁡ ( 2 π ξ ( x − t ) ) ] {\displaystyle {\begin{aligned}y(x,t)=\int _{0}^{\infty }d\xi {\Bigl [}&a_{+}(\xi )\cos {\bigl (}2\pi \xi (x+t){\bigr )}+a_{-}(\xi )\cos {\bigl (}2\pi \xi (x-t){\bigr )}+{}\\&b_{+}(\xi )\sin {\bigl (}2\pi \xi (x+t){\bigr )}+b_{-}(\xi )\sin \left(2\pi \xi (x-t)\right){\Bigr ]}\end{aligned}}} satisfies the wave equation for arbitrary a+, a−, b+, b−. This integral may be interpreted as a continuous linear combination of solutions for the linear equation. Now this resembles the formula for the Fourier synthesis of a function. In fact, this is the real inverse Fourier transform of a± and b± in the variable x. The third step is to examine how to find the specific unknown coefficient functions a± and b± that will lead to y satisfying the boundary conditions. We are interested in the values of these solutions at t = 0. So we will set t = 0. Assuming that the conditions needed for Fourier inversion are satisfied, we can then find the Fourier sine and cosine transforms (in the variable x) of both sides and obtain 2 ∫ − ∞ ∞ y ( x , 0 ) cos ⁡ ( 2 π ξ x ) d x = a + + a − {\displaystyle 2\int _{-\infty }^{\infty }y(x,0)\cos(2\pi \xi x)\,dx=a_{+}+a_{-}} and 2 ∫ − ∞ ∞ y ( x , 0 ) sin ⁡ ( 2 π ξ x ) d x = b + + b − . {\displaystyle 2\int _{-\infty }^{\infty }y(x,0)\sin(2\pi \xi x)\,dx=b_{+}+b_{-}.} Similarly, taking the derivative of y with respect to t and then applying the Fourier sine and cosine transformations yields 2 ∫ − ∞ ∞ ∂ y ( x , 0 ) ∂ t sin ⁡ ( 2 π ξ x ) d x = ( 2 π ξ ) ( − a + + a − ) {\displaystyle 2\int _{-\infty }^{\infty }{\frac {\partial y(x,0)}{\partial t}}\sin(2\pi \xi x)\,dx=(2\pi \xi )\left(-a_{+}+a_{-}\right)} and 2 ∫ − ∞ ∞ ∂ y ( x , 0 ) ∂ t cos ⁡ ( 2 π ξ x ) d x = ( 2 π ξ ) ( b + − b − ) . {\displaystyle 2\int _{-\infty }^{\infty }{\frac {\partial y(x,0)}{\partial t}}\cos(2\pi \xi x)\,dx=(2\pi \xi )\left(b_{+}-b_{-}\right).} These are four linear equations for the four unknowns a± and b±, in terms of the Fourier sine and cosine transforms of the boundary conditions, which are easily solved by elementary algebra, provided that these transforms can be found. In summary, we chose a set of elementary solutions, parametrized by ξ, of which the general solution would be a (continuous) linear combination in the form of an integral over the parameter ξ. But this integral was in the form of a Fourier integral. The next step was to express the boundary conditions in terms of these integrals, and set them equal to the given functions f and g. But these expressions also took the form of a Fourier integral because of the properties of the Fourier transform of a derivative. 
The last step was to exploit Fourier inversion by applying the Fourier transformation to both sides, thus obtaining expressions for the coefficient functions a± and b± in terms of the given boundary conditions f and g. From a higher point of view, Fourier's procedure can be reformulated more conceptually. Since there are two variables, we will use the Fourier transformation in both x and t rather than operating as Fourier did, who transformed only in the spatial variable. Note that ŷ must be considered in the sense of a distribution since y(x, t) is not going to be L1: as a wave, it will persist through time and thus is not a transient phenomenon. But it will be bounded and so its Fourier transform can be defined as a distribution. The operational properties of the Fourier transformation that are relevant to this equation are that it takes differentiation in x to multiplication by i2πξ and differentiation with respect to t to multiplication by i2πf where f is the frequency. Then the wave equation becomes an algebraic equation in ŷ: ξ 2 y ^ ( ξ , f ) = f 2 y ^ ( ξ , f ) . {\displaystyle \xi ^{2}{\hat {y}}(\xi ,f)=f^{2}{\hat {y}}(\xi ,f).} This is equivalent to requiring ŷ(ξ, f) = 0 unless ξ = ±f. Right away, this explains why the choice of elementary solutions we made earlier worked so well: obviously ŷ = δ(ξ ± f) will be solutions. Applying Fourier inversion to these delta functions, we obtain the elementary solutions we picked earlier. But from the higher point of view, one does not pick elementary solutions, but rather considers the space of all distributions which are supported on the (degenerate) conic ξ2 − f2 = 0. We may as well consider the distributions supported on the conic that are given by distributions of one variable on the line ξ = f plus distributions on the line ξ = −f as follows: if φ is any test function, ∬ y ^ ϕ ( ξ , f ) d ξ d f = ∫ s + ϕ ( ξ , ξ ) d ξ + ∫ s − ϕ ( ξ , − ξ ) d ξ , {\displaystyle \iint {\hat {y}}\phi (\xi ,f)\,d\xi \,df=\int s_{+}\phi (\xi ,\xi )\,d\xi +\int s_{-}\phi (\xi ,-\xi )\,d\xi ,} where s+ and s− are distributions of one variable. Then Fourier inversion gives, for the boundary conditions, something very similar to what we had more concretely above (put φ(ξ, f) = ei2π(xξ+tf), which is clearly of polynomial growth): y ( x , 0 ) = ∫ { s + ( ξ ) + s − ( ξ ) } e i 2 π ξ x + 0 d ξ {\displaystyle y(x,0)=\int {\bigl \{}s_{+}(\xi )+s_{-}(\xi ){\bigr \}}e^{i2\pi \xi x+0}\,d\xi } and ∂ y ( x , 0 ) ∂ t = ∫ { s + ( ξ ) − s − ( ξ ) } i 2 π ξ e i 2 π ξ x + 0 d ξ . {\displaystyle {\frac {\partial y(x,0)}{\partial t}}=\int {\bigl \{}s_{+}(\xi )-s_{-}(\xi ){\bigr \}}i2\pi \xi e^{i2\pi \xi x+0}\,d\xi .} Now, as before, applying the one-variable Fourier transformation in the variable x to these functions of x yields two equations in the two unknown distributions s± (which can be taken to be ordinary functions if the boundary conditions are L1 or L2). From a calculational point of view, the drawback of course is that one must first calculate the Fourier transforms of the boundary conditions, then assemble the solution from these, and then calculate an inverse Fourier transform. Closed-form formulas are rare, except when there is some geometric symmetry that can be exploited, and the numerical calculations are difficult because of the oscillatory nature of the integrals, which makes convergence slow and hard to estimate. For practical calculations, other methods are often used. 
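For this particular equation, though, the Fourier recipe is straightforward to carry out numerically. The sketch below (assuming NumPy, with the FFT on a large periodic grid standing in for the Fourier transform on the line; the grid and the Gaussian initial data are illustrative choices) propagates each frequency of an initial displacement f with zero initial velocity, for which ŷ(ξ, t) = f̂(ξ) cos(2πξt), and checks the result against d'Alembert's solution (f(x − t) + f(x + t))/2.

import numpy as np

L, n = 32.0, 4096                          # periodic box and grid size
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
xi = np.fft.fftfreq(n, d=L / n)            # frequencies, in cycles per unit length

f = np.exp(-np.pi * x**2)                  # initial displacement; initial velocity g = 0
t = 3.0                                    # evolve to time t

y_hat = np.fft.fft(f) * np.cos(2 * np.pi * xi * t)   # propagate each frequency
y = np.fft.ifft(y_hat).real

dalembert = 0.5 * (np.exp(-np.pi * (x - t)**2) + np.exp(-np.pi * (x + t)**2))
print("max deviation from d'Alembert:", np.abs(y - dalembert).max())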
The twentieth century has seen the extension of these methods to all linear partial differential equations with polynomial coefficients, and by extending the notion of Fourier transformation to include Fourier integral operators, some non-linear equations as well. === Fourier-transform spectroscopy === The Fourier transform is also used in nuclear magnetic resonance (NMR) and in other kinds of spectroscopy, e.g. infrared (FTIR). In NMR an exponentially shaped free induction decay (FID) signal is acquired in the time domain and Fourier-transformed to a Lorentzian line-shape in the frequency domain. The Fourier transform is also used in magnetic resonance imaging (MRI) and mass spectrometry. === Quantum mechanics === The Fourier transform is useful in quantum mechanics in at least two different ways. To begin with, the basic conceptual structure of quantum mechanics postulates the existence of pairs of complementary variables, connected by the Heisenberg uncertainty principle. For example, in one dimension, the spatial variable q of, say, a particle, can only be measured by the quantum mechanical "position operator" at the cost of losing information about the momentum p of the particle. Therefore, the physical state of the particle can be described either by a function of q, called "the wave function", or by a function of p, but not by a function of both variables. The variable p is called the conjugate variable to q. In classical mechanics, the physical state of a particle (existing in one dimension, for simplicity of exposition) would be given by assigning definite values to both p and q simultaneously. Thus, the set of all possible physical states is the two-dimensional real vector space with a p-axis and a q-axis, called the phase space. In contrast, quantum mechanics chooses a polarisation of this space in the sense that it picks a subspace of one-half the dimension, for example, the q-axis alone, but instead of considering only points, takes the set of all complex-valued "wave functions" on this axis. Nevertheless, choosing the p-axis is an equally valid polarisation, yielding a different representation of the set of possible physical states of the particle. Both representations of the wavefunction are related by a Fourier transform, such that ϕ ( p ) = ∫ d q ψ ( q ) e − i p q / h , {\displaystyle \phi (p)=\int dq\,\psi (q)e^{-ipq/h},} or, equivalently, ψ ( q ) = ∫ d p ϕ ( p ) e i p q / h . {\displaystyle \psi (q)=\int dp\,\phi (p)e^{ipq/h}.} Physically realisable states are L2, and so by the Plancherel theorem, their Fourier transforms are also L2. (Note that since q is in units of distance and p is in units of momentum, the presence of the Planck constant in the exponent makes the exponent dimensionless, as it should be.) Therefore, the Fourier transform can be used to pass from one way of representing the state of the particle, by a wave function of position, to another way of representing the state of the particle: by a wave function of momentum. Infinitely many different polarisations are possible, and all are equally valid. Being able to transform states from one representation to another by the Fourier transform is not only convenient but also the underlying reason for the Heisenberg uncertainty principle. The other use of the Fourier transform in both quantum mechanics and quantum field theory is to solve the applicable wave equation. 
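Before turning to wave equations, here is a small numerical illustration (assuming NumPy, with h set to 1 and Gaussian wave functions as illustrative choices, not a physical calculation) of the position–momentum pair just described: squeezing ψ in q spreads φ out in p, which is exactly the trade-off behind the uncertainty principle.

import numpy as np

h = 1.0
q = np.linspace(-30, 30, 3001)
dq = q[1] - q[0]
p = np.linspace(-8, 8, 801)

for a in (0.5, 2.0):                       # position-space width of psi
    psi = np.exp(-q**2 / (2 * a**2))       # Gaussian wave function of q
    # phi(p) = integral psi(q) exp(-i p q / h) dq, as a Riemann sum
    phi = (psi * np.exp(-1j * np.outer(p, q) / h)).sum(axis=1) * dq
    w = np.abs(phi)**2
    sigma_p = np.sqrt((p**2 * w).sum() / w.sum())    # RMS momentum-space width
    print(f"a = {a}: width of |phi|^2 = {sigma_p:.3f} (expected {h / (a * np.sqrt(2)):.3f})")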
In non-relativistic quantum mechanics, the Schrödinger equation for a time-varying wave function in one dimension, not subject to external forces, is − ∂ 2 ∂ x 2 ψ ( x , t ) = i h 2 π ∂ ∂ t ψ ( x , t ) . {\displaystyle -{\frac {\partial ^{2}}{\partial x^{2}}}\psi (x,t)=i{\frac {h}{2\pi }}{\frac {\partial }{\partial t}}\psi (x,t).} This is the same as the heat equation except for the presence of the imaginary unit i. Fourier methods can be used to solve this equation. In the presence of a potential, given by the potential energy function V(x), the equation becomes − ∂ 2 ∂ x 2 ψ ( x , t ) + V ( x ) ψ ( x , t ) = i h 2 π ∂ ∂ t ψ ( x , t ) . {\displaystyle -{\frac {\partial ^{2}}{\partial x^{2}}}\psi (x,t)+V(x)\psi (x,t)=i{\frac {h}{2\pi }}{\frac {\partial }{\partial t}}\psi (x,t).} The "elementary solutions", as we referred to them above, are the so-called "stationary states" of the particle, and Fourier's algorithm, as described above, can still be used to solve the boundary value problem of the future evolution of ψ given its values for t = 0. Neither of these approaches is of much practical use in quantum mechanics. Boundary value problems and the time-evolution of the wave function are not of much practical interest: it is the stationary states that are most important. In relativistic quantum mechanics, the Schrödinger equation becomes a wave equation as was usual in classical physics, except that complex-valued waves are considered. A simple example, in the absence of interactions with other particles or fields, is the free one-dimensional Klein–Gordon–Schrödinger–Fock equation, this time in dimensionless units, ( ∂ 2 ∂ x 2 + 1 ) ψ ( x , t ) = ∂ 2 ∂ t 2 ψ ( x , t ) . {\displaystyle \left({\frac {\partial ^{2}}{\partial x^{2}}}+1\right)\psi (x,t)={\frac {\partial ^{2}}{\partial t^{2}}}\psi (x,t).} This is, from the mathematical point of view, the same as the wave equation of classical physics solved above (but with a complex-valued wave, which makes no difference in the methods). This is of great use in quantum field theory: each separate Fourier component of a wave can be treated as a separate harmonic oscillator and then quantized, a procedure known as "second quantization". Fourier methods have been adapted to also deal with non-trivial interactions. Finally, the number operator of the quantum harmonic oscillator can be interpreted, for example via the Mehler kernel, as the generator of the Fourier transform F {\displaystyle {\mathcal {F}}} . === Signal processing === The Fourier transform is used for the spectral analysis of time-series. The subject of statistical signal processing does not, however, usually apply the Fourier transformation to the signal itself. Even if a real signal is indeed transient, it has been found in practice advisable to model a signal by a function (or, alternatively, a stochastic process) which is stationary in the sense that its characteristic properties are constant over all time. The Fourier transform of such a function does not exist in the usual sense, and it has been found more useful for the analysis of signals to instead take the Fourier transform of its autocorrelation function. The autocorrelation function R of a function f is defined by R f ( τ ) = lim T → ∞ 1 2 T ∫ − T T f ( t ) f ( t + τ ) d t . {\displaystyle R_{f}(\tau )=\lim _{T\rightarrow \infty }{\frac {1}{2T}}\int _{-T}^{T}f(t)f(t+\tau )\,dt.} This function is a function of the time-lag τ elapsing between the values of f to be correlated. 
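A small sketch (assuming NumPy; the 50 Hz sinusoid, the noise level, and the lag range are illustrative choices) estimates this autocorrelation for a noisy sinusoid over a finite record and then Fourier-transforms it, anticipating the power spectral density discussed below; the spectral peak sits at the signal frequency.

import numpy as np

rng = np.random.default_rng(0)
fs, T = 1000.0, 20.0                        # sample rate (Hz) and record length (s)
t = np.arange(0, T, 1 / fs)
f_sig = np.sin(2 * np.pi * 50 * t) + rng.normal(0, 1, t.size)

# finite-record estimate of R_f(tau) at lags tau = k / fs
max_lag = 500
R = np.array([(f_sig[:f_sig.size - k] * f_sig[k:]).mean() for k in range(max_lag)])

R_sym = np.concatenate([R[:0:-1], R])       # R_f is even in tau
P = np.abs(np.fft.rfft(R_sym))              # Fourier transform of R_f: the PSD
freqs = np.fft.rfftfreq(R_sym.size, d=1 / fs)
print("spectral peak near", round(freqs[P.argmax()]), "Hz")   # ~ 50 Hz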
For most functions f that occur in practice, R is a bounded even function of the time-lag τ, and for typical noisy signals it turns out to be uniformly continuous with a maximum at τ = 0. The autocorrelation function, more properly called the autocovariance function unless it is normalized in some appropriate fashion, measures the strength of the correlation between the values of f separated by a time lag. This is a way of searching for the correlation of f with its own past. It is useful even for other statistical tasks besides the analysis of signals. For example, if f(t) represents the temperature at time t, one expects a strong correlation with the temperature at a time lag of 24 hours. It possesses a Fourier transform, P f ( ξ ) = ∫ − ∞ ∞ R f ( τ ) e − i 2 π ξ τ d τ . {\displaystyle P_{f}(\xi )=\int _{-\infty }^{\infty }R_{f}(\tau )e^{-i2\pi \xi \tau }\,d\tau .} This Fourier transform is called the power spectral density function of f. (Unless all periodic components are first filtered out from f, this integral will diverge, but it is easy to filter out such periodicities.) The power spectrum, as indicated by this density function P, measures the amount of variance contributed to the data by the frequency ξ. In electrical signals, the variance is proportional to the average power (energy per unit time), and so the power spectrum describes how much the different frequencies contribute to the average power of the signal. This process is called the spectral analysis of time-series and is analogous to the usual analysis of variance of data that is not a time-series (ANOVA). Knowledge of which frequencies are "important" in this sense is crucial for the proper design of filters and for the proper evaluation of measuring apparatuses. It can also be useful for the scientific analysis of the phenomena responsible for producing the data. The power spectrum of a signal can also be approximately measured directly by measuring the average power that remains in a signal after all the frequencies outside a narrow band have been filtered out. Spectral analysis is carried out for visual signals as well. The power spectrum ignores all phase relations, which is good enough for many purposes, but for video signals other types of spectral analysis must also be employed, still using the Fourier transform as a tool. == Other notations == Other common notations for f ^ ( ξ ) {\displaystyle {\hat {f}}(\xi )} include: f ~ ( ξ ) , F ( ξ ) , F ( f ) ( ξ ) , ( F f ) ( ξ ) , F ( f ) , F { f } , F ( f ( t ) ) , F { f ( t ) } . {\displaystyle {\tilde {f}}(\xi ),\ F(\xi ),\ {\mathcal {F}}\left(f\right)(\xi ),\ \left({\mathcal {F}}f\right)(\xi ),\ {\mathcal {F}}(f),\ {\mathcal {F}}\{f\},\ {\mathcal {F}}{\bigl (}f(t){\bigr )},\ {\mathcal {F}}{\bigl \{}f(t){\bigr \}}.} In the sciences and engineering it is also common to make substitutions like these: ξ → f , x → t , f → x , f ^ → X . {\displaystyle \xi \rightarrow f,\quad x\rightarrow t,\quad f\rightarrow x,\quad {\hat {f}}\rightarrow X.} So the transform pair f ( x ) ⟺ F f ^ ( ξ ) {\displaystyle f(x)\ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ {\hat {f}}(\xi )} can become x ( t ) ⟺ F X ( f ) {\displaystyle x(t)\ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ X(f)} . A disadvantage of the capital letter notation arises when expressing a transform such as f ⋅ g ^ {\displaystyle {\widehat {f\cdot g}}} or f ′ ^ , {\displaystyle {\widehat {f'}},} which become the more awkward F { f ⋅ g } {\displaystyle {\mathcal {F}}\{f\cdot g\}} and F { f ′ } . 
{\displaystyle {\mathcal {F}}\{f'\}.} In some contexts such as particle physics, the same symbol f {\displaystyle f} may be used both for a function and for its Fourier transform, the two being distinguished only by their argument: f ( k 1 + k 2 ) {\displaystyle f(k_{1}+k_{2})} would refer to the Fourier transform because of the momentum argument, while f ( x 0 + π r → ) {\displaystyle f(x_{0}+\pi {\vec {r}})} would refer to the original function because of the positional argument. Although tildes may be used as in f ~ {\displaystyle {\tilde {f}}} to indicate Fourier transforms, tildes may also be used to indicate a modification of a quantity with a more Lorentz invariant form, such as d k ~ = d k ( 2 π ) 3 2 ω {\displaystyle {\tilde {dk}}={\frac {dk}{(2\pi )^{3}2\omega }}} , so care must be taken. Similarly, f ^ {\displaystyle {\hat {f}}} often denotes the Hilbert transform of f {\displaystyle f} . The interpretation of the complex function f̂(ξ) may be aided by expressing it in polar coordinate form f ^ ( ξ ) = A ( ξ ) e i φ ( ξ ) {\displaystyle {\hat {f}}(\xi )=A(\xi )e^{i\varphi (\xi )}} in terms of the two real functions A(ξ) and φ(ξ) where: A ( ξ ) = | f ^ ( ξ ) | , {\displaystyle A(\xi )=\left|{\hat {f}}(\xi )\right|,} is the amplitude and φ ( ξ ) = arg ⁡ ( f ^ ( ξ ) ) , {\displaystyle \varphi (\xi )=\arg \left({\hat {f}}(\xi )\right),} is the phase (see arg function). Then the inverse transform can be written: f ( x ) = ∫ − ∞ ∞ A ( ξ ) e i ( 2 π ξ x + φ ( ξ ) ) d ξ , {\displaystyle f(x)=\int _{-\infty }^{\infty }A(\xi )\ e^{i{\bigl (}2\pi \xi x+\varphi (\xi ){\bigr )}}\,d\xi ,} which is a recombination of all the frequency components of f(x). Each component is a complex sinusoid of the form e^{i2πxξ} whose amplitude is A(ξ) and whose initial phase angle (at x = 0) is φ(ξ). The Fourier transform may be thought of as a mapping on function spaces. This mapping is here denoted F and F(f) is used to denote the Fourier transform of the function f. This mapping is linear, which means that F can also be seen as a linear transformation on the function space and implies that the standard notation in linear algebra of applying a linear transformation to a vector (here the function f) can be used to write F f instead of F(f). Since the result of applying the Fourier transform is again a function, we can be interested in the value of this function evaluated at the value ξ for its variable, and this is denoted either as F f(ξ) or as (F f)(ξ). Notice that in the former case, it is implicitly understood that F is applied first to f and then the resulting function is evaluated at ξ, not the other way around. In mathematics and various applied sciences, it is often necessary to distinguish between a function f and the value of f when its variable equals x, denoted f(x). This means that a notation like F(f(x)) formally can be interpreted as the Fourier transform of the values of f at x. Despite this flaw, the previous notation appears frequently, often when a particular function or a function of a particular variable is to be transformed. 
For example, F ( rect ⁡ ( x ) ) = sinc ⁡ ( ξ ) {\displaystyle {\mathcal {F}}{\bigl (}\operatorname {rect} (x){\bigr )}=\operatorname {sinc} (\xi )} is sometimes used to express that the Fourier transform of a rectangular function is a sinc function, or F ( f ( x + x 0 ) ) = F ( f ( x ) ) e i 2 π x 0 ξ {\displaystyle {\mathcal {F}}{\bigl (}f(x+x_{0}){\bigr )}={\mathcal {F}}{\bigl (}f(x){\bigr )}\,e^{i2\pi x_{0}\xi }} is used to express the shift property of the Fourier transform. Notice that the last example is only correct under the assumption that the transformed function is a function of x, not of x0. As discussed above, the characteristic function of a random variable is the same as the Fourier–Stieltjes transform of its distribution measure, but in this context it is typical to take a different convention for the constants. Typically the characteristic function is defined as E ( e i t ⋅ X ) = ∫ e i t ⋅ x d μ X ( x ) . {\displaystyle E\left(e^{it\cdot X}\right)=\int e^{it\cdot x}\,d\mu _{X}(x).} As in the case of the "non-unitary angular frequency" convention above, the factor of 2π appears in neither the normalizing constant nor the exponent. Unlike any of the conventions appearing above, this convention takes the opposite sign in the exponent. == Computation methods == The appropriate computation method largely depends on how the original mathematical function is represented and the desired form of the output function. In this section we consider both functions of a continuous variable, f ( x ) , {\displaystyle f(x),} and functions of a discrete variable (i.e. ordered pairs of x {\displaystyle x} and f {\displaystyle f} values). For discrete-valued x , {\displaystyle x,} the transform integral becomes a summation of sinusoids, which is still a continuous function of frequency ( ξ {\displaystyle \xi } or ω {\displaystyle \omega } ). When the sinusoids are harmonically related (i.e. when the x {\displaystyle x} -values are spaced at integer multiples of an interval), the transform is called the discrete-time Fourier transform (DTFT). === Discrete Fourier transforms and fast Fourier transforms === Sampling the DTFT at equally-spaced values of frequency is the most common modern method of computation. Efficient procedures, depending on the frequency resolution needed, are described at Discrete-time Fourier transform § Sampling the DTFT. The discrete Fourier transform (DFT), used there, is usually computed by a fast Fourier transform (FFT) algorithm. === Analytic integration of closed-form functions === Tables of closed-form Fourier transforms, such as § Square-integrable functions, one-dimensional and § Table of discrete-time Fourier transforms, are created by mathematically evaluating the Fourier analysis integral (or summation) into another closed-form function of frequency ( ξ {\displaystyle \xi } or ω {\displaystyle \omega } ). When mathematically possible, this provides a transform for a continuum of frequency values. Many computer algebra systems such as Matlab and Mathematica that are capable of symbolic integration are capable of computing Fourier transforms analytically. For example, to compute the Fourier transform of cos(6πt) e^{−πt²} one might enter the command integrate cos(6*pi*t) exp(−pi*t^2) exp(-i*2*pi*f*t) from -inf to inf into Wolfram Alpha. === Numerical integration of closed-form continuous functions === Discrete sampling of the Fourier transform can also be done by numerical integration of the definition at each value of frequency for which the transform is desired. 
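As a minimal sketch of that approach (assuming NumPy; the integration window and grid are illustrative choices), the example above, cos(6πt) e^{−πt²}, can be integrated numerically at a few chosen frequencies and compared with its known closed-form transform (e^{−π(ξ−3)²} + e^{−π(ξ+3)²})/2.

import numpy as np

t = np.linspace(-10, 10, 100_001)    # the integrand is negligible outside this window
dt = t[1] - t[0]
f = np.cos(6 * np.pi * t) * np.exp(-np.pi * t**2)

for xi in (0.0, 1.0, 3.0):
    F = (f * np.exp(-2j * np.pi * xi * t)).sum() * dt      # the defining integral
    exact = 0.5 * (np.exp(-np.pi * (xi - 3)**2) + np.exp(-np.pi * (xi + 3)**2))
    print(f"xi = {xi}:  numeric = {F.real:.6f}  closed form = {exact:.6f}")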
The numerical integration approach works on a much broader class of functions than the analytic approach. === Numerical integration of a series of ordered pairs === If the input function is a series of ordered pairs, numerical integration reduces to just a summation over the set of data pairs. The DTFT is a common subcase of this more general situation. == Tables of important Fourier transforms == The following tables record some closed-form Fourier transforms. For functions f(x) and g(x), denote their Fourier transforms by f̂ and ĝ. Only the three most common conventions are included. It may be useful to notice that entry 105 gives a relationship between the Fourier transform of a function and the original function, which can be seen as relating the Fourier transform and its inverse. === Functional relationships, one-dimensional === The Fourier transforms in this table may be found in Erdélyi (1954) or Kammler (2000, appendix). === Square-integrable functions, one-dimensional === The Fourier transforms in this table may be found in Campbell & Foster (1948), Erdélyi (1954), or Kammler (2000, appendix). === Distributions, one-dimensional === The Fourier transforms in this table may be found in Erdélyi (1954) or Kammler (2000, appendix). == External links == Media related to Fourier transformation at Wikimedia Commons Encyclopedia of Mathematics Weisstein, Eric W. "Fourier Transform". MathWorld. Fourier Transform in Crystallography
In signal processing, the overlap–add method is an efficient way to evaluate the discrete convolution of a very long signal x [ n ] {\displaystyle x[n]} with a finite impulse response (FIR) filter h [ n ] {\displaystyle h[n]} : y [ n ] = x [ n ] ∗ h [ n ] ≜ ∑ m = 1 M h [ m ] ⋅ x [ n − m ] , {\displaystyle y[n]=x[n]*h[n]\ \triangleq \ \sum _{m=1}^{M}h[m]\cdot x[n-m],} (Eq.1) where h [ m ] = 0 {\displaystyle h[m]=0} for m {\displaystyle m} outside the region [ 1 , M ] . {\displaystyle [1,M].} This article uses common abstract notations, such as y ( t ) = x ( t ) ∗ h ( t ) , {\textstyle y(t)=x(t)*h(t),} or y ( t ) = H { x ( t ) } , {\textstyle y(t)={\mathcal {H}}\{x(t)\},} in which it is understood that the functions should be thought of in their totality, rather than at specific instants t {\textstyle t} (see Convolution#Notation). The concept is to divide the problem into multiple convolutions of h [ n ] {\displaystyle h[n]} with short segments of x [ n ] {\displaystyle x[n]} : x k [ n ] ≜ { x [ n + k L ] , n = 1 , 2 , … , L 0 , otherwise , {\displaystyle x_{k}[n]\ \triangleq \ {\begin{cases}x[n+kL],&n=1,2,\ldots ,L\\0,&{\text{otherwise}},\end{cases}}} where L {\displaystyle L} is an arbitrary segment length. Then: x [ n ] = ∑ k x k [ n − k L ] , {\displaystyle x[n]=\sum _{k}x_{k}[n-kL],\,} and y [ n ] {\displaystyle y[n]} can be written as a sum of short convolutions: y [ n ] = ( ∑ k x k [ n − k L ] ) ∗ h [ n ] = ∑ k ( x k [ n − k L ] ∗ h [ n ] ) = ∑ k y k [ n − k L ] , {\displaystyle {\begin{aligned}y[n]=\left(\sum _{k}x_{k}[n-kL]\right)*h[n]&=\sum _{k}\left(x_{k}[n-kL]*h[n]\right)\\&=\sum _{k}y_{k}[n-kL],\end{aligned}}} where the linear convolution y k [ n ] ≜ x k [ n ] ∗ h [ n ] {\displaystyle y_{k}[n]\ \triangleq \ x_{k}[n]*h[n]\,} is zero outside the region [ 1 , L + M − 1 ] . {\displaystyle [1,L+M-1].} And for any parameter N ≥ L + M − 1 , {\displaystyle N\geq L+M-1,\,} it is equivalent to the N {\displaystyle N} -point circular convolution of x k [ n ] {\displaystyle x_{k}[n]\,} with h [ n ] {\displaystyle h[n]\,} in the region [ 1 , N ] . {\displaystyle [1,N].} The advantage is that the circular convolution can be computed more efficiently than linear convolution, according to the circular convolution theorem: y k [ n ] = IDFT N ⁡ ( DFT N ⁡ ( x k [ n ] ) ⋅ DFT N ⁡ ( h [ n ] ) ) , {\displaystyle y_{k}[n]\ =\ \operatorname {IDFT} _{N}{\bigl (}\operatorname {DFT} _{N}(x_{k}[n])\cdot \operatorname {DFT} _{N}(h[n]){\bigr )},} (Eq.2) where: DFTN and IDFTN refer to the Discrete Fourier transform and its inverse, evaluated over N {\displaystyle N} discrete points, and L {\displaystyle L} is customarily chosen such that N = L + M − 1 {\displaystyle N=L+M-1} is an integer power-of-2, and the transforms are implemented with the FFT algorithm, for efficiency. == Pseudocode == The following is pseudocode for the algorithm:

(Overlap-add algorithm for linear convolution)
h = FIR_filter
M = length(h)
Nx = length(x)
N = 8 × 2^ceiling( log2(M) )     (8 times the smallest power of two bigger than filter length M. See next section for a slightly better choice.)
step_size = N - (M-1)            (L in the text above)
H = DFT(h, N)
position = 0
y(1 : Nx + M-1) = 0
while position + step_size ≤ Nx do
    y(position+(1:N)) = y(position+(1:N)) + IDFT(DFT(x(position+(1:step_size)), N) × H)
    position = position + step_size
end

== Efficiency considerations == When the DFT and IDFT are implemented by the FFT algorithm, the pseudocode above requires about N (log2(N) + 1) complex multiplications for the FFT, product of arrays, and IFFT. 
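Before continuing the operation count, here is a direct Python transcription of the pseudocode (assuming NumPy; the final, shorter tail segment is handled explicitly, which the pseudocode leaves implicit). SciPy ships a production implementation of the same idea as scipy.signal.oaconvolve.

import numpy as np

def overlap_add(x, h):
    M, Nx = len(h), len(x)
    N = 8 * 2**int(np.ceil(np.log2(M)))     # FFT size, as in the pseudocode
    step_size = N - (M - 1)                 # L in the text above
    H = np.fft.fft(h, N)                    # fft zero-pads h to length N
    y = np.zeros(Nx + M - 1)
    position = 0
    while position < Nx:
        chunk = x[position : position + step_size]          # may be short at the end
        yk = np.fft.ifft(np.fft.fft(chunk, N) * H).real     # N-point circular convolution
        end = min(position + N, Nx + M - 1)
        y[position:end] += yk[: end - position]             # overlap and add
        position += step_size
    return y

rng = np.random.default_rng(1)
x, h = rng.normal(size=10_000), rng.normal(size=201)
print(np.allclose(overlap_add(x, h), np.convolve(x, h)))    # True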
Each iteration produces N-M+1 output samples, so the number of complex multiplications per output sample is about: N ( log 2 ⁡ ( N ) + 1 ) N − M + 1 . {\displaystyle {\frac {N\left(\log _{2}(N)+1\right)}{N-M+1}}.} (Eq.3) For example, when M = 201 {\displaystyle M=201} and N = 1024 , {\displaystyle N=1024,} Eq.3 equals 13.67 , {\displaystyle 13.67,} whereas direct evaluation of Eq.1 would require up to 201 {\displaystyle 201} complex multiplications per output sample, the worst case being when both x {\displaystyle x} and h {\displaystyle h} are complex-valued. Also note that for any given M , {\displaystyle M,} Eq.3 has a minimum with respect to N . {\displaystyle N.} Figure 2 is a graph of the values of N {\displaystyle N} that minimize Eq.3 for a range of filter lengths ( M {\displaystyle M} ). Instead of Eq.1, we can also consider applying Eq.2 to a long sequence of length N x {\displaystyle N_{x}} samples. The total number of complex multiplications would be: N x ⋅ ( log 2 ⁡ ( N x ) + 1 ) . {\displaystyle N_{x}\cdot (\log _{2}(N_{x})+1).} Comparatively, the number of complex multiplications required by the pseudocode algorithm is: N x ⋅ ( log 2 ⁡ ( N ) + 1 ) ⋅ N N − M + 1 . {\displaystyle N_{x}\cdot (\log _{2}(N)+1)\cdot {\frac {N}{N-M+1}}.} Hence the cost of the overlap–add method scales almost as O ( N x log 2 ⁡ N ) {\displaystyle O\left(N_{x}\log _{2}N\right)} while the cost of a single, large circular convolution is almost O ( N x log 2 ⁡ N x ) {\displaystyle O\left(N_{x}\log _{2}N_{x}\right)} . The two methods are also compared in Figure 3, created by Matlab simulation. The contours are lines of constant ratio of the times it takes to perform both methods. When the overlap-add method is faster, the ratio exceeds 1, and ratios as high as 3 are seen. == See also == Overlap–save method Circular convolution § Example == Further reading == Oppenheim, Alan V.; Schafer, Ronald W. (1975). Digital signal processing. Englewood Cliffs, N.J.: Prentice-Hall. ISBN 0-13-214635-5. Hayes, M. Horace (1999). Digital Signal Processing. Schaum's Outline Series. New York: McGraw Hill. ISBN 0-07-027389-8. Senobari, Nader Shakibay; Funning, Gareth J.; Keogh, Eamonn; Zhu, Yan; Yeh, Chin-Chia Michael; Zimmerman, Zachary; Mueen, Abdullah (2019). "Super-Efficient Cross-Correlation (SEC-C): A Fast Matched Filtering Code Suitable for Desktop Computers" (PDF). Seismological Research Letters. 90 (1): 322–334. doi:10.1785/0220180122. ISSN 0895-0695.
Wikipedia/Overlap-add_method
In mathematics, the floor function is the function that takes as input a real number x, and gives as output the greatest integer less than or equal to x, denoted ⌊x⌋ or floor(x). Similarly, the ceiling function maps x to the least integer greater than or equal to x, denoted ⌈x⌉ or ceil(x). For example, for floor: ⌊2.4⌋ = 2, ⌊−2.4⌋ = −3, and for ceiling: ⌈2.4⌉ = 3, and ⌈−2.4⌉ = −2. The floor of x is also called the integral part, integer part, greatest integer, or entier of x, and was historically denoted [x] (among other notations). However, the same term, integer part, is also used for truncation towards zero, which differs from the floor function for negative numbers. For an integer n, ⌊n⌋ = ⌈n⌉ = n. Although floor(x + 1) and ceil(x) produce graphs that appear exactly alike, they are not the same when the value of x is an exact integer. For example, when x = 2.0001, ⌊2.0001 + 1⌋ = ⌈2.0001⌉ = 3. However, if x = 2, then ⌊2 + 1⌋ = 3, while ⌈2⌉ = 2. == Notation == The integral part or integer part of a number (partie entière in the original) was first defined in 1798 by Adrien-Marie Legendre in his proof of Legendre's formula. Carl Friedrich Gauss introduced the square bracket notation [x] in his third proof of quadratic reciprocity (1808). This remained the standard in mathematics until Kenneth E. Iverson introduced, in his 1962 book A Programming Language, the names "floor" and "ceiling" and the corresponding notations ⌊x⌋ and ⌈x⌉. (Iverson used square brackets for a different purpose, the Iverson bracket notation.) Both notations are now used in mathematics, although Iverson's notation will be followed in this article. In some sources, boldface or double brackets ⟦x⟧ are used for floor, and reversed brackets ⟧x⟦ or ]x[ for ceiling. The fractional part is the sawtooth function, denoted by {x} for real x and defined by the formula {x} = x − ⌊x⌋. For all x, 0 ≤ {x} < 1. These characters are provided in Unicode: U+2308 ⌈ LEFT CEILING (&lceil;, &LeftCeiling;) U+2309 ⌉ RIGHT CEILING (&rceil;, &RightCeiling;) U+230A ⌊ LEFT FLOOR (&LeftFloor;, &lfloor;) U+230B ⌋ RIGHT FLOOR (&rfloor;, &RightFloor;) In the LaTeX typesetting system, these symbols can be specified with the \lceil, \rceil, \lfloor, and \rfloor commands in math mode. LaTeX has supported UTF-8 since 2018, so the Unicode characters can now be used directly. Larger versions are \left\lceil, \right\rceil, \left\lfloor, and \right\rfloor. == Definition and properties == Given real numbers x and y, integers m and n and the set of integers Z {\displaystyle \mathbb {Z} } , floor and ceiling may be defined by the equations ⌊ x ⌋ = max { m ∈ Z ∣ m ≤ x } , {\displaystyle \lfloor x\rfloor =\max\{m\in \mathbb {Z} \mid m\leq x\},} ⌈ x ⌉ = min { n ∈ Z ∣ n ≥ x } . {\displaystyle \lceil x\rceil =\min\{n\in \mathbb {Z} \mid n\geq x\}.} Since there is exactly one integer in a half-open interval of length one, for any real number x, there are unique integers m and n satisfying x − 1 < m ≤ x ≤ n < x + 1. {\displaystyle x-1<m\leq x\leq n<x+1.} Then ⌊ x ⌋ = m {\displaystyle \lfloor x\rfloor =m} and ⌈ x ⌉ = n {\displaystyle \lceil x\rceil =n} may also be taken as the definition of floor and ceiling. === Equivalences === These formulas can be used to simplify expressions involving floors and ceilings. ⌊ x ⌋ = m if and only if m ≤ x < m + 1 , ⌈ x ⌉ = n if and only if n − 1 < x ≤ n , ⌊ x ⌋ = m if and only if x − 1 < m ≤ x , ⌈ x ⌉ = n if and only if x ≤ n < x + 1.
{\displaystyle {\begin{alignedat}{3}\lfloor x\rfloor &=m\ \ &&{\mbox{ if and only if }}&m&\leq x<m+1,\\\lceil x\rceil &=n&&{\mbox{ if and only if }}&\ \ n-1&<x\leq n,\\\lfloor x\rfloor &=m&&{\mbox{ if and only if }}&x-1&<m\leq x,\\\lceil x\rceil &=n&&{\mbox{ if and only if }}&x&\leq n<x+1.\end{alignedat}}} In the language of order theory, the floor function is a residuated mapping, that is, part of a Galois connection: it is the upper adjoint of the function that embeds the integers into the reals. x < n if and only if ⌊ x ⌋ < n , n < x if and only if n < ⌈ x ⌉ , x ≤ n if and only if ⌈ x ⌉ ≤ n , n ≤ x if and only if n ≤ ⌊ x ⌋ . {\displaystyle {\begin{aligned}x<n&\;\;{\mbox{ if and only if }}&\lfloor x\rfloor &<n,\\n<x&\;\;{\mbox{ if and only if }}&n&<\lceil x\rceil ,\\x\leq n&\;\;{\mbox{ if and only if }}&\lceil x\rceil &\leq n,\\n\leq x&\;\;{\mbox{ if and only if }}&n&\leq \lfloor x\rfloor .\end{aligned}}} These formulas show how adding an integer n to the arguments affects the functions: ⌊ x + n ⌋ = ⌊ x ⌋ + n , ⌈ x + n ⌉ = ⌈ x ⌉ + n , { x + n } = { x } . {\displaystyle {\begin{aligned}\lfloor x+n\rfloor &=\lfloor x\rfloor +n,\\\lceil x+n\rceil &=\lceil x\rceil +n,\\\{x+n\}&=\{x\}.\end{aligned}}} The above are never true if n is not an integer; however, for every x and y, the following inequalities hold: ⌊ x ⌋ + ⌊ y ⌋ ≤ ⌊ x + y ⌋ ≤ ⌊ x ⌋ + ⌊ y ⌋ + 1 , ⌈ x ⌉ + ⌈ y ⌉ − 1 ≤ ⌈ x + y ⌉ ≤ ⌈ x ⌉ + ⌈ y ⌉ . {\displaystyle {\begin{aligned}\lfloor x\rfloor +\lfloor y\rfloor &\leq \lfloor x+y\rfloor \leq \lfloor x\rfloor +\lfloor y\rfloor +1,\\[3mu]\lceil x\rceil +\lceil y\rceil -1&\leq \lceil x+y\rceil \leq \lceil x\rceil +\lceil y\rceil .\end{aligned}}} === Monotonicity === Both floor and ceiling functions are monotonically non-decreasing functions: x 1 ≤ x 2 ⇒ ⌊ x 1 ⌋ ≤ ⌊ x 2 ⌋ , x 1 ≤ x 2 ⇒ ⌈ x 1 ⌉ ≤ ⌈ x 2 ⌉ . {\displaystyle {\begin{aligned}x_{1}\leq x_{2}&\Rightarrow \lfloor x_{1}\rfloor \leq \lfloor x_{2}\rfloor ,\\x_{1}\leq x_{2}&\Rightarrow \lceil x_{1}\rceil \leq \lceil x_{2}\rceil .\end{aligned}}} === Relations among the functions === It is clear from the definitions that ⌊ x ⌋ ≤ ⌈ x ⌉ , {\displaystyle \lfloor x\rfloor \leq \lceil x\rceil ,} with equality if and only if x is an integer, i.e. ⌈ x ⌉ − ⌊ x ⌋ = { 0 if x ∈ Z 1 if x ∉ Z {\displaystyle \lceil x\rceil -\lfloor x\rfloor ={\begin{cases}0&{\mbox{ if }}x\in \mathbb {Z} \\1&{\mbox{ if }}x\not \in \mathbb {Z} \end{cases}}} In fact, for integers n, both floor and ceiling functions are the identity: ⌊ n ⌋ = ⌈ n ⌉ = n . {\displaystyle \lfloor n\rfloor =\lceil n\rceil =n.} Negating the argument switches floor and ceiling and changes the sign: ⌊ x ⌋ + ⌈ − x ⌉ = 0 − ⌊ x ⌋ = ⌈ − x ⌉ − ⌈ x ⌉ = ⌊ − x ⌋ {\displaystyle {\begin{aligned}\lfloor x\rfloor +\lceil -x\rceil &=0\\-\lfloor x\rfloor &=\lceil -x\rceil \\-\lceil x\rceil &=\lfloor -x\rfloor \end{aligned}}} and: ⌊ x ⌋ + ⌊ − x ⌋ = { 0 if x ∈ Z − 1 if x ∉ Z , {\displaystyle \lfloor x\rfloor +\lfloor -x\rfloor ={\begin{cases}0&{\text{if }}x\in \mathbb {Z} \\-1&{\text{if }}x\not \in \mathbb {Z} ,\end{cases}}} ⌈ x ⌉ + ⌈ − x ⌉ = { 0 if x ∈ Z 1 if x ∉ Z . {\displaystyle \lceil x\rceil +\lceil -x\rceil ={\begin{cases}0&{\text{if }}x\in \mathbb {Z} \\1&{\text{if }}x\not \in \mathbb {Z} .\end{cases}}} Negating the argument complements the fractional part: { x } + { − x } = { 0 if x ∈ Z 1 if x ∉ Z . 
{\displaystyle \{x\}+\{-x\}={\begin{cases}0&{\text{if }}x\in \mathbb {Z} \\1&{\text{if }}x\not \in \mathbb {Z} .\end{cases}}} The floor, ceiling, and fractional part functions are idempotent: ⌊ ⌊ x ⌋ ⌋ = ⌊ x ⌋ , ⌈ ⌈ x ⌉ ⌉ = ⌈ x ⌉ , { { x } } = { x } . {\displaystyle {\begin{aligned}{\big \lfloor }\lfloor x\rfloor {\big \rfloor }&=\lfloor x\rfloor ,\\{\big \lceil }\lceil x\rceil {\big \rceil }&=\lceil x\rceil ,\\{\big \{}\{x\}{\big \}}&=\{x\}.\end{aligned}}} The result of nested floor or ceiling functions is the innermost function: ⌊ ⌈ x ⌉ ⌋ = ⌈ x ⌉ , ⌈ ⌊ x ⌋ ⌉ = ⌊ x ⌋ {\displaystyle {\begin{aligned}{\big \lfloor }\lceil x\rceil {\big \rfloor }&=\lceil x\rceil ,\\{\big \lceil }\lfloor x\rfloor {\big \rceil }&=\lfloor x\rfloor \end{aligned}}} due to the identity property for integers. === Quotients === If m and n are integers and n ≠ 0, 0 ≤ { m n } ≤ 1 − 1 | n | . {\displaystyle 0\leq \left\{{\frac {m}{n}}\right\}\leq 1-{\frac {1}{|n|}}.} If n is positive ⌊ x + m n ⌋ = ⌊ ⌊ x ⌋ + m n ⌋ , {\displaystyle \left\lfloor {\frac {x+m}{n}}\right\rfloor =\left\lfloor {\frac {\lfloor x\rfloor +m}{n}}\right\rfloor ,} ⌈ x + m n ⌉ = ⌈ ⌈ x ⌉ + m n ⌉ . {\displaystyle \left\lceil {\frac {x+m}{n}}\right\rceil =\left\lceil {\frac {\lceil x\rceil +m}{n}}\right\rceil .} If m is positive n = ⌈ n 1 m ⌉ + ⌈ n − 1 m ⌉ + ⋯ + ⌈ n − m + 1 m ⌉ , {\displaystyle n=\left\lceil {\frac {n{\vphantom {1}}}{m}}\right\rceil +\left\lceil {\frac {n-1}{m}}\right\rceil +\dots +\left\lceil {\frac {n-m+1}{m}}\right\rceil ,} n = ⌊ n 1 m ⌋ + ⌊ n + 1 m ⌋ + ⋯ + ⌊ n + m − 1 m ⌋ . {\displaystyle n=\left\lfloor {\frac {n{\vphantom {1}}}{m}}\right\rfloor +\left\lfloor {\frac {n+1}{m}}\right\rfloor +\dots +\left\lfloor {\frac {n+m-1}{m}}\right\rfloor .} For m = 2 these imply n = ⌊ n 1 2 ⌋ + ⌈ n 1 2 ⌉ . {\displaystyle n=\left\lfloor {\frac {n{\vphantom {1}}}{2}}\right\rfloor +\left\lceil {\frac {n{\vphantom {1}}}{2}}\right\rceil .} More generally, for positive m (See Hermite's identity) ⌈ m x ⌉ = ⌈ x ⌉ + ⌈ x − 1 m ⌉ + ⋯ + ⌈ x − m − 1 m ⌉ , {\displaystyle \lceil mx\rceil =\left\lceil x\right\rceil +\left\lceil x-{\frac {1}{m}}\right\rceil +\dots +\left\lceil x-{\frac {m-1}{m}}\right\rceil ,} ⌊ m x ⌋ = ⌊ x ⌋ + ⌊ x + 1 m ⌋ + ⋯ + ⌊ x + m − 1 m ⌋ . 
{\displaystyle \lfloor mx\rfloor =\left\lfloor x\right\rfloor +\left\lfloor x+{\frac {1}{m}}\right\rfloor +\dots +\left\lfloor x+{\frac {m-1}{m}}\right\rfloor .} The following can be used to convert floors to ceilings and vice versa (with m being positive) ⌈ n 1 m ⌉ = ⌊ n + m − 1 m ⌋ = ⌊ n − 1 m ⌋ + 1 , {\displaystyle \left\lceil {\frac {n{\vphantom {1}}}{m}}\right\rceil =\left\lfloor {\frac {n+m-1}{m}}\right\rfloor =\left\lfloor {\frac {n-1}{m}}\right\rfloor +1,} ⌊ n 1 m ⌋ = ⌈ n − m + 1 m ⌉ = ⌈ n + 1 m ⌉ − 1 , {\displaystyle \left\lfloor {\frac {n{\vphantom {1}}}{m}}\right\rfloor =\left\lceil {\frac {n-m+1}{m}}\right\rceil =\left\lceil {\frac {n+1}{m}}\right\rceil -1,} For all m and n strictly positive integers: ∑ k = 1 n − 1 ⌊ k m n ⌋ = ( m − 1 ) ( n − 1 ) + gcd ( m , n ) − 1 2 , {\displaystyle \sum _{k=1}^{n-1}\left\lfloor {\frac {km}{n}}\right\rfloor ={\frac {(m-1)(n-1)+\gcd(m,n)-1}{2}},} which, for positive and coprime m and n, reduces to ∑ k = 1 n − 1 ⌊ k m n ⌋ = 1 2 ( m − 1 ) ( n − 1 ) , {\displaystyle \sum _{k=1}^{n-1}\left\lfloor {\frac {km}{n}}\right\rfloor ={\tfrac {1}{2}}(m-1)(n-1),} and similarly for the ceiling and fractional part functions (still for positive and coprime m and n), ∑ k = 1 n − 1 ⌈ k m n ⌉ = 1 2 ( m + 1 ) ( n − 1 ) , {\displaystyle \sum _{k=1}^{n-1}\left\lceil {\frac {km}{n}}\right\rceil ={\tfrac {1}{2}}(m+1)(n-1),} ∑ k = 1 n − 1 { k m n } = 1 2 ( n − 1 ) . {\displaystyle \sum _{k=1}^{n-1}\left\{{\frac {km}{n}}\right\}={\tfrac {1}{2}}(n-1).} Since the right-hand side of the general case is symmetrical in m and n, this implies that ⌊ m 1 n ⌋ + ⌊ 2 m n ⌋ + ⋯ + ⌊ ( n − 1 ) m n ⌋ = ⌊ n 1 m ⌋ + ⌊ 2 n m ⌋ + ⋯ + ⌊ ( m − 1 ) n m ⌋ . {\displaystyle \left\lfloor {\frac {m{\vphantom {1}}}{n}}\right\rfloor +\left\lfloor {\frac {2m}{n}}\right\rfloor +\dots +\left\lfloor {\frac {(n-1)m}{n}}\right\rfloor =\left\lfloor {\frac {n{\vphantom {1}}}{m}}\right\rfloor +\left\lfloor {\frac {2n}{m}}\right\rfloor +\dots +\left\lfloor {\frac {(m-1)n}{m}}\right\rfloor .} More generally, if m and n are positive, ⌊ x 1 n ⌋ + ⌊ m + x n ⌋ + ⌊ 2 m + x n ⌋ + ⋯ + ⌊ ( n − 1 ) m + x n ⌋ = ⌊ x 1 m ⌋ + ⌊ n + x m ⌋ + ⌊ 2 n + x m ⌋ + ⋯ + ⌊ ( m − 1 ) n + x m ⌋ . {\displaystyle {\begin{aligned}&\left\lfloor {\frac {x{\vphantom {1}}}{n}}\right\rfloor +\left\lfloor {\frac {m+x}{n}}\right\rfloor +\left\lfloor {\frac {2m+x}{n}}\right\rfloor +\dots +\left\lfloor {\frac {(n-1)m+x}{n}}\right\rfloor \\[5mu]=&\left\lfloor {\frac {x{\vphantom {1}}}{m}}\right\rfloor +\left\lfloor {\frac {n+x}{m}}\right\rfloor +\left\lfloor {\frac {2n+x}{m}}\right\rfloor +\cdots +\left\lfloor {\frac {(m-1)n+x}{m}}\right\rfloor .\end{aligned}}} This is sometimes called a reciprocity law. Division by positive integers gives rise to an interesting and sometimes useful property. Assuming m , n > 0 {\displaystyle m,n>0} , m ≤ ⌊ x n ⌋ ⟺ n ≤ ⌊ x m ⌋ ⟺ n ≤ ⌊ x ⌋ m . {\displaystyle m\leq \left\lfloor {\frac {x}{n}}\right\rfloor \iff n\leq \left\lfloor {\frac {x}{m}}\right\rfloor \iff n\leq {\frac {\lfloor x\rfloor }{m}}.} Similarly, m ≥ ⌈ x n ⌉ ⟺ n ≥ ⌈ x m ⌉ ⟺ n ≥ ⌈ x ⌉ m . 
{\displaystyle m\geq \left\lceil {\frac {x}{n}}\right\rceil \iff n\geq \left\lceil {\frac {x}{m}}\right\rceil \iff n\geq {\frac {\lceil x\rceil }{m}}.} Indeed, m ≤ ⌊ x n ⌋ ⟹ m ≤ x n ⟹ n ≤ x m ⟹ n ≤ ⌊ x m ⌋ ⟹ … ⟹ m ≤ ⌊ x n ⌋ , {\displaystyle m\leq \left\lfloor {\frac {x}{n}}\right\rfloor \implies m\leq {\frac {x}{n}}\implies n\leq {\frac {x}{m}}\implies n\leq \left\lfloor {\frac {x}{m}}\right\rfloor \implies \ldots \implies m\leq \left\lfloor {\frac {x}{n}}\right\rfloor ,} keeping in mind that ⌊ x n ⌋ = ⌊ ⌊ x ⌋ n ⌋ . {\textstyle \left\lfloor {\frac {x}{n}}\right\rfloor =\left\lfloor {\frac {\lfloor x\rfloor }{n}}\right\rfloor .} The second equivalence involving the ceiling function can be proved similarly. === Nested divisions === For a positive integer n, and arbitrary real numbers m and x: ⌊ ⌊ x m ⌋ n ⌋ = ⌊ x m n ⌋ ⌈ ⌈ x m ⌉ n ⌉ = ⌈ x m n ⌉ . {\displaystyle {\begin{aligned}\left\lfloor {\frac {\left\lfloor {\frac {x}{m}}\right\rfloor }{n}}\right\rfloor &=\left\lfloor {\frac {x}{mn}}\right\rfloor \\[4px]\left\lceil {\frac {\left\lceil {\frac {x}{m}}\right\rceil }{n}}\right\rceil &=\left\lceil {\frac {x}{mn}}\right\rceil .\end{aligned}}} === Continuity and series expansions === None of the functions discussed in this article are continuous, but all are piecewise linear: the functions ⌊ x ⌋ {\displaystyle \lfloor x\rfloor } , ⌈ x ⌉ {\displaystyle \lceil x\rceil } , and { x } {\displaystyle \{x\}} have discontinuities at the integers. ⌊ x ⌋ {\displaystyle \lfloor x\rfloor } is upper semi-continuous and ⌈ x ⌉ {\displaystyle \lceil x\rceil } and { x } {\displaystyle \{x\}} are lower semi-continuous. Since none of the functions discussed in this article are continuous, none of them have a power series expansion. Since floor and ceiling are not periodic, they do not have uniformly convergent Fourier series expansions. The fractional part function has Fourier series expansion { x } = 1 2 − 1 π ∑ k = 1 ∞ sin ⁡ ( 2 π k x ) k {\displaystyle \{x\}={\frac {1}{2}}-{\frac {1}{\pi }}\sum _{k=1}^{\infty }{\frac {\sin(2\pi kx)}{k}}} for x not an integer. At points of discontinuity, a Fourier series converges to a value that is the average of its limits on the left and the right, unlike the floor, ceiling and fractional part functions: for y fixed and x a multiple of y the Fourier series given converges to y/2, rather than to x mod y = 0. At points of continuity the series converges to the true value. Using the formula ⌊ x ⌋ = x − { x } {\displaystyle \lfloor x\rfloor =x-\{x\}} gives ⌊ x ⌋ = x − 1 2 + 1 π ∑ k = 1 ∞ sin ⁡ ( 2 π k x ) k {\displaystyle \lfloor x\rfloor =x-{\frac {1}{2}}+{\frac {1}{\pi }}\sum _{k=1}^{\infty }{\frac {\sin(2\pi kx)}{k}}} for x not an integer. == Applications == === Mod operator === For an integer x and a positive integer y, the modulo operation, denoted by x mod y, gives the value of the remainder when x is divided by y. This definition can be extended to real x and y, y ≠ 0, by the formula x mod y = x − y ⌊ x y ⌋ . {\displaystyle x{\bmod {y}}=x-y\left\lfloor {\frac {x}{y}}\right\rfloor .} Then it follows from the definition of floor function that this extended operation satisfies many natural properties. Notably, x mod y is always between 0 and y, i.e., if y is positive, 0 ≤ x mod y < y , {\displaystyle 0\leq x{\bmod {y}}<y,} and if y is negative, 0 ≥ x mod y > y . {\displaystyle 0\geq x{\bmod {y}}>y.} === Quadratic reciprocity === Gauss's third proof of quadratic reciprocity, as modified by Eisenstein, has two basic steps. 
Let p and q be distinct positive odd prime numbers, and let m = 1 2 ( p − 1 ) , {\displaystyle m={\tfrac {1}{2}}(p-1),} n = 1 2 ( q − 1 ) . {\displaystyle n={\tfrac {1}{2}}(q-1).} First, Gauss's lemma is used to show that the Legendre symbols are given by ( q p ) = ( − 1 ) ⌊ q p ⌋ + ⌊ 2 q p ⌋ + ⋯ + ⌊ m q p ⌋ , ( p q ) = ( − 1 ) ⌊ p q ⌋ + ⌊ 2 p q ⌋ + ⋯ + ⌊ n p q ⌋ . {\displaystyle {\begin{aligned}\left({\frac {q}{p}}\right)&=(-1)^{\left\lfloor {\frac {q}{p}}\right\rfloor +\left\lfloor {\frac {2q}{p}}\right\rfloor +\dots +\left\lfloor {\frac {mq}{p}}\right\rfloor },\\[5mu]\left({\frac {p}{q}}\right)&=(-1)^{\left\lfloor {\frac {p}{q}}\right\rfloor +\left\lfloor {\frac {2p}{q}}\right\rfloor +\dots +\left\lfloor {\frac {np}{q}}\right\rfloor }.\end{aligned}}} The second step is to use a geometric argument to show that ⌊ q p ⌋ + ⌊ 2 q p ⌋ + ⋯ + ⌊ m q p ⌋ + ⌊ p q ⌋ + ⌊ 2 p q ⌋ + ⋯ + ⌊ n p q ⌋ = m n . {\displaystyle \left\lfloor {\frac {q}{p}}\right\rfloor +\left\lfloor {\frac {2q}{p}}\right\rfloor +\dots +\left\lfloor {\frac {mq}{p}}\right\rfloor +\left\lfloor {\frac {p}{q}}\right\rfloor +\left\lfloor {\frac {2p}{q}}\right\rfloor +\dots +\left\lfloor {\frac {np}{q}}\right\rfloor =mn.} Combining these formulas gives quadratic reciprocity in the form ( p q ) ( q p ) = ( − 1 ) m n = ( − 1 ) p − 1 2 q − 1 2 . {\displaystyle \left({\frac {p}{q}}\right)\left({\frac {q}{p}}\right)=(-1)^{mn}=(-1)^{{\frac {p-1}{2}}{\frac {q-1}{2}}}.} There are formulas that use floor to express the quadratic character of small numbers mod odd primes p: ( 2 p ) = ( − 1 ) ⌊ p + 1 4 ⌋ , ( 3 p ) = ( − 1 ) ⌊ p + 1 6 ⌋ . {\displaystyle {\begin{aligned}\left({\frac {2}{p}}\right)&=(-1)^{\left\lfloor {\frac {p+1}{4}}\right\rfloor },\\[5mu]\left({\frac {3}{p}}\right)&=(-1)^{\left\lfloor {\frac {p+1}{6}}\right\rfloor }.\end{aligned}}} === Rounding === For an arbitrary real number x {\displaystyle x} , rounding x {\displaystyle x} to the nearest integer with tie breaking towards positive infinity is given by rpi ( x ) = ⌊ x + 1 2 ⌋ = ⌈ 1 2 ⌊ 2 x ⌋ ⌉ ; {\displaystyle {\text{rpi}}(x)=\left\lfloor x+{\tfrac {1}{2}}\right\rfloor =\left\lceil {\tfrac {1}{2}}\lfloor 2x\rfloor \right\rceil ;} rounding towards negative infinity is given as rni ( x ) = ⌈ x − 1 2 ⌉ = ⌊ 1 2 ⌈ 2 x ⌉ ⌋ . {\displaystyle {\text{rni}}(x)=\left\lceil x-{\tfrac {1}{2}}\right\rceil =\left\lfloor {\tfrac {1}{2}}\lceil 2x\rceil \right\rfloor .} If tie-breaking is away from 0, then the rounding function is ri ( x ) = sgn ⁡ ( x ) ⌊ | x | + 1 2 ⌋ {\displaystyle {\text{ri}}(x)=\operatorname {sgn}(x)\left\lfloor |x|+{\tfrac {1}{2}}\right\rfloor } (where sgn {\displaystyle \operatorname {sgn} } is the sign function), and rounding towards even can be expressed with the more cumbersome ⌊ x ⌉ = ⌊ x + 1 2 ⌋ + ⌈ 1 4 ( 2 x − 1 ) ⌉ − ⌊ 1 4 ( 2 x − 1 ) ⌋ − 1 , {\displaystyle \lfloor x\rceil =\left\lfloor x+{\tfrac {1}{2}}\right\rfloor +\left\lceil {\tfrac {1}{4}}(2x-1)\right\rceil -\left\lfloor {\tfrac {1}{4}}(2x-1)\right\rfloor -1,} which is the above expression for rounding towards positive infinity rpi ( x ) {\displaystyle {\text{rpi}}(x)} minus an integrality indicator for 1 4 ( 2 x − 1 ) {\displaystyle {\tfrac {1}{4}}(2x-1)} . Rounding a real number x {\displaystyle x} to the nearest integer value forms a very basic type of quantizer – a uniform one. 
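The rounding expressions above are easy to check numerically. A minimal Python sketch, assuming math.floor and math.ceil as the floor and ceiling functions (the names rpi, rni, ri and rte mirror the notation in the text and are not library routines):

import math

def rpi(x):
    # Nearest integer, ties toward +infinity: floor(x + 1/2).
    return math.floor(x + 0.5)

def rni(x):
    # Nearest integer, ties toward -infinity: ceil(x - 1/2).
    return math.ceil(x - 0.5)

def ri(x):
    # Nearest integer, ties away from zero: sgn(x) * floor(|x| + 1/2).
    return int(math.copysign(math.floor(abs(x) + 0.5), x))

def rte(x):
    # Nearest integer, ties to even, via the floor/ceiling expression above:
    # rpi(x) plus an integrality indicator for (2x - 1)/4, minus 1.
    return (math.floor(x + 0.5) + math.ceil((2*x - 1) / 4)
            - math.floor((2*x - 1) / 4) - 1)

for x in (0.5, 1.5, 2.5, -0.5, -1.5, 2.4, -2.4):
    print(x, rpi(x), rni(x), ri(x), rte(x))
# At x = 2.5: rpi gives 3, rni gives 2, ri gives 3, and rte gives 2 (the even neighbour).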
A typical (mid-tread) uniform quantizer with a quantization step size equal to some value Δ {\displaystyle \Delta } can be expressed as Q ( x ) = Δ ⋅ ⌊ x Δ + 1 2 ⌋ {\displaystyle Q(x)=\Delta \cdot \left\lfloor {\frac {x}{\Delta }}+{\frac {1}{2}}\right\rfloor } . === Number of digits === The number of digits in base b of a positive integer k is ⌊ log b ⁡ k ⌋ + 1 = ⌈ log b ⁡ ( k + 1 ) ⌉ . {\displaystyle \lfloor \log _{b}{k}\rfloor +1=\lceil \log _{b}{(k+1)}\rceil .} === Number of strings without repeated characters === The number of possible strings of arbitrary length that don't use any character twice is given by ( n ) 0 + ⋯ + ( n ) n = ⌊ e n ! ⌋ {\displaystyle (n)_{0}+\cdots +(n)_{n}=\lfloor en!\rfloor } where: n > 0 is the number of letters in the alphabet (e.g., 26 in English); the falling factorial ( n ) k = n ( n − 1 ) ⋯ ( n − k + 1 ) {\displaystyle (n)_{k}=n(n-1)\cdots (n-k+1)} denotes the number of strings of length k that don't use any character twice; n! denotes the factorial of n; and e = 2.718... is Euler's number. For n = 26, this comes out to 1096259850353149530222034277. === Factors of factorials === Let n be a positive integer and p a prime number. The exponent of the highest power of p that divides n! is given by a version of Legendre's formula ⌊ n p ⌋ + ⌊ n p 2 ⌋ + ⌊ n p 3 ⌋ + ⋯ = n − ∑ k a k p − 1 {\displaystyle \left\lfloor {\frac {n}{p}}\right\rfloor +\left\lfloor {\frac {n}{p^{2}}}\right\rfloor +\left\lfloor {\frac {n}{p^{3}}}\right\rfloor +\dots ={\frac {n-\sum _{k}a_{k}}{p-1}}} where n = ∑ k a k p k {\textstyle n=\sum _{k}a_{k}p^{k}} is the way of writing n in base p. This is a finite sum, since the floors are zero when p^k > n. === Beatty sequence === The Beatty sequence shows how every positive irrational number gives rise to a partition of the natural numbers into two sequences via the floor function. === Euler's constant (γ) === There are formulas for Euler's constant γ = 0.57721 56649 ... that involve the floor and ceiling, e.g. γ = ∫ 1 ∞ ( 1 ⌊ x ⌋ − 1 x ) d x , {\displaystyle \gamma =\int _{1}^{\infty }\left({1 \over \lfloor x\rfloor }-{1 \over x}\right)\,dx,} γ = lim n → ∞ 1 n ∑ k = 1 n ( ⌈ n k ⌉ − n k ) , {\displaystyle \gamma =\lim _{n\to \infty }{\frac {1}{n}}\sum _{k=1}^{n}\left(\left\lceil {\frac {n}{k}}\right\rceil -{\frac {n}{k}}\right),} and γ = ∑ k = 2 ∞ ( − 1 ) k ⌊ log 2 ⁡ k ⌋ k = 1 2 − 1 3 + 2 ( 1 4 − 1 5 + 1 6 − 1 7 ) + 3 ( 1 8 − ⋯ − 1 15 ) + ⋯ {\displaystyle \gamma =\sum _{k=2}^{\infty }(-1)^{k}{\frac {\left\lfloor \log _{2}k\right\rfloor }{k}}={\tfrac {1}{2}}-{\tfrac {1}{3}}+2\left({\tfrac {1}{4}}-{\tfrac {1}{5}}+{\tfrac {1}{6}}-{\tfrac {1}{7}}\right)+3\left({\tfrac {1}{8}}-\cdots -{\tfrac {1}{15}}\right)+\cdots } === Riemann zeta function (ζ) === The fractional part function also shows up in integral representations of the Riemann zeta function. It is straightforward to prove (using integration by parts) that if φ ( x ) {\displaystyle \varphi (x)} is any function with a continuous derivative in the closed interval [a, b], ∑ a < n ≤ b φ ( n ) = ∫ a b φ ( x ) d x + ∫ a b ( { x } − 1 2 ) φ ′ ( x ) d x + ( { a } − 1 2 ) φ ( a ) − ( { b } − 1 2 ) φ ( b ) .
{\displaystyle \sum _{a<n\leq b}\varphi (n)=\int _{a}^{b}\varphi (x)\,dx+\int _{a}^{b}\left(\{x\}-{\tfrac {1}{2}}\right)\varphi '(x)\,dx+\left(\{a\}-{\tfrac {1}{2}}\right)\varphi (a)-\left(\{b\}-{\tfrac {1}{2}}\right)\varphi (b).} Letting φ ( n ) = n − s {\displaystyle \varphi (n)=n^{-s}} for real part of s greater than 1 and letting a and b be integers, and letting b approach infinity gives ζ ( s ) = s ∫ 1 ∞ 1 2 − { x } x s + 1 d x + 1 s − 1 + 1 2 . {\displaystyle \zeta (s)=s\int _{1}^{\infty }{\frac {{\frac {1}{2}}-\{x\}}{x^{s+1}}}\,dx+{\frac {1}{s-1}}+{\frac {1}{2}}.} This formula is valid for all s with real part greater than −1, (except s = 1, where there is a pole) and combined with the Fourier expansion for {x} can be used to extend the zeta function to the entire complex plane and to prove its functional equation. For s = σ + it in the critical strip 0 < σ < 1, ζ ( s ) = s ∫ − ∞ ∞ e − σ ω ( ⌊ e ω ⌋ − e ω ) e − i t ω d ω . {\displaystyle \zeta (s)=s\int _{-\infty }^{\infty }e^{-\sigma \omega }(\lfloor e^{\omega }\rfloor -e^{\omega })e^{-it\omega }\,d\omega .} In 1947 van der Pol used this representation to construct an analogue computer for finding roots of the zeta function. === Formulas for prime numbers === The floor function appears in several formulas characterizing prime numbers. For example, since ⌊ n m ⌋ − ⌊ n − 1 m ⌋ = { 1 if m divides n 0 otherwise , {\displaystyle \left\lfloor {\frac {n}{m}}\right\rfloor -\left\lfloor {\frac {n-1}{m}}\right\rfloor ={\begin{cases}1&{\text{if }}m{\text{ divides }}n\\0&{\text{otherwise}},\end{cases}}} it follows that a positive integer n is a prime if and only if ∑ m = 1 ∞ ( ⌊ n m ⌋ − ⌊ n − 1 m ⌋ ) = 2. {\displaystyle \sum _{m=1}^{\infty }\left(\left\lfloor {\frac {n}{m}}\right\rfloor -\left\lfloor {\frac {n-1}{m}}\right\rfloor \right)=2.} One may also give formulas for producing the prime numbers. For example, let pn be the n-th prime, and for any integer r > 1, define the real number α by the sum α = ∑ m = 1 ∞ p m r − m 2 . {\displaystyle \alpha =\sum _{m=1}^{\infty }p_{m}r^{-m^{2}}.} Then p n = ⌊ r n 2 α ⌋ − r 2 n − 1 ⌊ r ( n − 1 ) 2 α ⌋ . {\displaystyle p_{n}=\left\lfloor r^{n^{2}}\alpha \right\rfloor -r^{2n-1}\left\lfloor r^{(n-1)^{2}}\alpha \right\rfloor .} A similar result is that there is a number θ = 1.3064... (Mills' constant) with the property that ⌊ θ 3 ⌋ , ⌊ θ 9 ⌋ , ⌊ θ 27 ⌋ , … {\displaystyle \left\lfloor \theta ^{3}\right\rfloor ,\left\lfloor \theta ^{9}\right\rfloor ,\left\lfloor \theta ^{27}\right\rfloor ,\dots } are all prime. There is also a number ω = 1.9287800... with the property that ⌊ 2 ω ⌋ , ⌊ 2 2 ω ⌋ , ⌊ 2 2 2 ω ⌋ , … {\displaystyle \left\lfloor 2^{\omega }\right\rfloor ,\left\lfloor 2^{2^{\omega }}\right\rfloor ,\left\lfloor 2^{2^{2^{\omega }}}\right\rfloor ,\dots } are all prime. Let π(x) be the number of primes less than or equal to x. It is a straightforward deduction from Wilson's theorem that π ( n ) = ∑ j = 2 n ⌊ ( j − 1 ) ! + 1 j − ⌊ ( j − 1 ) ! j ⌋ ⌋ . {\displaystyle \pi (n)=\sum _{j=2}^{n}{\Biggl \lfloor }{\frac {(j-1)!+1}{j}}-\left\lfloor {\frac {(j-1)!}{j}}\right\rfloor {\Biggr \rfloor }.} Also, if n ≥ 2, π ( n ) = ∑ j = 2 n ⌊ 1 ∑ k = 2 j ⌊ ⌊ j k ⌋ k j ⌋ ⌋ . {\displaystyle \pi (n)=\sum _{j=2}^{n}\left\lfloor {\frac {1}{\displaystyle \sum _{k=2}^{j}\left\lfloor \left\lfloor {\frac {j}{k}}\right\rfloor {\frac {k}{j}}\right\rfloor }}\right\rfloor .} None of the formulas in this section are of any practical use. 
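Impractical or not, these expressions are straightforward to test for small arguments. A minimal Python sketch (exact integer arithmetic; the helper names are illustrative), covering the extended mod operation, the divisor-sum primality criterion, and the Wilson's-theorem formula for π(n):

import math

def mod_real(x, y):
    # Extended mod from the text: x mod y = x - y*floor(x/y), for real x, y != 0.
    return x - y * math.floor(x / y)

def is_prime_by_floors(n):
    # n is prime iff sum_m (floor(n/m) - floor((n-1)/m)) == 2; terms with m > n
    # vanish, so the infinite sum may be truncated at m = n.
    return sum(n // m - (n - 1) // m for m in range(1, n + 1)) == 2

def pi_wilson(n):
    # pi(n) via the Wilson's-theorem sum; integer division implements the floors.
    total = 0
    for j in range(2, n + 1):
        f = math.factorial(j - 1)
        total += (f + 1) // j - f // j
    return total

assert mod_real(7.5, 2) == 1.5 and mod_real(-7.5, 2) == 0.5
assert [k for k in range(2, 30) if is_prime_by_floors(k)] == [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
assert pi_wilson(30) == 10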
=== Solved problems === Ramanujan submitted these problems to the Journal of the Indian Mathematical Society. If n is a positive integer, prove that ⌊ n 3 ⌋ + ⌊ n + 2 6 ⌋ + ⌊ n + 4 6 ⌋ = ⌊ n 2 ⌋ + ⌊ n + 3 6 ⌋ , {\displaystyle \left\lfloor {\tfrac {n}{3}}\right\rfloor +\left\lfloor {\tfrac {n+2}{6}}\right\rfloor +\left\lfloor {\tfrac {n+4}{6}}\right\rfloor =\left\lfloor {\tfrac {n}{2}}\right\rfloor +\left\lfloor {\tfrac {n+3}{6}}\right\rfloor ,} ⌊ 1 2 + n + 1 2 ⌋ = ⌊ 1 2 + n + 1 4 ⌋ , {\displaystyle \left\lfloor {\tfrac {1}{2}}+{\sqrt {n+{\tfrac {1}{2}}}}\right\rfloor =\left\lfloor {\tfrac {1}{2}}+{\sqrt {n+{\tfrac {1}{4}}}}\right\rfloor ,} ⌊ n + n + 1 ⌋ = ⌊ 4 n + 2 ⌋ . {\displaystyle \left\lfloor {\sqrt {n}}+{\sqrt {n+1}}\right\rfloor =\left\lfloor {\sqrt {4n+2}}\right\rfloor .} Some generalizations to the above floor function identities have been proven. === Unsolved problem === The study of Waring's problem has led to an unsolved problem: Are there any positive integers k ≥ 6 such that 3 k − 2 k ⌊ ( 3 2 ) k ⌋ > 2 k − ⌊ ( 3 2 ) k ⌋ − 2 ? {\displaystyle 3^{k}-2^{k}{\Bigl \lfloor }{\bigl (}{\tfrac {3}{2}}{\bigr )}^{k}{\Bigr \rfloor }>2^{k}-{\Bigl \lfloor }{\bigl (}{\tfrac {3}{2}}{\bigr )}^{k}{\Bigr \rfloor }-2\ ?} Mahler has proved there can only be a finite number of such k; none are known. == Computer implementations == In most programming languages, the simplest method to convert a floating point number to an integer does not do floor or ceiling, but truncation. The reason for this is historical, as the first machines used ones' complement and truncation was simpler to implement (floor is simpler in two's complement). FORTRAN was defined to require this behavior and thus almost all processors implement conversion this way. Some consider this to be an unfortunate historical design decision that has led to bugs handling negative offsets and graphics on the negative side of the origin. An arithmetic right-shift of a signed integer x {\displaystyle x} by n {\displaystyle n} is the same as ⌊ x 2 n ⌋ {\displaystyle \left\lfloor {\tfrac {x}{2^{n}}}\right\rfloor } . Division by a power of 2 is often written as a right-shift, not for optimization as might be assumed, but because the floor of negative results is required. Assuming such shifts are "premature optimization" and replacing them with division can break software. Many programming languages (including C, C++, C#, Java, Julia, PHP, R, and Python) provide standard functions for floor and ceiling, usually called floor and ceil, or less commonly ceiling. The language APL uses ⌊x for floor. The J Programming Language, a follow-on to APL that is designed to use standard keyboard symbols, uses <. for floor and >. for ceiling. ALGOL uses entier for floor. In Microsoft Excel the function INT rounds down rather than toward zero, while FLOOR rounds toward zero, the opposite of what "int" and "floor" do in other languages. Since 2010 FLOOR has been changed to error if the number is negative. In the OpenDocument file format, as used by OpenOffice.org, LibreOffice and others, INT and FLOOR both do floor, and FLOOR has a third argument to reproduce Excel's earlier behavior. == See also == Bracket (mathematics) Integer-valued function Step function Modulo operation == Citations == == References == J.W.S. Cassels (1957), An introduction to Diophantine approximation, Cambridge Tracts in Mathematics and Mathematical Physics, vol.
45, Cambridge University Press Crandall, Richard; Pomerance, Carl (2001), Prime Numbers: A Computational Perspective, New York: Springer, ISBN 0-387-94777-9 Graham, Ronald L.; Knuth, Donald E.; Patashnik, Oren (1994), Concrete Mathematics, Reading Ma.: Addison-Wesley, ISBN 0-201-55802-5 Hardy, G. H.; Wright, E. M. (1980), An Introduction to the Theory of Numbers (Fifth edition), Oxford: Oxford University Press, ISBN 978-0-19-853171-5 Nicholas J. Higham, Handbook of writing for the mathematical sciences, SIAM. ISBN 0-89871-420-6, p. 25 ISO/IEC. ISO/IEC 9899::1999(E): Programming languages — C (2nd ed), 1999; Section 6.3.1.4, p. 43. Iverson, Kenneth E. (1962), A Programming Language, Wiley Lemmermeyer, Franz (2000), Reciprocity Laws: from Euler to Eisenstein, Berlin: Springer, ISBN 3-540-66957-4 Ramanujan, Srinivasa (2000), Collected Papers, Providence RI: AMS / Chelsea, ISBN 978-0-8218-2076-6 Ribenboim, Paulo (1996), The New Book of Prime Number Records, New York: Springer, ISBN 0-387-94457-5 Michael Sullivan. Precalculus, 8th edition, p. 86 Titchmarsh, Edward Charles; Heath-Brown, David Rodney ("Roger") (1986), The Theory of the Riemann Zeta-function (2nd ed.), Oxford: Oxford U. P., ISBN 0-19-853369-1 == External links == "Floor function", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Štefan Porubský, "Integer rounding functions", Interactive Information Portal for Algorithmic Mathematics, Institute of Computer Science of the Czech Academy of Sciences, Prague, Czech Republic, retrieved 24 October 2008 Weisstein, Eric W. "Floor Function". MathWorld. Weisstein, Eric W. "Ceiling Function". MathWorld.
Wikipedia/Ceiling_function
In applied mathematics, the Joukowsky transform (sometimes transliterated Joukovsky, Joukowski or Zhukovsky) is a conformal map historically used to understand some principles of airfoil design. It is named after Nikolai Zhukovsky, who published it in 1910. The transform is z = ζ + 1 ζ , {\displaystyle z=\zeta +{\frac {1}{\zeta }},} where z = x + i y {\displaystyle z=x+iy} is a complex variable in the new space and ζ = χ + i η {\displaystyle \zeta =\chi +i\eta } is a complex variable in the original space. In aerodynamics, the transform is used to solve for the two-dimensional potential flow around a class of airfoils known as Joukowsky airfoils. A Joukowsky airfoil is generated in the complex plane ( z {\displaystyle z} -plane) by applying the Joukowsky transform to a circle in the ζ {\displaystyle \zeta } -plane. The coordinates of the centre of the circle are variables, and varying them modifies the shape of the resulting airfoil. The circle encloses the point ζ = − 1 {\displaystyle \zeta =-1} (where the derivative is zero) and intersects the point ζ = 1. {\displaystyle \zeta =1.} This can be achieved for any allowable centre position μ x + i μ y {\displaystyle \mu _{x}+i\mu _{y}} by varying the radius of the circle. Joukowsky airfoils have a cusp at their trailing edge. A closely related conformal mapping, the Kármán–Trefftz transform, generates the broader class of Kármán–Trefftz airfoils by controlling the trailing edge angle. When a trailing edge angle of zero is specified, the Kármán–Trefftz transform reduces to the Joukowsky transform. == General Joukowsky transform == The Joukowsky transform of any complex number ζ {\displaystyle \zeta } to z {\displaystyle z} is as follows: z = x + i y = ζ + 1 ζ = χ + i η + 1 χ + i η = χ + i η + χ − i η χ 2 + η 2 = χ ( 1 + 1 χ 2 + η 2 ) + i η ( 1 − 1 χ 2 + η 2 ) . {\displaystyle {\begin{aligned}z&=x+iy=\zeta +{\frac {1}{\zeta }}\\&=\chi +i\eta +{\frac {1}{\chi +i\eta }}\\[2pt]&=\chi +i\eta +{\frac {\chi -i\eta }{\chi ^{2}+\eta ^{2}}}\\[2pt]&=\chi \left(1+{\frac {1}{\chi ^{2}+\eta ^{2}}}\right)+i\eta \left(1-{\frac {1}{\chi ^{2}+\eta ^{2}}}\right).\end{aligned}}} So the real ( x {\displaystyle x} ) and imaginary ( y {\displaystyle y} ) components are: x = χ ( 1 + 1 χ 2 + η 2 ) , y = η ( 1 − 1 χ 2 + η 2 ) . {\displaystyle {\begin{aligned}x&=\chi \left(1+{\frac {1}{\chi ^{2}+\eta ^{2}}}\right),\\[2pt]y&=\eta \left(1-{\frac {1}{\chi ^{2}+\eta ^{2}}}\right).\end{aligned}}} === Sample Joukowsky airfoil === The transformation of all complex numbers on the unit circle is a special case. | ζ | = χ 2 + η 2 = 1 , {\displaystyle |\zeta |={\sqrt {\chi ^{2}+\eta ^{2}}}=1,} which gives χ 2 + η 2 = 1. {\displaystyle \chi ^{2}+\eta ^{2}=1.} So the real component becomes x = χ ( 1 + 1 ) = 2 χ {\textstyle x=\chi (1+1)=2\chi } and the imaginary component becomes y = η ( 1 − 1 ) = 0 {\textstyle y=\eta (1-1)=0} . Thus the complex unit circle maps to a flat plate on the real-number line from −2 to +2. Transformations from other circles make a wide range of airfoil shapes. == Velocity field and circulation for the Joukowsky airfoil == The solution to potential flow around a circular cylinder is analytic and well known. It is the superposition of uniform flow, a doublet, and a vortex. 
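Before writing that superposition out explicitly, the mapping itself is easy to explore numerically. The following is a minimal NumPy sketch that pushes a circle through the transform; the centre value mu is an illustrative choice, and the radius follows the construction above so that the circle passes through ζ = 1:

import numpy as np

def joukowsky(zeta):
    # z = zeta + 1/zeta
    return zeta + 1.0 / zeta

mu = -0.08 + 0.08j                 # centre of the generating circle (illustrative)
R = abs(1.0 - mu)                  # radius so that the circle passes through zeta = 1
theta = np.linspace(0.0, 2.0 * np.pi, 400)
circle = mu + R * np.exp(1j * theta)
airfoil = joukowsky(circle)        # airfoil.real, airfoil.imag trace the profile

# With mu = 0 (so R = 1) the image collapses to the flat plate from -2 to +2,
# recovering the special case computed above.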
The complex conjugate velocity W ~ = u ~ x − i u ~ y , {\displaystyle {\widetilde {W}}={\widetilde {u}}_{x}-i{\widetilde {u}}_{y},} around the circle in the ζ {\displaystyle \zeta } -plane is W ~ = V ∞ e − i α + i Γ 2 π ( ζ − μ ) − V ∞ R 2 e i α ( ζ − μ ) 2 , {\displaystyle {\widetilde {W}}=V_{\infty }e^{-i\alpha }+{\frac {i\Gamma }{2\pi (\zeta -\mu )}}-{\frac {V_{\infty }R^{2}e^{i\alpha }}{(\zeta -\mu )^{2}}},} where μ = μ x + i μ y {\displaystyle \mu =\mu _{x}+i\mu _{y}} is the complex coordinate of the centre of the circle, V ∞ {\displaystyle V_{\infty }} is the freestream velocity of the fluid, α {\displaystyle \alpha } is the angle of attack of the airfoil with respect to the freestream flow, R {\displaystyle R} is the radius of the circle, calculated using R = ( 1 − μ x ) 2 + μ y 2 {\textstyle R={\sqrt {\left(1-\mu _{x}\right)^{2}+\mu _{y}^{2}}}} , Γ {\displaystyle \Gamma } is the circulation, found using the Kutta condition, which reduces in this case to Γ = 4 π V ∞ R sin ⁡ ( α + sin − 1 ⁡ μ y R ) . {\displaystyle \Gamma =4\pi V_{\infty }R\sin \left(\alpha +\sin ^{-1}{\frac {\mu _{y}}{R}}\right).} The complex velocity W {\displaystyle W} around the airfoil in the z {\displaystyle z} -plane is, according to the rules of conformal mapping and using the Joukowsky transformation, W = W ~ d z d ζ = W ~ 1 − 1 ζ 2 . {\displaystyle W={\frac {\widetilde {W}}{\frac {dz}{d\zeta }}}={\frac {\widetilde {W}}{1-{\frac {1}{\zeta ^{2}}}}}.} Here W = u x − i u y , {\displaystyle W=u_{x}-iu_{y},} with u x {\displaystyle u_{x}} and u y {\displaystyle u_{y}} the velocity components in the x {\displaystyle x} and y {\displaystyle y} directions respectively ( z = x + i y , {\displaystyle z=x+iy,} with x {\displaystyle x} and y {\displaystyle y} real-valued). From this velocity, other properties of interest of the flow, such as the coefficient of pressure and lift per unit of span can be calculated. == Kármán–Trefftz transform == The Kármán–Trefftz transform is a conformal map closely related to the Joukowsky transform. While a Joukowsky airfoil has a cusped trailing edge, a Kármán–Trefftz airfoil—which is the result of the transform of a circle in the ζ {\displaystyle \zeta } -plane to the physical z {\displaystyle z} -plane, analogous to the definition of the Joukowsky airfoil—has a non-zero angle at the trailing edge, between the upper and lower airfoil surface. The Kármán–Trefftz transform therefore requires an additional parameter: the trailing-edge angle α . {\displaystyle \alpha .} This transform is z = n b ( 1 + b ζ ) n + ( 1 − b ζ ) n ( 1 + b ζ ) n − ( 1 − b ζ ) n , {\displaystyle z=nb{\frac {\left(1+{\frac {b}{\zeta }}\right)^{n}+\left(1-{\frac {b}{\zeta }}\right)^{n}}{\left(1+{\frac {b}{\zeta }}\right)^{n}-\left(1-{\frac {b}{\zeta }}\right)^{n}}},} (equation A) where b {\displaystyle b} is a real constant that determines the positions where d z / d ζ = 0 {\displaystyle dz/d\zeta =0} , and n {\displaystyle n} is slightly smaller than 2. The angle α {\displaystyle \alpha } between the tangents of the upper and lower airfoil surfaces at the trailing edge is related to n {\displaystyle n} as α = 2 π − n π , n = 2 − α π . {\displaystyle \alpha =2\pi -n\pi ,\quad n=2-{\frac {\alpha }{\pi }}.} The derivative d z / d ζ {\displaystyle dz/d\zeta } , required to compute the velocity field, is d z d ζ = 4 n 2 ζ 2 − 1 ( 1 + 1 ζ ) n ( 1 − 1 ζ ) n [ ( 1 + 1 ζ ) n − ( 1 − 1 ζ ) n ] 2 .
{\displaystyle {\frac {dz}{d\zeta }}={\frac {4n^{2}}{\zeta ^{2}-1}}{\frac {\left(1+{\frac {1}{\zeta }}\right)^{n}\left(1-{\frac {1}{\zeta }}\right)^{n}}{\left[\left(1+{\frac {1}{\zeta }}\right)^{n}-\left(1-{\frac {1}{\zeta }}\right)^{n}\right]^{2}}}.} === Background === First, add and subtract 2 from the Joukowsky transform, as given above: z + 2 = ζ + 2 + 1 ζ = 1 ζ ( ζ + 1 ) 2 , z − 2 = ζ − 2 + 1 ζ = 1 ζ ( ζ − 1 ) 2 . {\displaystyle {\begin{aligned}z+2&=\zeta +2+{\frac {1}{\zeta }}={\frac {1}{\zeta }}(\zeta +1)^{2},\\[3pt]z-2&=\zeta -2+{\frac {1}{\zeta }}={\frac {1}{\zeta }}(\zeta -1)^{2}.\end{aligned}}} Dividing the left and right hand sides gives z − 2 z + 2 = ( ζ − 1 ζ + 1 ) 2 . {\displaystyle {\frac {z-2}{z+2}}=\left({\frac {\zeta -1}{\zeta +1}}\right)^{2}.} The right hand side contains (as a factor) the simple second-power law from potential flow theory, applied at the trailing edge near ζ = + 1. {\displaystyle \zeta =+1.} From conformal mapping theory, this quadratic map is known to change a half plane in the ζ {\displaystyle \zeta } -space into potential flow around a semi-infinite straight line. Further, values of the power less than 2 will result in flow around a finite angle. So, by changing the power in the Joukowsky transform to a value slightly less than 2, the result is a finite angle instead of a cusp. Replacing 2 by n {\displaystyle n} in the previous equation gives z − n z + n = ( ζ − 1 ζ + 1 ) n , {\displaystyle {\frac {z-n}{z+n}}=\left({\frac {\zeta -1}{\zeta +1}}\right)^{n},} which is the Kármán–Trefftz transform. Solving for z {\displaystyle z} gives it in the form of equation A. == Symmetrical Joukowsky airfoils == In 1943 Hsue-shen Tsien published a transform of a circle of radius a {\displaystyle a} into a symmetrical airfoil that depends on parameter ϵ {\displaystyle \epsilon } and angle of inclination α {\displaystyle \alpha } : z = e i α ( ζ − ϵ + 1 ζ − ϵ + 2 ϵ 2 a + ϵ ) . {\displaystyle z=e^{i\alpha }\left(\zeta -\epsilon +{\frac {1}{\zeta -\epsilon }}+{\frac {2\epsilon ^{2}}{a+\epsilon }}\right).} The parameter ϵ {\displaystyle \epsilon } yields a flat plate when zero, and a circle when infinite; thus it corresponds to the thickness of the airfoil. Furthermore the radius of the cylinder a = 1 + ϵ {\displaystyle a=1+\epsilon } . == Notes == == References == == External links == Joukowski Transform NASA Applet Joukowsky Transform Interactive WebApp
Wikipedia/Joukowsky_transform
In mathematics, the finite Fourier transform may refer to: another name for the discrete-time Fourier transform (DTFT) of a finite-length series (e.g., F. J. Harris (pp. 52–53) describes the finite Fourier transform as a "continuous periodic function" and the discrete Fourier transform (DFT) as "a set of samples of the finite Fourier transform"; in actual implementation these are not two separate steps, as the DFT replaces the DTFT, so J. Cooley (pp. 77–78) describes the implementation as the discrete finite Fourier transform); another name for the Fourier series coefficients; or another name for one snapshot of a short-time Fourier transform. == See also == Fourier transform == Notes == == References ==
Wikipedia/Finite_Fourier_transform_(disambiguation)
In signal processing, overlap–save is the traditional name for an efficient way to evaluate the discrete convolution between a very long signal x [ n ] {\displaystyle x[n]} and a finite impulse response (FIR) filter h [ n ] {\displaystyle h[n]} : where h[m] = 0 for m outside the region [1, M]. This article uses common abstract notations, such as y ( t ) = x ( t ) ∗ h ( t ) , {\textstyle y(t)=x(t)*h(t),} or y ( t ) = H { x ( t ) } , {\textstyle y(t)={\mathcal {H}}\{x(t)\},} in which it is understood that the functions should be thought of in their totality, rather than at specific instants t {\textstyle t} (see Convolution#Notation). The concept is to compute short segments of y[n] of an arbitrary length L, and concatenate the segments together. That requires longer input segments that overlap the next input segment. The overlapped data gets "saved" and used a second time. First we describe that process with just conventional convolution for each output segment. Then we describe how to replace that convolution with a more efficient method. Consider a segment that begins at n = kL + M, for any integer k, and define: x k [ n ] ≜ { x [ n + k L ] , 1 ≤ n ≤ L + M − 1 0 , otherwise . {\displaystyle x_{k}[n]\ \triangleq {\begin{cases}x[n+kL],&1\leq n\leq L+M-1\\0,&{\textrm {otherwise}}.\end{cases}}} y k [ n ] ≜ x k [ n ] ∗ h [ n ] = ∑ m = 1 M h [ m ] ⋅ x k [ n − m ] . {\displaystyle y_{k}[n]\ \triangleq \ x_{k}[n]*h[n]=\sum _{m=1}^{M}h[m]\cdot x_{k}[n-m].} Then, for k L + M + 1 ≤ n ≤ k L + L + M {\displaystyle kL+M+1\leq n\leq kL+L+M} , and equivalently M + 1 ≤ n − k L ≤ L + M {\displaystyle M+1\leq n-kL\leq L+M} , we can write: y [ n ] = ∑ m = 1 M h [ m ] ⋅ x k [ n − k L − m ] ≜ y k [ n − k L ] . {\displaystyle y[n]=\sum _{m=1}^{M}h[m]\cdot x_{k}[n-kL-m]\ \ \triangleq \ \ y_{k}[n-kL].} With the substitution j = n − k L {\displaystyle j=n-kL} , the task is reduced to computing y k [ j ] {\displaystyle y_{k}[j]} for M + 1 ≤ j ≤ L + M {\displaystyle M+1\leq j\leq L+M} . These steps are illustrated in the first 3 traces of Figure 1, except that the desired portion of the output (third trace) corresponds to 1 ≤ j ≤ L. If we periodically extend xk[n] with period N ≥ L + M − 1, according to: x k , N [ n ] ≜ ∑ ℓ = − ∞ ∞ x k [ n − ℓ N ] , {\displaystyle x_{k,N}[n]\ \triangleq \ \sum _{\ell =-\infty }^{\infty }x_{k}[n-\ell N],} the convolutions ( x k , N ) ∗ h {\displaystyle (x_{k,N})*h\,} and x k ∗ h {\displaystyle x_{k}*h\,} are equivalent in the region M + 1 ≤ n ≤ L + M {\displaystyle M+1\leq n\leq L+M} . It is therefore sufficient to compute the N-point circular (or cyclic) convolution of x k [ n ] {\displaystyle x_{k}[n]\,} with h [ n ] {\displaystyle h[n]\,} in the region [1, N]. The subregion [M + 1, L + M] is appended to the output stream, and the other values are discarded. The advantage is that the circular convolution can be computed more efficiently than linear convolution, according to the circular convolution theorem: where: DFTN and IDFTN refer to the Discrete Fourier transform and its inverse, evaluated over N discrete points, and L is customarily chosen such that N = L+M-1 is an integer power-of-2, and the transforms are implemented with the FFT algorithm, for efficiency. The leading and trailing edge-effects of circular convolution are overlapped and added, and subsequently discarded. 
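A minimal NumPy sketch of this procedure is given below (0-based indexing, so the saved region [M + 1, L + M] of the text becomes indices M − 1 through N − 1 of each block); the pseudocode in the next section states the same algorithm more compactly. The function name and the default choice of N are illustrative:

import numpy as np

def overlap_save(x, h, N=None):
    M = len(h)
    if N is None:
        N = 8 * 2 ** int(np.ceil(np.log2(M)))  # transform size, N >= L + M - 1
    L = N - (M - 1)                            # new output samples per block
    H = np.fft.fft(h, N)                       # DFT of the filter, computed once
    # Prepend M-1 zeros so the first block has no missing history, and pad the
    # tail so the final block is full length.
    xp = np.concatenate([np.zeros(M - 1), x, np.zeros(N)])
    y = np.zeros(len(x) + M - 1)
    pos = 0
    while pos < len(y):
        block = xp[pos:pos + N]                       # overlapping input block
        yk = np.fft.ifft(np.fft.fft(block) * H).real  # N-point circular convolution
        take = min(L, len(y) - pos)
        y[pos:pos + take] = yk[M - 1:M - 1 + take]    # keep only the valid region
        pos += L
    return y

rng = np.random.default_rng(1)
x, h = rng.standard_normal(10000), rng.standard_normal(201)
assert np.allclose(overlap_save(x, h), np.convolve(x, h))

The first M − 1 samples of each inverse transform are corrupted by circular wrap-around and are the ones "discarded"; input samples in the overlap are read by two consecutive blocks, but each output sample is computed exactly once.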
== Pseudocode == (Overlap-save algorithm for linear convolution) h = FIR_impulse_response M = length(h) overlap = M − 1 N = 8 × overlap (see next section for a better choice) step_size = N − overlap H = DFT(h, N) position = 0 while position + N ≤ length(x) yt = IDFT(DFT(x(position+(1:N))) × H) y(position+(1:step_size)) = yt(M : N) (discard M−1 y-values) position = position + step_size end == Efficiency considerations == When the DFT and IDFT are implemented by the FFT algorithm, the pseudocode above requires about N (log2(N) + 1) complex multiplications for the FFT, product of arrays, and IFFT. Each iteration produces N-M+1 output samples, so the number of complex multiplications per output sample is about: For example, when M = 201 {\displaystyle M=201} and N = 1024 , {\displaystyle N=1024,} Eq.3 equals 13.67 , {\displaystyle 13.67,} whereas direct evaluation of Eq.1 would require up to 201 {\displaystyle 201} complex multiplications per output sample, the worst case being when both x {\displaystyle x} and h {\displaystyle h} are complex-valued. Also note that for any given M , {\displaystyle M,} Eq.3 has a minimum with respect to N . {\displaystyle N.} Figure 2 is a graph of the values of N {\displaystyle N} that minimize Eq.3 for a range of filter lengths ( M {\displaystyle M} ). Instead of Eq.1, we can also consider applying Eq.2 to a long sequence of length N x {\displaystyle N_{x}} samples. The total number of complex multiplications would be: N x ⋅ ( log 2 ⁡ ( N x ) + 1 ) . {\displaystyle N_{x}\cdot (\log _{2}(N_{x})+1).} Comparatively, the number of complex multiplications required by the pseudocode algorithm is: N x ⋅ ( log 2 ⁡ ( N ) + 1 ) ⋅ N N − M + 1 . {\displaystyle N_{x}\cdot (\log _{2}(N)+1)\cdot {\frac {N}{N-M+1}}.} Hence the cost of the overlap–save method scales almost as O ( N x log 2 ⁡ N ) {\displaystyle O\left(N_{x}\log _{2}N\right)} while the cost of a single, large circular convolution is almost O ( N x log 2 ⁡ N x ) {\displaystyle O\left(N_{x}\log _{2}N_{x}\right)} . == Overlap–discard == Overlap–discard and Overlap–scrap are less commonly used labels for the same method described here. However, these labels are actually better (than overlap–save) to distinguish from overlap–add, because both methods "save", but only one discards. "Save" merely refers to the fact that M − 1 input (or output) samples from segment k are needed to process segment k + 1. === Extending overlap–save === The overlap–save algorithm can be extended to include other common operations of a system: additional IFFT channels can be processed more cheaply than the first by reusing the forward FFT sampling rates can be changed by using different sized forward and inverse FFTs frequency translation (mixing) can be accomplished by rearranging frequency bins == See also == Overlap–add method Circular convolution#Example == Notes == == References == == External links == Dr. Deepa Kundur, Overlap Add and Overlap Save, University of Toronto
Wikipedia/Overlap-save_method
A time-variant system is a system whose output response depends on the moment of observation as well as the moment of input signal application. In other words, a time delay or time advance of the input not only shifts the output signal in time but also changes other parameters and behavior. Time-variant systems respond differently to the same input at different times. The opposite is true for time-invariant systems (TIV). == Overview == There are many well developed techniques for dealing with the response of linear time invariant systems, such as Laplace and Fourier transforms. However, these techniques are not strictly valid for time-varying systems. A system undergoing slow time variation in comparison to its time constants can usually be considered to be time invariant: such systems are close to time invariant on a small scale. An example of this is the aging and wear of electronic components, which happens on a scale of years, and thus does not result in any behaviour qualitatively different from that observed in a time invariant system: day-to-day, they are effectively time invariant, though year to year, the parameters may change. Other linear time variant systems may behave more like nonlinear systems, if the system changes quickly – significantly differing between measurements. The following things can be said about a time-variant system: It has explicit dependence on time. It does not have an impulse response in the normal sense. The system can be characterized by an impulse response, except that the impulse response must be known at each and every time instant. It is not stationary in the sense of constancy of the signal's distributional frequency. This means that the parameters which govern the signal's process exhibit variation with the passage of time. See Stationarity (statistics) for in-depth theoretics regarding this property. == Linear time-variant systems == Linear time-variant (LTV) systems are systems whose parameters vary with time according to previously specified laws. Mathematically, there is a well-defined dependence of the system on time and on the input parameters that change over time. y ( t ) = f ( x ( t ) , t ) {\displaystyle y(t)=f(x(t),t)} In order to solve time-variant systems, the algebraic methods consider the initial conditions of the system, i.e. whether the system is a zero-input or a non-zero-input system. == Examples of time-variant systems == The following time-varying systems cannot be modelled by assuming that they are time invariant: The Earth's thermodynamic response to incoming Solar irradiance varies with time due to changes in the Earth's albedo and the presence of greenhouse gases in the atmosphere. The discrete wavelet transform, often used in modern signal processing, is time variant because it makes use of the decimation operation. == See also == Control system Control theory System analysis Time-invariant system == References ==
Wikipedia/Function_of_time
In mathematics, the Hermite polynomials are a classical orthogonal polynomial sequence. The polynomials arise in: signal processing as Hermitian wavelets for wavelet transform analysis probability, such as the Edgeworth series, as well as in connection with Brownian motion; combinatorics, as an example of an Appell sequence, obeying the umbral calculus; numerical analysis as Gaussian quadrature; physics, where they give rise to the eigenstates of the quantum harmonic oscillator; and they also occur in some cases of the heat equation (when the term x u x {\displaystyle {\begin{aligned}xu_{x}\end{aligned}}} is present); systems theory in connection with nonlinear operations on Gaussian noise. random matrix theory in Gaussian ensembles. Hermite polynomials were defined by Pierre-Simon Laplace in 1810, though in scarcely recognizable form, and studied in detail by Pafnuty Chebyshev in 1859. Chebyshev's work was overlooked, and they were named later after Charles Hermite, who wrote on the polynomials in 1864, describing them as new. They were consequently not new, although Hermite was the first to define the multidimensional polynomials. == Definition == Like the other classical orthogonal polynomials, the Hermite polynomials can be defined from several different starting points. Noting from the outset that there are two different standardizations in common use, one convenient method is as follows: The "probabilist's Hermite polynomials" are given by He n ⁡ ( x ) = ( − 1 ) n e x 2 2 d n d x n e − x 2 2 , {\displaystyle \operatorname {He} _{n}(x)=(-1)^{n}e^{\frac {x^{2}}{2}}{\frac {d^{n}}{dx^{n}}}e^{-{\frac {x^{2}}{2}}},} while the "physicist's Hermite polynomials" are given by H n ( x ) = ( − 1 ) n e x 2 d n d x n e − x 2 . {\displaystyle H_{n}(x)=(-1)^{n}e^{x^{2}}{\frac {d^{n}}{dx^{n}}}e^{-x^{2}}.} These equations have the form of a Rodrigues' formula and can also be written as, He n ⁡ ( x ) = ( x − d d x ) n ⋅ 1 , H n ( x ) = ( 2 x − d d x ) n ⋅ 1. {\displaystyle \operatorname {He} _{n}(x)=\left(x-{\frac {d}{dx}}\right)^{n}\cdot 1,\quad H_{n}(x)=\left(2x-{\frac {d}{dx}}\right)^{n}\cdot 1.} The two definitions are not exactly identical; each is a rescaling of the other: H n ( x ) = 2 n 2 He n ⁡ ( 2 x ) , He n ⁡ ( x ) = 2 − n 2 H n ( x 2 ) . {\displaystyle H_{n}(x)=2^{\frac {n}{2}}\operatorname {He} _{n}\left({\sqrt {2}}\,x\right),\quad \operatorname {He} _{n}(x)=2^{-{\frac {n}{2}}}H_{n}\left({\frac {x}{\sqrt {2}}}\right).} These are Hermite polynomial sequences of different variances; see the material on variances below. The notation He and H is that used in the standard references. The polynomials Hen are sometimes denoted by Hn, especially in probability theory, because 1 2 π e − x 2 2 {\displaystyle {\frac {1}{\sqrt {2\pi }}}e^{-{\frac {x^{2}}{2}}}} is the probability density function for the normal distribution with expected value 0 and standard deviation 1. The first eleven probabilist's Hermite polynomials are: He 0 ⁡ ( x ) = 1 , He 1 ⁡ ( x ) = x , He 2 ⁡ ( x ) = x 2 − 1 , He 3 ⁡ ( x ) = x 3 − 3 x , He 4 ⁡ ( x ) = x 4 − 6 x 2 + 3 , He 5 ⁡ ( x ) = x 5 − 10 x 3 + 15 x , He 6 ⁡ ( x ) = x 6 − 15 x 4 + 45 x 2 − 15 , He 7 ⁡ ( x ) = x 7 − 21 x 5 + 105 x 3 − 105 x , He 8 ⁡ ( x ) = x 8 − 28 x 6 + 210 x 4 − 420 x 2 + 105 , He 9 ⁡ ( x ) = x 9 − 36 x 7 + 378 x 5 − 1260 x 3 + 945 x , He 10 ⁡ ( x ) = x 10 − 45 x 8 + 630 x 6 − 3150 x 4 + 4725 x 2 − 945. 
{\displaystyle {\begin{aligned}\operatorname {He} _{0}(x)&=1,\\\operatorname {He} _{1}(x)&=x,\\\operatorname {He} _{2}(x)&=x^{2}-1,\\\operatorname {He} _{3}(x)&=x^{3}-3x,\\\operatorname {He} _{4}(x)&=x^{4}-6x^{2}+3,\\\operatorname {He} _{5}(x)&=x^{5}-10x^{3}+15x,\\\operatorname {He} _{6}(x)&=x^{6}-15x^{4}+45x^{2}-15,\\\operatorname {He} _{7}(x)&=x^{7}-21x^{5}+105x^{3}-105x,\\\operatorname {He} _{8}(x)&=x^{8}-28x^{6}+210x^{4}-420x^{2}+105,\\\operatorname {He} _{9}(x)&=x^{9}-36x^{7}+378x^{5}-1260x^{3}+945x,\\\operatorname {He} _{10}(x)&=x^{10}-45x^{8}+630x^{6}-3150x^{4}+4725x^{2}-945.\end{aligned}}} The first eleven physicist's Hermite polynomials are: H 0 ( x ) = 1 , H 1 ( x ) = 2 x , H 2 ( x ) = 4 x 2 − 2 , H 3 ( x ) = 8 x 3 − 12 x , H 4 ( x ) = 16 x 4 − 48 x 2 + 12 , H 5 ( x ) = 32 x 5 − 160 x 3 + 120 x , H 6 ( x ) = 64 x 6 − 480 x 4 + 720 x 2 − 120 , H 7 ( x ) = 128 x 7 − 1344 x 5 + 3360 x 3 − 1680 x , H 8 ( x ) = 256 x 8 − 3584 x 6 + 13440 x 4 − 13440 x 2 + 1680 , H 9 ( x ) = 512 x 9 − 9216 x 7 + 48384 x 5 − 80640 x 3 + 30240 x , H 10 ( x ) = 1024 x 10 − 23040 x 8 + 161280 x 6 − 403200 x 4 + 302400 x 2 − 30240. {\displaystyle {\begin{aligned}H_{0}(x)&=1,\\H_{1}(x)&=2x,\\H_{2}(x)&=4x^{2}-2,\\H_{3}(x)&=8x^{3}-12x,\\H_{4}(x)&=16x^{4}-48x^{2}+12,\\H_{5}(x)&=32x^{5}-160x^{3}+120x,\\H_{6}(x)&=64x^{6}-480x^{4}+720x^{2}-120,\\H_{7}(x)&=128x^{7}-1344x^{5}+3360x^{3}-1680x,\\H_{8}(x)&=256x^{8}-3584x^{6}+13440x^{4}-13440x^{2}+1680,\\H_{9}(x)&=512x^{9}-9216x^{7}+48384x^{5}-80640x^{3}+30240x,\\H_{10}(x)&=1024x^{10}-23040x^{8}+161280x^{6}-403200x^{4}+302400x^{2}-30240.\end{aligned}}} == Properties == The nth-order Hermite polynomial is a polynomial of degree n. The probabilist's version Hen has leading coefficient 1, while the physicist's version Hn has leading coefficient 2n. === Symmetry === From the Rodrigues formulae given above, we can see that Hn(x) and Hen(x) are even or odd functions depending on n: H n ( − x ) = ( − 1 ) n H n ( x ) , He n ⁡ ( − x ) = ( − 1 ) n He n ⁡ ( x ) . {\displaystyle H_{n}(-x)=(-1)^{n}H_{n}(x),\quad \operatorname {He} _{n}(-x)=(-1)^{n}\operatorname {He} _{n}(x).} === Orthogonality === Hn(x) and Hen(x) are nth-degree polynomials for n = 0, 1, 2, 3,.... These polynomials are orthogonal with respect to the weight function (measure) w ( x ) = e − x 2 2 ( for He ) {\displaystyle w(x)=e^{-{\frac {x^{2}}{2}}}\quad ({\text{for }}\operatorname {He} )} or w ( x ) = e − x 2 ( for H ) , {\displaystyle w(x)=e^{-x^{2}}\quad ({\text{for }}H),} i.e., we have ∫ − ∞ ∞ H m ( x ) H n ( x ) w ( x ) d x = 0 for all m ≠ n . {\displaystyle \int _{-\infty }^{\infty }H_{m}(x)H_{n}(x)\,w(x)\,dx=0\quad {\text{for all }}m\neq n.} Furthermore, ∫ − ∞ ∞ H m ( x ) H n ( x ) e − x 2 d x = π 2 n n ! δ n m , {\displaystyle \int _{-\infty }^{\infty }H_{m}(x)H_{n}(x)\,e^{-x^{2}}\,dx={\sqrt {\pi }}\,2^{n}n!\,\delta _{nm},} and ∫ − ∞ ∞ He m ⁡ ( x ) He n ⁡ ( x ) e − x 2 2 d x = 2 π n ! δ n m , {\displaystyle \int _{-\infty }^{\infty }\operatorname {He} _{m}(x)\operatorname {He} _{n}(x)\,e^{-{\frac {x^{2}}{2}}}\,dx={\sqrt {2\pi }}\,n!\,\delta _{nm},} where δ n m {\displaystyle \delta _{nm}} is the Kronecker delta. The probabilist polynomials are thus orthogonal with respect to the standard normal probability density function. 
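These orthogonality relations can be verified numerically with Gauss–Hermite quadrature, whose weight function is exactly exp(−x²). A minimal sketch using NumPy's numpy.polynomial.hermite module, which implements the physicist's convention (the sketch assumes the documented behaviour of hermgauss and hermval):

import numpy as np
from numpy.polynomial.hermite import hermgauss, hermval
from math import factorial, pi, sqrt

xs, ws = hermgauss(30)   # nodes/weights: exact for integrands of degree <= 59

def H(n, x):
    # Evaluate the physicist's Hermite polynomial H_n at x.
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermval(x, c)

for m in range(6):
    for n in range(6):
        integral = np.sum(ws * H(m, xs) * H(n, xs))   # approx. of the weighted integral
        expected = sqrt(pi) * 2**n * factorial(n) if m == n else 0.0
        assert abs(integral - expected) <= 1e-8 * max(1.0, abs(expected))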
=== Completeness === The Hermite polynomials (probabilist's or physicist's) form an orthogonal basis of the Hilbert space of functions satisfying ∫ − ∞ ∞ | f ( x ) | 2 w ( x ) d x < ∞ , {\displaystyle \int _{-\infty }^{\infty }{\bigl |}f(x){\bigr |}^{2}\,w(x)\,dx<\infty ,} in which the inner product is given by the integral ⟨ f , g ⟩ = ∫ − ∞ ∞ f ( x ) g ( x ) ¯ w ( x ) d x {\displaystyle \langle f,g\rangle =\int _{-\infty }^{\infty }f(x){\overline {g(x)}}\,w(x)\,dx} including the Gaussian weight function w(x) defined in the preceding section. An orthogonal basis for L2(R, w(x) dx) is a complete orthogonal system. For an orthogonal system, completeness is equivalent to the fact that the 0 function is the only function f ∈ L2(R, w(x) dx) orthogonal to all functions in the system. Since the linear span of Hermite polynomials is the space of all polynomials, one has to show (in physicist case) that if f satisfies ∫ − ∞ ∞ f ( x ) x n e − x 2 d x = 0 {\displaystyle \int _{-\infty }^{\infty }f(x)x^{n}e^{-x^{2}}\,dx=0} for every n ≥ 0, then f = 0. One possible way to do this is to appreciate that the entire function F ( z ) = ∫ − ∞ ∞ f ( x ) e z x − x 2 d x = ∑ n = 0 ∞ z n n ! ∫ f ( x ) x n e − x 2 d x = 0 {\displaystyle F(z)=\int _{-\infty }^{\infty }f(x)e^{zx-x^{2}}\,dx=\sum _{n=0}^{\infty }{\frac {z^{n}}{n!}}\int f(x)x^{n}e^{-x^{2}}\,dx=0} vanishes identically. The fact then that F(it) = 0 for every real t means that the Fourier transform of f(x)e−x2 is 0, hence f is 0 almost everywhere. Variants of the above completeness proof apply to other weights with exponential decay. In the Hermite case, it is also possible to prove an explicit identity that implies completeness (see section on the Completeness relation below). An equivalent formulation of the fact that Hermite polynomials are an orthogonal basis for L2(R, w(x) dx) consists in introducing Hermite functions (see below), and in saying that the Hermite functions are an orthonormal basis for L2(R). === Hermite's differential equation === The probabilist's Hermite polynomials are solutions of the differential equation ( e − 1 2 x 2 u ′ ) ′ + λ e − 1 2 x 2 u = 0 , {\displaystyle \left(e^{-{\frac {1}{2}}x^{2}}u'\right)'+\lambda e^{-{\frac {1}{2}}x^{2}}u=0,} where λ is a constant. Imposing the boundary condition that u should be polynomially bounded at infinity, the equation has solutions only if λ is a non-negative integer, and the solution is uniquely given by u ( x ) = C 1 He λ ⁡ ( x ) {\displaystyle u(x)=C_{1}\operatorname {He} _{\lambda }(x)} , where C 1 {\displaystyle C_{1}} denotes a constant. Rewriting the differential equation as an eigenvalue problem L [ u ] = u ″ − x u ′ = − λ u , {\displaystyle L[u]=u''-xu'=-\lambda u,} the Hermite polynomials He λ ⁡ ( x ) {\displaystyle \operatorname {He} _{\lambda }(x)} may be understood as eigenfunctions of the differential operator L [ u ] {\displaystyle L[u]} . This eigenvalue problem is called the Hermite equation, although the term is also used for the closely related equation u ″ − 2 x u ′ = − 2 λ u . {\displaystyle u''-2xu'=-2\lambda u.} whose solution is uniquely given in terms of physicist's Hermite polynomials in the form u ( x ) = C 1 H λ ( x ) {\displaystyle u(x)=C_{1}H_{\lambda }(x)} , where C 1 {\displaystyle C_{1}} denotes a constant, after imposing the boundary condition that u should be polynomially bounded at infinity. 
The general solutions to the above second-order differential equations are in fact linear combinations of both Hermite polynomials and confluent hypergeometric functions of the first kind. For example, for the physicist's Hermite equation u ″ − 2 x u ′ + 2 λ u = 0 , {\displaystyle u''-2xu'+2\lambda u=0,} the general solution takes the form u ( x ) = C 1 H λ ( x ) + C 2 h λ ( x ) , {\displaystyle u(x)=C_{1}H_{\lambda }(x)+C_{2}h_{\lambda }(x),} where C 1 {\displaystyle C_{1}} and C 2 {\displaystyle C_{2}} are constants, H λ ( x ) {\displaystyle H_{\lambda }(x)} are physicist's Hermite polynomials (of the first kind), and h λ ( x ) {\displaystyle h_{\lambda }(x)} are physicist's Hermite functions (of the second kind). The latter functions are compactly represented as h λ ( x ) = 1 F 1 ( − λ 2 ; 1 2 ; x 2 ) {\displaystyle h_{\lambda }(x)={}_{1}F_{1}(-{\tfrac {\lambda }{2}};{\tfrac {1}{2}};x^{2})} where 1 F 1 ( a ; b ; z ) {\displaystyle {}_{1}F_{1}(a;b;z)} are confluent hypergeometric functions of the first kind. The conventional Hermite polynomials may also be expressed in terms of confluent hypergeometric functions; see below. With more general boundary conditions, the Hermite polynomials can be generalized to obtain more general analytic functions for complex-valued λ. An explicit formula for the Hermite polynomials in terms of contour integrals (Courant & Hilbert 1989) is also possible. === Recurrence relation === The sequence of probabilist's Hermite polynomials also satisfies the recurrence relation He n + 1 ⁡ ( x ) = x He n ⁡ ( x ) − He n ′ ⁡ ( x ) . {\displaystyle \operatorname {He} _{n+1}(x)=x\operatorname {He} _{n}(x)-\operatorname {He} _{n}'(x).} Individual coefficients are related by the following recursion formula: a n + 1 , k = { − ( k + 1 ) a n , k + 1 k = 0 , a n , k − 1 − ( k + 1 ) a n , k + 1 k > 0 , {\displaystyle a_{n+1,k}={\begin{cases}-(k+1)a_{n,k+1}&k=0,\\a_{n,k-1}-(k+1)a_{n,k+1}&k>0,\end{cases}}} and a0,0 = 1, a1,0 = 0, a1,1 = 1. For the physicist's polynomials, assuming H n ( x ) = ∑ k = 0 n a n , k x k , {\displaystyle H_{n}(x)=\sum _{k=0}^{n}a_{n,k}x^{k},} we have H n + 1 ( x ) = 2 x H n ( x ) − H n ′ ( x ) . {\displaystyle H_{n+1}(x)=2xH_{n}(x)-H_{n}'(x).} Individual coefficients are related by the following recursion formula: a n + 1 , k = { − a n , k + 1 k = 0 , 2 a n , k − 1 − ( k + 1 ) a n , k + 1 k > 0 , {\displaystyle a_{n+1,k}={\begin{cases}-a_{n,k+1}&k=0,\\2a_{n,k-1}-(k+1)a_{n,k+1}&k>0,\end{cases}}} and a0,0 = 1, a1,0 = 0, a1,1 = 2. The Hermite polynomials constitute an Appell sequence, i.e., they are a polynomial sequence satisfying the identity He n ′ ⁡ ( x ) = n He n − 1 ⁡ ( x ) , H n ′ ( x ) = 2 n H n − 1 ( x ) . {\displaystyle {\begin{aligned}\operatorname {He} _{n}'(x)&=n\operatorname {He} _{n-1}(x),\\H_{n}'(x)&=2nH_{n-1}(x).\end{aligned}}} An integral recurrence that can be deduced from the above is as follows: He n + 1 ⁡ ( x ) = ( n + 1 ) ∫ 0 x He n ⁡ ( t ) d t − He n ′ ( 0 ) , {\displaystyle \operatorname {He} _{n+1}(x)=(n+1)\int _{0}^{x}\operatorname {He} _{n}(t)dt-He'_{n}(0),} H n + 1 ( x ) = 2 ( n + 1 ) ∫ 0 x H n ( t ) d t − H n ′ ( 0 ) . {\displaystyle H_{n+1}(x)=2(n+1)\int _{0}^{x}H_{n}(t)dt-H'_{n}(0).} Equivalently, by Taylor-expanding, He n ⁡ ( x + y ) = ∑ k = 0 n ( n k ) x n − k He k ⁡ ( y ) = 2 − n 2 ∑ k = 0 n ( n k ) He n − k ⁡ ( x 2 ) He k ⁡ ( y 2 ) , H n ( x + y ) = ∑ k = 0 n ( n k ) H k ( x ) ( 2 y ) n − k = 2 − n 2 ⋅ ∑ k = 0 n ( n k ) H n − k ( x 2 ) H k ( y 2 ) .
{\displaystyle {\begin{aligned}\operatorname {He} _{n}(x+y)&=\sum _{k=0}^{n}{\binom {n}{k}}x^{n-k}\operatorname {He} _{k}(y)&&=2^{-{\frac {n}{2}}}\sum _{k=0}^{n}{\binom {n}{k}}\operatorname {He} _{n-k}\left(x{\sqrt {2}}\right)\operatorname {He} _{k}\left(y{\sqrt {2}}\right),\\H_{n}(x+y)&=\sum _{k=0}^{n}{\binom {n}{k}}H_{k}(x)(2y)^{n-k}&&=2^{-{\frac {n}{2}}}\cdot \sum _{k=0}^{n}{\binom {n}{k}}H_{n-k}\left(x{\sqrt {2}}\right)H_{k}\left(y{\sqrt {2}}\right).\end{aligned}}} These umbral identities are self-evident and included in the differential operator representation detailed below, He n ⁡ ( x ) = e − D 2 2 x n , H n ( x ) = 2 n e − D 2 4 x n . {\displaystyle {\begin{aligned}\operatorname {He} _{n}(x)&=e^{-{\frac {D^{2}}{2}}}x^{n},\\H_{n}(x)&=2^{n}e^{-{\frac {D^{2}}{4}}}x^{n}.\end{aligned}}} In consequence, for the mth derivatives the following relations hold: He n ( m ) ⁡ ( x ) = n ! ( n − m ) ! He n − m ⁡ ( x ) = m ! ( n m ) He n − m ⁡ ( x ) , H n ( m ) ( x ) = 2 m n ! ( n − m ) ! H n − m ( x ) = 2 m m ! ( n m ) H n − m ( x ) . {\displaystyle {\begin{aligned}\operatorname {He} _{n}^{(m)}(x)&={\frac {n!}{(n-m)!}}\operatorname {He} _{n-m}(x)&&=m!{\binom {n}{m}}\operatorname {He} _{n-m}(x),\\H_{n}^{(m)}(x)&=2^{m}{\frac {n!}{(n-m)!}}H_{n-m}(x)&&=2^{m}m!{\binom {n}{m}}H_{n-m}(x).\end{aligned}}} It follows that the Hermite polynomials also satisfy the recurrence relation He n + 1 ⁡ ( x ) = x He n ⁡ ( x ) − n He n − 1 ⁡ ( x ) , H n + 1 ( x ) = 2 x H n ( x ) − 2 n H n − 1 ( x ) . {\displaystyle {\begin{aligned}\operatorname {He} _{n+1}(x)&=x\operatorname {He} _{n}(x)-n\operatorname {He} _{n-1}(x),\\H_{n+1}(x)&=2xH_{n}(x)-2nH_{n-1}(x).\end{aligned}}} These last relations, together with the initial polynomials H0(x) and H1(x), can be used in practice to compute the polynomials quickly. Turán's inequalities are H n ( x ) 2 − H n − 1 ( x ) H n + 1 ( x ) = ( n − 1 ) ! ∑ i = 0 n − 1 2 n − i i ! H i ( x ) 2 > 0. {\displaystyle {\mathit {H}}_{n}(x)^{2}-{\mathit {H}}_{n-1}(x){\mathit {H}}_{n+1}(x)=(n-1)!\sum _{i=0}^{n-1}{\frac {2^{n-i}}{i!}}{\mathit {H}}_{i}(x)^{2}>0.} Moreover, the following multiplication theorem holds: H n ( γ x ) = ∑ i = 0 ⌊ n 2 ⌋ γ n − 2 i ( γ 2 − 1 ) i ( n 2 i ) ( 2 i ) ! i ! H n − 2 i ( x ) , He n ⁡ ( γ x ) = ∑ i = 0 ⌊ n 2 ⌋ γ n − 2 i ( γ 2 − 1 ) i ( n 2 i ) ( 2 i ) ! i ! 2 − i He n − 2 i ⁡ ( x ) . {\displaystyle {\begin{aligned}H_{n}(\gamma x)&=\sum _{i=0}^{\left\lfloor {\tfrac {n}{2}}\right\rfloor }\gamma ^{n-2i}(\gamma ^{2}-1)^{i}{\binom {n}{2i}}{\frac {(2i)!}{i!}}H_{n-2i}(x),\\\operatorname {He} _{n}(\gamma x)&=\sum _{i=0}^{\left\lfloor {\tfrac {n}{2}}\right\rfloor }\gamma ^{n-2i}(\gamma ^{2}-1)^{i}{\binom {n}{2i}}{\frac {(2i)!}{i!}}2^{-i}\operatorname {He} _{n-2i}(x).\end{aligned}}} === Explicit expression === The physicist's Hermite polynomials can be written explicitly as H n ( x ) = { n ! ∑ l = 0 n 2 ( − 1 ) n 2 − l ( 2 l ) ! ( n 2 − l ) ! ( 2 x ) 2 l for even n , n ! ∑ l = 0 n − 1 2 ( − 1 ) n − 1 2 − l ( 2 l + 1 ) ! ( n − 1 2 − l ) ! ( 2 x ) 2 l + 1 for odd n . {\displaystyle H_{n}(x)={\begin{cases}\displaystyle n!\sum _{l=0}^{\frac {n}{2}}{\frac {(-1)^{{\tfrac {n}{2}}-l}}{(2l)!\left({\tfrac {n}{2}}-l\right)!}}(2x)^{2l}&{\text{for even }}n,\\\displaystyle n!\sum _{l=0}^{\frac {n-1}{2}}{\frac {(-1)^{{\frac {n-1}{2}}-l}}{(2l+1)!\left({\frac {n-1}{2}}-l\right)!}}(2x)^{2l+1}&{\text{for odd }}n.\end{cases}}} These two equations may be combined into one using the floor function: H n ( x ) = n ! ∑ m = 0 ⌊ n 2 ⌋ ( − 1 ) m m ! ( n − 2 m ) ! ( 2 x ) n − 2 m . 
{\displaystyle H_{n}(x)=n!\sum _{m=0}^{\left\lfloor {\tfrac {n}{2}}\right\rfloor }{\frac {(-1)^{m}}{m!(n-2m)!}}(2x)^{n-2m}.} The probabilist's Hermite polynomials He have similar formulas, which may be obtained from these by replacing the power of 2x with the corresponding power of √2 x and multiplying the entire sum by 2−⁠n/2⁠: He n ⁡ ( x ) = n ! ∑ m = 0 ⌊ n 2 ⌋ ( − 1 ) m m ! ( n − 2 m ) ! x n − 2 m 2 m . {\displaystyle \operatorname {He} _{n}(x)=n!\sum _{m=0}^{\left\lfloor {\tfrac {n}{2}}\right\rfloor }{\frac {(-1)^{m}}{m!(n-2m)!}}{\frac {x^{n-2m}}{2^{m}}}.} === Inverse explicit expression === The inverse of the above explicit expressions, that is, those for monomials in terms of probabilist's Hermite polynomials He are x n = n ! ∑ m = 0 ⌊ n 2 ⌋ 1 2 m m ! ( n − 2 m ) ! He n − 2 m ⁡ ( x ) . {\displaystyle x^{n}=n!\sum _{m=0}^{\left\lfloor {\tfrac {n}{2}}\right\rfloor }{\frac {1}{2^{m}m!(n-2m)!}}\operatorname {He} _{n-2m}(x).} The corresponding expressions for the physicist's Hermite polynomials H follow directly by properly scaling this: x n = n ! 2 n ∑ m = 0 ⌊ n 2 ⌋ 1 m ! ( n − 2 m ) ! H n − 2 m ( x ) . {\displaystyle x^{n}={\frac {n!}{2^{n}}}\sum _{m=0}^{\left\lfloor {\tfrac {n}{2}}\right\rfloor }{\frac {1}{m!(n-2m)!}}H_{n-2m}(x).} === Generating function === The Hermite polynomials are given by the exponential generating function e x t − 1 2 t 2 = ∑ n = 0 ∞ He n ⁡ ( x ) t n n ! , e 2 x t − t 2 = ∑ n = 0 ∞ H n ( x ) t n n ! . {\displaystyle {\begin{aligned}e^{xt-{\frac {1}{2}}t^{2}}&=\sum _{n=0}^{\infty }\operatorname {He} _{n}(x){\frac {t^{n}}{n!}},\\e^{2xt-t^{2}}&=\sum _{n=0}^{\infty }H_{n}(x){\frac {t^{n}}{n!}}.\end{aligned}}} This equality is valid for all complex values of x and t, and can be obtained by writing the Taylor expansion at x of the entire function z → e−z2 (in the physicist's case). One can also derive the (physicist's) generating function by using Cauchy's integral formula to write the Hermite polynomials as H n ( x ) = ( − 1 ) n e x 2 d n d x n e − x 2 = ( − 1 ) n e x 2 n ! 2 π i ∮ γ e − z 2 ( z − x ) n + 1 d z . {\displaystyle H_{n}(x)=(-1)^{n}e^{x^{2}}{\frac {d^{n}}{dx^{n}}}e^{-x^{2}}=(-1)^{n}e^{x^{2}}{\frac {n!}{2\pi i}}\oint _{\gamma }{\frac {e^{-z^{2}}}{(z-x)^{n+1}}}\,dz.} Using this in the sum ∑ n = 0 ∞ H n ( x ) t n n ! , {\displaystyle \sum _{n=0}^{\infty }H_{n}(x){\frac {t^{n}}{n!}},} one can evaluate the remaining integral using the calculus of residues and arrive at the desired generating function. A slight generalization states e 2 x t − t 2 H k ( x − t ) = ∑ n = 0 ∞ H n + k ( x ) t n n ! {\displaystyle e^{2xt-t^{2}}H_{k}(x-t)=\sum _{n=0}^{\infty }{\frac {H_{n+k}(x)t^{n}}{n!}}} === Expected values === If X is a random variable with a normal distribution with standard deviation 1 and expected value μ, then E ⁡ [ He n ⁡ ( X ) ] = μ n . {\displaystyle \operatorname {\mathbb {E} } \left[\operatorname {He} _{n}(X)\right]=\mu ^{n}.} The moments of the standard normal (with expected value zero) may be read off directly from the relation for even indices: E ⁡ [ X 2 n ] = ( − 1 ) n He 2 n ⁡ ( 0 ) = ( 2 n − 1 ) ! ! , {\displaystyle \operatorname {\mathbb {E} } \left[X^{2n}\right]=(-1)^{n}\operatorname {He} _{2n}(0)=(2n-1)!!,} where (2n − 1)!! is the double factorial. Note that the above expression is a special case of the representation of the probabilist's Hermite polynomials as moments: He n ⁡ ( x ) = 1 2 π ∫ − ∞ ∞ ( x + i y ) n e − y 2 2 d y . 
{\displaystyle \operatorname {He} _{n}(x)={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }(x+iy)^{n}e^{-{\frac {y^{2}}{2}}}\,dy.} === Integral representations === From the generating-function representation above, we see that the Hermite polynomials have a representation in terms of a contour integral, as He n ⁡ ( x ) = n ! 2 π i ∮ C e t x − t 2 2 t n + 1 d t , H n ( x ) = n ! 2 π i ∮ C e 2 t x − t 2 t n + 1 d t , {\displaystyle {\begin{aligned}\operatorname {He} _{n}(x)&={\frac {n!}{2\pi i}}\oint _{C}{\frac {e^{tx-{\frac {t^{2}}{2}}}}{t^{n+1}}}\,dt,\\H_{n}(x)&={\frac {n!}{2\pi i}}\oint _{C}{\frac {e^{2tx-t^{2}}}{t^{n+1}}}\,dt,\end{aligned}}} with the contour encircling the origin. Using the Fourier transform of the Gaussian e − x 2 = 1 π ∫ e − t 2 + 2 i x t d t {\displaystyle e^{-x^{2}}={\frac {1}{\sqrt {\pi }}}\int e^{-t^{2}+2ixt}dt} , we have H n ( x ) = ( − 1 ) n e x 2 d n d x n e − x 2 = ( − 2 i ) n e x 2 π ∫ t n e − t 2 + 2 i x t d t He n ⁡ ( x ) = ( − i ) n e x 2 / 2 2 π ∫ t n e − t 2 / 2 + i x t d t . {\displaystyle {\begin{aligned}H_{n}(x)&=(-1)^{n}e^{x^{2}}{\frac {d^{n}}{dx^{n}}}e^{-x^{2}}={\frac {(-2i)^{n}e^{x^{2}}}{\sqrt {\pi }}}\int t^{n}e^{-t^{2}+2ixt}dt\\\operatorname {He} _{n}(x)&={\frac {(-i)^{n}e^{x^{2}/2}}{\sqrt {2\pi }}}\int t^{n}\,e^{-t^{2}/2+ixt}\,dt.\end{aligned}}} === Other properties === The addition theorem, or the summation theorem, states that ( ∑ k = 1 r a k 2 ) n 2 n ! H n ( ∑ k = 1 r a k x k ∑ k = 1 r a k 2 ) = ∑ m 1 + m 2 + … + m r = n , m i ≥ 0 ∏ k = 1 r { a k m k m k ! H m k ( x k ) } {\displaystyle {\frac {\left(\sum _{k=1}^{r}a_{k}^{2}\right)^{\frac {n}{2}}}{n!}}H_{n}\left({\frac {\sum _{k=1}^{r}a_{k}x_{k}}{\sqrt {\sum _{k=1}^{r}a_{k}^{2}}}}\right)=\sum _{m_{1}+m_{2}+\ldots +m_{r}=n,m_{i}\geq 0}\prod _{k=1}^{r}\left\{{\frac {a_{k}^{m_{k}}}{m_{k}!}}H_{m_{k}}\left(x_{k}\right)\right\}} for any nonzero vector a 1 : r {\displaystyle a_{1:r}} . The multiplication theorem states that H n ( λ x ) = λ n ∑ ℓ = 0 ⌊ n / 2 ⌋ ( − n ) 2 ℓ ℓ ! ( 1 − λ − 2 ) ℓ H n − 2 ℓ ( x ) {\displaystyle H_{n}\left(\lambda x\right)=\lambda ^{n}\sum _{\ell =0}^{\left\lfloor n/2\right\rfloor }{\frac {\left(-n\right)_{2\ell }}{\ell !}}(1-\lambda ^{-2})^{\ell }H_{n-2\ell }\left(x\right)} for any nonzero λ {\displaystyle \lambda } . The Feldheim formula reads 1 a π ∫ − ∞ + ∞ e − x 2 a H m ( x + y λ ) H n ( x + z μ ) d x = ( 1 − a λ 2 ) m 2 ( 1 − a μ 2 ) n 2 ∑ r = 0 min ( m , n ) r ! ( m r ) ( n r ) ( 2 a ( λ 2 − a ) ( μ 2 − a ) ) r H m − r ( y λ 2 − a ) H n − r ( z μ 2 − a ) {\displaystyle {\begin{aligned}{\frac {1}{\sqrt {a\pi }}}&\int _{-\infty }^{+\infty }e^{-{\frac {x^{2}}{a}}}H_{m}\left({\frac {x+y}{\lambda }}\right)H_{n}\left({\frac {x+z}{\mu }}\right)dx\\&=\left(1-{\frac {a}{\lambda ^{2}}}\right)^{\frac {m}{2}}\left(1-{\frac {a}{\mu ^{2}}}\right)^{\frac {n}{2}}\sum _{r=0}^{\min(m,n)}r!{\binom {m}{r}}{\binom {n}{r}}\left({\frac {2a}{\sqrt {\left(\lambda ^{2}-a\right)\left(\mu ^{2}-a\right)}}}\right)^{r}H_{m-r}\left({\frac {y}{\sqrt {\lambda ^{2}-a}}}\right)H_{n-r}\left({\frac {z}{\sqrt {\mu ^{2}-a}}}\right)\end{aligned}}} where a ∈ C {\displaystyle a\in \mathbb {C} } has a positive real part.
As a special case, 1 π ∫ − ∞ + ∞ e − t 2 H m ( t sin ⁡ θ + v cos ⁡ θ ) H n ( t cos ⁡ θ − v sin ⁡ θ ) d t = ( − 1 ) n cos m ⁡ θ sin n ⁡ θ H m + n ( v ) {\displaystyle {\frac {1}{\sqrt {\pi }}}\int _{-\infty }^{+\infty }e^{-t^{2}}H_{m}(t\sin \theta +v\cos \theta )H_{n}(t\cos \theta -v\sin \theta )dt=(-1)^{n}\cos ^{m}\theta \sin ^{n}\theta H_{m+n}(v)} === Asymptotic expansion === Asymptotically, as n → ∞, the expansion e − x 2 2 ⋅ H n ( x ) ∼ 2 n π Γ ( n + 1 2 ) cos ⁡ ( x 2 n − n π 2 ) {\displaystyle e^{-{\frac {x^{2}}{2}}}\cdot H_{n}(x)\sim {\frac {2^{n}}{\sqrt {\pi }}}\Gamma \left({\frac {n+1}{2}}\right)\cos \left(x{\sqrt {2n}}-{\frac {n\pi }{2}}\right)} holds true. For certain cases concerning a wider range of evaluation, it is necessary to include a factor for changing amplitude: e − x 2 2 ⋅ H n ( x ) ∼ 2 n π Γ ( n + 1 2 ) cos ⁡ ( x 2 n − n π 2 ) ( 1 − x 2 2 n + 1 ) − 1 4 = Γ ( n + 1 ) Γ ( n 2 + 1 ) cos ⁡ ( x 2 n − n π 2 ) ( 1 − x 2 2 n + 1 ) − 1 4 , {\displaystyle e^{-{\frac {x^{2}}{2}}}\cdot H_{n}(x)\sim {\frac {2^{n}}{\sqrt {\pi }}}\Gamma \left({\frac {n+1}{2}}\right)\cos \left(x{\sqrt {2n}}-{\frac {n\pi }{2}}\right)\left(1-{\frac {x^{2}}{2n+1}}\right)^{-{\frac {1}{4}}}={\frac {\Gamma (n+1)}{\Gamma \left({\frac {n}{2}}+1\right)}}\cos \left(x{\sqrt {2n}}-{\frac {n\pi }{2}}\right)\left(1-{\frac {x^{2}}{2n+1}}\right)^{-{\frac {1}{4}}},} which, using Stirling's approximation, can be further simplified, in the limit, to e − x 2 2 ⋅ H n ( x ) ∼ ( 2 n e ) n 2 2 cos ⁡ ( x 2 n − n π 2 ) ( 1 − x 2 2 n + 1 ) − 1 4 . {\displaystyle e^{-{\frac {x^{2}}{2}}}\cdot H_{n}(x)\sim \left({\frac {2n}{e}}\right)^{\frac {n}{2}}{\sqrt {2}}\cos \left(x{\sqrt {2n}}-{\frac {n\pi }{2}}\right)\left(1-{\frac {x^{2}}{2n+1}}\right)^{-{\frac {1}{4}}}.} This expansion is needed to resolve the wavefunction of a quantum harmonic oscillator such that it agrees with the classical approximation in the limit of the correspondence principle. A better approximation, which accounts for the variation in frequency, is given by e − x 2 2 ⋅ H n ( x ) ∼ ( 2 n e ) n 2 2 cos ⁡ ( x 2 n + 1 − x 2 3 − n π 2 ) ( 1 − x 2 2 n + 1 ) − 1 4 . {\displaystyle e^{-{\frac {x^{2}}{2}}}\cdot H_{n}(x)\sim \left({\frac {2n}{e}}\right)^{\frac {n}{2}}{\sqrt {2}}\cos \left(x{\sqrt {2n+1-{\frac {x^{2}}{3}}}}-{\frac {n\pi }{2}}\right)\left(1-{\frac {x^{2}}{2n+1}}\right)^{-{\frac {1}{4}}}.} A finer approximation, which takes into account the uneven spacing of the zeros near the edges, makes use of the substitution x = 2 n + 1 cos ⁡ ( φ ) , 0 < ε ≤ φ ≤ π − ε , {\displaystyle x={\sqrt {2n+1}}\cos(\varphi ),\quad 0<\varepsilon \leq \varphi \leq \pi -\varepsilon ,} with which one has the uniform approximation e − x 2 2 ⋅ H n ( x ) = 2 n 2 + 1 4 n ! ( π n ) − 1 4 ( sin ⁡ φ ) − 1 2 ⋅ ( sin ⁡ ( 3 π 4 + ( n 2 + 1 4 ) ( sin ⁡ 2 φ − 2 φ ) ) + O ( n − 1 ) ) . {\displaystyle e^{-{\frac {x^{2}}{2}}}\cdot H_{n}(x)=2^{{\frac {n}{2}}+{\frac {1}{4}}}{\sqrt {n!}}(\pi n)^{-{\frac {1}{4}}}(\sin \varphi )^{-{\frac {1}{2}}}\cdot \left(\sin \left({\frac {3\pi }{4}}+\left({\frac {n}{2}}+{\frac {1}{4}}\right)\left(\sin 2\varphi -2\varphi \right)\right)+O\left(n^{-1}\right)\right).} Similar approximations hold for the monotonic and transition regions. Specifically, if x = 2 n + 1 cosh ⁡ ( φ ) , 0 < ε ≤ φ ≤ ω < ∞ , {\displaystyle x={\sqrt {2n+1}}\cosh(\varphi ),\quad 0<\varepsilon \leq \varphi \leq \omega <\infty ,} then e − x 2 2 ⋅ H n ( x ) = 2 n 2 − 3 4 n !
( π n ) − 1 4 ( sinh ⁡ φ ) − 1 2 ⋅ e ( n 2 + 1 4 ) ( 2 φ − sinh ⁡ 2 φ ) ( 1 + O ( n − 1 ) ) , {\displaystyle e^{-{\frac {x^{2}}{2}}}\cdot H_{n}(x)=2^{{\frac {n}{2}}-{\frac {3}{4}}}{\sqrt {n!}}(\pi n)^{-{\frac {1}{4}}}(\sinh \varphi )^{-{\frac {1}{2}}}\cdot e^{\left({\frac {n}{2}}+{\frac {1}{4}}\right)\left(2\varphi -\sinh 2\varphi \right)}\left(1+O\left(n^{-1}\right)\right),} while for x = 2 n + 1 + t {\displaystyle x={\sqrt {2n+1}}+t} with t complex and bounded, the approximation is e − x 2 2 ⋅ H n ( x ) = π 1 4 2 n 2 + 1 4 n ! n − 1 12 ( Ai ⁡ ( 2 1 2 n 1 6 t ) + O ( n − 2 3 ) ) , {\displaystyle e^{-{\frac {x^{2}}{2}}}\cdot H_{n}(x)=\pi ^{\frac {1}{4}}2^{{\frac {n}{2}}+{\frac {1}{4}}}{\sqrt {n!}}\,n^{-{\frac {1}{12}}}\left(\operatorname {Ai} \left(2^{\frac {1}{2}}n^{\frac {1}{6}}t\right)+O\left(n^{-{\frac {2}{3}}}\right)\right),} where Ai is the Airy function of the first kind. === Special values === The physicist's Hermite polynomials evaluated at zero argument Hn(0) are called Hermite numbers. H n ( 0 ) = { 0 for odd n , ( − 2 ) n 2 ( n − 1 ) ! ! for even n , {\displaystyle H_{n}(0)={\begin{cases}0&{\text{for odd }}n,\\(-2)^{\frac {n}{2}}(n-1)!!&{\text{for even }}n,\end{cases}}} which satisfy the recursion relation Hn(0) = −2(n − 1)Hn − 2(0). Equivalently, H 2 n ( 0 ) = ( − 2 ) n ( 2 n − 1 ) ! ! {\displaystyle H_{2n}(0)=(-2)^{n}(2n-1)!!} . In terms of the probabilist's polynomials this translates to He n ⁡ ( 0 ) = { 0 for odd n , ( − 1 ) n 2 ( n − 1 ) ! ! for even n . {\displaystyle \operatorname {He} _{n}(0)={\begin{cases}0&{\text{for odd }}n,\\(-1)^{\frac {n}{2}}(n-1)!!&{\text{for even }}n.\end{cases}}} === Kibble–Slepian formula === Let M {\textstyle M} be a real n × n {\textstyle n\times n} symmetric matrix, then the Kibble–Slepian formula states that det ( I + M ) − 1 2 e x T M ( I + M ) − 1 x = ∑ K [ ∏ 1 ≤ i ≤ j ≤ n ( M i j / 2 ) k i j k i j ! ] 2 − t r ( K ) H k 1 ( x 1 ) ⋯ H k n ( x n ) {\displaystyle \det(I+M)^{-{\frac {1}{2}}}e^{x^{T}M(I+M)^{-1}x}=\sum _{K}\left[\prod _{1\leq i\leq j\leq n}{\frac {(M_{ij}/2)^{k_{ij}}}{k_{ij}!}}\right]2^{-tr(K)}H_{k_{1}}(x_{1})\cdots H_{k_{n}}(x_{n})} where ∑ K {\textstyle \sum _{K}} is the n ( n + 1 ) 2 {\displaystyle {\frac {n(n+1)}{2}}} -fold summation over all n × n {\textstyle n\times n} symmetric matrices with non-negative integer entries, t r ( K ) {\displaystyle tr(K)} is the trace of K {\displaystyle K} , and k i {\textstyle k_{i}} is defined as k i i + ∑ j = 1 n k i j {\textstyle k_{ii}+\sum _{j=1}^{n}k_{ij}} . This gives Mehler's formula when M = [ 0 u u 0 ] {\displaystyle M={\begin{bmatrix}0&u\\u&0\end{bmatrix}}} . Equivalently stated, if T {\textstyle T} is a positive semidefinite matrix, then set M = − T ( I + T ) − 1 {\textstyle M=-T(I+T)^{-1}} , we have M ( I + M ) − 1 = − T {\textstyle M(I+M)^{-1}=-T} , so e − x T T x = det ( I + T ) − 1 2 ∑ K [ ∏ 1 ≤ i ≤ j ≤ n ( M i j / 2 ) k i j k i j ! ] 2 − t r ( K ) H k 1 ( x 1 ) … H k n ( x n ) {\displaystyle e^{-x^{T}Tx}=\det(I+T)^{-{\frac {1}{2}}}\sum _{K}\left[\prod _{1\leq i\leq j\leq n}{\frac {(M_{ij}/2)^{k_{ij}}}{k_{ij}!}}\right]2^{-tr(K)}H_{k_{1}}(x_{1})\dots H_{k_{n}}(x_{n})} Equivalently stated in a form closer to the boson quantum mechanics of the harmonic oscillator: π − n / 4 det ( I + M ) − 1 2 e − 1 2 x T ( I − M ) ( I + M ) − 1 x = ∑ K [ ∏ 1 ≤ i ≤ j ≤ n M i j k i j / k i j ! ] [ ∏ 1 ≤ i ≤ n k i ! ] 1 / 2 2 − tr ⁡ K ψ k 1 ( x 1 ) ⋯ ψ k n ( x n ) . 
{\displaystyle \pi ^{-n/4}\det(I+M)^{-{\frac {1}{2}}}e^{-{\frac {1}{2}}x^{T}(I-M)(I+M)^{-1}x}=\sum _{K}\left[\prod _{1\leq i\leq j\leq n}M_{ij}^{k_{ij}}/k_{ij}!\right]\left[\prod _{1\leq i\leq n}k_{i}!\right]^{1/2}2^{-\operatorname {tr} K}\psi _{k_{1}}\left(x_{1}\right)\cdots \psi _{k_{n}}\left(x_{n}\right).} where each ψ n ( x ) {\textstyle \psi _{n}(x)} is the n {\textstyle n} -th eigenfunction of the harmonic oscillator, defined as ψ n ( x ) := 1 2 n n ! ( 1 π ) 1 4 e − 1 2 x 2 H n ( x ) {\displaystyle \psi _{n}(x):={\frac {1}{\sqrt {2^{n}n!}}}\left({\frac {1}{\pi }}\right)^{\frac {1}{4}}e^{-{\frac {1}{2}}x^{2}}H_{n}(x)} The Kibble–Slepian formula was proposed by Kibble in 1945 and proven by Slepian in 1972 using Fourier analysis. Foata gave a combinatorial proof while Louck gave a proof via boson quantum mechanics. It has a generalization for complex-argument Hermite polynomials. == Relations to other functions == === Laguerre polynomials === The Hermite polynomials can be expressed as a special case of the Laguerre polynomials: H 2 n ( x ) = ( − 4 ) n n ! L n ( − 1 2 ) ( x 2 ) = 4 n n ! ∑ k = 0 n ( − 1 ) n − k ( n − 1 2 n − k ) x 2 k k ! , H 2 n + 1 ( x ) = 2 ( − 4 ) n n ! x L n ( 1 2 ) ( x 2 ) = 2 ⋅ 4 n n ! ∑ k = 0 n ( − 1 ) n − k ( n + 1 2 n − k ) x 2 k + 1 k ! . {\displaystyle {\begin{aligned}H_{2n}(x)&=(-4)^{n}n!L_{n}^{\left(-{\frac {1}{2}}\right)}(x^{2})&&=4^{n}n!\sum _{k=0}^{n}(-1)^{n-k}{\binom {n-{\frac {1}{2}}}{n-k}}{\frac {x^{2k}}{k!}},\\H_{2n+1}(x)&=2(-4)^{n}n!xL_{n}^{\left({\frac {1}{2}}\right)}(x^{2})&&=2\cdot 4^{n}n!\sum _{k=0}^{n}(-1)^{n-k}{\binom {n+{\frac {1}{2}}}{n-k}}{\frac {x^{2k+1}}{k!}}.\end{aligned}}} === Hypergeometric functions === The physicist's Hermite polynomials can be expressed as a special case of the parabolic cylinder functions: H n ( x ) = 2 n U ( − 1 2 n , 1 2 , x 2 ) {\displaystyle H_{n}(x)=2^{n}U\left(-{\tfrac {1}{2}}n,{\tfrac {1}{2}},x^{2}\right)} in the right half-plane, where U(a, b, z) is Tricomi's confluent hypergeometric function. Similarly, H 2 n ( x ) = ( − 1 ) n ( 2 n ) ! n ! 1 F 1 ( − n , 1 2 ; x 2 ) , H 2 n + 1 ( x ) = ( − 1 ) n ( 2 n + 1 ) ! n ! 2 x 1 F 1 ( − n , 3 2 ; x 2 ) , {\displaystyle {\begin{aligned}H_{2n}(x)&=(-1)^{n}{\frac {(2n)!}{n!}}\,_{1}F_{1}{\big (}-n,{\tfrac {1}{2}};x^{2}{\big )},\\H_{2n+1}(x)&=(-1)^{n}{\frac {(2n+1)!}{n!}}\,2x\,_{1}F_{1}{\big (}-n,{\tfrac {3}{2}};x^{2}{\big )},\end{aligned}}} where 1F1(a, b; z) = M(a, b; z) is Kummer's confluent hypergeometric function. There is also H n ( x ) = ( 2 x ) n 2 F 0 ( − 1 2 n , − 1 2 n + 1 2 − ; − 1 x 2 ) . {\displaystyle H_{n}\left(x\right)=(2x)^{n}{{}_{2}F_{0}}\left({-{\tfrac {1}{2}}n,-{\tfrac {1}{2}}n+{\tfrac {1}{2}} \atop -};-{\frac {1}{x^{2}}}\right).} === Limit relations === The Hermite polynomials can be obtained as the limit of various other polynomials. As a limit of Jacobi polynomials: lim α → ∞ α − 1 2 n P n ( α , α ) ( α − 1 2 x ) = H n ( x ) 2 n n ! . {\displaystyle \lim _{\alpha \to \infty }\alpha ^{-{\frac {1}{2}}n}P_{n}^{(\alpha ,\alpha )}\left(\alpha ^{-{\frac {1}{2}}}x\right)={\frac {H_{n}\left(x\right)}{2^{n}n!}}.} As a limit of ultraspherical polynomials: lim λ → ∞ λ − 1 2 n C n ( λ ) ( λ − 1 2 x ) = H n ( x ) n ! . {\displaystyle \lim _{\lambda \to \infty }\lambda ^{-{\frac {1}{2}}n}C_{n}^{(\lambda )}\left(\lambda ^{-{\frac {1}{2}}}x\right)={\frac {H_{n}\left(x\right)}{n!}}.} As a limit of associated Laguerre polynomials: lim α → ∞ ( 2 α ) 1 2 n L n ( α ) ( ( 2 α ) 1 2 x + α ) = ( − 1 ) n n ! H n ( x ) . 
{\displaystyle \lim _{\alpha \to \infty }\left({\frac {2}{\alpha }}\right)^{{\frac {1}{2}}n}L_{n}^{(\alpha )}\left((2\alpha )^{\frac {1}{2}}x+\alpha \right)={\frac {(-1)^{n}}{n!}}H_{n}\left(x\right).} == Hermite polynomial expansion == Similar to Taylor expansion, some functions are expressible as an infinite sum of Hermite polynomials. Specifically, if ∫ e − x 2 f ( x ) 2 d x < ∞ {\displaystyle \int e^{-x^{2}}f(x)^{2}dx<\infty } , then f has an expansion in the physicist's Hermite polynomials. Given such f {\displaystyle f} , the partial sums of the Hermite expansion of f {\displaystyle f} converge to f in the L p {\displaystyle L^{p}} norm if and only if 4 / 3 < p < 4 {\displaystyle 4/3<p<4} . Some examples of such expansions follow: x n = n ! 2 n ∑ k = 0 ⌊ n / 2 ⌋ 1 k ! ( n − 2 k ) ! H n − 2 k ( x ) = n ! ∑ k = 0 ⌊ n / 2 ⌋ 1 k ! 2 k ( n − 2 k ) ! He n − 2 k ⁡ ( x ) , n ∈ Z + . {\displaystyle x^{n}={\frac {n!}{2^{n}}}\,\sum _{k=0}^{\left\lfloor n/2\right\rfloor }{\frac {1}{k!\,(n-2k)!}}\,H_{n-2k}(x)=n!\sum _{k=0}^{\left\lfloor n/2\right\rfloor }{\frac {1}{k!\,2^{k}\,(n-2k)!}}\,\operatorname {He} _{n-2k}(x),\qquad n\in \mathbb {Z} _{+}.} e a x = e a 2 / 4 ∑ n ≥ 0 a n n ! 2 n H n ( x ) , a ∈ C , x ∈ R . {\displaystyle e^{ax}=e^{a^{2}/4}\sum _{n\geq 0}{\frac {a^{n}}{n!\,2^{n}}}\,H_{n}(x),\qquad a\in \mathbb {C} ,\quad x\in \mathbb {R} .} e − a 2 x 2 = ∑ n ≥ 0 ( − 1 ) n a 2 n n ! ( 1 + a 2 ) n + 1 / 2 2 2 n H 2 n ( x ) . {\displaystyle e^{-a^{2}x^{2}}=\sum _{n\geq 0}{\frac {(-1)^{n}a^{2n}}{n!\left(1+a^{2}\right)^{n+1/2}2^{2n}}}\,H_{2n}(x).} erf ⁡ ( x ) = 2 π ∫ 0 x e − t 2 d t = 1 2 π ∑ k ≥ 0 ( − 1 ) k k ! ( 2 k + 1 ) 2 3 k H 2 k ( x ) . {\displaystyle \operatorname {erf} (x)={\frac {2}{\sqrt {\pi }}}\int _{0}^{x}e^{-t^{2}}~dt={\frac {1}{\sqrt {2\pi }}}\sum _{k\geq 0}{\frac {(-1)^{k}}{k!(2k+1)2^{3k}}}H_{2k}(x).} cosh ⁡ ( 2 x ) = e ∑ k ≥ 0 1 ( 2 k ) ! H 2 k ( x ) , sinh ⁡ ( 2 x ) = e ∑ k ≥ 0 1 ( 2 k + 1 ) ! H 2 k + 1 ( x ) . {\displaystyle \cosh(2x)=e\sum _{k\geq 0}{\frac {1}{(2k)!}}\,H_{2k}(x),\qquad \sinh(2x)=e\sum _{k\geq 0}{\frac {1}{(2k+1)!}}\,H_{2k+1}(x).} cos ⁡ ( x ) = e − 1 / 4 ∑ k ≥ 0 ( − 1 ) k 2 2 k ( 2 k ) ! H 2 k ( x ) sin ⁡ ( x ) = e − 1 / 4 ∑ k ≥ 0 ( − 1 ) k 2 2 k + 1 ( 2 k + 1 ) ! H 2 k + 1 ( x ) {\displaystyle \cos(x)=e^{-1/4}\,\sum _{k\geq 0}{\frac {(-1)^{k}}{2^{2k}\,(2k)!}}\,H_{2k}(x)\quad \sin(x)=e^{-1/4}\,\sum _{k\geq 0}{\frac {(-1)^{k}}{2^{2k+1}\,(2k+1)!}}\,H_{2k+1}(x)} == Differential-operator representation == The probabilist's Hermite polynomials satisfy the identity He n ⁡ ( x ) = e − D 2 2 x n , {\displaystyle \operatorname {He} _{n}(x)=e^{-{\frac {D^{2}}{2}}}x^{n},} where D represents differentiation with respect to x, and the exponential is interpreted by expanding it as a power series. There are no delicate questions of convergence of this series when it operates on polynomials, since all but finitely many terms vanish. Since the power-series coefficients of the exponential are well known, and higher-order derivatives of the monomial x^n can be written down explicitly, this differential-operator representation gives rise to a concrete formula for the coefficients of Hn that can be used to quickly compute these polynomials. Since the formal expression for the Weierstrass transform W is e^{D^2}, we see that the Weierstrass transform of (√2)^n He_n(x/√2) is x^n. Essentially the Weierstrass transform thus turns a series of Hermite polynomials into a corresponding Maclaurin series.
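Because only finitely many derivatives of x^n are nonzero, the operator identity can be checked directly in a computer algebra system. A short Python sketch assuming SymPy, with He_n obtained from the physicist's polynomial via the standard rescaling He_n(x) = 2^{-n/2} H_n(x/√2):

import sympy as sp

x = sp.symbols('x')

def He(n):
    # Probabilist's polynomial from the physicist's one.
    return sp.expand(2**sp.Rational(-n, 2) * sp.hermite(n, x / sp.sqrt(2)))

def He_from_operator(n):
    # Apply e^{-D^2/2} to x^n term by term; derivatives vanish once 2k > n.
    total = sp.Integer(0)
    for k in range(n // 2 + 1):
        total += sp.Rational(-1, 2)**k / sp.factorial(k) * sp.diff(x**n, x, 2*k)
    return sp.expand(total)

for n in range(8):
    assert sp.simplify(He(n) - He_from_operator(n)) == 0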
The existence of some formal power series g(D) with nonzero constant coefficient, such that Hen(x) = g(D)xn, is equivalent to the statement that these polynomials form an Appell sequence. Since they are an Appell sequence, they are a fortiori a Sheffer sequence. == Generalizations == The probabilist's Hermite polynomials defined above are orthogonal with respect to the standard normal probability distribution, whose density function is 1 2 π e − x 2 2 , {\displaystyle {\frac {1}{\sqrt {2\pi }}}e^{-{\frac {x^{2}}{2}}},} which has expected value 0 and variance 1. Scaling, one may analogously speak of generalized Hermite polynomials He n [ α ] ⁡ ( x ) {\displaystyle \operatorname {He} _{n}^{[\alpha ]}(x)} of variance α, where α is any positive number. These are then orthogonal with respect to the normal probability distribution whose density function is 1 2 π α e − x 2 2 α . {\displaystyle {\frac {1}{\sqrt {2\pi \alpha }}}e^{-{\frac {x^{2}}{2\alpha }}}.} They are given by He n [ α ] ⁡ ( x ) = α n 2 He n ⁡ ( x α ) = ( α 2 ) n 2 H n ( x 2 α ) = e − α D 2 2 ( x n ) . {\displaystyle \operatorname {He} _{n}^{[\alpha ]}(x)=\alpha ^{\frac {n}{2}}\operatorname {He} _{n}\left({\frac {x}{\sqrt {\alpha }}}\right)=\left({\frac {\alpha }{2}}\right)^{\frac {n}{2}}H_{n}\left({\frac {x}{\sqrt {2\alpha }}}\right)=e^{-{\frac {\alpha D^{2}}{2}}}\left(x^{n}\right).} Now, if He n [ α ] ⁡ ( x ) = ∑ k = 0 n h n , k [ α ] x k , {\displaystyle \operatorname {He} _{n}^{[\alpha ]}(x)=\sum _{k=0}^{n}h_{n,k}^{[\alpha ]}x^{k},} then the polynomial sequence whose nth term is ( He n [ α ] ∘ He [ β ] ) ( x ) ≡ ∑ k = 0 n h n , k [ α ] He k [ β ] ⁡ ( x ) {\displaystyle \left(\operatorname {He} _{n}^{[\alpha ]}\circ \operatorname {He} ^{[\beta ]}\right)(x)\equiv \sum _{k=0}^{n}h_{n,k}^{[\alpha ]}\,\operatorname {He} _{k}^{[\beta ]}(x)} is called the umbral composition of the two polynomial sequences. It can be shown to satisfy the identities ( He n [ α ] ∘ He [ β ] ) ( x ) = He n [ α + β ] ⁡ ( x ) {\displaystyle \left(\operatorname {He} _{n}^{[\alpha ]}\circ \operatorname {He} ^{[\beta ]}\right)(x)=\operatorname {He} _{n}^{[\alpha +\beta ]}(x)} and He n [ α + β ] ⁡ ( x + y ) = ∑ k = 0 n ( n k ) He k [ α ] ⁡ ( x ) He n − k [ β ] ⁡ ( y ) . {\displaystyle \operatorname {He} _{n}^{[\alpha +\beta ]}(x+y)=\sum _{k=0}^{n}{\binom {n}{k}}\operatorname {He} _{k}^{[\alpha ]}(x)\operatorname {He} _{n-k}^{[\beta ]}(y).} The last identity is expressed by saying that this parameterized family of polynomial sequences is a cross-sequence. (See the above remarks on Appell sequences and on the differential-operator representation, which lead to a ready derivation of the identity. This binomial-type identity, for α = β = 1/2, has already been encountered in the above section on recurrence relations.) === "Negative variance" === Since polynomial sequences form a group under the operation of umbral composition, one may denote by He n [ − α ] ⁡ ( x ) {\displaystyle \operatorname {He} _{n}^{[-\alpha ]}(x)} the sequence that is inverse to the one similarly denoted, but without the minus sign, and thus speak of Hermite polynomials of negative variance. For α > 0, the coefficients of He n [ − α ] ⁡ ( x ) {\displaystyle \operatorname {He} _{n}^{[-\alpha ]}(x)} are just the absolute values of the corresponding coefficients of He n [ α ] ⁡ ( x ) {\displaystyle \operatorname {He} _{n}^{[\alpha ]}(x)} .
These arise as moments of normal probability distributions: The nth moment of the normal distribution with expected value μ and variance σ2 is E [ X n ] = He n [ − σ 2 ] ⁡ ( μ ) , {\displaystyle E[X^{n}]=\operatorname {He} _{n}^{[-\sigma ^{2}]}(\mu ),} where X is a random variable with the specified normal distribution. A special case of the cross-sequence identity then says that ∑ k = 0 n ( n k ) He k [ α ] ⁡ ( x ) He n − k [ − α ] ⁡ ( y ) = He n [ 0 ] ⁡ ( x + y ) = ( x + y ) n . {\displaystyle \sum _{k=0}^{n}{\binom {n}{k}}\operatorname {He} _{k}^{[\alpha ]}(x)\operatorname {He} _{n-k}^{[-\alpha ]}(y)=\operatorname {He} _{n}^{[0]}(x+y)=(x+y)^{n}.} == Hermite functions == === Definition === One can define the Hermite functions (often called Hermite-Gaussian functions) from the physicist's polynomials: ψ n ( x ) = ( 2 n n ! π ) − 1 2 e − x 2 2 H n ( x ) = ( − 1 ) n ( 2 n n ! π ) − 1 2 e x 2 2 d n d x n e − x 2 . {\displaystyle \psi _{n}(x)=\left(2^{n}n!{\sqrt {\pi }}\right)^{-{\frac {1}{2}}}e^{-{\frac {x^{2}}{2}}}H_{n}(x)=(-1)^{n}\left(2^{n}n!{\sqrt {\pi }}\right)^{-{\frac {1}{2}}}e^{\frac {x^{2}}{2}}{\frac {d^{n}}{dx^{n}}}e^{-x^{2}}.} Thus, 2 ( n + 1 ) ψ n + 1 ( x ) = ( x − d d x ) ψ n ( x ) . {\displaystyle {\sqrt {2(n+1)}}~~\psi _{n+1}(x)=\left(x-{d \over dx}\right)\psi _{n}(x).} Since these functions contain the square root of the weight function and have been scaled appropriately, they are orthonormal: ∫ − ∞ ∞ ψ n ( x ) ψ m ( x ) d x = δ n m , {\displaystyle \int _{-\infty }^{\infty }\psi _{n}(x)\psi _{m}(x)\,dx=\delta _{nm},} and they form an orthonormal basis of L2(R). This fact is equivalent to the corresponding statement for Hermite polynomials (see above). The Hermite functions are closely related to the Whittaker function (Whittaker & Watson 1996) Dn(z): D n ( z ) = ( n ! π ) 1 2 ψ n ( z 2 ) = ( − 1 ) n e z 2 4 d n d z n e − z 2 2 {\displaystyle D_{n}(z)=\left(n!{\sqrt {\pi }}\right)^{\frac {1}{2}}\psi _{n}\left({\frac {z}{\sqrt {2}}}\right)=(-1)^{n}e^{\frac {z^{2}}{4}}{\frac {d^{n}}{dz^{n}}}e^{\frac {-z^{2}}{2}}} and thereby to other parabolic cylinder functions. The Hermite functions satisfy the differential equation ψ n ″ ( x ) + ( 2 n + 1 − x 2 ) ψ n ( x ) = 0. {\displaystyle \psi _{n}''(x)+\left(2n+1-x^{2}\right)\psi _{n}(x)=0.} This equation is equivalent to the Schrödinger equation for a harmonic oscillator in quantum mechanics, so these functions are the eigenfunctions. ψ 0 ( x ) = π − 1 4 e − 1 2 x 2 , ψ 1 ( x ) = 2 π − 1 4 x e − 1 2 x 2 , ψ 2 ( x ) = ( 2 π 1 4 ) − 1 ( 2 x 2 − 1 ) e − 1 2 x 2 , ψ 3 ( x ) = ( 3 π 1 4 ) − 1 ( 2 x 3 − 3 x ) e − 1 2 x 2 , ψ 4 ( x ) = ( 2 6 π 1 4 ) − 1 ( 4 x 4 − 12 x 2 + 3 ) e − 1 2 x 2 , ψ 5 ( x ) = ( 2 15 π 1 4 ) − 1 ( 4 x 5 − 20 x 3 + 15 x ) e − 1 2 x 2 . 
{\displaystyle {\begin{aligned}\psi _{0}(x)&=\pi ^{-{\frac {1}{4}}}\,e^{-{\frac {1}{2}}x^{2}},\\\psi _{1}(x)&={\sqrt {2}}\,\pi ^{-{\frac {1}{4}}}\,x\,e^{-{\frac {1}{2}}x^{2}},\\\psi _{2}(x)&=\left({\sqrt {2}}\,\pi ^{\frac {1}{4}}\right)^{-1}\,\left(2x^{2}-1\right)\,e^{-{\frac {1}{2}}x^{2}},\\\psi _{3}(x)&=\left({\sqrt {3}}\,\pi ^{\frac {1}{4}}\right)^{-1}\,\left(2x^{3}-3x\right)\,e^{-{\frac {1}{2}}x^{2}},\\\psi _{4}(x)&=\left(2{\sqrt {6}}\,\pi ^{\frac {1}{4}}\right)^{-1}\,\left(4x^{4}-12x^{2}+3\right)\,e^{-{\frac {1}{2}}x^{2}},\\\psi _{5}(x)&=\left(2{\sqrt {15}}\,\pi ^{\frac {1}{4}}\right)^{-1}\,\left(4x^{5}-20x^{3}+15x\right)\,e^{-{\frac {1}{2}}x^{2}}.\end{aligned}}} === Recursion relation === Following recursion relations of Hermite polynomials, the Hermite functions obey ψ n ′ ( x ) = n 2 ψ n − 1 ( x ) − n + 1 2 ψ n + 1 ( x ) {\displaystyle \psi _{n}'(x)={\sqrt {\frac {n}{2}}}\,\psi _{n-1}(x)-{\sqrt {\frac {n+1}{2}}}\psi _{n+1}(x)} and x ψ n ( x ) = n 2 ψ n − 1 ( x ) + n + 1 2 ψ n + 1 ( x ) . {\displaystyle x\psi _{n}(x)={\sqrt {\frac {n}{2}}}\,\psi _{n-1}(x)+{\sqrt {\frac {n+1}{2}}}\psi _{n+1}(x).} Extending the first relation to the arbitrary mth derivatives for any positive integer m leads to ψ n ( m ) ( x ) = ∑ k = 0 m ( m k ) ( − 1 ) k 2 m − k 2 n ! ( n − m + k ) ! ψ n − m + k ( x ) He k ⁡ ( x ) . {\displaystyle \psi _{n}^{(m)}(x)=\sum _{k=0}^{m}{\binom {m}{k}}(-1)^{k}2^{\frac {m-k}{2}}{\sqrt {\frac {n!}{(n-m+k)!}}}\psi _{n-m+k}(x)\operatorname {He} _{k}(x).} This formula can be used in connection with the recurrence relations for Hen and ψn to calculate any derivative of the Hermite functions efficiently. === Cramér's inequality === For real x, the Hermite functions satisfy the following bound due to Harald Cramér and Jack Indritz: | ψ n ( x ) | ≤ π − 1 4 . {\displaystyle {\bigl |}\psi _{n}(x){\bigr |}\leq \pi ^{-{\frac {1}{4}}}.} === Hermite functions as eigenfunctions of the Fourier transform === The Hermite functions ψn(x) are a set of eigenfunctions of the continuous Fourier transform F. To see this, take the physicist's version of the generating function and multiply by e−⁠1/2⁠x2. This gives e − 1 2 x 2 + 2 x t − t 2 = ∑ n = 0 ∞ e − 1 2 x 2 H n ( x ) t n n ! . {\displaystyle e^{-{\frac {1}{2}}x^{2}+2xt-t^{2}}=\sum _{n=0}^{\infty }e^{-{\frac {1}{2}}x^{2}}H_{n}(x){\frac {t^{n}}{n!}}.} The Fourier transform of the left side is given by F { e − 1 2 x 2 + 2 x t − t 2 } ( k ) = 1 2 π ∫ − ∞ ∞ e − i x k e − 1 2 x 2 + 2 x t − t 2 d x = e − 1 2 k 2 − 2 k i t + t 2 = ∑ n = 0 ∞ e − 1 2 k 2 H n ( k ) ( − i t ) n n ! . {\displaystyle {\begin{aligned}{\mathcal {F}}\left\{e^{-{\frac {1}{2}}x^{2}+2xt-t^{2}}\right\}(k)&={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }e^{-ixk}e^{-{\frac {1}{2}}x^{2}+2xt-t^{2}}\,dx\\&=e^{-{\frac {1}{2}}k^{2}-2kit+t^{2}}\\&=\sum _{n=0}^{\infty }e^{-{\frac {1}{2}}k^{2}}H_{n}(k){\frac {(-it)^{n}}{n!}}.\end{aligned}}} The Fourier transform of the right side is given by F { ∑ n = 0 ∞ e − 1 2 x 2 H n ( x ) t n n ! } = ∑ n = 0 ∞ F { e − 1 2 x 2 H n ( x ) } t n n ! . {\displaystyle {\mathcal {F}}\left\{\sum _{n=0}^{\infty }e^{-{\frac {1}{2}}x^{2}}H_{n}(x){\frac {t^{n}}{n!}}\right\}=\sum _{n=0}^{\infty }{\mathcal {F}}\left\{e^{-{\frac {1}{2}}x^{2}}H_{n}(x)\right\}{\frac {t^{n}}{n!}}.} Equating like powers of t in the transformed versions of the left and right sides finally yields F { e − 1 2 x 2 H n ( x ) } = ( − i ) n e − 1 2 k 2 H n ( k ) . 
{\displaystyle {\mathcal {F}}\left\{e^{-{\frac {1}{2}}x^{2}}H_{n}(x)\right\}=(-i)^{n}e^{-{\frac {1}{2}}k^{2}}H_{n}(k).} The Hermite functions ψn(x) are thus an orthonormal basis of L2(R), which diagonalizes the Fourier transform operator. In short, we have: 1 2 π ∫ e − i k x ψ n ( x ) d x = ( − i ) n ψ n ( k ) , 1 2 π ∫ e + i k x ψ n ( k ) d k = i n ψ n ( x ) {\displaystyle {\frac {1}{\sqrt {2\pi }}}\int e^{-ikx}\psi _{n}(x)dx=(-i)^{n}\psi _{n}(k),\quad {\frac {1}{\sqrt {2\pi }}}\int e^{+ikx}\psi _{n}(k)dk=i^{n}\psi _{n}(x)} === Wigner distributions of Hermite functions === The Wigner distribution function of the nth-order Hermite function is related to the nth-order Laguerre polynomial. The Laguerre polynomials are L n ( x ) := ∑ k = 0 n ( n k ) ( − 1 ) k k ! x k , {\displaystyle L_{n}(x):=\sum _{k=0}^{n}{\binom {n}{k}}{\frac {(-1)^{k}}{k!}}x^{k},} leading to the oscillator Laguerre functions l n ( x ) := e − x 2 L n ( x ) . {\displaystyle l_{n}(x):=e^{-{\frac {x}{2}}}L_{n}(x).} For all natural integers n, it is straightforward to see that W ψ n ( t , f ) = ( − 1 ) n l n ( 4 π ( t 2 + f 2 ) ) , {\displaystyle W_{\psi _{n}}(t,f)=(-1)^{n}l_{n}{\big (}4\pi (t^{2}+f^{2}){\big )},} where the Wigner distribution of a function x ∈ L2(R, C) is defined as W x ( t , f ) = ∫ − ∞ ∞ x ( t + τ 2 ) x ( t − τ 2 ) ∗ e − 2 π i τ f d τ . {\displaystyle W_{x}(t,f)=\int _{-\infty }^{\infty }x\left(t+{\frac {\tau }{2}}\right)\,x\left(t-{\frac {\tau }{2}}\right)^{*}\,e^{-2\pi i\tau f}\,d\tau .} This is a fundamental result for the quantum harmonic oscillator discovered by Hip Groenewold in 1946 in his PhD thesis. It is the standard paradigm of quantum mechanics in phase space. There are further relations between the two families of polynomials. === Partial overlap integrals === It can be shown that the overlap between two different Hermite functions ( k ≠ ℓ {\displaystyle k\neq \ell } ) over a given interval has the exact result: ∫ x 1 x 2 ψ k ( x ) ψ ℓ ( x ) d x = 1 2 ( ℓ − k ) ( ψ k ′ ( x 2 ) ψ ℓ ( x 2 ) − ψ ℓ ′ ( x 2 ) ψ k ( x 2 ) − ψ k ′ ( x 1 ) ψ ℓ ( x 1 ) + ψ ℓ ′ ( x 1 ) ψ k ( x 1 ) ) . {\displaystyle \int _{x_{1}}^{x_{2}}\psi _{k}(x)\psi _{\ell }(x)\,dx={\frac {1}{2(\ell -k)}}\left(\psi _{k}'(x_{2})\psi _{\ell }(x_{2})-\psi _{\ell }'(x_{2})\psi _{k}(x_{2})-\psi _{k}'(x_{1})\psi _{\ell }(x_{1})+\psi _{\ell }'(x_{1})\psi _{k}(x_{1})\right).} === Combinatorial interpretation of coefficients === In the Hermite polynomial Hen(x) of variance 1, the absolute value of the coefficient of xk is the number of (unordered) partitions of an n-element set into k singletons and (n − k)/2 (unordered) pairs. Equivalently, it is the number of involutions of an n-element set with precisely k fixed points, or in other words, the number of matchings in the complete graph on n vertices that leave k vertices uncovered (indeed, the Hermite polynomials are the matching polynomials of these graphs). The sum of the absolute values of the coefficients gives the total number of partitions into singletons and pairs, the so-called telephone numbers 1, 1, 2, 4, 10, 26, 76, 232, 764, 2620, 9496,... (sequence A000085 in the OEIS). This combinatorial interpretation can be related to complete exponential Bell polynomials as He n ⁡ ( x ) = B n ( x , − 1 , 0 , … , 0 ) , {\displaystyle \operatorname {He} _{n}(x)=B_{n}(x,-1,0,\ldots ,0),} where xi = 0 for all i > 2. These numbers may also be expressed as a special value of the Hermite polynomials: T ( n ) = He n ⁡ ( i ) i n .
{\displaystyle T(n)={\frac {\operatorname {He} _{n}(i)}{i^{n}}}.} === Completeness relation === The Christoffel–Darboux formula for Hermite polynomials reads ∑ k = 0 n H k ( x ) H k ( y ) k ! 2 k = 1 n ! 2 n + 1 H n ( y ) H n + 1 ( x ) − H n ( x ) H n + 1 ( y ) x − y . {\displaystyle \sum _{k=0}^{n}{\frac {H_{k}(x)H_{k}(y)}{k!2^{k}}}={\frac {1}{n!2^{n+1}}}\,{\frac {H_{n}(y)H_{n+1}(x)-H_{n}(x)H_{n+1}(y)}{x-y}}.} Moreover, the following completeness identity for the above Hermite functions holds in the sense of distributions: ∑ n = 0 ∞ ψ n ( x ) ψ n ( y ) = δ ( x − y ) , {\displaystyle \sum _{n=0}^{\infty }\psi _{n}(x)\psi _{n}(y)=\delta (x-y),} where δ is the Dirac delta function, ψn the Hermite functions, and δ(x − y) represents the Lebesgue measure on the line y = x in R2, normalized so that its projection on the horizontal axis is the usual Lebesgue measure. This distributional identity follows Wiener (1958) by taking u → 1 in Mehler's formula, valid when −1 < u < 1: E ( x , y ; u ) := ∑ n = 0 ∞ u n ψ n ( x ) ψ n ( y ) = 1 π ( 1 − u 2 ) exp ⁡ ( − 1 − u 1 + u ( x + y ) 2 4 − 1 + u 1 − u ( x − y ) 2 4 ) , {\displaystyle E(x,y;u):=\sum _{n=0}^{\infty }u^{n}\,\psi _{n}(x)\,\psi _{n}(y)={\frac {1}{\sqrt {\pi (1-u^{2})}}}\,\exp \left(-{\frac {1-u}{1+u}}\,{\frac {(x+y)^{2}}{4}}-{\frac {1+u}{1-u}}\,{\frac {(x-y)^{2}}{4}}\right),} which is often stated equivalently as a separable kernel, ∑ n = 0 ∞ H n ( x ) H n ( y ) n ! ( u 2 ) n = 1 1 − u 2 e 2 u 1 + u x y − u 2 1 − u 2 ( x − y ) 2 . {\displaystyle \sum _{n=0}^{\infty }{\frac {H_{n}(x)H_{n}(y)}{n!}}\left({\frac {u}{2}}\right)^{n}={\frac {1}{\sqrt {1-u^{2}}}}e^{{\frac {2u}{1+u}}xy-{\frac {u^{2}}{1-u^{2}}}(x-y)^{2}}.} The function (x, y) → E(x, y; u) is the bivariate Gaussian probability density on R2, which is, when u is close to 1, very concentrated around the line y = x, and very spread out on that line. It follows that ∑ n = 0 ∞ u n ⟨ f , ψ n ⟩ ⟨ ψ n , g ⟩ = ∬ E ( x , y ; u ) f ( x ) g ( y ) ¯ d x d y → ∫ f ( x ) g ( x ) ¯ d x = ⟨ f , g ⟩ {\displaystyle \sum _{n=0}^{\infty }u^{n}\langle f,\psi _{n}\rangle \langle \psi _{n},g\rangle =\iint E(x,y;u)f(x){\overline {g(y)}}\,dx\,dy\to \int f(x){\overline {g(x)}}\,dx=\langle f,g\rangle } when f and g are continuous and compactly supported. This yields that f can be expressed in Hermite functions as the sum of a series of vectors in L2(R), namely, f = ∑ n = 0 ∞ ⟨ f , ψ n ⟩ ψ n . {\displaystyle f=\sum _{n=0}^{\infty }\langle f,\psi _{n}\rangle \psi _{n}.} In order to prove the above equality for E(x,y;u), the Fourier transform of Gaussian functions is used repeatedly: ρ π e − ρ 2 x 2 4 = ∫ e i s x − s 2 ρ 2 d s for ρ > 0. {\displaystyle \rho {\sqrt {\pi }}e^{-{\frac {\rho ^{2}x^{2}}{4}}}=\int e^{isx-{\frac {s^{2}}{\rho ^{2}}}}\,ds\quad {\text{for }}\rho >0.} The Hermite polynomial is then represented as H n ( x ) = ( − 1 ) n e x 2 d n d x n ( 1 2 π ∫ e i s x − s 2 4 d s ) = ( − 1 ) n e x 2 1 2 π ∫ ( i s ) n e i s x − s 2 4 d s . {\displaystyle H_{n}(x)=(-1)^{n}e^{x^{2}}{\frac {d^{n}}{dx^{n}}}\left({\frac {1}{2{\sqrt {\pi }}}}\int e^{isx-{\frac {s^{2}}{4}}}\,ds\right)=(-1)^{n}e^{x^{2}}{\frac {1}{2{\sqrt {\pi }}}}\int (is)^{n}e^{isx-{\frac {s^{2}}{4}}}\,ds.} With this representation for Hn(x) and Hn(y), it is evident that E ( x , y ; u ) = ∑ n = 0 ∞ u n 2 n n ! π H n ( x ) H n ( y ) e − x 2 + y 2 2 = e x 2 + y 2 2 4 π π ∬ ( ∑ n = 0 ∞ 1 2 n n ! 
( − u s t ) n ) e i s x + i t y − s 2 4 − t 2 4 d s d t = e x 2 + y 2 2 4 π π ∬ e − u s t 2 e i s x + i t y − s 2 4 − t 2 4 d s d t , {\displaystyle {\begin{aligned}E(x,y;u)&=\sum _{n=0}^{\infty }{\frac {u^{n}}{2^{n}n!{\sqrt {\pi }}}}\,H_{n}(x)H_{n}(y)e^{-{\frac {x^{2}+y^{2}}{2}}}\\&={\frac {e^{\frac {x^{2}+y^{2}}{2}}}{4\pi {\sqrt {\pi }}}}\iint \left(\sum _{n=0}^{\infty }{\frac {1}{2^{n}n!}}(-ust)^{n}\right)e^{isx+ity-{\frac {s^{2}}{4}}-{\frac {t^{2}}{4}}}\,ds\,dt\\&={\frac {e^{\frac {x^{2}+y^{2}}{2}}}{4\pi {\sqrt {\pi }}}}\iint e^{-{\frac {ust}{2}}}\,e^{isx+ity-{\frac {s^{2}}{4}}-{\frac {t^{2}}{4}}}\,ds\,dt,\end{aligned}}} and this yields the desired resolution of the identity result, using again the Fourier transform of Gaussian kernels under the substitution s = σ + τ 2 , t = σ − τ 2 . {\displaystyle s={\frac {\sigma +\tau }{\sqrt {2}}},\quad t={\frac {\sigma -\tau }{\sqrt {2}}}.} == External links == Media related to Hermite polynomials at Wikimedia Commons Weisstein, Eric W. "Hermite Polynomial". MathWorld. GNU Scientific Library, which includes a C implementation of Hermite polynomials, functions, their derivatives and zeros
Wikipedia/Hermite_function
A multiplication algorithm is an algorithm (or method) to multiply two numbers. Depending on the size of the numbers, different algorithms are more efficient than others. Numerous algorithms are known and there has been much research into the topic. The oldest and simplest method, known since antiquity as long multiplication or grade-school multiplication, consists of multiplying every digit in the first number by every digit in the second and adding the results. This has a time complexity of O ( n 2 ) {\displaystyle O(n^{2})} , where n is the number of digits. When done by hand, this may also be reframed as grid method multiplication or lattice multiplication. In software, this may be called "shift and add" due to bitshifts and addition being the only two operations needed. In 1960, Anatoly Karatsuba discovered Karatsuba multiplication, unleashing a flood of research into fast multiplication algorithms. This method uses three multiplications rather than four to multiply two two-digit numbers. (A variant of this can also be used to multiply complex numbers quickly.) Done recursively, this has a time complexity of O ( n log 2 ⁡ 3 ) {\displaystyle O(n^{\log _{2}3})} . Splitting numbers into more than two parts results in Toom-Cook multiplication; for example, using three parts results in the Toom-3 algorithm. Using many parts can set the exponent arbitrarily close to 1, but the constant factor also grows, making it impractical. In 1968, the Schönhage-Strassen algorithm, which makes use of a Fourier transform over a modulus, was discovered. It has a time complexity of O ( n log ⁡ n log ⁡ log ⁡ n ) {\displaystyle O(n\log n\log \log n)} . In 2007, Martin Fürer proposed an algorithm with complexity O ( n log ⁡ n 2 Θ ( log ∗ ⁡ n ) ) {\displaystyle O(n\log n2^{\Theta (\log ^{*}n)})} . In 2014, Harvey, Joris van der Hoeven, and Lecerf proposed one with complexity O ( n log ⁡ n 2 3 log ∗ ⁡ n ) {\displaystyle O(n\log n2^{3\log ^{*}n})} , thus making the implicit constant explicit; this was improved to O ( n log ⁡ n 2 2 log ∗ ⁡ n ) {\displaystyle O(n\log n2^{2\log ^{*}n})} in 2018. Lastly, in 2019, Harvey and van der Hoeven came up with a galactic algorithm with complexity O ( n log ⁡ n ) {\displaystyle O(n\log n)} . This matches a guess by Schönhage and Strassen that this would be the optimal bound, although this remains a conjecture today. Integer multiplication algorithms can also be used to multiply polynomials by means of the method of Kronecker substitution. == Long multiplication == If a positional numeral system is used, a natural way of multiplying numbers is taught in schools as long multiplication, sometimes called grade-school multiplication, sometimes called the Standard Algorithm: multiply the multiplicand by each digit of the multiplier and then add up all the properly shifted results. It requires memorization of the multiplication table for single digits. This is the usual algorithm for multiplying larger numbers by hand in base 10. A person doing long multiplication on paper will write down all the products and then add them together; an abacus-user will sum the products as soon as each one is computed. === Example === This example uses long multiplication to multiply 23,958,233 (multiplicand) by 5,830 (multiplier) and arrives at 139,676,498,390 for the result (product). 
   23958233
 ×     5830
———————————————
    00000000 ( = 23,958,233 ×     0)
   71874699  ( = 23,958,233 ×    30)
 191665864   ( = 23,958,233 ×   800)
+119791165   ( = 23,958,233 × 5,000)
———————————————
139676498390 ( = 139,676,498,390)

==== Other notations ==== In some countries such as Germany, the above multiplication is depicted similarly but with the original product kept horizontal and computation starting with the first digit of the multiplier:

23958233 · 5830
———————————————
119791165
 191665864
   71874699
    00000000
———————————————
139676498390

Pseudocode for this process, sketched below, keeps only one row to maintain the running sum, which finally becomes the result. The '+=' operator is used to denote adding to an existing value and storing the result (akin to languages such as Java and C) for compactness.
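A minimal runnable rendering of that single-row process, written here as Python with digits stored least significant first (the helper names are illustrative, not part of any standard library):

def long_multiply(a, b):
    # a, b: digit lists, least significant digit first.
    # One result row accumulates everything; '+=' adds into the row in place.
    row = [0] * (len(a) + len(b))
    for i, da in enumerate(a):
        carry = 0
        for j, db in enumerate(b):
            row[i + j] += da * db + carry
            carry = row[i + j] // 10
            row[i + j] %= 10
        row[i + len(b)] += carry
    return row

def to_digits(n):
    return [int(d) for d in str(n)[::-1]]

def to_int(row):
    return int(''.join(str(d) for d in reversed(row)))

assert to_int(long_multiply(to_digits(23958233), to_digits(5830))) == 139676498390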
=== Usage in computers === Some chips implement long multiplication, in hardware or in microcode, for various integer and floating-point word sizes. In arbitrary-precision arithmetic, it is common to use long multiplication with the base set to 2^w, where w is the number of bits in a word, for multiplying relatively small numbers. To multiply two numbers with n digits using this method, one needs about n^2 operations. More formally, multiplying two n-digit numbers using long multiplication requires Θ(n^2) single-digit operations (additions and multiplications). When implemented in software, long multiplication algorithms must deal with overflow during additions, which can be expensive. A typical solution is to represent the number in a small base, b, such that, for example, 8b is a representable machine integer. Several additions can then be performed before an overflow occurs. When the number becomes too large, we add part of it to the result, or we carry and map the remaining part back to a number that is less than b. This process is called normalization. Richard Brent used this approach in his Fortran package, MP. Computers initially used a very similar algorithm to long multiplication in base 2, but modern processors have optimized circuitry for fast multiplications using more efficient algorithms, at the price of a more complex hardware realization. In base two, long multiplication is sometimes called "shift and add", because the algorithm simplifies and just consists of shifting left (multiplying by powers of two) and adding. Most currently available microprocessors implement this or other similar algorithms (such as Booth encoding) for various integer and floating-point sizes in hardware multipliers or in microcode. On currently available processors, a bit-wise shift instruction is usually (but not always) faster than a multiply instruction and can be used to multiply (shift left) and divide (shift right) by powers of two. Multiplication by a constant and division by a constant can be implemented using a sequence of shifts and adds or subtracts. For example, there are several ways to multiply by 10 using only bit-shift and addition. In some cases such sequences of shifts and adds or subtracts will outperform hardware multipliers and especially dividers. A division by a number of the form 2 n {\displaystyle 2^{n}} or 2 n ± 1 {\displaystyle 2^{n}\pm 1} often can be converted to such a short sequence. == Algorithms for multiplying by hand == In addition to the standard long multiplication, there are several other methods used to perform multiplication by hand. Such algorithms may be devised for speed, ease of calculation, or educational value, particularly when computers or multiplication tables are unavailable. === Grid method === The grid method (or box method) is an introductory method for multiple-digit multiplication that is often taught to pupils at primary school or elementary school. It has been a standard part of the national primary school mathematics curriculum in England and Wales since the late 1990s. Both factors are broken up ("partitioned") into their hundreds, tens and units parts, and the products of the parts are then calculated explicitly in a relatively simple multiplication-only stage, before these contributions are then totalled to give the final answer in a separate addition stage. The calculation 34 × 13, for example, could be computed using the grid:

  ×    30    4
 10   300   40
  3    90   12

followed by addition to obtain 442, either in a single sum, or through forming the row-by-row totals (300 + 40) + (90 + 12) = 340 + 102 = 442. This calculation approach (though not necessarily with the explicit grid arrangement) is also known as the partial products algorithm. Its essence is the calculation of the simple multiplications separately, with all addition being left to the final gathering-up stage. The grid method can in principle be applied to factors of any size, although the number of sub-products becomes cumbersome as the number of digits increases. Nevertheless, it is seen as a usefully explicit method to introduce the idea of multiple-digit multiplications; and, in an age when most multiplication calculations are done using a calculator or a spreadsheet, it may in practice be the only multiplication algorithm that some students will ever need. === Lattice multiplication === Lattice, or sieve, multiplication is algorithmically equivalent to long multiplication. It requires the preparation of a lattice (a grid drawn on paper) which guides the calculation and separates all the multiplications from the additions. It was introduced to Europe in 1202 in Fibonacci's Liber Abaci. Fibonacci described the operation as mental, using his right and left hands to carry the intermediate calculations. Matrakçı Nasuh presented 6 different variants of this method in his 16th-century book, Umdet-ul Hisab. It was widely used in Enderun schools across the Ottoman Empire. Napier's bones, or Napier's rods, also used this method, as published by Napier in 1617, the year of his death. As shown in the example, the multiplicand and multiplier are written above and to the right of a lattice, or a sieve. The method is found in Muhammad ibn Musa al-Khwarizmi's "Arithmetic", one of Leonardo's sources mentioned by Sigler, author of "Fibonacci's Liber Abaci", 2002. During the multiplication phase, the lattice is filled in with two-digit products of the corresponding digits labeling each row and column: the tens digit goes in the top-left corner. During the addition phase, the lattice is summed on the diagonals. Finally, if a carry phase is necessary, the answer as shown along the left and bottom sides of the lattice is converted to normal form by carrying ten's digits as in long addition or multiplication. ==== Example ==== As an example, consider the calculation of 345 × 12 using lattice multiplication. As a more complicated example, consider the computation of 23,958,233 multiplied by 5,830 (multiplier); the result is 139,676,498,390. Notice 23,958,233 is along the top of the lattice and 5,830 is along the right side.
The products fill the lattice, and the sums of those products (taken along the diagonals) are along the left and bottom sides. Then those sums are totaled as shown. === Russian peasant multiplication === The binary method is also known as peasant multiplication, because it has been widely used by people who are classified as peasants and thus have not memorized the multiplication tables required for long multiplication. The algorithm was in use in ancient Egypt. Its main advantages are that it can be taught quickly, requires no memorization, and can be performed using tokens, such as poker chips, if paper and pencil aren't available. The disadvantage is that it takes more steps than long multiplication, so it can be unwieldy for large numbers. ==== Description ==== On paper, write down in one column the numbers you get when you repeatedly halve the multiplier, ignoring the remainder; in a column beside it repeatedly double the multiplicand. Cross out each row in which the last digit of the first number is even, and add the remaining numbers in the second column to obtain the product. ==== Examples ==== This example uses peasant multiplication to multiply 11 by 3 to arrive at a result of 33 (the row 2 / 12 is crossed out, since 2 is even):

Decimal:        Binary:
11      3       1011        11
 5      6        101       110
 2     12         10      1100
 1     24          1     11000
   ——————             ———————
       33              100001

Describing the steps explicitly:
11 and 3 are written at the top.
11 is halved (5.5) and 3 is doubled (6). The fractional portion is discarded (5.5 becomes 5).
5 is halved (2.5) and 6 is doubled (12). The fractional portion is discarded (2.5 becomes 2). The figure in the left column (2) is even, so the figure in the right column (12) is discarded.
2 is halved (1) and 12 is doubled (24).
All not-scratched-out values are summed: 3 + 6 + 24 = 33.

The method works because multiplication is distributive, so: 3 × 11 = 3 × ( 1 × 2 0 + 1 × 2 1 + 0 × 2 2 + 1 × 2 3 ) = 3 × ( 1 + 2 + 8 ) = 3 + 6 + 24 = 33. {\displaystyle {\begin{aligned}3\times 11&=3\times (1\times 2^{0}+1\times 2^{1}+0\times 2^{2}+1\times 2^{3})\\&=3\times (1+2+8)\\&=3+6+24\\&=33.\end{aligned}}} A more complicated example, using the figures from the earlier examples (23,958,233 and 5,830); rows whose left-hand figure is even are crossed out and do not contribute to the sum:

Decimal:                     Binary:
5830    23958233             1011011000110    1011011011001001011011001
2915    47916466             101101100011     10110110110010010110110010
1457    95832932             10110110001      101101101100100101101100100
 728    191665864            1011011000       1011011011001001011011001000
 364    383331728            101101100        10110110110010010110110010000
 182    766663456            10110110         101101101100100101101100100000
  91    1533326912           1011011          1011011011001001011011001000000
  45    3066653824           101101           10110110110010010110110010000000
  22    6133307648           10110            101101101100100101101100100000000
  11    12266615296          1011             1011011011001001011011001000000000
   5    24533230592          101              10110110110010010110110010000000000
   2    49066461184          10               101101101100100101101100100000000000
   1    98132922368          1                1011011011001001011011001000000000000
        ————————————                          —————————————————————————————————————
        139676498390         (before carry)   1022143253354344244353353243222210110
                             (after carry)    10000010000101010111100011100111010110
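In code the procedure is just a walk over the binary expansion of the multiplier. A minimal Python sketch:

def peasant_multiply(a, b):
    # Halve a (dropping remainders) and double b; sum the b-values
    # on rows where a is odd. Equivalent to reading off a's binary digits.
    total = 0
    while a > 0:
        if a % 2 == 1:   # row is kept when the halving column is odd
            total += b
        a //= 2          # halve, discarding the fractional part
        b *= 2           # double
    return total

assert peasant_multiply(11, 3) == 33
assert peasant_multiply(5830, 23958233) == 139676498390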
=== Quarter square multiplication === This formula can in some cases be used to make multiplication tasks easier to complete: ( x + y ) 2 4 − ( x − y ) 2 4 = 1 4 ( ( x 2 + 2 x y + y 2 ) − ( x 2 − 2 x y + y 2 ) ) = 1 4 ( 4 x y ) = x y . {\displaystyle {\frac {\left(x+y\right)^{2}}{4}}-{\frac {\left(x-y\right)^{2}}{4}}={\frac {1}{4}}\left(\left(x^{2}+2xy+y^{2}\right)-\left(x^{2}-2xy+y^{2}\right)\right)={\frac {1}{4}}\left(4xy\right)=xy.} In the case where x {\displaystyle x} and y {\displaystyle y} are integers, we have that ( x + y ) 2 ≡ ( x − y ) 2 mod 4 {\displaystyle (x+y)^{2}\equiv (x-y)^{2}{\bmod {4}}} because x + y {\displaystyle x+y} and x − y {\displaystyle x-y} are either both even or both odd. This means that x y = 1 4 ( x + y ) 2 − 1 4 ( x − y ) 2 = ( ( x + y ) 2 div 4 ) − ( ( x − y ) 2 div 4 ) {\displaystyle {\begin{aligned}xy&={\frac {1}{4}}(x+y)^{2}-{\frac {1}{4}}(x-y)^{2}\\&=\left((x+y)^{2}{\text{ div }}4\right)-\left((x-y)^{2}{\text{ div }}4\right)\end{aligned}}} and it's sufficient to (pre-)compute the integral part of squares divided by 4 like in the following example. ==== Examples ==== Below is a lookup table of quarter squares with the remainder discarded for the digits 0 through 18; this allows for the multiplication of numbers up to 9×9.

n        0  1  2  3  4  5  6   7   8   9  10  11  12  13  14  15  16  17  18
⌊n²/4⌋   0  0  1  2  4  6  9  12  16  20  25  30  36  42  49  56  64  72  81

If, for example, you wanted to multiply 9 by 3, you observe that the sum and difference are 12 and 6 respectively. Looking both those values up on the table yields 36 and 9, the difference of which is 27, which is the product of 9 and 3. ==== History of quarter square multiplication ==== Some sources attribute quarter square multiplication, using the floor function, to Babylonian mathematics (2000–1600 BC). Antoine Voisin published a table of quarter squares from 1 to 1000 in 1817 as an aid in multiplication. A larger table of quarter squares from 1 to 100000 was published by Samuel Laundy in 1856, and a table from 1 to 200000 by Joseph Blater in 1888. Quarter square multipliers were used in analog computers to form an analog signal that was the product of two analog input signals. In this application, the sum and difference of two input voltages are formed using operational amplifiers. The square of each of these is approximated using piecewise linear circuits. Finally the difference of the two squares is formed and scaled by a factor of one fourth using yet another operational amplifier. In 1980, Everett L. Johnson proposed using the quarter square method in a digital multiplier. To form the product of two 8-bit integers, for example, the digital device forms the sum and difference, looks both quantities up in a table of squares, takes the difference of the results, and divides by four by shifting two bits to the right. For 8-bit integers the table of quarter squares will have 2⁹ − 1 = 511 entries (one entry for the full range 0..510 of possible sums, the differences using only the first 256 entries in range 0..255), or 2⁹ − 1 = 511 entries (using for negative differences the technique of two's complement and 9-bit masking, which avoids testing the sign of differences), each entry being 16 bits wide (the entry values are from (0²/4) = 0 to (510²/4) = 65025). The quarter square multiplier technique has benefited 8-bit systems that do not have any support for a hardware multiplier. Charles Putney implemented this for the 6502.
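A minimal Python sketch of the table-driven technique described above (the names and the 19-entry table size match the 9×9 example, not Johnson's 511-entry 8-bit design):

QUARTER_SQUARES = [n * n // 4 for n in range(19)]   # floor(n^2/4), n = 0..18

def quarter_square_multiply(x: int, y: int) -> int:
    """xy = floor((x+y)^2/4) - floor((x-y)^2/4) for single-digit x, y."""
    return QUARTER_SQUARES[x + y] - QUARTER_SQUARES[abs(x - y)]

assert quarter_square_multiply(9, 3) == 27
assert all(quarter_square_multiply(x, y) == x * y
           for x in range(10) for y in range(10))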
== Computational complexity of multiplication == A line of research in theoretical computer science is about the number of single-bit arithmetic operations necessary to multiply two n {\displaystyle n} -bit integers. This is known as the computational complexity of multiplication. Usual algorithms done by hand have asymptotic complexity of O ( n 2 ) {\displaystyle O(n^{2})} , but in 1960 Anatoly Karatsuba discovered that better complexity was possible (with the Karatsuba algorithm). Currently, the algorithm with the best computational complexity is a 2019 algorithm of David Harvey and Joris van der Hoeven, which uses number-theoretic transforms (introduced with the Schönhage–Strassen algorithm) to multiply integers using only O ( n log ⁡ n ) {\displaystyle O(n\log n)} operations. This is conjectured to be the best possible algorithm, but a matching lower bound of Ω ( n log ⁡ n ) {\displaystyle \Omega (n\log n)} is not known. === Karatsuba multiplication === Karatsuba multiplication is an O ( n log 2 ⁡ 3 ) ≈ O ( n 1.585 ) {\displaystyle O(n^{\log _{2}3})\approx O(n^{1.585})} divide-and-conquer algorithm that recursively splits a multiplication into smaller sub-multiplications and merges the results. Let x {\displaystyle x} and y {\displaystyle y} be represented as n {\displaystyle n} -digit strings in some base B {\displaystyle B} . For any positive integer m {\displaystyle m} less than n {\displaystyle n} , one can write the two given numbers as x = x 1 B m + x 0 , {\displaystyle x=x_{1}B^{m}+x_{0},} y = y 1 B m + y 0 , {\displaystyle y=y_{1}B^{m}+y_{0},} where x 0 {\displaystyle x_{0}} and y 0 {\displaystyle y_{0}} are less than B m {\displaystyle B^{m}} . The product is then x y = ( x 1 B m + x 0 ) ( y 1 B m + y 0 ) = x 1 y 1 B 2 m + ( x 1 y 0 + x 0 y 1 ) B m + x 0 y 0 = z 2 B 2 m + z 1 B m + z 0 , {\displaystyle {\begin{aligned}xy&=(x_{1}B^{m}+x_{0})(y_{1}B^{m}+y_{0})\\&=x_{1}y_{1}B^{2m}+(x_{1}y_{0}+x_{0}y_{1})B^{m}+x_{0}y_{0}\\&=z_{2}B^{2m}+z_{1}B^{m}+z_{0},\\\end{aligned}}} where z 2 = x 1 y 1 , {\displaystyle z_{2}=x_{1}y_{1},} z 1 = x 1 y 0 + x 0 y 1 , {\displaystyle z_{1}=x_{1}y_{0}+x_{0}y_{1},} z 0 = x 0 y 0 . {\displaystyle z_{0}=x_{0}y_{0}.} These formulae require four multiplications and were known to Charles Babbage. Karatsuba observed that x y {\displaystyle xy} can be computed in only three multiplications, at the cost of a few extra additions. With z 0 {\displaystyle z_{0}} and z 2 {\displaystyle z_{2}} as before one can observe that z 1 = x 1 y 0 + x 0 y 1 = x 1 y 0 + x 0 y 1 + x 1 y 1 − x 1 y 1 + x 0 y 0 − x 0 y 0 = x 1 y 0 + x 0 y 0 + x 0 y 1 + x 1 y 1 − x 1 y 1 − x 0 y 0 = ( x 1 + x 0 ) y 0 + ( x 0 + x 1 ) y 1 − x 1 y 1 − x 0 y 0 = ( x 1 + x 0 ) ( y 0 + y 1 ) − x 1 y 1 − x 0 y 0 = ( x 1 + x 0 ) ( y 1 + y 0 ) − z 2 − z 0 . {\displaystyle {\begin{aligned}z_{1}&=x_{1}y_{0}+x_{0}y_{1}\\&=x_{1}y_{0}+x_{0}y_{1}+x_{1}y_{1}-x_{1}y_{1}+x_{0}y_{0}-x_{0}y_{0}\\&=x_{1}y_{0}+x_{0}y_{0}+x_{0}y_{1}+x_{1}y_{1}-x_{1}y_{1}-x_{0}y_{0}\\&=(x_{1}+x_{0})y_{0}+(x_{0}+x_{1})y_{1}-x_{1}y_{1}-x_{0}y_{0}\\&=(x_{1}+x_{0})(y_{0}+y_{1})-x_{1}y_{1}-x_{0}y_{0}\\&=(x_{1}+x_{0})(y_{1}+y_{0})-z_{2}-z_{0}.\\\end{aligned}}} Because of the overhead of recursion, Karatsuba's multiplication is slower than long multiplication for small values of n; typical implementations therefore switch to long multiplication for small values of n.
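A minimal recursive Python sketch of Karatsuba's three-multiplication scheme (the base-2 splitting and the cutoff value are our own illustrative choices):

def karatsuba(x: int, y: int) -> int:
    """Karatsuba multiplication of non-negative integers."""
    if x < 2**32 or y < 2**32:
        return x * y                        # small case: fall back
    m = min(x.bit_length(), y.bit_length()) // 2
    mask = (1 << m) - 1
    x1, x0 = x >> m, x & mask               # x = x1*2^m + x0
    y1, y0 = y >> m, y & mask
    z2 = karatsuba(x1, y1)
    z0 = karatsuba(x0, y0)
    z1 = karatsuba(x1 + x0, y1 + y0) - z2 - z0   # the third multiply
    return (z2 << (2 * m)) + (z1 << m) + z0

a, b = 3**200, 7**150
assert karatsuba(a, b) == a * b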
==== General case with multiplication of N numbers ==== By exploring patterns after expansion, one sees the following: ( x 1 B m + x 0 ) ( y 1 B m + y 0 ) ( z 1 B m + z 0 ) ( a 1 B m + a 0 ) = a 1 x 1 y 1 z 1 B 4 m + a 1 x 1 y 1 z 0 B 3 m + a 1 x 1 y 0 z 1 B 3 m + a 1 x 0 y 1 z 1 B 3 m + a 0 x 1 y 1 z 1 B 3 m + a 1 x 1 y 0 z 0 B 2 m + a 1 x 0 y 1 z 0 B 2 m + a 0 x 1 y 1 z 0 B 2 m + a 1 x 0 y 0 z 1 B 2 m + a 0 x 1 y 0 z 1 B 2 m + a 0 x 0 y 1 z 1 B 2 m + a 1 x 0 y 0 z 0 B m + a 0 x 1 y 0 z 0 B m + a 0 x 0 y 1 z 0 B m + a 0 x 0 y 0 z 1 B m + a 0 x 0 y 0 z 0 {\displaystyle {\begin{alignedat}{5}(x_{1}B^{m}+x_{0})(y_{1}B^{m}+y_{0})(z_{1}B^{m}+z_{0})(a_{1}B^{m}+a_{0})&=a_{1}x_{1}y_{1}z_{1}B^{4m}&+a_{1}x_{1}y_{1}z_{0}B^{3m}&+a_{1}x_{1}y_{0}z_{1}B^{3m}&+a_{1}x_{0}y_{1}z_{1}B^{3m}\\&+a_{0}x_{1}y_{1}z_{1}B^{3m}&+a_{1}x_{1}y_{0}z_{0}B^{2m}&+a_{1}x_{0}y_{1}z_{0}B^{2m}&+a_{0}x_{1}y_{1}z_{0}B^{2m}\\&+a_{1}x_{0}y_{0}z_{1}B^{2m}&+a_{0}x_{1}y_{0}z_{1}B^{2m}&+a_{0}x_{0}y_{1}z_{1}B^{2m}&+a_{1}x_{0}y_{0}z_{0}B^{m{\phantom {1}}}\\&+a_{0}x_{1}y_{0}z_{0}B^{m{\phantom {1}}}&+a_{0}x_{0}y_{1}z_{0}B^{m{\phantom {1}}}&+a_{0}x_{0}y_{0}z_{1}B^{m{\phantom {1}}}&+a_{0}x_{0}y_{0}z_{0}{\phantom {B^{1m}}}\end{alignedat}}} Each summand is associated to a unique binary number from 0 to 2 N − 1 {\displaystyle 2^{N}-1} , for example a 1 x 1 y 1 z 1 ⟷ 1111 , a 1 x 0 y 1 z 0 ⟷ 1010 {\displaystyle a_{1}x_{1}y_{1}z_{1}\longleftrightarrow 1111,\ a_{1}x_{0}y_{1}z_{0}\longleftrightarrow 1010} etc. Furthermore, B is raised to the power equal to the number of 1s in this binary string, multiplied by m. If we express this in fewer terms, we get: ∏ j = 1 N ( x j , 1 B m + x j , 0 ) = ∑ i = 0 2 N − 1 ∏ j = 1 N x j , c ( i , j ) B m ∑ j = 1 N c ( i , j ) = ∑ j = 0 N z j B j m {\displaystyle \prod _{j=1}^{N}(x_{j,1}B^{m}+x_{j,0})=\sum _{i=0}^{2^{N}-1}\prod _{j=1}^{N}x_{j,c(i,j)}B^{m\sum _{j=1}^{N}c(i,j)}=\sum _{j=0}^{N}z_{j}B^{jm}} , where c ( i , j ) {\displaystyle c(i,j)} denotes the binary digit of the number i at position j. Notice that c ( i , j ) ∈ { 0 , 1 } {\displaystyle c(i,j)\in \{0,1\}} z 0 = ∏ j = 1 N x j , 0 z N = ∏ j = 1 N x j , 1 z N − 1 = ∏ j = 1 N ( x j , 0 + x j , 1 ) − ∑ i ≠ N − 1 N z i {\displaystyle {\begin{aligned}z_{0}&=\prod _{j=1}^{N}x_{j,0}\\z_{N}&=\prod _{j=1}^{N}x_{j,1}\\z_{N-1}&=\prod _{j=1}^{N}(x_{j,0}+x_{j,1})-\sum _{i\neq N-1}^{N}z_{i}\end{aligned}}} ==== History ==== Karatsuba's algorithm was the first known algorithm for multiplication that is asymptotically faster than long multiplication, and can thus be viewed as the starting point for the theory of fast multiplications. === Toom–Cook === Another method of multiplication is called Toom–Cook or Toom-3. The Toom–Cook method splits each number to be multiplied into multiple parts. The Toom–Cook method is one of the generalizations of the Karatsuba method. A three-way Toom–Cook can do a size-3N multiplication for the cost of five size-N multiplications. This accelerates the operation by a factor of 9/5, while the Karatsuba method accelerates it by 4/3. Although using more and more parts can reduce the time spent on recursive multiplications further, the overhead from additions and digit management also grows. For this reason, the method of Fourier transforms is typically faster for numbers with several thousand digits, and asymptotically faster for even larger numbers.
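A hedged Python sketch of three-way Toom–Cook (Toom-3) follows. The evaluation points 0, 1, −1, −2 and ∞ and the interpolation steps below are one standard choice among several (all the interior divisions are exact), and the cutoff and base-2 splitting are illustrative:

def toom3(x: int, y: int) -> int:
    """Three-way Toom-Cook: five recursive multiplies per level."""
    sign = 1
    if x < 0: sign, x = -sign, -x
    if y < 0: sign, y = -sign, -y
    if x < 2**64 or y < 2**64:
        return sign * x * y
    m = (max(x.bit_length(), y.bit_length()) + 2) // 3
    mask = (1 << m) - 1
    x0, x1, x2 = x & mask, (x >> m) & mask, x >> (2 * m)
    y0, y1, y2 = y & mask, (y >> m) & mask, y >> (2 * m)
    v0   = toom3(x0, y0)                                # p(0) * q(0)
    v1   = toom3(x0 + x1 + x2, y0 + y1 + y2)            # p(1) * q(1)
    vm1  = toom3(x0 - x1 + x2, y0 - y1 + y2)            # p(-1) * q(-1)
    vm2  = toom3(x0 - 2*x1 + 4*x2, y0 - 2*y1 + 4*y2)    # p(-2) * q(-2)
    vinf = toom3(x2, y2)                                # leading coefficients
    c0, c4 = v0, vinf
    c2 = (v1 + vm1) // 2 - v0 - vinf
    odd = (v1 - vm1) // 2                               # equals c1 + c3
    c3 = ((v0 + 4*c2 + 16*vinf - vm2) // 2 - odd) // 3
    c1 = odd - c3
    return sign * (c0 + (c1 << m) + (c2 << (2*m))
                   + (c3 << (3*m)) + (c4 << (4*m)))

a, b = 3**500 + 1, 7**400 - 1
assert toom3(a, b) == a * b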
=== Schönhage–Strassen === Every number in base B can be written as a polynomial: X = ∑ i = 0 N x i B i {\displaystyle X=\sum _{i=0}^{N}{x_{i}B^{i}}} Furthermore, multiplication of two numbers can be thought of as a product of two polynomials: X Y = ( ∑ i = 0 N x i B i ) ( ∑ j = 0 N y j B j ) {\displaystyle XY=(\sum _{i=0}^{N}{x_{i}B^{i}})(\sum _{j=0}^{N}{y_{j}B^{j}})} Because, for B k {\displaystyle B^{k}} : c k = ∑ ( i , j ) : i + j = k a i b j = ∑ i = 0 k a i b k − i {\displaystyle c_{k}=\sum _{(i,j):i+j=k}{a_{i}b_{j}}=\sum _{i=0}^{k}{a_{i}b_{k-i}}} , we have a convolution. By using the FFT (fast Fourier transform) with the convolution rule, we can get f ^ ( a ∗ b ) = f ^ ( ∑ i = 0 k a i b k − i ) = f ^ ( a ) ∙ f ^ ( b ) {\displaystyle {\hat {f}}(a*b)={\hat {f}}(\sum _{i=0}^{k}{a_{i}b_{k-i}})={\hat {f}}(a)\bullet {\hat {f}}(b)} . That is, C k = a k ∙ b k {\displaystyle C_{k}=a_{k}\bullet b_{k}} , where C k {\displaystyle C_{k}} is the corresponding coefficient in Fourier space. This can also be written as: f f t ( a ∗ b ) = f f t ( a ) ∙ f f t ( b ) {\displaystyle \mathrm {fft} (a*b)=\mathrm {fft} (a)\bullet \mathrm {fft} (b)} . We have the same coefficients due to linearity under the Fourier transform, and because these polynomials only consist of one unique term per coefficient: f ^ ( x n ) = ( i 2 π ) n δ ( n ) {\displaystyle {\hat {f}}(x^{n})=\left({\frac {i}{2\pi }}\right)^{n}\delta ^{(n)}} and f ^ ( a X ( ξ ) + b Y ( ξ ) ) = a X ^ ( ξ ) + b Y ^ ( ξ ) {\displaystyle {\hat {f}}(a\,X(\xi )+b\,Y(\xi ))=a\,{\hat {X}}(\xi )+b\,{\hat {Y}}(\xi )} Convolution rule: f ^ ( X ∗ Y ) = f ^ ( X ) ∙ f ^ ( Y ) {\displaystyle {\hat {f}}(X*Y)=\ {\hat {f}}(X)\bullet {\hat {f}}(Y)} We have reduced our convolution problem to a product problem, through the FFT. By finding the ifft (polynomial interpolation), for each c k {\displaystyle c_{k}} , one gets the desired coefficients. The algorithm uses a divide-and-conquer strategy to divide the problem into subproblems. It has a time complexity of O(n log(n) log(log(n))). ==== History ==== The algorithm was invented by Strassen (1968). It was made practical and theoretical guarantees were provided in 1971 by Schönhage and Strassen, resulting in the Schönhage–Strassen algorithm.
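To make the convolution argument concrete, here is a floating-point FFT sketch in Python (an illustration only: Schönhage–Strassen itself uses exact number-theoretic transforms in a modular ring, and the rounding step below limits this toy version to moderately sized inputs):

import numpy as np

def fft_multiply(x: int, y: int, base: int = 10) -> int:
    a = [int(d) for d in str(x)][::-1]        # little-endian digit polynomials
    b = [int(d) for d in str(y)][::-1]
    n = 1 << (len(a) + len(b)).bit_length()   # pad to a power of two
    fa, fb = np.fft.fft(a, n), np.fft.fft(b, n)
    c = np.rint(np.fft.ifft(fa * fb).real)    # c_k = sum_i a_i * b_{k-i}
    value = 0
    for coeff in reversed(c):                 # evaluate at the base, which
        value = value * base + int(coeff)     # also propagates all carries
    return value

assert fft_multiply(23958233, 5830) == 23958233 * 5830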
=== Further improvements === In 2007 the asymptotic complexity of integer multiplication was improved by the Swiss mathematician Martin Fürer of Pennsylvania State University to O ( n log ⁡ n ⋅ 2 Θ ( log ∗ ⁡ ( n ) ) ) {\textstyle O(n\log n\cdot {2}^{\Theta (\log ^{*}(n))})} using Fourier transforms over complex numbers, where log* denotes the iterated logarithm. Anindya De, Chandan Saha, Piyush Kurur and Ramprasad Saptharishi gave a similar algorithm using modular arithmetic in 2008 achieving the same running time. In context of the above material, what these latter authors have achieved is to find N much less than 2^(3k) + 1, so that Z/NZ has a (2m)th root of unity. This speeds up computation and reduces the time complexity. However, these latter algorithms are only faster than Schönhage–Strassen for impractically large inputs. In 2014, Harvey, Joris van der Hoeven and Lecerf gave a new algorithm that achieves a running time of O ( n log ⁡ n ⋅ 2 3 log ∗ ⁡ n ) {\displaystyle O(n\log n\cdot 2^{3\log ^{*}n})} , making explicit the implied constant in the O ( log ∗ ⁡ n ) {\displaystyle O(\log ^{*}n)} exponent. They also proposed a variant of their algorithm which achieves O ( n log ⁡ n ⋅ 2 2 log ∗ ⁡ n ) {\displaystyle O(n\log n\cdot 2^{2\log ^{*}n})} but whose validity relies on standard conjectures about the distribution of Mersenne primes. In 2016, Covanov and Thomé proposed an integer multiplication algorithm based on a generalization of Fermat primes that conjecturally achieves a complexity bound of O ( n log ⁡ n ⋅ 2 2 log ∗ ⁡ n ) {\displaystyle O(n\log n\cdot 2^{2\log ^{*}n})} . This matches the 2015 conditional result of Harvey, van der Hoeven, and Lecerf but uses a different algorithm and relies on a different conjecture. In 2018, Harvey and van der Hoeven used an approach based on the existence of short lattice vectors guaranteed by Minkowski's theorem to prove an unconditional complexity bound of O ( n log ⁡ n ⋅ 2 2 log ∗ ⁡ n ) {\displaystyle O(n\log n\cdot 2^{2\log ^{*}n})} . In March 2019, David Harvey and Joris van der Hoeven announced their discovery of an O(n log n) multiplication algorithm. It was published in the Annals of Mathematics in 2021. Because Schönhage and Strassen predicted that n log(n) is the "best possible" result, Harvey said: "... our work is expected to be the end of the road for this problem, although we don't know yet how to prove this rigorously." === Lower bounds === There is a trivial lower bound of Ω(n) for multiplying two n-bit numbers on a single processor; no matching algorithm (on conventional machines, that is on Turing-equivalent machines) nor any sharper lower bound is known. Multiplication lies outside of AC0[p] for any prime p, meaning there is no family of constant-depth, polynomial (or even subexponential) size circuits using AND, OR, NOT, and MODp gates that can compute a product. This follows from a constant-depth reduction of MODq to multiplication. Lower bounds for multiplication are also known for some classes of branching programs. == Complex number multiplication == Complex multiplication normally involves four multiplications and two additions. ( a + b i ) ( c + d i ) = ( a c − b d ) + ( b c + a d ) i . {\displaystyle (a+bi)(c+di)=(ac-bd)+(bc+ad)i.} Or × a b i c a c b c i d i a d i − b d {\displaystyle {\begin{array}{c|c|c}\times &a&bi\\\hline c&ac&bci\\\hline di&adi&-bd\end{array}}} As observed by Peter Ungar in 1963, one can reduce the number of multiplications to three, using essentially the same computation as Karatsuba's algorithm. The product (a + bi) · (c + di) can be calculated in the following way.

k1 = c · (a + b)
k2 = a · (d − c)
k3 = b · (c + d)
Real part = k1 − k3
Imaginary part = k1 + k2.

This algorithm uses only three multiplications, rather than four, and five additions or subtractions rather than two. If a multiply is more expensive than three adds or subtracts, as when calculating by hand, then there is a gain in speed. On modern computers a multiply and an add can take about the same time so there may be no speed gain. There is a trade-off in that there may be some loss of precision when using floating point. For fast Fourier transforms (FFTs) (or any linear transformation) the complex multiplies are by constant coefficients c + di (called twiddle factors in FFTs), in which case two of the additions (d−c and c+d) can be precomputed. Hence, only three multiplies and three adds are required. However, trading off a multiplication for an addition in this way may no longer be beneficial with modern floating-point units.
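A direct transcription of the three-multiplication scheme above (the function name is ours; the floating-point precision caveat from the text applies here too):

def complex_multiply_3(a: float, b: float, c: float, d: float) -> complex:
    """(a + bi)(c + di) using three real multiplications."""
    k1 = c * (a + b)
    k2 = a * (d - c)
    k3 = b * (c + d)
    return complex(k1 - k3, k1 + k2)   # (real part, imaginary part)

assert complex_multiply_3(2, 3, 4, 5) == complex(2, 3) * complex(4, 5)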
== Polynomial multiplication == All the above multiplication algorithms can also be expanded to multiply polynomials. Alternatively the Kronecker substitution technique may be used to convert the problem of multiplying polynomials into a single binary multiplication. Long multiplication methods can be generalised to allow the multiplication of algebraic formulae. For example, to multiply 14ac − 3ab + 2 by ac − ab + 1:

       14ac    −3ab    +2
         ac     −ab    +1
 ————————————————————————————
  14a²c²   −3a²bc   +2ac
 −14a²bc   +3a²b²   −2ab
    14ac    −3ab    +2
 ————————————————————————————
  14a²c² −17a²bc +16ac +3a²b² −5ab +2

As a further example of column based multiplication, consider multiplying 23 long tons (t), 12 hundredweight (cwt) and 2 quarters (qtr) by 47. This example uses avoirdupois measures: 1 t = 20 cwt, 1 cwt = 4 qtr.

     t   cwt   qtr
    23    12     2
                47  ×
 —————————————————
   141    94    94
   940   470
    29    23
 —————————————————
  1110   587    94
 —————————————————
  1110     7     2
 =================
Answer: 1110 ton 7 cwt 2 qtr

First multiply the quarters by 47, the result 94 is written into the first workspace. Next, multiply cwt 12*47 = (2 + 10)*47 but don't add up the partial results (94, 470) yet. Likewise multiply 23 by 47 yielding (141, 940). The quarters column is totaled and the result placed in the second workspace (a trivial move in this case). 94 quarters is 23 cwt and 2 qtr, so place the 2 in the answer and put the 23 in the next column left. Now add up the three entries in the cwt column giving 587. This is 29 t 7 cwt, so write the 7 into the answer and the 29 in the column to the left. Now add up the tons column. There is no adjustment to make, so the result is just copied down. The same layout and methods can be used for any traditional measurements and non-decimal currencies such as the old British £sd system. == See also == Binary multiplier Dadda multiplier Division algorithm Horner scheme for evaluating a polynomial Logarithm Matrix multiplication algorithm Mental calculation Number-theoretic transform Prosthaphaeresis Slide rule Trachtenberg system Residue number system § Multiplication for another fast multiplication algorithm, especially efficient when many operations are done in sequence, such as in linear algebra Wallace tree == References == == Further reading == Warren Jr., Henry S. (2013). Hacker's Delight (2 ed.). Addison Wesley - Pearson Education, Inc. ISBN 978-0-321-84268-8. Savard, John J. G. (2018) [2006]. "Advanced Arithmetic Techniques". quadibloc. Archived from the original on 2018-07-03. Retrieved 2018-07-16. Johansson, Kenny (2008). Low Power and Low Complexity Shift-and-Add Based Computations (PDF) (Dissertation thesis). Linköping Studies in Science and Technology (1 ed.). Linköping, Sweden: Department of Electrical Engineering, Linköping University. ISBN 978-91-7393-836-5. ISSN 0345-7524. No. 1201. Archived (PDF) from the original on 2017-08-13. Retrieved 2021-08-23. (x+268 pages) == External links == === Basic arithmetic === The Many Ways of Arithmetic in UCSMP Everyday Mathematics A Powerpoint presentation about ancient mathematics Lattice Multiplication Flash Video === Advanced algorithms === Multiplication Algorithms used by GMP
Wikipedia/Multiplication_algorithms
In time series analysis, Bartlett's method (also known as the method of averaged periodograms) is used for estimating power spectra. It provides a way to reduce the variance of the periodogram in exchange for a reduction of resolution, compared to standard periodograms. A final estimate of the spectrum at a given frequency is obtained by averaging the estimates from the periodograms (at the same frequency) derived from non-overlapping portions of the original series. The method is used in physics, engineering, and applied mathematics. Common applications of Bartlett's method are frequency response measurements and general spectrum analysis. The method is named after M. S. Bartlett, who first proposed it. == Definition and procedure == Bartlett's method consists of the following steps: (1) the original N point data segment is split up into K non-overlapping data segments, each of length M; (2) for each segment, compute the periodogram by computing the discrete Fourier transform (the DFT version which does not divide by M), then computing the squared magnitude of the result and dividing this by M; (3) average the result of the periodograms above for the K data segments. The averaging reduces the variance, compared to the original N point data segment. The end result is an array of power measurements vs. frequency "bin". == Related methods == The Welch method: this is a method that uses a modified version of Bartlett's method in which the portions of the series contributing to each periodogram are allowed to overlap. Periodogram smoothing.
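The procedure is short enough to state in code. A minimal NumPy sketch follows (the function name and test signal are our own; scipy.signal.welch with zero overlap and a rectangular window computes essentially the same estimate):

import numpy as np

def bartlett_psd(x, m):
    k = len(x) // m                                  # K full segments
    segments = np.reshape(x[:k * m], (k, m))         # non-overlapping
    periodograms = np.abs(np.fft.fft(segments, axis=1)) ** 2 / m
    return periodograms.mean(axis=0)                 # power vs. frequency bin

rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 0.1 * np.arange(4096)) + rng.normal(size=4096)
psd = bartlett_psd(x, 256)   # K = 16 averages, roughly 16x variance reduction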
== References == == Further reading == Proakis, John G.; Manolakis, Dimitri G. (1996), Digital Signal Processing: Principles, Algorithms and Applications (3 ed.), Upper Saddle River, NJ: Prentice-Hall, pp. 910–911, ISBN 9780133942897
Wikipedia/Bartlett_method
In mathematics, especially in the fields of group theory and representation theory of groups, a class function is a function on a group G that is constant on the conjugacy classes of G. In other words, it is invariant under the conjugation map on G. Such functions play a basic role in representation theory. == Characters == The character of a linear representation of G over a field K is always a class function with values in K. The class functions form the center of the group ring K[G]. Here a class function f is identified with the element ∑ g ∈ G f ( g ) g {\displaystyle \sum _{g\in G}f(g)g} . == Inner products == The set of class functions of a group G with values in a field K forms a K-vector space. If G is finite and the characteristic of the field does not divide the order of G, then there is an inner product on this space defined by ⟨ ϕ , ψ ⟩ = 1 | G | ∑ g ∈ G ϕ ( g ) ψ ( g ) ¯ , {\displaystyle \langle \phi ,\psi \rangle ={\frac {1}{|G|}}\sum _{g\in G}\phi (g){\overline {\psi (g)}},} where |G| denotes the order of G and the overbar denotes conjugation in the field K. The set of irreducible characters of G forms an orthogonal basis. Further, if K is a splitting field for G (for instance, if K is algebraically closed), then the irreducible characters form an orthonormal basis. When G is a compact group and K = C is the field of complex numbers, the Haar measure can be applied to replace the finite sum above with an integral: ⟨ ϕ , ψ ⟩ = ∫ G ϕ ( t ) ψ ( t ) ¯ d t . {\displaystyle \langle \phi ,\psi \rangle =\int _{G}\phi (t){\overline {\psi (t)}}\,dt.} When K is the real numbers or the complex numbers, the inner product is a non-degenerate Hermitian bilinear form.
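As a concrete illustration of these definitions, here is a small self-contained Python sketch for G = S3 (all names are ours; the character used counts fixed points of the natural permutation representation, whose values are real, so no conjugation is needed in the inner product):

from itertools import permutations

G = list(permutations(range(3)))                     # S3 as tuples

def compose(p, q):                                   # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    q = [0] * 3
    for i in range(3):
        q[p[i]] = i
    return tuple(q)

def is_class_function(f):                            # constant on conjugacy classes
    return all(f(compose(compose(h, g), inverse(h))) == f(g)
               for g in G for h in G)

chi = lambda g: sum(g[i] == i for i in range(3))     # permutation character
assert is_class_function(chi)

inner = lambda phi, psi: sum(phi(g) * psi(g) for g in G) / len(G)
assert inner(chi, lambda g: 1) == 1.0   # chi = trivial + standard, so <chi, 1> = 1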
== See also == Brauer's theorem on induced characters == References == Jean-Pierre Serre, Linear representations of finite groups, Graduate Texts in Mathematics 42, Springer-Verlag, Berlin, 1977.
Wikipedia/Class_function
Welch's method, named after Peter D. Welch, is an approach for spectral density estimation. It is used in physics, engineering, and applied mathematics for estimating the power of a signal at different frequencies. The method is based on the concept of using periodogram spectrum estimates, which are the result of converting a signal from the time domain to the frequency domain. Welch's method is an improvement on the standard periodogram spectrum estimating method and on Bartlett's method, in that it reduces noise in the estimated power spectra in exchange for reducing the frequency resolution. Due to the noise caused by imperfect and finite data, the noise reduction from Welch's method is often desired. == Definition and procedure == The Welch method is based on Bartlett's method and differs in two ways: The signal is split up into overlapping segments: the original data segment is split up into L data segments of length M, overlapping by D points. If D = M / 2, the overlap is said to be 50%; if D = 0, the overlap is said to be 0%. This is the same situation as in Bartlett's method. The overlapping segments are then windowed: After the data is split up into overlapping segments, the individual L data segments have a window applied to them (in the time domain). Most window functions afford more influence to the data at the center of the set than to data at the edges, which represents a loss of information. To mitigate that loss, the individual data sets are commonly overlapped in time (as in the above step). The windowing of the segments is what makes the Welch method a "modified" periodogram. After doing the above, the periodogram is calculated by computing the discrete Fourier transform, and then computing the squared magnitude of the result, yielding power spectrum estimates for each segment. The individual spectrum estimates are then averaged, which reduces the variance of the individual power measurements. The end result is an array of power measurements vs. frequency "bin".
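A bare-bones NumPy sketch of the procedure (names and sizes are illustrative, and normalization conventions vary; scipy.signal.welch is the production implementation):

import numpy as np

def welch_psd(x, m, overlap=0.5):
    step = max(1, int(m * (1 - overlap)))            # D = m - step overlapping points
    window = np.hanning(m)                           # tapers the segment edges
    norm = (window ** 2).sum()
    psds = [np.abs(np.fft.fft(x[s:s + m] * window)) ** 2 / norm
            for s in range(0, len(x) - m + 1, step)]
    return np.mean(psds, axis=0)                     # averaged modified periodograms

rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 0.05 * np.arange(2048)) + rng.normal(size=2048)
psd = welch_psd(x, 256)      # 50% overlap yields 15 averaged segments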
== Related approaches == Other overlapping windowed Fourier transforms include: Modified discrete cosine transform Short-time Fourier transform == See also == Fast Fourier transform Power spectrum Spectral density estimation == References == Welch, P. D. (1967), "The use of Fast Fourier Transform for the estimation of power spectra: A method based on time averaging over short, modified periodograms" (PDF), IEEE Transactions on Audio and Electroacoustics, AU-15 (2): 70–73, Bibcode:1967ITAE...15...70W, doi:10.1109/TAU.1967.1161901 Oppenheim, Alan V.; Schafer, Ronald W. (1975). Digital signal processing. Englewood Cliffs, N.J.: Prentice-Hall. pp. 548–554. ISBN 0-13-214635-5. Proakis, John G.; Manolakis, Dimitri G. (1996), Digital Signal Processing: Principles, Algorithms and Applications (3 ed.), Upper Saddle River, NJ: Prentice-Hall, pp. 910–913, ISBN 9780133942897
Wikipedia/Welch_method
In applied mathematics, the sliding discrete Fourier transform is a recursive algorithm to compute successive STFTs of input data frames that are a single sample apart (hopsize = 1). The calculation for the sliding DFT is closely related to the Goertzel algorithm. == Definition == Assuming that the hopsize between two consecutive DFTs is 1 sample, then F t + 1 ( n ) = ∑ k = 0 N − 1 f k + t + 1 e − j 2 π k n / N = ∑ m = 1 N f m + t e − j 2 π ( m − 1 ) n / N = e j 2 π n / N [ ∑ m = 0 N − 1 f m + t e − j 2 π m n / N − f t + f t + N ] = e j 2 π n / N [ F t ( n ) − f t + f t + N ] . {\displaystyle {\begin{aligned}F_{t+1}(n)&=\sum _{k=0}^{N-1}f_{k+t+1}e^{-j2\pi kn/N}\\&=\sum _{m=1}^{N}f_{m+t}e^{-j2\pi (m-1)n/N}\\&=e^{j2\pi n/N}\left[\sum _{m=0}^{N-1}f_{m+t}e^{-j2\pi mn/N}-f_{t}+f_{t+N}\right]\\&=e^{j2\pi n/N}\left[F_{t}(n)-f_{t}+f_{t+N}\right].\end{aligned}}} From this definition above, the DFT can be computed recursively thereafter. However, implementing the window function on a sliding DFT is difficult due to its recursive nature, therefore it is done exclusively in the frequency domain. === Sliding windowed infinite Fourier transform === It is not possible to implement asymmetric window functions into the sliding DFT. However, the IIR version, called the sliding windowed infinite Fourier transform (SWIFT), provides an exponential window, and the αSWIFT calculates two sDFTs in parallel, where the slow-decaying one is subtracted from the fast-decaying one, yielding a window function of w ( x ) = e − x α − e − x β {\displaystyle w(x)=e^{-x\alpha }-e^{-x\beta }} .
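The recursion is easy to check numerically against freshly computed DFTs; a small NumPy sketch (names and sizes are illustrative):

import numpy as np

def sliding_dft(x, N):
    """Yield the DFT of every length-N window of x, one sample apart."""
    twiddle = np.exp(2j * np.pi * np.arange(N) / N)
    F = np.fft.fft(x[:N])                     # one full DFT to initialize
    yield F
    for t in range(len(x) - N):
        F = twiddle * (F - x[t] + x[t + N])   # the recursion derived above
        yield F

x = np.random.default_rng(2).normal(size=64)
for t, F in enumerate(sliding_dft(x, 16)):
    assert np.allclose(F, np.fft.fft(x[t:t + 16]))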
== References ==
Wikipedia/Sliding_discrete_Fourier_transform
In mathematical analysis, a bump function (also called a test function) is a function f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } on a Euclidean space R n {\displaystyle \mathbb {R} ^{n}} which is both smooth (in the sense of having continuous derivatives of all orders) and compactly supported. The set of all bump functions with domain R n {\displaystyle \mathbb {R} ^{n}} forms a vector space, denoted C 0 ∞ ( R n ) {\displaystyle \mathrm {C} _{0}^{\infty }(\mathbb {R} ^{n})} or C c ∞ ( R n ) . {\displaystyle \mathrm {C} _{\mathrm {c} }^{\infty }(\mathbb {R} ^{n}).} The dual space of this space endowed with a suitable topology is the space of distributions. == Examples == The function Ψ : R → R {\displaystyle \Psi :\mathbb {R} \to \mathbb {R} } given by Ψ ( x ) = { exp ⁡ ( 1 x 2 − 1 ) , if | x | < 1 , 0 , if | x | ≥ 1 , {\displaystyle \Psi (x)={\begin{cases}\exp \left({\frac {1}{x^{2}-1}}\right),&{\text{ if }}|x|<1,\\0,&{\text{ if }}|x|\geq 1,\end{cases}}} is an example of a bump function in one dimension. Note that the support of this function is the closed interval [ − 1 , 1 ] {\displaystyle [-1,1]} . In fact, by definition of support, we have that supp ⁡ ( Ψ ) := { x ∈ R : Ψ ( x ) ≠ 0 } ¯ = ( − 1 , 1 ) ¯ {\displaystyle \operatorname {supp} (\Psi ):={\overline {\{x\in \mathbb {R} :\Psi (x)\neq 0\}}}={\overline {(-1,1)}}} , where the closure is taken with respect to the Euclidean topology of the real line. The proof of smoothness follows along the same lines as for the related function discussed in the Non-analytic smooth function article. This function can be interpreted as the Gaussian function exp ⁡ ( − y 2 ) {\displaystyle \exp \left(-y^{2}\right)} scaled to fit into the unit disc: the substitution y 2 = 1 / ( 1 − x 2 ) {\displaystyle y^{2}={1}/{\left(1-x^{2}\right)}} corresponds to sending x = ± 1 {\displaystyle x=\pm 1} to y = ∞ . {\displaystyle y=\infty .} A simple example of a (square) bump function in n {\displaystyle n} variables is obtained by taking the product of n {\displaystyle n} copies of the above bump function in one variable, so Φ ( x 1 , x 2 , … , x n ) = Ψ ( x 1 ) Ψ ( x 2 ) ⋯ Ψ ( x n ) . {\displaystyle \Phi (x_{1},x_{2},\dots ,x_{n})=\Psi (x_{1})\Psi (x_{2})\cdots \Psi (x_{n}).} A radially symmetric bump function in n {\displaystyle n} variables can be formed by taking the function Ψ n : R n → R {\displaystyle \Psi _{n}:\mathbb {R} ^{n}\to \mathbb {R} } defined by Ψ n ( x ) = Ψ ( | x | ) {\displaystyle \Psi _{n}(\mathbf {x} )=\Psi (|\mathbf {x} |)} . This function is supported on the unit ball centered at the origin. For another example, take an h {\displaystyle h} that is positive on ( c , d ) {\displaystyle (c,d)} and zero elsewhere, for example h ( x ) = { exp ⁡ ( − 1 ( x − c ) ( d − x ) ) , c < x < d 0 , o t h e r w i s e {\displaystyle h(x)={\begin{cases}\exp \left(-{\frac {1}{(x-c)(d-x)}}\right),&c<x<d\\0,&\mathrm {otherwise} \end{cases}}} .
Smooth transition functions
Consider the function f ( x ) = { e − 1 x if x > 0 , 0 if x ≤ 0 , {\displaystyle f(x)={\begin{cases}e^{-{\frac {1}{x}}}&{\text{if }}x>0,\\0&{\text{if }}x\leq 0,\end{cases}}} defined for every real number x. The function g ( x ) = f ( x ) f ( x ) + f ( 1 − x ) , x ∈ R , {\displaystyle g(x)={\frac {f(x)}{f(x)+f(1-x)}},\qquad x\in \mathbb {R} ,} has a strictly positive denominator everywhere on the real line, hence g is also smooth. Furthermore, g(x) = 0 for x ≤ 0 and g(x) = 1 for x ≥ 1, hence it provides a smooth transition from the level 0 to the level 1 in the unit interval [0, 1].
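A quick numerical check of f and g above (a minimal sketch; the array handling is our own):

import numpy as np

def f(x):
    """exp(-1/x) for x > 0 and 0 otherwise: smooth, but not analytic at 0."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    pos = x > 0
    out[pos] = np.exp(-1.0 / x[pos])
    return out

def g(x):
    """Smooth transition: 0 for x <= 0, 1 for x >= 1, strictly between on (0, 1)."""
    return f(x) / (f(x) + f(1 - x))

print(g(np.array([-1.0, 0.0, 0.25, 0.5, 0.75, 1.0, 2.0])))
# -> 0, 0, ~0.065, 0.5, ~0.935, 1, 1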
To have the smooth transition in the real interval [a, b] with a < b, consider the function R ∋ x ↦ g ( x − a b − a ) . {\displaystyle \mathbb {R} \ni x\mapsto g{\Bigl (}{\frac {x-a}{b-a}}{\Bigr )}.} For real numbers a < b < c < d, the smooth function R ∋ x ↦ g ( x − a b − a ) g ( d − x d − c ) {\displaystyle \mathbb {R} \ni x\mapsto g{\Bigl (}{\frac {x-a}{b-a}}{\Bigr )}\,g{\Bigl (}{\frac {d-x}{d-c}}{\Bigr )}} equals 1 on the closed interval [b, c] and vanishes outside the open interval (a, d), hence it can serve as a bump function. Caution must be taken since, as an example, taking { a = − 1 } < { b = c = 0 } < { d = 1 } {\displaystyle \{a=-1\}<\{b=c=0\}<\{d=1\}} leads to q ( x ) = 1 1 + e 1 − 2 | x | x 2 − | x | {\displaystyle q(x)={\frac {1}{1+e^{\frac {1-2|x|}{x^{2}-|x|}}}}} which is not an infinitely differentiable function (so, is not "smooth"), so the constraints a < b < c < d must be strictly fulfilled. An interesting fact about the function q ( x , a ) = 1 1 + e a ( 1 − 2 | x | ) x 2 − | x | {\displaystyle q(x,a)={\frac {1}{1+e^{\frac {a(1-2|x|)}{x^{2}-|x|}}}}} is that the choice q ( x , 3 2 ) {\displaystyle q\left(x,{\frac {\sqrt {3}}{2}}\right)} makes smooth transition curves with "almost" constant slope edges. A proper example of a smooth bump function is u ( x ) = { 1 , if x = 0 , 0 , if | x | ≥ 1 , 1 1 + e 1 − 2 | x | x 2 − | x | , otherwise , {\displaystyle u(x)={\begin{cases}1,{\text{if }}x=0,\\0,{\text{if }}|x|\geq 1,\\{\frac {1}{1+e^{\frac {1-2|x|}{x^{2}-|x|}}}},{\text{otherwise}},\end{cases}}} A proper example of a smooth transition function is w ( x ) = { 1 1 + e 2 x − 1 x 2 − x if 0 < x < 1 , 0 if x ≤ 0 , 1 if x ≥ 1 , {\displaystyle w(x)={\begin{cases}{\frac {1}{1+e^{\frac {2x-1}{x^{2}-x}}}}&{\text{if }}0<x<1,\\0&{\text{if }}x\leq 0,\\1&{\text{if }}x\geq 1,\end{cases}}} where it can be noticed that it can also be represented through hyperbolic functions: 1 1 + e 2 x − 1 x 2 − x = 1 2 ( 1 − tanh ⁡ ( 2 x − 1 2 ( x 2 − x ) ) ) {\displaystyle {\frac {1}{1+e^{\frac {2x-1}{x^{2}-x}}}}={\frac {1}{2}}\left(1-\tanh \left({\frac {2x-1}{2(x^{2}-x)}}\right)\right)} == Existence of bump functions == It is possible to construct bump functions "to specifications". Stated formally, if K {\displaystyle K} is an arbitrary compact set in n {\displaystyle n} dimensions and U {\displaystyle U} is an open set containing K , {\displaystyle K,} there exists a bump function ϕ {\displaystyle \phi } which is 1 {\displaystyle 1} on K {\displaystyle K} and 0 {\displaystyle 0} outside of U . {\displaystyle U.} Since U {\displaystyle U} can be taken to be a very small neighborhood of K , {\displaystyle K,} this amounts to being able to construct a function that is 1 {\displaystyle 1} on K {\displaystyle K} and falls off rapidly to 0 {\displaystyle 0} outside of K , {\displaystyle K,} while still being smooth.
Bump functions defined in terms of convolution
The construction proceeds as follows. One considers a compact neighborhood V {\displaystyle V} of K {\displaystyle K} contained in U , {\displaystyle U,} so K ⊆ V ∘ ⊆ V ⊆ U . {\displaystyle K\subseteq V^{\circ }\subseteq V\subseteq U.} The characteristic function χ V {\displaystyle \chi _{V}} of V {\displaystyle V} will be equal to 1 {\displaystyle 1} on V {\displaystyle V} and 0 {\displaystyle 0} outside of V , {\displaystyle V,} so in particular, it will be 1 {\displaystyle 1} on K {\displaystyle K} and 0 {\displaystyle 0} outside of U .
{\displaystyle U.} This function is not smooth however. The key idea is to smooth χ V {\displaystyle \chi _{V}} a bit, by taking the convolution of χ V {\displaystyle \chi _{V}} with a mollifier. The latter is just a bump function with a very small support and whose integral is 1. {\displaystyle 1.} Such a mollifier can be obtained, for example, by taking the bump function Φ {\displaystyle \Phi } from the previous section and performing appropriate scalings. Bump functions defined in terms of a function c : R → [ 0 , ∞ ) {\displaystyle c:\mathbb {R} \to [0,\infty )} with support ( − ∞ , 0 ] {\displaystyle (-\infty ,0]} An alternative construction that does not involve convolution is now detailed. It begins by constructing a smooth function f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } that is positive on a given open subset U ⊆ R n {\displaystyle U\subseteq \mathbb {R} ^{n}} and vanishes off of U . {\displaystyle U.} This function's support is equal to the closure U ¯ {\displaystyle {\overline {U}}} of U {\displaystyle U} in R n , {\displaystyle \mathbb {R} ^{n},} so if U ¯ {\displaystyle {\overline {U}}} is compact, then f {\displaystyle f} is a bump function. Start with any smooth function c : R → R {\displaystyle c:\mathbb {R} \to \mathbb {R} } that vanishes on the negative reals and is positive on the positive reals (that is, c = 0 {\displaystyle c=0} on ( − ∞ , 0 ) {\displaystyle (-\infty ,0)} and c > 0 {\displaystyle c>0} on ( 0 , ∞ ) , {\displaystyle (0,\infty ),} where continuity from the left necessitates c ( 0 ) = 0 {\displaystyle c(0)=0} ); an example of such a function is c ( x ) := e − 1 / x {\displaystyle c(x):=e^{-1/x}} for x > 0 {\displaystyle x>0} and c ( x ) := 0 {\displaystyle c(x):=0} otherwise. Fix an open subset U {\displaystyle U} of R n {\displaystyle \mathbb {R} ^{n}} and denote the usual Euclidean norm by ‖ ⋅ ‖ {\displaystyle \|\cdot \|} (so R n {\displaystyle \mathbb {R} ^{n}} is endowed with the usual Euclidean metric). The following construction defines a smooth function f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } that is positive on U {\displaystyle U} and vanishes outside of U . {\displaystyle U.} So in particular, if U {\displaystyle U} is relatively compact then this function f {\displaystyle f} will be a bump function. If U = R n {\displaystyle U=\mathbb {R} ^{n}} then let f = 1 {\displaystyle f=1} while if U = ∅ {\displaystyle U=\varnothing } then let f = 0 {\displaystyle f=0} ; so assume U {\displaystyle U} is neither of these. Let ( U k ) k = 1 ∞ {\displaystyle \left(U_{k}\right)_{k=1}^{\infty }} be an open cover of U {\displaystyle U} by open balls where the open ball U k {\displaystyle U_{k}} has radius r k > 0 {\displaystyle r_{k}>0} and center a k ∈ U . {\displaystyle a_{k}\in U.} Then the map f k : R n → R {\displaystyle f_{k}:\mathbb {R} ^{n}\to \mathbb {R} } defined by f k ( x ) = c ( r k 2 − ‖ x − a k ‖ 2 ) {\displaystyle f_{k}(x)=c\left(r_{k}^{2}-\left\|x-a_{k}\right\|^{2}\right)} is a smooth function that is positive on U k {\displaystyle U_{k}} and vanishes off of U k . 
{\displaystyle U_{k}.} For every k ∈ N , {\displaystyle k\in \mathbb {N} ,} let M k = sup { | ∂ p f k ∂ p 1 x 1 ⋯ ∂ p n x n ( x ) | : x ∈ R n and p 1 , … , p n ∈ Z satisfy 0 ≤ p i ≤ k and p = ∑ i p i } , {\displaystyle M_{k}=\sup \left\{\left|{\frac {\partial ^{p}f_{k}}{\partial ^{p_{1}}x_{1}\cdots \partial ^{p_{n}}x_{n}}}(x)\right|~:~x\in \mathbb {R} ^{n}{\text{ and }}p_{1},\ldots ,p_{n}\in \mathbb {Z} {\text{ satisfy }}0\leq p_{i}\leq k{\text{ and }}p=\sum _{i}p_{i}\right\},} where this supremum is not equal to + ∞ {\displaystyle +\infty } (so M k {\displaystyle M_{k}} is a non-negative real number) because ( R n ∖ U k ) ∪ U k ¯ = R n , {\displaystyle \left(\mathbb {R} ^{n}\setminus U_{k}\right)\cup {\overline {U_{k}}}=\mathbb {R} ^{n},} the partial derivatives all vanish (equal 0 {\displaystyle 0} ) at any x {\displaystyle x} outside of U k , {\displaystyle U_{k},} while on the compact set U k ¯ , {\displaystyle {\overline {U_{k}}},} the values of each of the (finitely many) partial derivatives are (uniformly) bounded above by some non-negative real number. The series f := ∑ k = 1 ∞ f k 2 k M k {\displaystyle f~:=~\sum _{k=1}^{\infty }{\frac {f_{k}}{2^{k}M_{k}}}} converges uniformly on R n {\displaystyle \mathbb {R} ^{n}} to a smooth function f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } that is positive on U {\displaystyle U} and vanishes off of U . {\displaystyle U.} Moreover, for any non-negative integers p 1 , … , p n ∈ Z , {\displaystyle p_{1},\ldots ,p_{n}\in \mathbb {Z} ,} ∂ p 1 + ⋯ + p n ∂ p 1 x 1 ⋯ ∂ p n x n f = ∑ k = 1 ∞ 1 2 k M k ∂ p 1 + ⋯ + p n f k ∂ p 1 x 1 ⋯ ∂ p n x n {\displaystyle {\frac {\partial ^{p_{1}+\cdots +p_{n}}}{\partial ^{p_{1}}x_{1}\cdots \partial ^{p_{n}}x_{n}}}f~=~\sum _{k=1}^{\infty }{\frac {1}{2^{k}M_{k}}}{\frac {\partial ^{p_{1}+\cdots +p_{n}}f_{k}}{\partial ^{p_{1}}x_{1}\cdots \partial ^{p_{n}}x_{n}}}} where this series also converges uniformly on R n {\displaystyle \mathbb {R} ^{n}} (because whenever k ≥ p 1 + ⋯ + p n {\displaystyle k\geq p_{1}+\cdots +p_{n}} then the k {\displaystyle k} th term's absolute value is ≤ M k 2 k M k = 1 2 k {\displaystyle \leq {\tfrac {M_{k}}{2^{k}M_{k}}}={\tfrac {1}{2^{k}}}} ). This completes the construction. As a corollary, given two disjoint closed subsets A , B {\displaystyle A,B} of R n , {\displaystyle \mathbb {R} ^{n},} the above construction guarantees the existence of smooth non-negative functions f A , f B : R n → [ 0 , ∞ ) {\displaystyle f_{A},f_{B}:\mathbb {R} ^{n}\to [0,\infty )} such that for any x ∈ R n , {\displaystyle x\in \mathbb {R} ^{n},} f A ( x ) = 0 {\displaystyle f_{A}(x)=0} if and only if x ∈ A , {\displaystyle x\in A,} and similarly, f B ( x ) = 0 {\displaystyle f_{B}(x)=0} if and only if x ∈ B , {\displaystyle x\in B,} then the function h := f A f A + f B : R n → [ 0 , 1 ] {\displaystyle h~:=~{\frac {f_{A}}{f_{A}+f_{B}}}:\mathbb {R} ^{n}\to [0,1]} is smooth and for any x ∈ R n , {\displaystyle x\in \mathbb {R} ^{n},} h ( x ) = 0 {\displaystyle h(x)=0} if and only if x ∈ A , {\displaystyle x\in A,} h ( x ) = 1 {\displaystyle h(x)=1} if and only if x ∈ B , {\displaystyle x\in B,} and 0 < h ( x ) < 1 {\displaystyle 0<h(x)<1} if and only if x ∉ A ∪ B . 
{\displaystyle x\not \in A\cup B.} In particular, h ( x ) ≠ 0 {\displaystyle h(x)\neq 0} if and only if x ∈ R n ∖ A , {\displaystyle x\in \mathbb {R} ^{n}\smallsetminus A,} so if in addition U := R n ∖ A {\displaystyle U:=\mathbb {R} ^{n}\smallsetminus A} is relatively compact in R n {\displaystyle \mathbb {R} ^{n}} (where A ∩ B = ∅ {\displaystyle A\cap B=\varnothing } implies B ⊆ U {\displaystyle B\subseteq U} ) then h {\displaystyle h} will be a smooth bump function with support in U ¯ . {\displaystyle {\overline {U}}.} == Properties and uses == While bump functions are smooth, the identity theorem prohibits their being analytic unless they vanish identically. Bump functions are often used as mollifiers, as smooth cutoff functions, and to form smooth partitions of unity. They are the most common class of test functions used in analysis. The space of bump functions is closed under many operations. For instance, the sum, product, or convolution of two bump functions is again a bump function, and any differential operator with smooth coefficients, when applied to a bump function, will produce another bump function. If the boundary of the bump function's domain is ∂ x , {\displaystyle \partial x,} then to fulfill the requirement of "smoothness" it has to preserve the continuity of all of its derivatives, which leads to the following requirement at the boundaries of its domain: lim x → ∂ x ± d n d x n f ( x ) = 0 , for all n ≥ 0 , n ∈ Z {\displaystyle \lim _{x\to \partial x^{\pm }}{\frac {d^{n}}{dx^{n}}}f(x)=0,\,{\text{ for all }}n\geq 0,\,n\in \mathbb {Z} } The Fourier transform of a bump function is a (real) analytic function, and it can be extended to the whole complex plane: hence it cannot be compactly supported unless it is zero, since the only entire analytic bump function is the zero function (see Paley–Wiener theorem and Liouville's theorem). Because the bump function is infinitely differentiable, its Fourier transform must decay faster than any finite power of 1 / k {\displaystyle 1/k} for a large angular frequency | k | . {\displaystyle |k|.} The Fourier transform of the particular bump function Ψ ( x ) = e − 1 / ( 1 − x 2 ) 1 { | x | < 1 } {\displaystyle \Psi (x)=e^{-1/(1-x^{2})}\mathbf {1} _{\{|x|<1\}}} from above can be analyzed by a saddle-point method, and decays asymptotically as | k | − 3 / 4 e − | k | {\displaystyle |k|^{-3/4}e^{-{\sqrt {|k|}}}} for large | k | . {\displaystyle |k|.} == See also == Cutoff function – Integration kernels for smoothing out sharp features Laplacian of the indicator – Limit of sequence of smooth functions Non-analytic smooth function – Mathematical functions which are smooth but not analytic Schwartz space – Function space of all functions whose derivatives are rapidly decreasing == Citations == == References == Nestruev, Jet (10 September 2020). Smooth Manifolds and Observables. Graduate Texts in Mathematics. Vol. 220. Cham, Switzerland: Springer Nature. ISBN 978-3-030-45649-8. OCLC 1195920718.
Wikipedia/Bump_function
This is a list of linear transformations of functions related to Fourier analysis. Such transformations map a function to a set of coefficients of basis functions, where the basis functions are sinusoidal and are therefore strongly localized in the frequency spectrum. (These transforms are generally designed to be invertible.) In the case of the Fourier transform, each basis function corresponds to a single frequency component. == Continuous transforms == Applied to functions of continuous arguments, Fourier-related transforms include: Two-sided Laplace transform Mellin transform, another closely related integral transform Laplace transform: the Fourier transform may be considered a special case (on the imaginary axis) of the bilateral Laplace transform Fourier transform, with special cases: Fourier series When the input function/waveform is periodic, the Fourier transform output is a Dirac comb function, modulated by a discrete sequence of finite-valued coefficients that are complex-valued in general. These are called Fourier series coefficients. The term Fourier series actually refers to the inverse Fourier transform, which is a sum of sinusoids at discrete frequencies, weighted by the Fourier series coefficients. When the non-zero portion of the input function has finite duration, the Fourier transform is continuous and finite-valued. But a discrete subset of its values is sufficient to reconstruct/represent the portion that was analyzed. The same discrete set is obtained by treating the duration of the segment as one period of a periodic function and computing the Fourier series coefficients. Sine and cosine transforms: When the input function has odd or even symmetry around the origin, the Fourier transform reduces to a sine transform or a cosine transform, respectively. Because functions can be uniquely decomposed into an odd function plus an even function, their respective sine and cosine transforms can be added to express the function. The Fourier transform can be expressed as the cosine transform minus − 1 {\displaystyle {\sqrt {-1}}} times the sine transform. Hartley transform Short-time Fourier transform (or short-term Fourier transform) (STFT) Rectangular mask short-time Fourier transform Chirplet transform Fractional Fourier transform (FRFT) Hankel transform: related to the Fourier transform of radial functions. Fourier–Bros–Iagolnitzer transform Linear canonical transform == Discrete transforms == For usage on computers, number theory and algebra, discrete arguments (e.g. functions of a series of discrete samples) are often more appropriate, and are handled by the transforms (analogous to the continuous cases above): Discrete-time Fourier transform (DTFT): Equivalent to the Fourier transform of a "continuous" function that is constructed from the discrete input function by using the sample values to modulate a Dirac comb. When the sample values are derived by sampling a function on the real line, ƒ(x), the DTFT is equivalent to a periodic summation of the Fourier transform of ƒ. The DTFT output is always periodic (cyclic). An alternative viewpoint is that the DTFT is a transform to a frequency domain that is bounded (or finite), the length of one cycle. Discrete Fourier transform (DFT): When the input sequence is periodic, the DTFT output is also a Dirac comb function, modulated by the coefficients of a Fourier series which can be computed as a DFT of one cycle of the input sequence.
The number of discrete values in one cycle of the DFT is the same as in one cycle of the input sequence. When the non-zero portion of the input sequence has finite duration, the DTFT is continuous and finite-valued. But a discrete subset of its values is sufficient to reconstruct/represent the portion that was analyzed. The same discrete set is obtained by treating the duration of the segment as one cycle of a periodic function and computing the DFT. Discrete sine and cosine transforms: When the input sequence has odd or even symmetry around the origin, the DTFT reduces to a discrete sine transform (DST) or discrete cosine transform (DCT). Regressive discrete Fourier series, in which the period is determined by the data rather than fixed in advance. Discrete Chebyshev transforms (on the 'roots' grid and the 'extrema' grid of the Chebyshev polynomials of the first kind). This transform is of much importance in the field of spectral methods for solving differential equations because it can be used to swiftly and efficiently go from grid point values to Chebyshev series coefficients. Generalized DFT (GDFT), a generalization of the DFT and constant modulus transforms where the phase functions might be linear with integer or real-valued slopes, or even non-linear, bringing flexibility for optimal designs of various metrics, e.g. auto- and cross-correlations. Discrete-space Fourier transform (DSFT) is the generalization of the DTFT from 1D signals to 2D signals. It is called "discrete-space" rather than "discrete-time" because the most prevalent application is to imaging and image processing where the input function arguments are equally spaced samples of spatial coordinates ( x , y ) {\displaystyle (x,y)} . The DSFT output is periodic in both variables. Z-transform, a generalization of the DTFT to the entire complex plane Modified discrete cosine transform (MDCT) Discrete Hartley transform (DHT) Also the discretized STFT (see above). Hadamard transform (Walsh function). Fourier transform on finite groups. Discrete Fourier transform (general). The use of all of these transforms is greatly facilitated by the existence of efficient algorithms based on a fast Fourier transform (FFT). The Nyquist–Shannon sampling theorem is critical for understanding the output of such discrete transforms. == See also == Integral transform Wavelet transform Fourier-transform spectroscopy Harmonic analysis List of transforms List of mathematic operators Bispectrum == Notes == == References == A. D. Polyanin and A. V. Manzhirov, Handbook of Integral Equations, CRC Press, Boca Raton, 1998. ISBN 0-8493-2876-4 Tables of Integral Transforms at EqWorld: The World of Mathematical Equations. A. N. Akansu and H. Agirman-Tosun, "Generalized Discrete Fourier Transform With Nonlinear Phase", IEEE Transactions on Signal Processing, vol. 58, no. 9, pp. 4547-4556, Sept. 2010.
Wikipedia/Fourier-related_transform
In a Fourier transformation (FT), the Fourier transformed function f ^ ( s ) {\displaystyle {\hat {f}}(s)} is obtained from f ( t ) {\displaystyle f(t)} by: f ^ ( s ) = ∫ − ∞ ∞ f ( t ) e − i s t d t {\displaystyle {\hat {f}}(s)=\int _{-\infty }^{\infty }f(t)e^{-ist}dt} where i {\displaystyle i} is defined as i 2 = − 1 {\displaystyle i^{2}=-1} . f ( t ) {\displaystyle f(t)} can be obtained from f ^ ( s ) {\displaystyle {\hat {f}}(s)} by inverse FT: f ( t ) = 1 2 π ∫ − ∞ ∞ f ^ ( s ) e i s t d s {\displaystyle f(t)={\frac {1}{2\pi }}\int _{-\infty }^{\infty }{\hat {f}}(s)e^{ist}ds} s {\displaystyle s} and t {\displaystyle t} are inverse variables, e.g. frequency and time. Obtaining f ^ ( s ) {\displaystyle {\hat {f}}(s)} directly requires that f ( t ) {\displaystyle f(t)} is well known from t = − ∞ {\displaystyle t=-\infty } to t = ∞ {\displaystyle t=\infty } , and vice versa. In real experimental data this is rarely the case due to noise and limited measured range, say f ( t ) {\displaystyle f(t)} is known from a > − ∞ {\displaystyle a>-\infty } to b < ∞ {\displaystyle b<\infty } . Performing a FT on f ( t ) {\displaystyle f(t)} in the limited range may lead to systematic errors and overfitting. An indirect Fourier transform (IFT) is a solution to this problem. == Indirect Fourier transformation in small-angle scattering == In small-angle scattering on single molecules, an intensity I ( q ) {\displaystyle I(\mathbf {q} )} is measured and is a function of the magnitude of the scattering vector q = | q | = 4 π sin ⁡ ( θ ) / λ {\displaystyle q=|\mathbf {q} |=4\pi \sin(\theta )/\lambda } , where 2 θ {\displaystyle 2\theta } is the scattering angle, and λ {\displaystyle \lambda } is the wavelength of the incoming and scattered beam (elastic scattering). q {\displaystyle q} has units 1/length. I ( q ) {\displaystyle I(q)} is related to the so-called pair distance distribution p ( r ) {\displaystyle p(r)} via a Fourier transformation. p ( r ) {\displaystyle p(r)} is a (scattering weighted) histogram of distances r {\displaystyle r} between pairs of atoms in the molecule. In one dimension ( r {\displaystyle r} and q {\displaystyle q} are scalars), I ( q ) {\displaystyle I(q)} and p ( r ) {\displaystyle p(r)} are related by: I ( q ) = 4 π n ∫ − ∞ ∞ p ( r ) e − i q r cos ⁡ ( ϕ ) d r {\displaystyle I(q)=4\pi n\int _{-\infty }^{\infty }p(r)e^{-iqr\cos(\phi )}dr} p ( r ) = 1 2 π 2 n ∫ − ∞ ∞ ( q r ) 2 I ( q ) e − i q r cos ⁡ ( ϕ ) d q {\displaystyle p(r)={\frac {1}{2\pi ^{2}n}}\int _{-\infty }^{\infty }(qr)^{2}I(q)e^{-iqr\cos(\phi )}dq} where ϕ {\displaystyle \phi } is the angle between q {\displaystyle \mathbf {q} } and r {\displaystyle \mathbf {r} } , and n {\displaystyle n} is the number density of molecules in the measured sample. The sample is orientationally averaged (denoted by ⟨ . . ⟩ {\displaystyle \langle ..\rangle } ), and the Debye equation can thus be exploited to simplify the relations by ⟨ e − i q r cos ⁡ ( ϕ ) ⟩ = ⟨ e i q r cos ⁡ ( ϕ ) ⟩ = sin ⁡ ( q r ) q r {\displaystyle \langle e^{-iqr\cos(\phi )}\rangle =\langle e^{iqr\cos(\phi )}\rangle ={\frac {\sin(qr)}{qr}}} In 1977 Glatter proposed an IFT method to obtain p ( r ) {\displaystyle p(r)} from I ( q ) {\displaystyle I(q)} , and three years later, Moore introduced an alternative method. Others have later introduced alternative methods for IFT and automated the process. == The Glatter method of IFT == This is a brief outline of the method introduced by Otto Glatter. For simplicity, we use n = 1 {\displaystyle n=1} in the following.
In indirect Fourier transformation, a guess on the largest distance in the particle D m a x {\displaystyle D_{max}} is given, and an initial distance distribution function p i ( r ) {\displaystyle p_{i}(r)} is expressed as a sum of N {\displaystyle N} cubic spline functions ϕ i ( r ) {\displaystyle \phi _{i}(r)} evenly distributed on the interval ( 0 , D m a x ) {\displaystyle (0,D_{max})} : p i ( r ) = ∑ i = 1 N c i ϕ i ( r ) , {\displaystyle p_{i}(r)=\sum _{i=1}^{N}c_{i}\phi _{i}(r),} (1) where c i {\displaystyle c_{i}} are scalar coefficients. The relation between the scattering intensity I ( q ) {\displaystyle I(q)} and the p ( r ) {\displaystyle p(r)} is: I ( q ) = 4 π ∫ 0 ∞ p ( r ) sin ⁡ ( q r ) q r d r . {\displaystyle I(q)=4\pi \int _{0}^{\infty }p(r){\frac {\sin(qr)}{qr}}\,{\text{d}}r.} (2) Inserting the expression for p i ( r ) {\displaystyle p_{i}(r)} (1) into (2) and using that the transformation from p ( r ) {\displaystyle p(r)} to I ( q ) {\displaystyle I(q)} is linear gives: I ( q ) = 4 π ∑ i = 1 N c i ψ i ( q ) , {\displaystyle I(q)=4\pi \sum _{i=1}^{N}c_{i}\psi _{i}(q),} where ψ i ( q ) {\displaystyle \psi _{i}(q)} is given as: ψ i ( q ) = ∫ 0 ∞ ϕ i ( r ) sin ⁡ ( q r ) q r d r . {\displaystyle \psi _{i}(q)=\int _{0}^{\infty }\phi _{i}(r){\frac {\sin(qr)}{qr}}{\text{d}}r.} The c i {\displaystyle c_{i}} 's are unchanged under the linear Fourier transformation and can be fitted to data, thereby obtaining the coefficients c i f i t {\displaystyle c_{i}^{fit}} . Inserting these new coefficients into the expression for p i ( r ) {\displaystyle p_{i}(r)} gives a final p f ( r ) {\displaystyle p_{f}(r)} . The coefficients c i f i t {\displaystyle c_{i}^{fit}} are chosen to minimize the χ 2 {\displaystyle \chi ^{2}} of the fit, given by: χ 2 = ∑ k = 1 M [ I e x p e r i m e n t ( q k ) − I f i t ( q k ) ] 2 σ 2 ( q k ) {\displaystyle \chi ^{2}=\sum _{k=1}^{M}{\frac {[I_{experiment}(q_{k})-I_{fit}(q_{k})]^{2}}{\sigma ^{2}(q_{k})}}} where M {\displaystyle M} is the number of datapoints and σ ( q k ) {\displaystyle \sigma (q_{k})} is the standard deviation of data point k {\displaystyle k} . The fitting problem is ill-posed and a very oscillating function would give the lowest χ 2 {\displaystyle \chi ^{2}} despite being physically unrealistic. Therefore, a smoothness function S {\displaystyle S} is introduced: S = ∑ i = 1 N − 1 ( c i + 1 − c i ) 2 {\displaystyle S=\sum _{i=1}^{N-1}(c_{i+1}-c_{i})^{2}} . The larger the oscillations, the higher S {\displaystyle S} . Instead of minimizing χ 2 {\displaystyle \chi ^{2}} , the Lagrangian L = χ 2 + α S {\displaystyle L=\chi ^{2}+\alpha S} is minimized, where the Lagrange multiplier α {\displaystyle \alpha } is denoted the smoothness parameter. The method is indirect in the sense that the FT is done in several steps: p i ( r ) → fitting → p f ( r ) {\displaystyle p_{i}(r)\rightarrow {\text{fitting}}\rightarrow p_{f}(r)} .
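A heavily simplified Python sketch of the fitting step follows. It keeps the structure above (the basis expansion (1), the transformed basis psi_i(q), and minimization of chi^2 + alpha*S), but substitutes linear "hat" basis functions for Glatter's cubic splines and solves the regularized normal equations directly; every parameter choice is illustrative:

import numpy as np

def ift_glatter(q, I_exp, sigma, Dmax=10.0, N=20, alpha=1.0):
    r = np.linspace(1e-6, Dmax, 400)
    knots = np.linspace(0.0, Dmax, N + 2)[1:-1]            # hat centers
    width = knots[1] - knots[0]
    phi = np.maximum(0.0, 1.0 - np.abs(r[None, :] - knots[:, None]) / width)
    sinc = np.sinc(np.outer(q, r) / np.pi)                 # sin(qr)/(qr)
    dr = r[1] - r[0]
    psi = 4.0 * np.pi * (phi[None, :, :] * sinc[:, None, :]).sum(axis=2) * dr
    A = psi / sigma[:, None]                               # chi^2 weighting
    D = np.diff(np.eye(N), axis=0)                         # S = ||D c||^2
    c = np.linalg.solve(A.T @ A + alpha * D.T @ D, A.T @ (I_exp / sigma))
    return r, phi.T @ c                                    # r grid and p_f(r)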
Wikipedia/Indirect_Fourier_transform
Time stretch dispersive Fourier transform (TS-DFT), otherwise known as time-stretch transform (TST), temporal Fourier transform or photonic time-stretch (PTS), is a spectroscopy technique that uses optical dispersion instead of a grating or prism to separate the light wavelengths and analyze the optical spectrum in real time. It employs group-velocity dispersion (GVD) to transform the spectrum of a broadband optical pulse into a time-stretched temporal waveform. It is used to perform Fourier transformation on an optical signal on a single-shot basis and at high frame rates for real-time analysis of fast dynamic processes. It replaces a diffraction grating and detector array with a dispersive fiber and single-pixel detector, enabling ultrafast real-time spectroscopy and imaging. Its nonuniform variant, the warped-stretch transform, realized with nonlinear group delay, offers variable-rate spectral domain sampling, as well as the ability to engineer the time-bandwidth product of the signal's envelope to match that of the data acquisition systems, acting as an information gearbox. == Operation principle == TS-DFT is usually implemented as a two-step process. In the first step, the spectrum of an optical broadband pulse is encoded by the information (e.g., temporal, spatial, or chemical information) to be captured. In the next step, the encoded spectrum is mapped by large group-velocity dispersion into a slowed temporal waveform. At this point the waveform has been sufficiently slowed that it can be digitized and processed in real time. Without the time stretch, single-shot waveforms would be too fast to be digitized by analog-to-digital converters. Implemented in the optical domain, this process performs a function similar to slow motion used to see fast events in videos. While video slow motion is a simple process of playing back an already recorded event, the TS-DFT performs slow motion at the speed of light and before the signal is captured. When needed, the waveform is simultaneously amplified in the dispersive fiber by the process of stimulated Raman scattering. This optical amplification overcomes the thermal noise which would otherwise limit the sensitivity in real-time detection. Subsequent optical pulses perform repetitive measurements at the frame rate of the pulsed laser. Consequently, single-shot optical spectra, carrying information from fast dynamic processes, can be digitized and analyzed at high frame rates. The time-stretch dispersive Fourier transformer consists of a low-loss dispersive fiber that is also a Raman amplifier. To create Raman gain, pump lasers are coupled into the fiber by wavelength-division multiplexers, with the wavelengths of the pump lasers chosen to create a broadband and flat gain profile that covers the spectrum of the broadband optical pulse. Instead of Raman amplification, a discrete amplifier such as an erbium-doped optical amplifier or a semiconductor optical amplifier can be placed before the dispersive fiber. However, the distributed nature of Raman amplification provides a superior signal-to-noise ratio. The dispersive Fourier transform has proven to be an enabling technology for wideband A/D conversion (ultra-wideband analog-to-digital converters) and has also been used for high-throughput real-time spectroscopy and imaging (serial time-encoded amplified microscopy (STEAM)). == Relation to phase stretch transform == The phase stretch transform or PST is a computational approach to signal and image processing.
One of its utilities is feature detection and classification. The phase stretch transform is a spin-off from research on the time stretch dispersive Fourier transform. It transforms the image by emulating propagation through a diffractive medium with an engineered 3D dispersive property (refractive index). == Real-time single-shot analysis of spectral noise == Recently, PTS has been used to study optical non-linearities in fibers. Correlation properties in both the spectral and temporal domains can be deduced from single-shot PTS data to study the stochastic nature of optical systems. Namely, modulation instability and supercontinuum generation in highly non-linear fiber have been studied. == See also == Frequency spectrum Least-squares spectral analysis Time stretch analog-to-digital converter Serial time-encoded amplified microscopy == References ==
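As a rough numerical illustration of the two-step operation principle described above, the Python sketch below encodes information on the spectrum of a short pulse and then applies the quadratic spectral phase of a long dispersive fiber; for large total dispersion the temporal intensity traces the spectrum at t ≈ β₂z·ω. All parameters are illustrative assumptions, and no Raman gain or detector noise is modeled.

```python
import numpy as np

n = 2**15
t = np.linspace(-5e-9, 5e-9, n)                  # time grid (s)
dt = t[1] - t[0]
w = 2 * np.pi * np.fft.fftfreq(n, dt)            # angular frequency grid (rad/s)

pulse = np.exp(-(t / 1e-12) ** 2)                # broadband ~1 ps pulse
spec = np.fft.fft(pulse)
spec *= 1 + 0.8 * np.cos(w * 20e-12)             # step 1: encode the spectrum

beta2_z = 1e-21                                  # step 2: large GVD (s^2), illustrative
stretched = np.fft.ifft(spec * np.exp(-0.5j * beta2_z * w**2))

# The slowed waveform's intensity now approximates the encoded power
# spectrum under the mapping w ~ t / beta2_z, slow enough to digitize.
I_t = np.abs(stretched) ** 2
```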
Wikipedia/Time_stretch_dispersive_Fourier_transform
In mathematics, the Gelfand representation in functional analysis (named after I. M. Gelfand) is either of two things: a way of representing commutative Banach algebras as algebras of continuous functions; the fact that for commutative C*-algebras, this representation is an isometric isomorphism. In the former case, one may regard the Gelfand representation as a far-reaching generalization of the Fourier transform of an integrable function. In the latter case, the Gelfand–Naimark representation theorem is one avenue in the development of spectral theory for normal operators, and generalizes the notion of diagonalizing a normal matrix. == Historical remarks == One of Gelfand's original applications (and one which historically motivated much of the study of Banach algebras) was to give a much shorter and more conceptual proof of a celebrated lemma of Norbert Wiener (see the citation below), characterizing the elements of the group algebras L1(R) and ℓ 1 ( Z ) {\displaystyle \ell ^{1}({\mathbf {Z} })} whose translates span dense subspaces in the respective algebras. == The model algebra == For any locally compact Hausdorff topological space X, the space C0(X) of continuous complex-valued functions on X which vanish at infinity is in a natural way a commutative C*-algebra: The algebra structure over the complex numbers is obtained by considering the pointwise operations of addition and multiplication. The involution is pointwise complex conjugation. The norm is the uniform norm on functions. The importance of X being locally compact and Hausdorff is that this turns X into a completely regular space. In such a space every closed subset of X is the common zero set of a family of continuous complex-valued functions on X, allowing one to recover the topology of X from C0(X). Note that C0(X) is unital if and only if X is compact, in which case C0(X) is equal to C(X), the algebra of all continuous complex-valued functions on X. == Gelfand representation of a commutative Banach algebra == Let A {\displaystyle A} be a commutative Banach algebra, defined over the field C {\displaystyle \mathbb {C} } of complex numbers. A non-zero algebra homomorphism (a multiplicative linear functional) Φ : A → C {\displaystyle \Phi \colon A\to \mathbb {C} } is called a character of A {\displaystyle A} ; the set of all characters of A {\displaystyle A} is denoted by Φ A {\displaystyle \Phi _{A}} . It can be shown that every character on A {\displaystyle A} is automatically continuous, and hence Φ A {\displaystyle \Phi _{A}} is a subset of the space A ∗ {\displaystyle A^{*}} of continuous linear functionals on A {\displaystyle A} ; moreover, when equipped with the relative weak-* topology, Φ A {\displaystyle \Phi _{A}} turns out to be locally compact and Hausdorff. (This follows from the Banach–Alaoglu theorem.) The space Φ A {\displaystyle \Phi _{A}} is compact (in the topology just defined) if and only if the algebra A {\displaystyle A} has an identity element. Given a ∈ A {\displaystyle a\in A} , one defines the function a ^ : Φ A → C {\displaystyle {\widehat {a}}:\Phi _{A}\to {\mathbb {C} }} by a ^ ( ϕ ) = ϕ ( a ) {\displaystyle {\widehat {a}}(\phi )=\phi (a)} . 
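A worked instance of the definition â(φ) = φ(a), under the standard identification used in the Examples below (this is a well-known computation, not spelled out in this wording in the article): for the convolution algebra A = L¹(R), multiplicativity φ(f ∗ g) = φ(f)φ(g) forces every character to be integration against a bounded exponential, so

```latex
\phi_\xi(f) \;=\; \int_{\mathbb{R}} f(x)\, e^{-i\xi x}\, dx ,
\qquad \xi \in \mathbb{R},
\qquad\text{and hence}\qquad
\widehat{f}(\phi_\xi) \;=\; \phi_\xi(f)
\;=\; \int_{\mathbb{R}} f(x)\, e^{-i\xi x}\, dx ,
```

which is precisely the classical Fourier transform of f, recovering the first statement in the Examples section below.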
The definition of Φ A {\displaystyle \Phi _{A}} and the topology on it ensure that a ^ {\displaystyle {\widehat {a}}} is continuous and vanishes at infinity, and that the map a ↦ a ^ {\displaystyle a\mapsto {\widehat {a}}} defines a norm-decreasing, unit-preserving algebra homomorphism from A {\displaystyle A} to C 0 ( Φ A ) {\displaystyle C_{0}(\Phi _{A})} . This homomorphism is the Gelfand representation of A {\displaystyle A} , and a ^ {\displaystyle {\widehat {a}}} is the Gelfand transform of the element a {\displaystyle a} . In general, the representation is neither injective nor surjective. In the case where A {\displaystyle A} has an identity element, there is a bijection between Φ A {\displaystyle \Phi _{A}} and the set of maximal ideals in A {\displaystyle A} (this relies on the Gelfand–Mazur theorem). As a consequence, the kernel of the Gelfand representation A → C 0 ( Φ A ) {\displaystyle A\to C_{0}(\Phi _{A})} may be identified with the Jacobson radical of A {\displaystyle A} . Thus the Gelfand representation is injective if and only if A {\displaystyle A} is (Jacobson) semisimple. === Examples === The Banach space A = L 1 ( R ) {\displaystyle A=L^{1}(\mathbb {R} )} is a Banach algebra under the convolution, the group algebra of R {\displaystyle \mathbb {R} } . Then Φ A {\displaystyle \Phi _{A}} is homeomorphic to R {\displaystyle \mathbb {R} } and the Gelfand transform of f ∈ L 1 ( R ) {\displaystyle f\in L^{1}(\mathbb {R} )} is the Fourier transform f ~ {\displaystyle {\tilde {f}}} . Similarly, with A = L 1 ( R + ) {\displaystyle A=L^{1}(\mathbb {R} _{+})} , the group algebra of the multiplicative reals, the Gelfand transform is the Mellin transform. For A = ℓ ∞ {\displaystyle A=\ell ^{\infty }} , the representation space is the Stone–Čech compactification β N {\displaystyle \beta \mathbb {N} } . More generally, if X {\displaystyle X} is a completely regular Hausdorff space, then the representation space of the Banach algebra of bounded continuous functions is the Stone–Čech compactification of X {\displaystyle X} . == The C*-algebra case == As motivation, consider the special case A = C0(X). Given x in X, let φ x ∈ A ∗ {\displaystyle \varphi _{x}\in A^{*}} be pointwise evaluation at x, i.e. φ x ( f ) = f ( x ) {\displaystyle \varphi _{x}(f)=f(x)} . Then φ x {\displaystyle \varphi _{x}} is a character on A, and it can be shown that all characters of A are of this form; a more precise analysis shows that we may identify ΦA with X, not just as sets but as topological spaces. The Gelfand representation is then an isomorphism C 0 ( X ) → C 0 ( Φ A ) . {\displaystyle C_{0}(X)\to C_{0}(\Phi _{A}).\ } === The spectrum of a commutative C*-algebra === The spectrum or Gelfand space of a commutative C*-algebra A, denoted Â, consists of the set of non-zero *-homomorphisms from A to the complex numbers. Elements of the spectrum are called characters on A. (It can be shown that every algebra homomorphism from A to the complex numbers is automatically a *-homomorphism, so that this definition of the term 'character' agrees with the one above.) In particular, the spectrum of a commutative C*-algebra is a locally compact Hausdorff space: In the unital case, i.e. where the C*-algebra has a multiplicative unit element 1, all characters f must be unital, i.e. f(1) is the complex number one. This excludes the zero homomorphism. So  is closed under weak-* convergence and the spectrum is actually compact. 
In the non-unital case, the weak-* closure of Â is Â ∪ {0}, where 0 is the zero homomorphism, and the removal of a single point from a compact Hausdorff space yields a locally compact Hausdorff space. Note that spectrum is an overloaded word. It also refers to the spectrum σ(x) of an element x of an algebra with unit 1, that is the set of complex numbers r for which x − r 1 is not invertible in A. For unital C*-algebras, the two notions are connected in the following way: σ(x) is the set of complex numbers f(x) where f ranges over the Gelfand space of A. Together with the spectral radius formula, this shows that Â is a subset of the unit ball of A* and as such can be given the relative weak-* topology. This is the topology of pointwise convergence. A net {fk}k of elements of the spectrum of A converges to f if and only if for each x in A, the net of complex numbers {fk(x)}k converges to f(x). If A is a separable C*-algebra, the weak-* topology is metrizable on bounded subsets. Thus the spectrum of a separable commutative C*-algebra A can be regarded as a metric space. So the topology can be characterized via convergence of sequences. Equivalently, σ(x) is the range of γ(x), where γ is the Gelfand representation. === Statement of the commutative Gelfand–Naimark theorem === Let A be a commutative C*-algebra and let X be the spectrum of A. Let γ : A → C 0 ( X ) {\displaystyle \gamma :A\to C_{0}(X)} be the Gelfand representation defined above. Theorem. The Gelfand map γ is an isometric *-isomorphism from A onto C0(X). See the Arveson reference below. The spectrum of a commutative C*-algebra can also be viewed as the set of all maximal ideals m of A, with the hull-kernel topology. (See the earlier remarks for the general, commutative Banach algebra case.) For any such m the quotient algebra A/m is one-dimensional (by the Gelfand–Mazur theorem), and therefore any a in A gives rise to a complex-valued function on the space of maximal ideals. In the case of C*-algebras with unit, the spectrum map gives rise to a contravariant functor from the category of commutative C*-algebras with unit and unit-preserving continuous *-homomorphisms, to the category of compact Hausdorff spaces and continuous maps. This functor is one half of a contravariant equivalence between these two categories (its adjoint being the functor that assigns to each compact Hausdorff space X the C*-algebra C0(X)). In particular, given compact Hausdorff spaces X and Y, then C(X) is isomorphic to C(Y) (as a C*-algebra) if and only if X is homeomorphic to Y. The 'full' Gelfand–Naimark theorem is a result for arbitrary (abstract) noncommutative C*-algebras A, which, though not quite analogous to the Gelfand representation, does provide a concrete representation of A as an algebra of operators. == Applications == One of the most significant applications is the existence of a continuous functional calculus for normal elements in a C*-algebra A: An element x is normal if and only if x commutes with its adjoint x*, or equivalently if and only if it generates a commutative C*-algebra C*(x). By the Gelfand isomorphism applied to C*(x), this algebra is *-isomorphic to an algebra of continuous functions on a locally compact space. This observation leads almost immediately to: Theorem. Let A be a C*-algebra with identity and x a normal element of A. Then there is a *-morphism f → f(x) from the algebra of continuous functions on the spectrum σ(x) into A such that: it maps 1 to the multiplicative identity of A, and it maps the identity function on the spectrum to x.
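In finite dimensions, the functional calculus of the theorem above reduces to applying f to the eigenvalues in a unitary diagonalization. The following Python sketch is a finite-dimensional illustration of that idea only (the 2×2 matrix is an arbitrary example), not the C*-algebraic construction itself:

```python
import numpy as np

def apply_function(x, f):
    """f(x) = U diag(f(lam)) U* for a normal matrix x = U diag(lam) U*."""
    lam, U = np.linalg.eig(x)      # normal + distinct eigenvalues: U is unitary
    return U @ np.diag(f(lam)) @ U.conj().T

x = np.array([[0.0, -1.0], [1.0, 0.0]])   # normal; spectrum sigma(x) = {i, -i}
exp_x = apply_function(x, np.exp)         # the continuous function f = exp applied to x

# Agrees with the matrix exponential: exp(x) is rotation by 1 radian.
expected = np.array([[np.cos(1), -np.sin(1)], [np.sin(1), np.cos(1)]])
print(np.allclose(exp_x, expected))       # True
```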
This allows us to apply continuous functions to bounded normal operators on Hilbert space. == References == Arveson, W. (1981). An Invitation to C*-Algebras. Springer-Verlag. ISBN 0-387-90176-0. Bonsall, F. F.; Duncan, J. (1973). Complete Normed Algebras. New York: Springer-Verlag. ISBN 0-387-06386-2. Conway, J. B. (1990). A Course in Functional Analysis. Graduate Texts in Mathematics. Vol. 96. Springer Verlag. ISBN 0-387-97245-5. Wiener, N. (1932). "Tauberian theorems". Ann. of Math. II. 33 (1). Annals of Mathematics: 1–100. doi:10.2307/1968102. JSTOR 1968102.
Wikipedia/Gelfand_transform
In algebraic geometry, a Fourier–Mukai transform ΦK is a functor between derived categories of coherent sheaves D(X) → D(Y) for schemes X and Y, which is, in a sense, an integral transform along a kernel object K ∈ D(X×Y). Most natural functors, including basic ones like pushforwards and pullbacks, are of this type. These kinds of functors were introduced by Mukai (1981) in order to prove an equivalence between the derived categories of coherent sheaves on an abelian variety and its dual. That equivalence is analogous to the classical Fourier transform that gives an isomorphism between tempered distributions on a finite-dimensional real vector space and its dual. == Definition == Let X and Y be smooth projective varieties, K ∈ Db(X×Y) an object in the derived category of coherent sheaves on their product. Denote by q the projection X×Y→X, by p the projection X×Y→Y. Then the Fourier-Mukai transform ΦK is a functor Db(X)→Db(Y) given by F ↦ R p ∗ ( q ∗ F ⊗ L K ) {\displaystyle {\mathcal {F}}\mapsto \mathrm {R} p_{*}\left(q^{*}{\mathcal {F}}\otimes ^{L}K\right)} where Rp* is the derived direct image functor and ⊗ L {\displaystyle \otimes ^{L}} is the derived tensor product. Fourier-Mukai transforms always have left and right adjoints, both of which are also kernel transformations. Given two kernels K1 ∈ Db(X×Y) and K2 ∈ Db(Y×Z), the composed functor ΦK2 ∘ {\displaystyle \circ } ΦK1 is also a Fourier-Mukai transform. The structure sheaf of the diagonal O Δ ∈ D b ( X × X ) {\displaystyle {\mathcal {O}}_{\Delta }\in \mathrm {D} ^{b}(X\times X)} , taken as a kernel, produces the identity functor on Db(X). For a morphism f:X→Y, the structure sheaf of the graph Γf produces a pushforward when viewed as an object in Db(X×Y), or a pullback when viewed as an object in Db(Y×X). == On abelian varieties == Let X {\displaystyle X} be an abelian variety and X ^ {\displaystyle {\hat {X}}} be its dual variety. The Poincaré bundle P {\displaystyle {\mathcal {P}}} on X × X ^ {\displaystyle X\times {\hat {X}}} , normalized to be trivial on the fiber at zero, can be used as a Fourier-Mukai kernel. Let p {\displaystyle p} and p ^ {\displaystyle {\hat {p}}} be the canonical projections. The corresponding Fourier–Mukai functor with kernel P {\displaystyle {\mathcal {P}}} is then R S : F ∈ D ( X ) ↦ R p ^ ∗ ( p ∗ F ⊗ P ) ∈ D ( X ^ ) {\displaystyle R{\mathcal {S}}:{\mathcal {F}}\in D(X)\mapsto R{\hat {p}}_{\ast }(p^{\ast }{\mathcal {F}}\otimes {\mathcal {P}})\in D({\hat {X}})} There is a similar functor R S ^ : D ( X ^ ) → D ( X ) . {\displaystyle R{\widehat {\mathcal {S}}}:D({\hat {X}})\to D(X).\,} If the canonical class of a variety is ample or anti-ample, then the derived category of coherent sheaves determines the variety. In general, an abelian variety is not isomorphic to its dual, so this Fourier–Mukai transform gives examples of different varieties (with trivial canonical bundles) that have equivalent derived categories. Let g denote the dimension of X. The Fourier–Mukai transformation is nearly involutive : R S ∘ R S ^ = ( − 1 ) ∗ [ − g ] {\displaystyle R{\mathcal {S}}\circ R{\widehat {\mathcal {S}}}=(-1)^{\ast }[-g]} It interchanges Pontrjagin product and tensor product. 
R S ( F ∗ G ) = R S ( F ) ⊗ R S ( G ) {\displaystyle R{\mathcal {S}}({\mathcal {F}}\ast {\mathcal {G}})=R{\mathcal {S}}({\mathcal {F}})\otimes R{\mathcal {S}}({\mathcal {G}})} R S ( F ⊗ G ) = R S ( F ) ∗ R S ( G ) [ g ] {\displaystyle R{\mathcal {S}}({\mathcal {F}}\otimes {\mathcal {G}})=R{\mathcal {S}}({\mathcal {F}})\ast R{\mathcal {S}}({\mathcal {G}})[g]} Deninger & Murre (1991) have used the Fourier-Mukai transform to prove the Künneth decomposition for the Chow motives of abelian varieties. == Applications in string theory == In string theory, T-duality (short for target space duality), which relates two quantum field theories or string theories with different spacetime geometries, is closely related with the Fourier-Mukai transformation. == See also == Derived noncommutative algebraic geometry == References == Deninger, Christopher; Murre, Jacob (1991), "Motivic decomposition of abelian schemes and the Fourier transform", J. Reine Angew. Math., 422: 201–219, MR 1133323 Huybrechts, D. (2006), Fourier–Mukai transforms in algebraic geometry, Oxford Mathematical Monographs, vol. 1, The Clarendon Press Oxford University Press, doi:10.1093/acprof:oso/9780199296866.001.0001, ISBN 978-0-19-929686-6, MR 2244106 Bartocci, C.; Bruzzo, U.; Hernández Ruipérez, D. (2009), Fourier–Mukai and Nahm transforms in geometry and mathematical physics, Progress in Mathematics, vol. 276, Birkhäuser, doi:10.1007/b1801, ISBN 978-0-8176-3246-5, MR 2511017 Mukai, Shigeru (1981). "Duality between D ( X ) {\displaystyle D(X)} and D ( X ^ ) {\displaystyle D({\hat {X}})} with its application to Picard sheaves". Nagoya Mathematical Journal. 81: 153–175. ISSN 0027-7630.
Wikipedia/Fourier–Mukai_transform
In mathematics, the continuous wavelet transform (CWT) is a formal (i.e., non-numerical) tool that provides an overcomplete representation of a signal by letting the translation and scale parameter of the wavelets vary continuously. == Definition == The continuous wavelet transform of a function x ( t ) {\displaystyle x(t)} at a scale a ∈ R + ∗ {\displaystyle a\in \mathbb {R^{+*}} } and translational value b ∈ R {\displaystyle b\in \mathbb {R} } is expressed by the following integral X w ( a , b ) = 1 | a | 1 / 2 ∫ − ∞ ∞ x ( t ) ψ ¯ ( t − b a ) d t {\displaystyle X_{w}(a,b)={\frac {1}{|a|^{1/2}}}\int _{-\infty }^{\infty }x(t){\overline {\psi }}\left({\frac {t-b}{a}}\right)\,\mathrm {d} t} where ψ ( t ) {\displaystyle \psi (t)} is a continuous function in both the time domain and the frequency domain called the mother wavelet and the overline represents the operation of complex conjugation. The main purpose of the mother wavelet is to provide a source function for generating the daughter wavelets, which are simply the translated and scaled versions of the mother wavelet. To recover the original signal x ( t ) {\displaystyle x(t)} , the first inverse continuous wavelet transform can be used: x ( t ) = C ψ − 1 ∫ 0 ∞ ∫ − ∞ ∞ X w ( a , b ) 1 | a | 1 / 2 ψ ~ ( t − b a ) d b d a a 2 {\displaystyle x(t)=C_{\psi }^{-1}\int _{0}^{\infty }\int _{-\infty }^{\infty }X_{w}(a,b){\frac {1}{|a|^{1/2}}}{\tilde {\psi }}\left({\frac {t-b}{a}}\right)\,\mathrm {d} b\ {\frac {\mathrm {d} a}{a^{2}}}} ψ ~ ( t ) {\displaystyle {\tilde {\psi }}(t)} is the dual function of ψ ( t ) {\displaystyle \psi (t)} and C ψ = ∫ − ∞ ∞ ψ ^ ¯ ( ω ) ψ ~ ^ ( ω ) | ω | d ω {\displaystyle C_{\psi }=\int _{-\infty }^{\infty }{\frac {{\overline {\hat {\psi }}}(\omega ){\hat {\tilde {\psi }}}(\omega )}{|\omega |}}\,\mathrm {d} \omega } is the admissibility constant, where the hat denotes the Fourier transform operator. Sometimes ψ ~ ( t ) = ψ ( t ) {\displaystyle {\tilde {\psi }}(t)=\psi (t)} , in which case the admissibility constant becomes C ψ = ∫ − ∞ + ∞ | ψ ^ ( ω ) | 2 | ω | d ω {\displaystyle C_{\psi }=\int _{-\infty }^{+\infty }{\frac {\left|{\hat {\psi }}(\omega )\right|^{2}}{\left|\omega \right|}}\,\mathrm {d} \omega } Traditionally, this constant is called the wavelet admissibility constant. A wavelet whose admissibility constant satisfies 0 < C ψ < ∞ {\displaystyle 0<C_{\psi }<\infty } is called an admissible wavelet. To recover the original signal x ( t ) {\displaystyle x(t)} , the second inverse continuous wavelet transform can be used: x ( t ) = 1 2 π ψ ^ ¯ ( 1 ) ∫ 0 ∞ ∫ − ∞ ∞ 1 a 2 X w ( a , b ) exp ⁡ ( i t − b a ) d b d a {\displaystyle x(t)={\frac {1}{2\pi {\overline {\hat {\psi }}}(1)}}\int _{0}^{\infty }\int _{-\infty }^{\infty }{\frac {1}{a^{2}}}X_{w}(a,b)\exp \left(i{\frac {t-b}{a}}\right)\,\mathrm {d} b\ \mathrm {d} a} This inverse transform suggests that a wavelet should be defined as ψ ( t ) = w ( t ) exp ⁡ ( i t ) {\displaystyle \psi (t)=w(t)\exp(it)} where w ( t ) {\displaystyle w(t)} is a window. A wavelet defined in this way may be called an analyzing wavelet, because it admits time–frequency analysis. An analyzing wavelet need not be admissible. == Scale factor == The scale factor a {\displaystyle a} either dilates or compresses a signal. When the scale factor is relatively low, the signal is more contracted, which in turn results in a more detailed resulting graph. However, the drawback is that a low scale factor does not last for the entire duration of the signal.
On the other hand, when the scale factor is high, the signal is stretched out, which means that the resulting graph will be presented in less detail. Nevertheless, it usually lasts the entire duration of the signal. == Continuous wavelet transform properties == By definition, the continuous wavelet transform is a convolution of the input data sequence with a set of functions generated by the mother wavelet. The convolution can be computed by using a fast Fourier transform (FFT) algorithm. Normally, the output X w ( a , b ) {\displaystyle X_{w}(a,b)} is a real-valued function except when the mother wavelet is complex. A complex mother wavelet will convert the continuous wavelet transform to a complex-valued function. The power spectrum of the continuous wavelet transform can be represented by 1 a ⋅ | X w ( a , b ) | 2 {\displaystyle {\frac {1}{a}}\cdot |X_{w}(a,b)|^{2}} . == Applications of the wavelet transform == One of the most popular applications of the wavelet transform is image compression. The advantage of using wavelet-based coding in image compression is that it provides significant improvements in picture quality at higher compression ratios over conventional techniques. Since the wavelet transform has the ability to decompose complex information and patterns into elementary forms, it is commonly used in acoustics processing and pattern recognition, but it has also been proposed as an instantaneous frequency estimator. Moreover, wavelet transforms can be applied to the following scientific research areas: edge and corner detection, partial differential equation solving, transient detection, filter design, electrocardiogram (ECG) analysis, texture analysis, business information analysis and gait analysis. Wavelet transforms can also be used in electroencephalography (EEG) data analysis to identify epileptic spikes resulting from epilepsy. The wavelet transform has also been used successfully for the interpretation of time series of landslides and land subsidence, and for calculating the changing periodicities of epidemics. The continuous wavelet transform (CWT) is very efficient in determining the damping ratio of oscillating signals (e.g. identification of damping in dynamic systems). The CWT is also very resistant to noise in the signal. == See also == Continuous wavelet S transform Time-frequency analysis Cauchy wavelet == References == === Further reading === A. Grossmann & J. Morlet, 1984, Decomposition of Hardy functions into square integrable wavelets of constant shape, Soc. Int. Am. Math. (SIAM), J. Math. Analys., 15, 723–736. Lintao Liu and Houtse Hsu (2012) "Inversion and normalization of time-frequency transform" AMIS 6 No. 1S pp. 67S-74S. Stéphane Mallat, "A wavelet tour of signal processing" 2nd Edition, Academic Press, 1999, ISBN 0-12-466606-X Ding, Jian-Jiun (2008), Time-Frequency Analysis and Wavelet Transform, viewed 19 January 2008 Polikar, Robi (2001), The Wavelet Tutorial, viewed 19 January 2008 WaveMetrics (2004), Time Frequency Analysis, viewed 18 January 2008 Valens, Clemens (2004), A Really Friendly Guide to Wavelets, viewed 18 September 2018 Mathematica Continuous Wavelet Transform == External links == Wavelets: a mathematical microscope on YouTube
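The FFT-based computation of the CWT mentioned in the properties section above can be sketched in a few lines of Python. This is a minimal illustration following one common Morlet-wavelet convention; the normalization, ω₀ = 6, the scales and the test signal are assumptions for the demo, not prescriptions from the article:

```python
import numpy as np

def cwt_morlet(x, dt, scales, w0=6.0):
    """CWT via FFT convolution with an analytic Morlet wavelet,
    psi_hat(a*w) ~ pi^(-1/4) * exp(-(a*w - w0)^2 / 2) for w > 0."""
    n = len(x)
    w = 2 * np.pi * np.fft.fftfreq(n, dt)            # angular frequency grid
    Xf = np.fft.fft(x)
    out = np.empty((len(scales), n), dtype=complex)
    for k, a in enumerate(scales):
        # scaled wavelet spectrum with an energy-style normalization
        psi_hat = (np.pi ** -0.25) * np.sqrt(2 * np.pi * a / dt) \
                  * np.exp(-0.5 * (a * w - w0) ** 2) * (w > 0)
        out[k] = np.fft.ifft(Xf * np.conj(psi_hat))
    return out

dt = 0.01
t = np.arange(0, 10, dt)
x = np.sin(2 * np.pi * 2 * t) + (t > 5) * np.sin(2 * np.pi * 8 * t)
scales = np.geomspace(0.05, 1.0, 48)                 # small scale = high frequency
X = cwt_morlet(x, dt, scales)
power = np.abs(X) ** 2 / scales[:, None]             # the 1/a-weighted power spectrum
```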
Wikipedia/Continuous_wavelet_transform
The rectangular function (also known as the rectangle function, rect function, Pi function, Heaviside Pi function, gate function, unit pulse, or the normalized boxcar function) is defined as rect ⁡ ( t a ) = Π ( t a ) = { 0 , if | t | > a 2 1 2 , if | t | = a 2 1 , if | t | < a 2 . {\displaystyle \operatorname {rect} \left({\frac {t}{a}}\right)=\Pi \left({\frac {t}{a}}\right)=\left\{{\begin{array}{rl}0,&{\text{if }}|t|>{\frac {a}{2}}\\{\frac {1}{2}},&{\text{if }}|t|={\frac {a}{2}}\\1,&{\text{if }}|t|<{\frac {a}{2}}.\end{array}}\right.} Alternative definitions of the function define rect ⁡ ( ± 1 2 ) {\textstyle \operatorname {rect} \left(\pm {\frac {1}{2}}\right)} to be 0, 1, or undefined. Its periodic version is called a rectangular wave. == History == The rect function was introduced in 1953 by Woodward in "Probability and Information Theory, with Applications to Radar" as an ideal cutout operator, together with the sinc function as an ideal interpolation operator, and their counterpart operations, which are sampling (comb operator) and replicating (rep operator), respectively. == Relation to the boxcar function == The rectangular function is a special case of the more general boxcar function: rect ⁡ ( t − X Y ) = H ( t − ( X − Y / 2 ) ) − H ( t − ( X + Y / 2 ) ) = H ( t − X + Y / 2 ) − H ( t − X − Y / 2 ) {\displaystyle \operatorname {rect} \left({\frac {t-X}{Y}}\right)=H(t-(X-Y/2))-H(t-(X+Y/2))=H(t-X+Y/2)-H(t-X-Y/2)} where H ( x ) {\displaystyle H(x)} is the Heaviside step function; the function is centered at X {\displaystyle X} and has duration Y {\displaystyle Y} , from X − Y / 2 {\displaystyle X-Y/2} to X + Y / 2. {\displaystyle X+Y/2.} == Fourier transform of the rectangular function == The unitary Fourier transforms of the rectangular function are ∫ − ∞ ∞ rect ⁡ ( t ) ⋅ e − i 2 π f t d t = sin ⁡ ( π f ) π f = sinc ⁡ ( π f ) = sinc π ⁡ ( f ) , {\displaystyle \int _{-\infty }^{\infty }\operatorname {rect} (t)\cdot e^{-i2\pi ft}\,dt={\frac {\sin(\pi f)}{\pi f}}=\operatorname {sinc} (\pi f)=\operatorname {sinc} _{\pi }(f),} using ordinary frequency f, where sinc π {\displaystyle \operatorname {sinc} _{\pi }} is the normalized form of the sinc function and 1 2 π ∫ − ∞ ∞ rect ⁡ ( t ) ⋅ e − i ω t d t = 1 2 π ⋅ sin ⁡ ( ω / 2 ) ω / 2 = 1 2 π ⋅ sinc ⁡ ( ω / 2 ) , {\displaystyle {\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }\operatorname {rect} (t)\cdot e^{-i\omega t}\,dt={\frac {1}{\sqrt {2\pi }}}\cdot {\frac {\sin \left(\omega /2\right)}{\omega /2}}={\frac {1}{\sqrt {2\pi }}}\cdot \operatorname {sinc} \left(\omega /2\right),} using angular frequency ω {\displaystyle \omega } , where sinc {\displaystyle \operatorname {sinc} } is the unnormalized form of the sinc function. For rect ⁡ ( x / a ) {\displaystyle \operatorname {rect} (x/a)} , its Fourier transform is ∫ − ∞ ∞ rect ⁡ ( t a ) ⋅ e − i 2 π f t d t = a sin ⁡ ( π a f ) π a f = a sinc π ⁡ ( a f ) . {\displaystyle \int _{-\infty }^{\infty }\operatorname {rect} \left({\frac {t}{a}}\right)\cdot e^{-i2\pi ft}\,dt=a{\frac {\sin(\pi af)}{\pi af}}=a\ \operatorname {sinc} _{\pi }{(af)}.} == Relation to the triangular function == We can define the triangular function as the convolution of two rectangular functions: t r i ( t / T ) = r e c t ( 2 t / T ) ∗ r e c t ( 2 t / T ) .
{\displaystyle \operatorname {tri} (t/T)=\operatorname {rect} (2t/T)*\operatorname {rect} (2t/T).\,} == Use in probability == Viewing the rectangular function as a probability density function, it is a special case of the continuous uniform distribution with a = − 1 / 2 , b = 1 / 2. {\displaystyle a=-1/2,b=1/2.} The characteristic function is φ ( k ) = sin ⁡ ( k / 2 ) k / 2 , {\displaystyle \varphi (k)={\frac {\sin(k/2)}{k/2}},} and its moment-generating function is M ( k ) = sinh ⁡ ( k / 2 ) k / 2 , {\displaystyle M(k)={\frac {\sinh(k/2)}{k/2}},} where sinh ⁡ ( t ) {\displaystyle \sinh(t)} is the hyperbolic sine function. == Rational approximation == The pulse function may also be expressed as a limit of a rational function: Π ( t ) = lim n → ∞ , n ∈ Z 1 ( 2 t ) 2 n + 1 . {\displaystyle \Pi (t)=\lim _{n\rightarrow \infty ,n\in \mathbb {Z} }{\frac {1}{(2t)^{2n}+1}}.} === Demonstration of validity === First, we consider the case where | t | < 1 2 . {\textstyle |t|<{\frac {1}{2}}.} Notice that the term ( 2 t ) 2 n {\textstyle (2t)^{2n}} is always positive for integer n . {\displaystyle n.} However, | 2 t | < 1 {\displaystyle |2t|<1} and hence ( 2 t ) 2 n {\textstyle (2t)^{2n}} approaches zero for large n . {\displaystyle n.} It follows that: lim n → ∞ , n ∈ Z 1 ( 2 t ) 2 n + 1 = 1 0 + 1 = 1 , | t | < 1 2 . {\displaystyle \lim _{n\rightarrow \infty ,n\in \mathbb {Z} }{\frac {1}{(2t)^{2n}+1}}={\frac {1}{0+1}}=1,|t|<{\tfrac {1}{2}}.} Second, we consider the case where | t | > 1 2 . {\textstyle |t|>{\frac {1}{2}}.} Notice that the term ( 2 t ) 2 n {\textstyle (2t)^{2n}} is always positive for integer n . {\displaystyle n.} However, | 2 t | > 1 {\displaystyle |2t|>1} and hence ( 2 t ) 2 n {\textstyle (2t)^{2n}} grows very large for large n . {\displaystyle n.} It follows that: lim n → ∞ , n ∈ Z 1 ( 2 t ) 2 n + 1 = 1 + ∞ + 1 = 0 , | t | > 1 2 . {\displaystyle \lim _{n\rightarrow \infty ,n\in \mathbb {Z} }{\frac {1}{(2t)^{2n}+1}}={\frac {1}{+\infty +1}}=0,|t|>{\tfrac {1}{2}}.} Third, we consider the case where | t | = 1 2 . {\textstyle |t|={\frac {1}{2}}.} We may simply substitute in our equation: lim n → ∞ , n ∈ Z 1 ( 2 t ) 2 n + 1 = lim n → ∞ , n ∈ Z 1 1 2 n + 1 = 1 1 + 1 = 1 2 . {\displaystyle \lim _{n\rightarrow \infty ,n\in \mathbb {Z} }{\frac {1}{(2t)^{2n}+1}}=\lim _{n\rightarrow \infty ,n\in \mathbb {Z} }{\frac {1}{1^{2n}+1}}={\frac {1}{1+1}}={\tfrac {1}{2}}.} We see that it satisfies the definition of the pulse function. Therefore, rect ⁡ ( t ) = Π ( t ) = lim n → ∞ , n ∈ Z 1 ( 2 t ) 2 n + 1 = { 0 if | t | > 1 2 1 2 if | t | = 1 2 1 if | t | < 1 2 . {\displaystyle \operatorname {rect} (t)=\Pi (t)=\lim _{n\rightarrow \infty ,n\in \mathbb {Z} }{\frac {1}{(2t)^{2n}+1}}={\begin{cases}0&{\mbox{if }}|t|>{\frac {1}{2}}\\{\frac {1}{2}}&{\mbox{if }}|t|={\frac {1}{2}}\\1&{\mbox{if }}|t|<{\frac {1}{2}}.\\\end{cases}}} == Dirac delta function == The rectangle function can be used to represent the Dirac delta function δ ( x ) {\displaystyle \delta (x)} . Specifically, δ ( x ) = lim a → 0 1 a rect ⁡ ( x a ) . {\displaystyle \delta (x)=\lim _{a\to 0}{\frac {1}{a}}\operatorname {rect} \left({\frac {x}{a}}\right).} For a function g ( x ) {\displaystyle g(x)} , its average over the width a {\displaystyle a} around 0 in the function domain is calculated as, g a v g ( 0 ) = 1 a ∫ − ∞ ∞ d x g ( x ) rect ⁡ ( x a ) .
{\displaystyle g_{avg}(0)={\frac {1}{a}}\int \limits _{-\infty }^{\infty }dx\ g(x)\operatorname {rect} \left({\frac {x}{a}}\right).} To obtain g ( 0 ) {\displaystyle g(0)} , the following limit is applied, g ( 0 ) = lim a → 0 1 a ∫ − ∞ ∞ d x g ( x ) rect ⁡ ( x a ) {\displaystyle g(0)=\lim _{a\to 0}{\frac {1}{a}}\int \limits _{-\infty }^{\infty }dx\ g(x)\operatorname {rect} \left({\frac {x}{a}}\right)} and this can be written in terms of the Dirac delta function as, g ( 0 ) = ∫ − ∞ ∞ d x g ( x ) δ ( x ) . {\displaystyle g(0)=\int \limits _{-\infty }^{\infty }dx\ g(x)\delta (x).} The Fourier transform of the Dirac delta function δ ( t ) {\displaystyle \delta (t)} is δ ( f ) = ∫ − ∞ ∞ δ ( t ) ⋅ e − i 2 π f t d t = lim a → 0 1 a ∫ − ∞ ∞ rect ⁡ ( t a ) ⋅ e − i 2 π f t d t = lim a → 0 sinc ⁡ ( a f ) . {\displaystyle \delta (f)=\int _{-\infty }^{\infty }\delta (t)\cdot e^{-i2\pi ft}\,dt=\lim _{a\to 0}{\frac {1}{a}}\int _{-\infty }^{\infty }\operatorname {rect} \left({\frac {t}{a}}\right)\cdot e^{-i2\pi ft}\,dt=\lim _{a\to 0}\operatorname {sinc} {(af)}.} where the sinc function here is the normalized sinc function. Because the first zero of the sinc function is at f = 1 / a {\displaystyle f=1/a} , which goes to infinity as a {\displaystyle a} goes to zero, the Fourier transform of δ ( t ) {\displaystyle \delta (t)} is δ ( f ) = 1 , {\displaystyle \delta (f)=1,} which means that the frequency spectrum of the Dirac delta function is infinitely broad. As a pulse is shortened in time, it becomes broader in spectrum. == See also == Fourier transform Square wave Step function Top-hat filter Boxcar function == References ==
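Both the Fourier pair rect ↔ sinc and the rational-limit representation given earlier in this article are easy to verify numerically. A minimal Python sketch (the grids and sample points are arbitrary illustrative choices):

```python
import numpy as np

# Check 1: the ordinary-frequency Fourier transform of rect(t) is sin(pi f)/(pi f).
t = np.linspace(-2, 2, 40001)
rect = np.where(np.abs(t) < 0.5, 1.0, 0.0)            # endpoint values have measure zero
f = np.linspace(-6, 6, 121)
ft = np.array([np.trapz(rect * np.exp(-2j * np.pi * fk * t), t) for fk in f])
print(np.max(np.abs(ft - np.sinc(f))))                # small: np.sinc(f) = sin(pi f)/(pi f)

# Check 2: 1 / ((2t)^(2n) + 1) approaches rect(t) pointwise as n grows.
for t0, target in [(0.2, 1.0), (0.5, 0.5), (0.8, 0.0)]:
    approx = 1.0 / ((2.0 * t0) ** (2 * 200) + 1.0)    # n = 200
    print(t0, target, round(approx, 6))               # matches the three cases above
```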
Wikipedia/Rectangular_function
In mathematical analysis, a Hermitian function is a complex function with the property that its complex conjugate is equal to the original function with the variable changed in sign: f ∗ ( x ) = f ( − x ) {\displaystyle f^{*}(x)=f(-x)} (where the ∗ {\displaystyle ^{*}} indicates the complex conjugate) for all x {\displaystyle x} in the domain of f {\displaystyle f} . In physics, this property is referred to as PT symmetry. This definition also extends to functions of two or more variables, e.g., in the case that f {\displaystyle f} is a function of two variables it is Hermitian if f ∗ ( x 1 , x 2 ) = f ( − x 1 , − x 2 ) {\displaystyle f^{*}(x_{1},x_{2})=f(-x_{1},-x_{2})} for all pairs ( x 1 , x 2 ) {\displaystyle (x_{1},x_{2})} in the domain of f {\displaystyle f} . From this definition it follows immediately that f {\displaystyle f} is a Hermitian function if and only if the real part of f {\displaystyle f} is an even function and the imaginary part of f {\displaystyle f} is an odd function. == Motivation == Hermitian functions appear frequently in mathematics, physics, and signal processing. For example, the following two statements follow from basic properties of the Fourier transform: The function f {\displaystyle f} is real-valued if and only if the Fourier transform of f {\displaystyle f} is Hermitian. The function f {\displaystyle f} is Hermitian if and only if the Fourier transform of f {\displaystyle f} is real-valued. Since the Fourier transform of a real signal is guaranteed to be Hermitian, it can be compressed using the Hermitian even/odd symmetry. This, for example, allows the discrete Fourier transform of a signal (which is in general complex) to be stored in the same space as the original real signal. If f is Hermitian, then f ⋆ g = f ∗ g {\displaystyle f\star g=f*g} , where ⋆ {\displaystyle \star } denotes cross-correlation and ∗ {\displaystyle *} denotes convolution. If both f and g are Hermitian, then f ⋆ g = g ⋆ f {\displaystyle f\star g=g\star f} . == See also == Complex conjugate – Fundamental operation on complex numbers Even and odd functions – Functions such that f(–x) equals f(x) or –f(x)
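The storage remark above is concrete in the discrete setting: the DFT of a real signal satisfies X[N − k] = conj(X[k]), so only about half of the coefficients are independent. A quick Python check (the random test signal is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=64)                       # a real-valued signal
X = np.fft.fft(x)

k = np.arange(64)
print(np.allclose(X[(-k) % 64], np.conj(X)))  # True: Hermitian symmetry
print(np.allclose(np.fft.rfft(x), X[:33]))    # rfft stores just the non-negative half
```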
Wikipedia/Hermitian_function
In mathematics, the compact-open topology is a topology defined on the set of continuous maps between two topological spaces. The compact-open topology is one of the commonly used topologies on function spaces, and is applied in homotopy theory and functional analysis. It was introduced by Ralph Fox in 1945. If the codomain of the functions under consideration has a uniform structure or a metric structure then the compact-open topology is the "topology of uniform convergence on compact sets." That is to say, a sequence of functions converges in the compact-open topology precisely when it converges uniformly on every compact subset of the domain. == Definition == Let X and Y be two topological spaces, and let C(X, Y) denote the set of all continuous maps between X and Y. Given a compact subset K of X and an open subset U of Y, let V(K, U) denote the set of all functions  f  ∈ C(X, Y) such that  f (K) ⊆ U. In other words, V ( K , U ) = C ( K , U ) × C ( K , Y ) C ( X , Y ) {\displaystyle V(K,U)=C(K,U)\times _{C(K,Y)}C(X,Y)} . Then the collection of all such V(K, U) is a subbase for the compact-open topology on C(X, Y). (This collection does not always form a base for a topology on C(X, Y).) When working in the category of compactly generated spaces, it is common to modify this definition by restricting to the subbase formed from those K that are the image of a compact Hausdorff space. Of course, if X is compactly generated and Hausdorff, this definition coincides with the previous one. However, the modified definition is crucial if one wants the convenient category of compactly generated weak Hausdorff spaces to be Cartesian closed, among other useful properties. The confusion between this definition and the one above is caused by differing usage of the word compact. If X is locally compact, then X × − {\displaystyle X\times -} from the category of topological spaces always has a right adjoint H o m ( X , − ) {\displaystyle Hom(X,-)} . This adjoint coincides with the compact-open topology and may be used to uniquely define it. The modification of the definition for compactly generated spaces may be viewed as taking the adjoint of the product in the category of compactly generated spaces instead of the category of topological spaces, which ensures that the right adjoint always exists. == Properties == If * is a one-point space then one can identify C(*, Y) with Y, and under this identification the compact-open topology agrees with the topology on Y. More generally, if X is a discrete space, then C(X, Y) can be identified with the cartesian product of |X| copies of Y and the compact-open topology agrees with the product topology. If Y is T0, T1, Hausdorff, regular, or Tychonoff, then the compact-open topology has the corresponding separation axiom. If X is Hausdorff and S is a subbase for Y, then the collection {V(K, U) : U ∈ S, K compact} is a subbase for the compact-open topology on C(X, Y). If Y is a metric space (or more generally, a uniform space), then the compact-open topology is equal to the topology of compact convergence. In other words, if Y is a metric space, then a sequence { fn } converges to  f  in the compact-open topology if and only if for every compact subset K of X, { fn } converges uniformly to  f  on K. If X is compact and Y is a uniform space, then the compact-open topology is equal to the topology of uniform convergence. 
If X, Y and Z are topological spaces, with Y locally compact Hausdorff (or even just locally compact preregular), then the composition map C(Y, Z) × C(X, Y) → C(X, Z), given by ( f , g) ↦  f ∘ g, is continuous (here all the function spaces are given the compact-open topology and C(Y, Z) × C(X, Y) is given the product topology). If X is a locally compact Hausdorff (or preregular) space, then the evaluation map e : C(X, Y) × X → Y, defined by e( f , x) =  f (x), is continuous. This can be seen as a special case of the above where X is a one-point space. If X is compact, and Y is a metric space with metric d, then the compact-open topology on C(X, Y) is metrizable, and a metric for it is given by e( f , g) = sup{d( f (x), g(x)) : x in X}, for  f , g in C(X, Y). More generally, if X is hemicompact and Y is a metric space, the compact-open topology is metrizable by an analogous construction. === Applications === The compact open topology can be used to topologize the following sets: Ω ( X , x 0 ) = { f : I → X ∣ f ( 0 ) = f ( 1 ) = x 0 } {\displaystyle \Omega (X,x_{0})=\{f:I\to X\mid f(0)=f(1)=x_{0}\}} , the loop space of X {\displaystyle X} at x 0 {\displaystyle x_{0}} , E ( X , x 0 , x 1 ) = { f : I → X ∣ f ( 0 ) = x 0 and f ( 1 ) = x 1 } {\displaystyle E(X,x_{0},x_{1})=\{f:I\to X\mid f(0)=x_{0}{\text{ and }}f(1)=x_{1}\}} , E ( X , x 0 ) = { f : I → X ∣ f ( 0 ) = x 0 } {\displaystyle E(X,x_{0})=\{f:I\to X\mid f(0)=x_{0}\}} . In addition, there is a homotopy equivalence between the spaces C ( Σ X , Y ) ≅ C ( X , Ω Y ) {\displaystyle C(\Sigma X,Y)\cong C(X,\Omega Y)} . The topological spaces C ( X , Y ) {\displaystyle C(X,Y)} are useful in homotopy theory because they can be used to form a topological space and a model for the homotopy type of the set of homotopy classes of maps π ( X , Y ) = { [ f ] : X → Y ∣ [ f ] is a homotopy class } . {\displaystyle \pi (X,Y)=\{[f]:X\to Y\mid [f]{\text{ is a homotopy class}}\}.} This is because π ( X , Y ) {\displaystyle \pi (X,Y)} is the set of path components in C ( X , Y ) {\displaystyle C(X,Y)} ; that is, there is an isomorphism of sets π ( X , Y ) → C ( I , C ( X , Y ) ) / ∼ , {\displaystyle \pi (X,Y)\to C(I,C(X,Y))/{\sim },} where ∼ {\displaystyle \sim } is the homotopy equivalence relation. == Fréchet differentiable functions == Let X and Y be two Banach spaces defined over the same field, and let C m(U, Y) denote the set of all m-continuously Fréchet-differentiable functions from the open subset U ⊆ X to Y. The compact-open topology is the initial topology induced by the seminorms p K ( f ) = sup { ‖ D j f ( x ) ‖ : x ∈ K , 0 ≤ j ≤ m } {\displaystyle p_{K}(f)=\sup \left\{\left\|D^{j}f(x)\right\|\ :\ x\in K,0\leq j\leq m\right\}} where D0 f (x) =  f (x), for each compact subset K ⊆ U. == See also == Topology of uniform convergence Uniform convergence – Mode of convergence of a function sequence == References == Dugundji, J. (1966). Topology. Allyn and Bacon. ASIN B000KWE22K. O.Ya. Viro, O.A. Ivanov, V.M. Kharlamov and N.Yu. Netsvetaev (2007) Textbook in Problems on Elementary Topology. "Compact-open topology". PlanetMath. Topology and Groupoids Section 5.9 Ronald Brown, 2006
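The metric characterization above, convergence in the compact-open topology as uniform convergence on every compact set, can be made concrete numerically. In the Python sketch below (grids only approximate the sets), the standard example fₙ(x) = xⁿ on X = [0, 1) converges to 0 compactly but not uniformly:

```python
import numpy as np

def sup_dist(f, g, xs):
    """Grid approximation of the sup metric e(f, g) = sup d(f(x), g(x))."""
    return np.max(np.abs(f(xs) - g(xs)))

zero = lambda x: np.zeros_like(x)
K = np.linspace(0.0, 0.9, 1000)          # a compact subset of [0, 1)
X = np.linspace(0.0, 1.0 - 1e-6, 1000)   # approximating all of [0, 1)

for n in (5, 50, 500):
    f_n = lambda x, n=n: x**n
    print(n, sup_dist(f_n, zero, K), sup_dist(f_n, zero, X))
# The sup over K tends to 0 (compact convergence), while the sup over
# the whole domain stays near 1 (no uniform convergence on all of X).
```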
Wikipedia/Compact-open_topology
In Hamiltonian mechanics, the linear canonical transformation (LCT) is a family of integral transforms that generalizes many classical transforms. It has 4 parameters and 1 constraint, so it is a 3-dimensional family, and can be visualized as the action of the special linear group SL2(C) on the time–frequency plane (domain). As this defines the original function up to a sign, this translates into an action of its double cover on the original function space. The LCT generalizes the Fourier, fractional Fourier, Laplace, Gauss–Weierstrass, Bargmann and the Fresnel transforms as particular cases. The name "linear canonical transformation" is from canonical transformation, a map that preserves the symplectic structure, as SL2(R) can also be interpreted as the symplectic group Sp2, and thus LCTs are the linear maps of the time–frequency domain which preserve the symplectic form, and their action on the Hilbert space is given by the Metaplectic group. The basic properties of the transformations mentioned above, such as scaling, shift, coordinate multiplication are considered. Any linear canonical transformation is related to affine transformations in phase space, defined by time-frequency or position-momentum coordinates. == Definition == The LCT can be represented in several ways; most easily, it can be parameterized by a 2×2 matrix with determinant 1, i.e., an element of the special linear group SL2(C). Then for any such matrix ( a b c d ) , {\displaystyle {\bigl (}{\begin{smallmatrix}a&b\\c&d\end{smallmatrix}}{\bigr )},} with ad − bc = 1, the corresponding integral transform from a function x ( t ) {\displaystyle x(t)} to X ( u ) {\displaystyle X(u)} is defined as X ( a , b , c , d ) ( u ) = { 1 i b ⋅ e i π d b u 2 ∫ − ∞ ∞ e − i 2 π 1 b u t e i π a b t 2 x ( t ) d t , when b ≠ 0 , d ⋅ e i π c d u 2 x ( d ⋅ u ) , when b = 0. {\displaystyle X_{(a,b,c,d)}(u)={\begin{cases}{\sqrt {\frac {1}{ib}}}\cdot e^{i\pi {\frac {d}{b}}u^{2}}\int _{-\infty }^{\infty }e^{-i2\pi {\frac {1}{b}}ut}e^{i\pi {\frac {a}{b}}t^{2}}x(t)\,dt,&{\text{when }}b\neq 0,\\{\sqrt {d}}\cdot e^{i\pi cdu^{2}}x(d\cdot u),&{\text{when }}b=0.\end{cases}}} == Special cases == Many classical transforms are special cases of the linear canonical transform: === Scaling === Scaling, x ( u ) ↦ σ x ( σ u ) {\displaystyle x(u)\mapsto {\sqrt {\sigma }}x(\sigma u)} , corresponds to scaling the time and frequency dimensions inversely (as time goes faster, frequencies are higher and the time dimension shrinks): [ 1 / σ 0 0 σ ] {\displaystyle {\begin{bmatrix}1/\sigma &0\\0&\sigma \end{bmatrix}}} === Fourier transform === The Fourier transform corresponds to a clockwise rotation by 90° in the time–frequency plane, represented by the matrix [ a b c d ] = [ 0 1 − 1 0 ] . {\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}0&1\\-1&0\end{bmatrix}}.} === Fractional Fourier transform === The fractional Fourier transform corresponds to rotation by an arbitrary angle; they are the elliptic elements of SL2(R), represented by the matrices [ a b c d ] = [ cos ⁡ θ sin ⁡ θ − sin ⁡ θ cos ⁡ θ ] . {\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}\cos \theta &\sin \theta \\-\sin \theta &\cos \theta \end{bmatrix}}.} The Fourier transform is the fractional Fourier transform when θ = 90 ∘ . {\displaystyle \theta =90^{\circ }.} The inverse Fourier transform corresponds to θ = − 90 ∘ . 
{\displaystyle \theta =-90^{\circ }.} === Fresnel transform === The Fresnel transform corresponds to shearing; these are a family of parabolic elements, represented by the matrices [ a b c d ] = [ 1 λ z 0 1 ] , {\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}1&\lambda z\\0&1\end{bmatrix}},} where z is distance, and λ is wavelength. === Laplace transform === The Laplace transform corresponds to rotation by 90° into the complex domain and can be represented by the matrix [ a b c d ] = [ 0 i i 0 ] . {\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}0&i\\i&0\end{bmatrix}}.} === Fractional Laplace transform === The fractional Laplace transform corresponds to rotation by an arbitrary angle into the complex domain and can be represented by the matrix [ a b c d ] = [ i cos ⁡ θ i sin ⁡ θ i sin ⁡ θ − i cos ⁡ θ ] . {\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}i\cos \theta &i\sin \theta \\i\sin \theta &-i\cos \theta \end{bmatrix}}.} The Laplace transform is the fractional Laplace transform when θ = 90 ∘ . {\displaystyle \theta =90^{\circ }.} The inverse Laplace transform corresponds to θ = − 90 ∘ . {\displaystyle \theta =-90^{\circ }.} === Chirp multiplication === Chirp multiplication, x ( u ) ↦ e i π τ u 2 x ( u ) {\displaystyle x(u)\mapsto e^{i\pi \tau u^{2}}x(u)} , corresponds to b = 0 , c = τ {\displaystyle b=0,c=\tau } : [ a b c d ] = [ 1 0 τ 1 ] . {\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}1&0\\\tau &1\end{bmatrix}}.} == Composition == Composition of LCTs corresponds to multiplication of the corresponding matrices; this is also known as the additivity property of the Wigner distribution function (WDF). Occasionally the product of transforms can pick up a sign factor due to picking a different branch of the square root in the definition of the LCT. In the literature, this is called the metaplectic phase. If the LCT is denoted by ⁠ O F ( a , b , c , d ) {\displaystyle O_{F}^{(a,b,c,d)}} ⁠, i.e. X ( a , b , c , d ) ( u ) = O F ( a , b , c , d ) [ x ( t ) ] , {\displaystyle X_{(a,b,c,d)}(u)=O_{F}^{(a,b,c,d)}[x(t)],} then O F ( a 2 , b 2 , c 2 , d 2 ) { O F ( a 1 , b 1 , c 1 , d 1 ) [ x ( t ) ] } = O F ( a 3 , b 3 , c 3 , d 3 ) [ x ( t ) ] , {\displaystyle O_{F}^{(a_{2},b_{2},c_{2},d_{2})}\left\{O_{F}^{(a_{1},b_{1},c_{1},d_{1})}[x(t)]\right\}=O_{F}^{(a_{3},b_{3},c_{3},d_{3})}[x(t)],} where [ a 3 b 3 c 3 d 3 ] = [ a 2 b 2 c 2 d 2 ] [ a 1 b 1 c 1 d 1 ] . {\displaystyle {\begin{bmatrix}a_{3}&b_{3}\\c_{3}&d_{3}\end{bmatrix}}={\begin{bmatrix}a_{2}&b_{2}\\c_{2}&d_{2}\end{bmatrix}}{\begin{bmatrix}a_{1}&b_{1}\\c_{1}&d_{1}\end{bmatrix}}.} If W X ( a , b , c , d ) ( u , v ) {\displaystyle W_{X(a,b,c,d)}(u,v)} is the Wigner distribution function of X ( a , b , c , d ) ( u ) {\displaystyle X_{(a,b,c,d)}(u)} , where X ( a , b , c , d ) ( u ) {\displaystyle X_{(a,b,c,d)}(u)} is the LCT of x ( t ) {\displaystyle x(t)} , then W X ( a , b , c , d ) ( u , v ) = W x ( d u − b v , − c u + a v ) , {\displaystyle W_{X(a,b,c,d)}(u,v)=W_{x}(du-bv,-cu+av),} W X ( a , b , c , d ) ( a u + b v , c u + d v ) = W x ( u , v ) . {\displaystyle W_{X(a,b,c,d)}(au+bv,cu+dv)=W_{x}(u,v).} The LCT thus acts as a twisting operation on the WDF, and Cohen's class distributions admit the same twisting operation.
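The integral definition of the LCT at the top of this article can be checked by direct quadrature. The Python sketch below (only the b ≠ 0 branch; the discretization and the test function are illustrative) uses the Fourier case (a, b, c, d) = (0, 1, −1, 0), under which exp(−πt²) is reproduced up to the constant unit-modulus factor √(1/i):

```python
import numpy as np

def lct(x_fn, a, b, c, d, u_grid, t_grid):
    """Direct quadrature of the b != 0 branch of the LCT definition."""
    assert b != 0 and abs(a * d - b * c - 1.0) < 1e-12
    t = t_grid
    xt = x_fn(t)
    out = np.empty(len(u_grid), dtype=complex)
    for k, u in enumerate(u_grid):
        integrand = np.exp(-2j * np.pi * u * t / b) * np.exp(1j * np.pi * a * t**2 / b) * xt
        out[k] = np.sqrt(1.0 / (1j * b)) * np.exp(1j * np.pi * d * u**2 / b) * np.trapz(integrand, t)
    return out

t = np.linspace(-8.0, 8.0, 4001)
u = np.linspace(-4.0, 4.0, 201)
gauss = lambda s: np.exp(-np.pi * s**2)

X = lct(gauss, 0.0, 1.0, -1.0, 0.0, u, t)       # the Fourier-transform case
print(np.max(np.abs(np.abs(X) - gauss(u))))     # small discretization error
```

Composing two calls of `lct` and multiplying the corresponding matrices reproduces the additivity property, possibly up to the metaplectic sign noted above.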
We can freely use the LCT to transform a parallelogram whose center is at (0, 0) into another parallelogram with the same area and the same center: From this picture we know that the point (−1, 2) transforms to the point (0, 1), and the point (1, 2) transforms to the point (4, 3). As a result, we can write down the equations { − a + 2 b = 0 , − c + 2 d = 1 , { a + 2 b = 4 , c + 2 d = 3. {\displaystyle {\begin{cases}-a+2b=0,\\-c+2d=1,\end{cases}}\qquad {\begin{cases}a+2b=4,\\c+2d=3.\end{cases}}} Solving these equations gives (a, b, c, d) = (2, 1, 1, 1). == In optics and quantum mechanics == Paraxial optical systems implemented entirely with thin lenses and propagation through free space and/or graded-index (GRIN) media are quadratic-phase systems (QPS); these were known before Moshinsky and Quesne (1974) called attention to their significance in connection with canonical transformations in quantum mechanics. The effect of any arbitrary QPS on an input wavefield can be described using the linear canonical transform, a particular case of which was developed by Segal (1963) and Bargmann (1961) in order to formalize Fock's (1928) boson calculus. In quantum mechanics, linear canonical transformations can be identified with the linear transformations which mix the momentum operator with the position operator and leave invariant the canonical commutation relations. == Applications == Canonical transforms are used to analyze differential equations. These include diffusion, the Schrödinger free particle, the linear potential (free-fall), and the attractive and repulsive oscillator equations. It also includes a few others such as the Fokker–Planck equation. Although this class is far from universal, the ease with which solutions and properties are found makes canonical transforms an attractive tool for problems such as these. Wave propagation through air, a lens, and between satellite dishes are discussed here. All of the computations can be reduced to 2×2 matrix algebra. This is the spirit of LCT. === Electromagnetic wave propagation === Assuming the system is as depicted in the figure, the wave travels from the (xi, yi) plane to the (x, y) plane. The Fresnel transform is used to describe electromagnetic wave propagation in free space: U 0 ( x , y ) = − j λ e j k z z ∫ − ∞ ∞ ∫ − ∞ ∞ e j k 2 z [ ( x − x i ) 2 + ( y − y i ) 2 ] U i ( x i , y i ) d x i d y i , {\displaystyle U_{0}(x,y)=-{\frac {j}{\lambda }}{\frac {e^{jkz}}{z}}\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }e^{j{\frac {k}{2z}}\left[(x-x_{i})^{2}+(y-y_{i})^{2}\right]}U_{i}(x_{i},y_{i})\,dx_{i}\,dy_{i},} where ⁠ k = 2 π / λ {\displaystyle k=2\pi /\lambda } ⁠ is the wave number, λ is the wavelength, z is the distance of propagation, ⁠ j = − 1 {\displaystyle j={\sqrt {-1}}} ⁠ is the imaginary unit. This is equivalent to LCT (shearing), when [ a b c d ] = [ 1 λ z 0 1 ] . {\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}1&\lambda z\\0&1\end{bmatrix}}.} When the travel distance (z) is larger, the shearing effect is larger. === Spherical lens === With the lens as depicted in the figure, and the refractive index denoted as n, the result is U 0 ( x , y ) = e j k n Δ e − j k 2 f [ x 2 + y 2 ] U i ( x , y ) , {\displaystyle U_{0}(x,y)=e^{jkn\Delta }e^{-j{\frac {k}{2f}}[x^{2}+y^{2}]}U_{i}(x,y),} where f is the focal length, and Δ is the thickness of the lens. The distortion passing through the lens is similar to LCT, when [ a b c d ] = [ 1 0 − 1 λ f 1 ] .
{\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}1&0\\{\frac {-1}{\lambda f}}&1\end{bmatrix}}.} This is also a shearing effect: when the focal length is smaller, the shearing effect is larger. === Spherical mirror === The spherical mirror—e.g., a satellite dish—can be described as an LCT, with [ a b c d ] = [ 1 0 − 1 λ R 1 ] . {\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}1&0\\{\frac {-1}{\lambda R}}&1\end{bmatrix}}.} This is very similar to the lens, except that the focal length is replaced by the radius R of the dish. A spherical mirror with radius of curvature R is equivalent to a thin lens with focal length f = −R/2 (by convention, R < 0 for a concave mirror, R > 0 for a convex mirror). Therefore, if the radius is smaller, the shearing effect is larger. === Joint free space and spherical lens === The relation between the input and output can be represented by the LCT: [ a b c d ] = [ 1 λ z 2 0 1 ] [ 1 0 − 1 / λ f 1 ] [ 1 λ z 1 0 1 ] = [ 1 − z 2 / f λ ( z 1 + z 2 ) − λ z 1 z 2 / f − 1 / λ f 1 − z 1 / f ] . {\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}1&\lambda z_{2}\\0&1\end{bmatrix}}{\begin{bmatrix}1&0\\-1/\lambda f&1\end{bmatrix}}{\begin{bmatrix}1&\lambda z_{1}\\0&1\end{bmatrix}}={\begin{bmatrix}1-z_{2}/f&\lambda (z_{1}+z_{2})-\lambda z_{1}z_{2}/f\\-1/\lambda f&1-z_{1}/f\end{bmatrix}}\,.} If ⁠ z 1 = z 2 = 2 f {\displaystyle z_{1}=z_{2}=2f} ⁠, the result is an inverted real image. If ⁠ z 1 = z 2 = f {\displaystyle z_{1}=z_{2}=f} ⁠, it is a Fourier transform plus scaling. If ⁠ z 1 = z 2 {\displaystyle z_{1}=z_{2}} ⁠, it is a fractional Fourier transform plus scaling. == Basic properties == Given a two-dimensional column vector r = [ x y ] , {\displaystyle r={\begin{bmatrix}x\\y\end{bmatrix}},} the basic properties of the LCT (scaling, shift, coordinate multiplication) can be tabulated as transform pairs for specific inputs; the table itself is not reproduced here. == Example == The system considered is depicted in the figure to the right: two dishes – one being the emitter and the other the receiver – and a signal travelling between them over a distance D. First, for dish A (emitter), the LCT matrix looks like this: [ 1 0 − 1 λ R A 1 ] . {\displaystyle {\begin{bmatrix}1&0\\{\frac {-1}{\lambda R_{A}}}&1\end{bmatrix}}.} Then, for dish B (receiver), the LCT matrix similarly becomes: [ 1 0 − 1 λ R B 1 ] . {\displaystyle {\begin{bmatrix}1&0\\{\frac {-1}{\lambda R_{B}}}&1\end{bmatrix}}.} Finally, for the propagation of the signal in air, the LCT matrix is: [ 1 λ D 0 1 ] . {\displaystyle {\begin{bmatrix}1&\lambda D\\0&1\end{bmatrix}}.} Putting all three components together, the LCT of the system is: [ a b c d ] = [ 1 0 − 1 λ R B 1 ] [ 1 λ D 0 1 ] [ 1 0 − 1 λ R A 1 ] = [ 1 − D R A λ D − 1 λ ( R A − 1 + R B − 1 − R A − 1 R B − 1 D ) 1 − D R B ] . {\displaystyle {\begin{bmatrix}a&b\\c&d\end{bmatrix}}={\begin{bmatrix}1&0\\{\frac {-1}{\lambda R_{B}}}&1\end{bmatrix}}{\begin{bmatrix}1&\lambda D\\0&1\end{bmatrix}}{\begin{bmatrix}1&0\\{\frac {-1}{\lambda R_{A}}}&1\end{bmatrix}}={\begin{bmatrix}1-{\frac {D}{R_{A}}}&\lambda D\\-{\frac {1}{\lambda }}(R_{A}^{-1}+R_{B}^{-1}-R_{A}^{-1}R_{B}^{-1}D)&1-{\frac {D}{R_{B}}}\end{bmatrix}}\,.} == See also == Segal–Shale–Weil distribution, a metaplectic group of operators related to the chirplet transform Other time–frequency transforms: Fractional Fourier transform Continuous Fourier transform Chirplet transform Applications: Focus recovery based on the linear canonical transform Ray transfer matrix analysis == Notes == == References == J.J. Healy, M.A. Kutay, H.M. Ozaktas and J.T.
== See also == Segal–Shale–Weil distribution, a metaplectic group of operators related to the chirplet transform Other time–frequency transforms: Fractional Fourier transform Continuous Fourier transform Chirplet transform Applications: Focus recovery based on the linear canonical transform Ray transfer matrix analysis == Notes == == References == J.J. Healy, M.A. Kutay, H.M. Ozaktas and J.T. Sheridan, Linear Canonical Transforms: Theory and Applications, Springer, New York, 2016. J.J. Ding, "Time–frequency analysis and wavelet transform course note", Department of Electrical Engineering, National Taiwan University (NTU), Taipei, Taiwan, 2007. K.B. Wolf, Integral Transforms in Science and Engineering, Ch. 9 & 10, New York: Plenum Press, 1979. S.A. Collins, "Lens-system diffraction integral written in terms of matrix optics", J. Opt. Soc. Amer. 60, 1168–1177 (1970). M. Moshinsky and C. Quesne, "Linear canonical transformations and their unitary representations", J. Math. Phys. 12, 8, 1772–1783 (1971). B.M. Hennelly and J.T. Sheridan, "Fast numerical algorithm for the linear canonical transform", J. Opt. Soc. Am. A 22, 5, 928–937 (2005). H.M. Ozaktas, A. Koç, I. Sari, and M.A. Kutay, "Efficient computation of quadratic-phase integrals in optics", Opt. Lett. 31, 35–37 (2006). Bing-Zhao Li, Ran Tao, Yue Wang, "New sampling formulae related to the linear canonical transform", Signal Processing 87, 983–990 (2007). A. Koç, H.M. Ozaktas, C. Candan, and M.A. Kutay, "Digital computation of linear canonical transforms", IEEE Trans. Signal Process., vol. 56, no. 6, 2383–2394 (2008). Ran Tao, Bing-Zhao Li, Yue Wang, "On sampling of bandlimited signals associated with the linear canonical transform", IEEE Transactions on Signal Processing, vol. 56, no. 11, 5454–5464 (2008). D. Stoler, "Operator methods in physical optics", 26th Annual Technical Symposium, International Society for Optics and Photonics, 1982. Tian-Zhou Xu, Bing-Zhao Li, Linear Canonical Transform and Its Applications, Beijing: Science Press, 2013. Tatiana Alieva and Martin J. Bastiaans (2016), "The linear canonical transformations: definition and properties", in: Healy J., Alper Kutay M., Ozaktas H., Sheridan J. (eds), Linear Canonical Transforms, Springer Series in Optical Sciences, vol. 198, Springer, New York, NY.
Wikipedia/Linear_canonical_transform
In signal processing and control theory, the impulse response, or impulse response function (IRF), of a dynamic system is its output when presented with a brief input signal, called an impulse (δ(t)). More generally, an impulse response is the reaction of any dynamic system in response to some external change. In both cases, the impulse response describes the reaction of the system as a function of time (or possibly as a function of some other independent variable that parameterizes the dynamic behavior of the system). In all these cases, the dynamic system and its impulse response may be actual physical objects, or may be mathematical systems of equations describing such objects. Since the impulse function contains all frequencies (the Fourier transform of the Dirac delta function is constant over all frequencies, i.e. the delta function has infinite bandwidth), the impulse response defines the response of a linear time-invariant system for all frequencies. == Mathematical considerations == Mathematically, how the impulse is described depends on whether the system is modeled in discrete or continuous time. The impulse can be modeled as a Dirac delta function for continuous-time systems, or as the discrete unit sample function for discrete-time systems. The Dirac delta represents the limiting case of a pulse made very short in time while maintaining its area or integral (thus giving an infinitely high peak). While this is impossible in any real system, it is a useful idealization. In Fourier analysis theory, such an impulse comprises equal portions of all possible excitation frequencies, which makes it a convenient test probe. Any system in a large class known as linear, time-invariant (LTI) is completely characterized by its impulse response. That is, for any input, the output can be calculated in terms of the input and the impulse response. (See LTI system theory.) The impulse response of a linear transformation is the image of Dirac's delta function under the transformation, analogous to the fundamental solution of a partial differential operator. It is usually easier to analyze systems using transfer functions as opposed to impulse responses. The transfer function is the Laplace transform of the impulse response. The Laplace transform of a system's output may be determined by the multiplication of the transfer function with the input's Laplace transform in the complex plane, also known as the frequency domain. An inverse Laplace transform of this result will yield the output in the time domain. To determine an output directly in the time domain requires the convolution of the input with the impulse response. When the transfer function and the Laplace transform of the input are known, this convolution may be more complicated than the alternative of multiplying two functions in the frequency domain. The impulse response, considered as a Green's function, can be thought of as an "influence function": how a point of input influences output.
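To make the LTI characterization concrete, here is a small sketch (not from the original article) of the discrete-time case: the impulse response is recovered by feeding the system a unit sample, and any output is the convolution of the input with that response. The filter coefficients are illustrative assumptions.

```python
import numpy as np

# Impulse response of a simple FIR smoothing filter (illustrative coefficients).
h = np.array([0.25, 0.5, 0.25])

# Feeding the system a discrete unit sample returns h itself.
impulse = np.zeros(8)
impulse[0] = 1.0
print(np.convolve(impulse, h)[:3])          # -> [0.25 0.5 0.25]

# For an LTI system, ANY output is input convolved with h.
x = np.sin(2 * np.pi * 0.05 * np.arange(64))
y = np.convolve(x, h)                       # system response to the input x
```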
== Practical applications == In practice, it is not possible to perturb a system with a perfect impulse. One can use a brief pulse as a first approximation. Limitations of this approach include the duration of the pulse and its magnitude. Provided the pulse is short enough, the response will be close to the ideal case. Additionally, in many systems, a pulse of large intensity may drive the system into the nonlinear regime. Other methods exist to construct an impulse response. The impulse response can be calculated from the input and output of a system driven with a pseudo-random sequence, such as maximum length sequences. Another approach is to take a sine sweep measurement and process the result to get the impulse response. === Loudspeakers === Impulse response loudspeaker testing was first developed in the 1970s. Loudspeakers suffer from phase inaccuracy (delayed frequencies) which can be caused by passive crossovers, resonance, cone momentum, the internal volume, and vibrating enclosure panels. The impulse response can be used to indicate when such inaccuracies can be improved by different materials, enclosures or crossovers. Loudspeakers have a physical limit to their power output, thus the input amplitude must be limited to maintain linearity. This limitation led to the use of inputs like maximum length sequences in obtaining the impulse response. === Electronic processing === Impulse response analysis is a major facet of radar, ultrasound imaging, and many areas of digital signal processing. An interesting example is found in broadband internet connections. Digital subscriber line service providers use adaptive equalization to compensate for signal distortion and interference from using copper phone lines for transmission. === Control systems === In control theory the impulse response is the response of a system to a Dirac delta input. This proves useful in the analysis of dynamic systems; the Laplace transform of the delta function is 1, so the impulse response is equivalent to the inverse Laplace transform of the system's transfer function. === Acoustic and audio applications === In acoustic and audio settings, impulse responses can be used to capture the acoustic characteristics of many things. The reverb at a location, the body of an instrument, certain analog audio equipment, and amplifiers are all emulated by impulse responses. The impulse is convolved with a dry signal in software, often to create the effect of a physical recording. Various packages containing impulse responses from specific locations are available online. === Economics === In economics, and especially in contemporary macroeconomic modeling, impulse response functions are used to describe how the economy reacts over time to exogenous impulses, which economists usually call shocks, and are often modeled in the context of a vector autoregression. Impulses that are often treated as exogenous from a macroeconomic point of view include changes in government spending, tax rates, and other fiscal policy parameters; changes in the monetary base or other monetary policy parameters; changes in productivity or other technological parameters; and changes in preferences, such as the degree of impatience. Impulse response functions describe the reaction of endogenous macroeconomic variables such as output, consumption, investment, and employment at the time of the shock and over subsequent points in time. More recently, asymmetric impulse response functions, which separate the impact of a positive shock from that of a negative one, have been suggested in the literature. == See also == == References == == External links == Media related to Impulse response at Wikimedia Commons
Wikipedia/Impulse_response_function
In mathematical physics and harmonic analysis, the quadratic Fourier transform is an integral transform that generalizes the fractional Fourier transform, which in turn generalizes the Fourier transform. Roughly speaking, the Fourier transform corresponds to a change of variables from time to frequency (in the context of harmonic analysis) or from position to momentum (in the context of quantum mechanics). In phase space, this is a 90 degree rotation. The fractional Fourier transform generalizes this to any angle rotation, giving a smooth mixture of time and frequency, or of position and momentum. The quadratic Fourier transform extends this further to the group of all linear symplectic transformations in phase space (of which rotations are a subgroup). More specifically, for every member of the metaplectic group (which is a double cover of the symplectic group) there is a corresponding quadratic Fourier transform. == References ==
Wikipedia/Quadratic_Fourier_transform
In mathematics, Schwartz space S {\displaystyle {\mathcal {S}}} is the function space of all functions whose derivatives are rapidly decreasing. This space has the important property that the Fourier transform is an automorphism on this space. This property enables one, by duality, to define the Fourier transform for elements in the dual space S ∗ {\displaystyle {\mathcal {S}}^{*}} of S {\displaystyle {\mathcal {S}}} , that is, for tempered distributions. A function in the Schwartz space is sometimes called a Schwartz function. Schwartz space is named after French mathematician Laurent Schwartz. == Definition == Let N {\displaystyle \mathbb {N} } be the set of non-negative integers, and for any n ∈ N {\displaystyle n\in \mathbb {N} } , let N n := N × ⋯ × N ⏟ n times {\displaystyle \mathbb {N} ^{n}:=\underbrace {\mathbb {N} \times \dots \times \mathbb {N} } _{n{\text{ times}}}} be the n-fold Cartesian product. The Schwartz space or space of rapidly decreasing functions on R n {\displaystyle \mathbb {R} ^{n}} is the function space S ( R n , C ) := { f ∈ C ∞ ( R n , C ) ∣ ∀ α , β ∈ N n , ‖ f ‖ α , β < ∞ } , {\displaystyle {\mathcal {S}}\left(\mathbb {R} ^{n},\mathbb {C} \right):=\left\{f\in C^{\infty }(\mathbb {R} ^{n},\mathbb {C} )\mid \forall {\boldsymbol {\alpha }},{\boldsymbol {\beta }}\in \mathbb {N} ^{n},\|f\|_{{\boldsymbol {\alpha }},{\boldsymbol {\beta }}}<\infty \right\},} where C ∞ ( R n , C ) {\displaystyle C^{\infty }(\mathbb {R} ^{n},\mathbb {C} )} is the function space of smooth functions from R n {\displaystyle \mathbb {R} ^{n}} into C {\displaystyle \mathbb {C} } , and ‖ f ‖ α , β := sup x ∈ R n | x α ( D β f ) ( x ) | . {\displaystyle \|f\|_{{\boldsymbol {\alpha }},{\boldsymbol {\beta }}}:=\sup _{{\boldsymbol {x}}\in \mathbb {R} ^{n}}\left|{\boldsymbol {x}}^{\boldsymbol {\alpha }}({\boldsymbol {D}}^{\boldsymbol {\beta }}f)({\boldsymbol {x}})\right|.} Here, sup {\displaystyle \sup } denotes the supremum, and we used multi-index notation, i.e. x α := x 1 α 1 x 2 α 2 … x n α n {\displaystyle {\boldsymbol {x}}^{\boldsymbol {\alpha }}:=x_{1}^{\alpha _{1}}x_{2}^{\alpha _{2}}\ldots x_{n}^{\alpha _{n}}} and D β := ∂ 1 β 1 ∂ 2 β 2 … ∂ n β n {\displaystyle D^{\boldsymbol {\beta }}:=\partial _{1}^{\beta _{1}}\partial _{2}^{\beta _{2}}\ldots \partial _{n}^{\beta _{n}}} . In plainer terms, one can think of a rapidly decreasing function as essentially a function f(x) such that f(x), f ′(x), f ′′(x), ... all exist everywhere on R and go to zero as x → ±∞ faster than any reciprocal power of x. In particular, 𝒮(Rn, C) is a subspace of the function space C∞(Rn, C) of smooth functions from Rn into C. == Examples of functions in the Schwartz space == If α {\displaystyle {\boldsymbol {\alpha }}} is a multi-index, and a is a positive real number, then x α e − a | x | 2 ∈ S ( R n ) . {\displaystyle {\boldsymbol {x}}^{\boldsymbol {\alpha }}e^{-a|{\boldsymbol {x}}|^{2}}\in {\mathcal {S}}(\mathbb {R} ^{n}).} Any smooth function f with compact support is in 𝒮(Rn). This is clear since any derivative of f is continuous and supported in the support of f, so x α D β f {\displaystyle {\boldsymbol {x}}^{\boldsymbol {\alpha }}{\boldsymbol {D}}^{\boldsymbol {\beta }}f} has a maximum in Rn by the extreme value theorem.
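As a numerical illustration of the first example above (a sketch not in the original article; the particular function, grid and ranges are arbitrary choices, and sympy/numpy are assumed available), one can estimate the seminorms ‖f‖_{α,β} for a Gaussian multiplied by a monomial and observe that they are all finite:

```python
import numpy as np
import sympy as sp

x = sp.symbols('x', real=True)
f = x**2 * sp.exp(-x**2)     # the example x^alpha * e^{-a|x|^2}, with alpha = 2, a = 1

def seminorm(alpha, beta):
    """Numerically estimate ||f||_{alpha,beta} = sup_x |x^alpha (D^beta f)(x)|."""
    g = sp.lambdify(x, x**alpha * sp.diff(f, x, beta), 'numpy')
    t = np.linspace(-20.0, 20.0, 200001)   # Gaussian tails are negligible beyond this
    return np.abs(g(t)).max()

# Every seminorm is finite, consistent with f being a Schwartz function.
for a_idx in range(4):
    for b_idx in range(4):
        assert np.isfinite(seminorm(a_idx, b_idx))
```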
Because the Schwartz space is a vector space, any polynomial ϕ ( x ) {\displaystyle \phi ({\boldsymbol {x}})} can be multiplied by a factor e − a | x | 2 {\displaystyle e^{-a\vert {\boldsymbol {x}}\vert ^{2}}} for a > 0 {\displaystyle a>0} a real constant, to give an element of the Schwartz space. In particular, there is an embedding of polynomials into a Schwartz space. == Properties == === Analytic properties === From Leibniz's rule, it follows that 𝒮(Rn) is also closed under pointwise multiplication: If f, g ∈ 𝒮(Rn) then the product fg ∈ 𝒮(Rn). In particular, this implies that 𝒮(Rn) is an R-algebra. More generally, if f ∈ 𝒮(R) and H is a bounded smooth function with bounded derivatives of all orders, then fH ∈ 𝒮(R). The Fourier transform is a linear isomorphism F:𝒮(Rn) → 𝒮(Rn). If f ∈ 𝒮(Rn) then f is Lipschitz continuous and hence uniformly continuous on Rn. 𝒮(Rn) is a distinguished locally convex Fréchet Schwartz TVS over the complex numbers. Both 𝒮(Rn) and its strong dual space are also: complete Hausdorff locally convex spaces, nuclear Montel spaces, ultrabornological spaces, reflexive barrelled Mackey spaces. === Relation of Schwartz spaces with other topological vector spaces === If 1 ≤ p ≤ ∞, then 𝒮(Rn) ⊂ Lp(Rn). If 1 ≤ p < ∞, then 𝒮(Rn) is dense in Lp(Rn). The space of all bump functions, C∞c(Rn), is included in 𝒮(Rn). == See also == Bump function Schwartz–Bruhat function Nuclear space == References == === Sources === Hörmander, L. (1990). The Analysis of Linear Partial Differential Operators I, (Distribution theory and Fourier Analysis) (2nd ed.). Berlin: Springer-Verlag. ISBN 3-540-52343-X. Reed, M.; Simon, B. (1980). Methods of Modern Mathematical Physics: Functional Analysis I (Revised and enlarged ed.). San Diego: Academic Press. ISBN 0-12-585050-6. Stein, Elias M.; Shakarchi, Rami (2003). Fourier Analysis: An Introduction (Princeton Lectures in Analysis I). Princeton: Princeton University Press. ISBN 0-691-11384-X. Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322. This article incorporates material from Space of rapidly decreasing functions on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Wikipedia/Schwartz_functions
A pitch detection algorithm (PDA) is an algorithm designed to estimate the pitch or fundamental frequency of a quasiperiodic or oscillating signal, usually a digital recording of speech or a musical note or tone. This can be done in the time domain, the frequency domain, or both. PDAs are used in various contexts (e.g. phonetics, music information retrieval, speech coding, musical performance systems) and so there may be different demands placed upon the algorithm. There is as yet no single ideal PDA, so a variety of algorithms exist, most falling broadly into the classes given below. A PDA typically estimates the period of a quasiperiodic signal, then inverts that value to give the frequency. == General approaches == One simple approach would be to measure the distance between zero crossing points of the signal (i.e. the zero-crossing rate). However, this does not work well with complicated waveforms which are composed of multiple sine waves with differing periods or noisy data. Nevertheless, there are cases in which zero-crossing can be a useful measure, e.g. in some speech applications where a single source is assumed. The algorithm's simplicity makes it "cheap" to implement. More sophisticated approaches compare segments of the signal with other segments offset by a trial period to find a match. AMDF (average magnitude difference function), ASMDF (average squared mean difference function), and other similar autocorrelation algorithms work this way. These algorithms can give quite accurate results for highly periodic signals. However, they have false detection problems (often "octave errors"), can sometimes cope badly with noisy signals (depending on the implementation), and, in their basic implementations, do not deal well with polyphonic sounds (which involve multiple musical notes of different pitches). Current time-domain pitch detector algorithms tend to build upon the basic methods mentioned above, with additional refinements to bring the performance more in line with a human assessment of pitch. For example, the YIN algorithm and the MPM algorithm are both based upon autocorrelation. == Frequency-domain approaches == Frequency-domain, polyphonic detection is possible, usually utilizing the periodogram to convert the signal to an estimate of the frequency spectrum. This requires more processing power as the desired accuracy increases, although the well-known efficiency of the FFT, a key part of the periodogram algorithm, makes it suitably efficient for many purposes. Popular frequency-domain algorithms include the harmonic product spectrum; cepstral analysis; maximum likelihood, which attempts to match the frequency domain characteristics to pre-defined frequency maps (useful for detecting pitch of fixed tuning instruments); and the detection of peaks due to harmonic series. To improve on the pitch estimate derived from the discrete Fourier spectrum, techniques such as spectral reassignment (phase based) or Grandke interpolation (magnitude based) can be used to go beyond the precision provided by the FFT bins. Another phase-based approach is offered by Brown and Puckette. == Spectral/temporal approaches == Spectral/temporal pitch detection algorithms, e.g. the YAAPT pitch tracking algorithm, are based upon a combination of time domain processing using an autocorrelation function such as normalized cross correlation, and frequency domain processing utilizing spectral information to identify the pitch. Then, among the candidates estimated from the two domains, a final pitch track can be computed using dynamic programming. The advantage of these approaches is that the tracking error in one domain can be reduced by the process in the other domain.
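As a rough illustration of the autocorrelation family of methods described above (a simplified sketch, not any specific published algorithm such as YIN or MPM; the lag bounds and test tone are arbitrary choices):

```python
import numpy as np

def autocorr_pitch(x, fs, fmin=50.0, fmax=500.0):
    """Estimate pitch by locating the autocorrelation peak in the allowed lag range."""
    x = x - x.mean()
    r = np.correlate(x, x, mode='full')[len(x) - 1:]   # autocorrelation, lags >= 0
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(r[lo:hi])                     # best-matching period in samples
    return fs / lag

fs = 8000
t = np.arange(2048) / fs
tone = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
print(autocorr_pitch(tone, fs))   # ~220 Hz for this synthetic two-harmonic tone
```

Note that restricting the lag search range is one crude guard against the octave errors mentioned earlier.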
== Speech pitch detection == The fundamental frequency of speech can vary from 40 Hz for low-pitched voices to 600 Hz for high-pitched voices. Autocorrelation methods need at least two pitch periods to detect pitch. This means that in order to detect a fundamental frequency of 40 Hz, at least 50 milliseconds (ms) of the speech signal must be analyzed. However, during 50 ms, speech with higher fundamental frequencies may not necessarily have the same fundamental frequency throughout the window. == See also == Auto-Tune Beat detection Frequency estimation Linear predictive coding MUSIC (algorithm) Sinusoidal model == References == == External links == Alain de Cheveigné and Hideki Kawahara: YIN, a fundamental frequency estimator for speech and music AudioContentAnalysis.org: Matlab code for various pitch detection algorithms
Wikipedia/Pitch_detection_algorithm
In mathematics, the Fourier sine and cosine transforms are integral transforms that decompose arbitrary functions into a sum of sine waves representing the odd component of the function plus cosine waves representing the even component of the function. The modern Fourier transform concisely contains both the sine and cosine transforms. Since the sine and cosine transforms use sine and cosine waves instead of complex exponentials and don't require complex numbers or negative frequency, they more closely correspond to Joseph Fourier's original transform equations. They are still preferred in some signal processing and statistics applications, and may be better suited as an introduction to Fourier analysis. == Definition == The Fourier sine transform of f ( t ) {\displaystyle f(t)} is: f ^ s ( ξ ) = ∫ − ∞ ∞ f ( t ) sin ⁡ ( 2 π ξ t ) d t . {\displaystyle {\hat {f}}^{s}(\xi )=\int _{-\infty }^{\infty }f(t)\sin(2\pi \xi t)\,dt.} If t {\displaystyle t} means time, then ξ {\displaystyle \xi } is frequency in cycles per unit time, but in the abstract, they can be any dual pair of variables (e.g. position and spatial frequency). The sine transform is necessarily an odd function of frequency, i.e. for all ξ {\displaystyle \xi } : f ^ s ( − ξ ) = − f ^ s ( ξ ) . {\displaystyle {\hat {f}}^{s}(-\xi )=-{\hat {f}}^{s}(\xi ).} The Fourier cosine transform of f ( t ) {\displaystyle f(t)} is: f ^ c ( ξ ) = ∫ − ∞ ∞ f ( t ) cos ⁡ ( 2 π ξ t ) d t . {\displaystyle {\hat {f}}^{c}(\xi )=\int _{-\infty }^{\infty }f(t)\cos(2\pi \xi t)\,dt.} The cosine transform is necessarily an even function of frequency, i.e. for all ξ {\displaystyle \xi } : f ^ c ( − ξ ) = f ^ c ( ξ ) . {\displaystyle {\hat {f}}^{c}(-\xi )={\hat {f}}^{c}(\xi ).} === Odd and even simplification === The multiplication rules for even and odd functions shown in the overbraces in the following equations dramatically simplify the integrands when transforming even and odd functions. Some authors only define the cosine transform for even functions f even ( t ) {\displaystyle f_{\text{even}}(t)} . Since cosine is an even function and because the integral of an even function from − ∞ {\displaystyle {-}\infty } to ∞ {\displaystyle \infty } is twice its integral from 0 {\displaystyle 0} to ∞ {\displaystyle \infty } , the cosine transform of any even function can be simplified to avoid negative t {\displaystyle t} : f ^ c ( ξ ) = ∫ − ∞ ∞ f even ( t ) ⋅ cos ⁡ ( 2 π ξ t ) ⏞ even·even=even d t = 2 ∫ 0 ∞ f even ( t ) cos ⁡ ( 2 π ξ t ) d t . {\displaystyle {\hat {f}}^{c}(\xi )=\int _{-\infty }^{\infty }\overbrace {f_{\text{even}}(t)\cdot \cos(2\pi \xi t)} ^{\text{even·even=even}}\,dt=2\int _{0}^{\infty }f_{\text{even}}(t)\cos(2\pi \xi t)\,dt.} And because the integral from − ∞ {\displaystyle {-}\infty } to ∞ {\displaystyle \infty } of any odd function is zero, the cosine transform of any odd function is simply zero: f ^ c ( ξ ) = ∫ − ∞ ∞ f odd ( t ) ⋅ cos ⁡ ( 2 π ξ t ) ⏞ odd·even=odd d t = 0. {\displaystyle {\hat {f}}^{c}(\xi )=\int _{-\infty }^{\infty }\overbrace {f_{\text{odd}}(t)\cdot \cos(2\pi \xi t)} ^{\text{odd·even=odd}}\,dt=0.} Similarly, because sin is odd, the sine transform of any odd function f odd ( t ) {\displaystyle f_{\text{odd}}(t)} also simplifies to avoid negative t {\displaystyle t} : f ^ s ( ξ ) = ∫ − ∞ ∞ f odd ( t ) ⋅ sin ⁡ ( 2 π ξ t ) ⏞ odd·odd=even d t = 2 ∫ 0 ∞ f odd ( t ) sin ⁡ ( 2 π ξ t ) d t {\displaystyle {\hat {f}}^{s}(\xi )=\int _{-\infty }^{\infty }\overbrace {f_{\text{odd}}(t)\cdot \sin(2\pi \xi t)} ^{\text{odd·odd=even}}\,dt=2\int _{0}^{\infty }f_{\text{odd}}(t)\sin(2\pi \xi t)\,dt} and the sine transform of any even function is simply zero: f ^ s ( ξ ) = ∫ − ∞ ∞ f even ( t ) ⋅ sin ⁡ ( 2 π ξ t ) ⏞ even·odd=odd d t = 0. {\displaystyle {\hat {f}}^{s}(\xi )=\int _{-\infty }^{\infty }\overbrace {f_{\text{even}}(t)\cdot \sin(2\pi \xi t)} ^{\text{even·odd=odd}}\,dt=0.} The sine transform represents the odd part of a function, while the cosine transform represents the even part of a function.
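A quick numerical check of these definitions and parity rules (a sketch, not from the original article, assuming scipy is available; the Gaussian and the test frequency are arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad

def cosine_transform(f, xi):
    return quad(lambda t: f(t) * np.cos(2 * np.pi * xi * t), -np.inf, np.inf)[0]

def sine_transform(f, xi):
    return quad(lambda t: f(t) * np.sin(2 * np.pi * xi * t), -np.inf, np.inf)[0]

gaussian = lambda t: np.exp(-np.pi * t**2)   # an even function

# The sine transform of an even function vanishes;
# its cosine transform carries all the information.
print(sine_transform(gaussian, 0.7))         # ~0
print(cosine_transform(gaussian, 0.7))       # ~exp(-pi * 0.7**2), a Gaussian again
```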
=== Other conventions === Just like the Fourier transform takes the form of different equations with different constant factors (see Fourier transform § Unitarity and definition for square integrable functions for discussion), other authors also define the cosine transform as f ^ c ( ξ ) = 2 π ∫ 0 ∞ f ( t ) cos ⁡ ( 2 π ξ t ) d t {\displaystyle {\hat {f}}^{c}(\xi )={\sqrt {\frac {2}{\pi }}}\int _{0}^{\infty }f(t)\cos(2\pi \xi t)\,dt} and the sine transform as f ^ s ( ξ ) = 2 π ∫ 0 ∞ f ( t ) sin ⁡ ( 2 π ξ t ) d t . {\displaystyle {\hat {f}}^{s}(\xi )={\sqrt {\frac {2}{\pi }}}\int _{0}^{\infty }f(t)\sin(2\pi \xi t)\,dt.} Another convention defines the cosine transform as F c ( α ) = 2 π ∫ 0 ∞ f ( x ) cos ⁡ ( α x ) d x {\displaystyle F_{c}(\alpha )={\frac {2}{\pi }}\int _{0}^{\infty }f(x)\cos(\alpha x)\,dx} and the sine transform as F s ( α ) = 2 π ∫ 0 ∞ f ( x ) sin ⁡ ( α x ) d x {\displaystyle F_{s}(\alpha )={\frac {2}{\pi }}\int _{0}^{\infty }f(x)\sin(\alpha x)\,dx} using α {\displaystyle \alpha } as the transformation variable. And while t {\displaystyle t} is typically used to represent the time domain, x {\displaystyle x} is often instead used to represent a spatial domain when transforming to spatial frequencies. == Fourier inversion == The original function f {\displaystyle f} can be recovered from its sine and cosine transforms under the usual hypotheses using the inversion formula: f ( t ) = ∫ − ∞ ∞ f ^ s ( ξ ) sin ⁡ ( 2 π ξ t ) d ξ + ∫ − ∞ ∞ f ^ c ( ξ ) cos ⁡ ( 2 π ξ t ) d ξ . {\displaystyle f(t)=\int _{-\infty }^{\infty }{\hat {f}}^{s}(\xi )\sin(2\pi \xi t)\,d\xi +\int _{-\infty }^{\infty }{\hat {f}}^{c}(\xi )\cos(2\pi \xi t)\,d\xi .} === Simplifications === Note that since both integrands are even functions of ξ {\displaystyle \xi } , the concept of negative frequency can be avoided by doubling the result of integrating over non-negative frequencies: f ( t ) = 2 ∫ 0 ∞ f ^ s ( ξ ) sin ⁡ ( 2 π ξ t ) d ξ + 2 ∫ 0 ∞ f ^ c ( ξ ) cos ⁡ ( 2 π ξ t ) d ξ . {\displaystyle f(t)=2\int _{0}^{\infty }{\hat {f}}^{s}(\xi )\sin(2\pi \xi t)\,d\xi \,+2\int _{0}^{\infty }{\hat {f}}^{c}(\xi )\cos(2\pi \xi t)\,d\xi \,.} Also, if f {\displaystyle f} is an odd function, then the cosine transform is zero, so its inversion simplifies to: f ( t ) = ∫ − ∞ ∞ f ^ s ( ξ ) sin ⁡ ( 2 π ξ t ) d ξ , only if f ( t ) is odd. {\displaystyle f(t)=\int _{-\infty }^{\infty }{\hat {f}}^{s}(\xi )\sin(2\pi \xi t)\,d\xi ,{\text{ only if }}f(t){\text{ is odd.}}} Likewise, if the original function f {\displaystyle f} is an even function, then the sine transform is zero, so its inversion also simplifies to: f ( t ) = ∫ − ∞ ∞ f ^ c ( ξ ) cos ⁡ ( 2 π ξ t ) d ξ , only if f ( t ) is even. {\displaystyle f(t)=\int _{-\infty }^{\infty }{\hat {f}}^{c}(\xi )\cos(2\pi \xi t)\,d\xi ,{\text{ only if }}f(t){\text{ is even.}}} Remarkably, these last two simplified inversion formulas look identical to the original sine and cosine transforms, respectively, though with t {\displaystyle t} swapped with ξ {\displaystyle \xi } (and with f {\displaystyle f} swapped with f ^ s {\displaystyle {\hat {f}}^{s}} or f ^ c {\displaystyle {\hat {f}}^{c}} ). A consequence of this symmetry is that their inversion and transform processes still work when the two functions are swapped. Two such functions are called transform pairs.
=== Overview of inversion proof === Using the addition formula for cosine, the full inversion formula can also be rewritten as Fourier's integral formula: f ( t ) = ∫ − ∞ ∞ ∫ − ∞ ∞ f ( x ) cos ⁡ ( 2 π ξ ( x − t ) ) d x d ξ . {\displaystyle f(t)=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }f(x)\cos(2\pi \xi (x-t))\,dx\,d\xi .} This theorem is often stated under different hypotheses, that f {\displaystyle f} is integrable, and is of bounded variation on an open interval containing the point t {\displaystyle t} , in which case 1 2 lim h → 0 ( f ( t + h ) + f ( t − h ) ) = 2 ∫ 0 ∞ ∫ − ∞ ∞ f ( x ) cos ⁡ ( 2 π ξ ( x − t ) ) d x d ξ . {\displaystyle {\tfrac {1}{2}}\lim _{h\to 0}\left(f(t+h)+f(t-h)\right)=2\int _{0}^{\infty }\int _{-\infty }^{\infty }f(x)\cos(2\pi \xi (x-t))\,dx\,d\xi .} This latter form is a useful intermediate step in proving the inverse formulae for the sine and cosine transforms. One method of deriving it, due to Cauchy, is to insert a factor e − δ ξ {\displaystyle e^{-\delta \xi }} into the integral, where δ > 0 {\displaystyle \delta >0} is fixed. Then 2 ∫ − ∞ ∞ ∫ 0 ∞ e − δ ξ cos ⁡ ( 2 π ξ ( x − t ) ) d ξ f ( x ) d x = ∫ − ∞ ∞ f ( x ) 2 δ δ 2 + 4 π 2 ( x − t ) 2 d x . {\displaystyle 2\int _{-\infty }^{\infty }\int _{0}^{\infty }e^{-\delta \xi }\cos(2\pi \xi (x-t))\,d\xi \,f(x)\,dx=\int _{-\infty }^{\infty }f(x){\frac {2\delta }{\delta ^{2}+4\pi ^{2}(x-t)^{2}}}\,dx.} Now when δ → 0 {\displaystyle \delta \to 0} , the integrand tends to zero except at x = t {\displaystyle x=t} , so that formally the above is f ( t ) ∫ − ∞ ∞ 2 δ δ 2 + 4 π 2 ( x − t ) 2 d x = f ( t ) . {\displaystyle f(t)\int _{-\infty }^{\infty }{\frac {2\delta }{\delta ^{2}+4\pi ^{2}(x-t)^{2}}}\,dx=f(t).} == Relation with complex exponentials == The complex exponential form of the Fourier transform used more often today is f ^ ( ξ ) = ∫ − ∞ ∞ f ( t ) e − 2 π i ξ t d t {\displaystyle {\begin{aligned}{\hat {f}}(\xi )&=\int _{-\infty }^{\infty }f(t)e^{-2\pi i\xi t}\,dt\\\end{aligned}}\,} where i {\displaystyle i} is the square root of negative one. By applying Euler's formula ( e i x = cos ⁡ x + i sin ⁡ x ) , {\textstyle (e^{ix}=\cos x+i\sin x),} it can be shown (for real-valued functions) that the Fourier transform's real component is the cosine transform (representing the even component of the original function) and the Fourier transform's imaginary component is the negative of the sine transform (representing the odd component of the original function): f ^ ( ξ ) = ∫ − ∞ ∞ f ( t ) ( cos ⁡ ( 2 π ξ t ) − i sin ⁡ ( 2 π ξ t ) ) d t Euler's Formula = ( ∫ − ∞ ∞ f ( t ) cos ⁡ ( 2 π ξ t ) d t ) − i ( ∫ − ∞ ∞ f ( t ) sin ⁡ ( 2 π ξ t ) d t ) = f ^ c ( ξ ) − i f ^ s ( ξ ) . {\displaystyle {\begin{aligned}{\hat {f}}(\xi )&=\int _{-\infty }^{\infty }f(t)\left(\cos(2\pi \xi t)-i\,\sin(2\pi \xi t)\right)dt&&{\text{Euler's Formula}}\\&=\left(\int _{-\infty }^{\infty }f(t)\cos(2\pi \xi t)\,dt\right)-i\left(\int _{-\infty }^{\infty }f(t)\sin(2\pi \xi t)\,dt\right)\\&={\hat {f}}^{c}(\xi )-i\,{\hat {f}}^{s}(\xi )\,.\end{aligned}}} Because of this relationship, the cosine transform of functions whose Fourier transform is known (e.g. in Fourier transform § Tables of important Fourier transforms) can be simply found by taking the real part of the Fourier transform: f ^ c ( ξ ) = R e [ f ^ ( ξ ) ] {\displaystyle {\hat {f}}^{c}(\xi )=\mathrm {Re} {[\;{\hat {f}}(\xi )\;]}} while the sine transform is simply the negative of the imaginary part of the Fourier transform: f ^ s ( ξ ) = − I m [ f ^ ( ξ ) ] . {\displaystyle {\hat {f}}^{s}(\xi )=-\mathrm {Im} {[\;{\hat {f}}(\xi )\;]}\,.}
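The identity f̂ = f̂c − i f̂s is easy to verify numerically (a sketch under the same conventions as above, not from the original article; it reuses the quadrature idea from the earlier snippet and an arbitrary test function):

```python
import numpy as np
from scipy.integrate import quad

f = lambda t: np.exp(-abs(t)) * (t + 0.3)   # a real function, neither even nor odd
xi = 0.45

cos_part = quad(lambda t: f(t) * np.cos(2 * np.pi * xi * t), -np.inf, np.inf)[0]
sin_part = quad(lambda t: f(t) * np.sin(2 * np.pi * xi * t), -np.inf, np.inf)[0]

# Fourier transform at xi, assembled from the cosine and (negated) sine transforms:
fhat = cos_part - 1j * sin_part             # = f_c(xi) - i * f_s(xi)
print(fhat)
```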
=== Pros and cons === An advantage of the modern Fourier transform is that while the sine and cosine transforms together are required to extract the phase information of a frequency, the modern Fourier transform instead compactly packs both phase and amplitude information inside its complex-valued result. A disadvantage is that it requires understanding complex numbers, complex exponentials, and negative frequency. The sine and cosine transforms meanwhile have the advantage that all quantities are real. Since positive frequencies can fully express them, the non-trivial concept of negative frequency needed in the regular Fourier transform can be avoided. They may also be convenient when the original function is already even or odd or can be made even or odd, in which case only the cosine or the sine transform respectively is needed. For instance, even though an input may not be even or odd, a discrete cosine transform may start by assuming an even extension of its input while a discrete sine transform may start by assuming an odd extension of its input, to avoid having to compute the entire discrete Fourier transform. == Numerical evaluation == Using standard methods of numerical evaluation for Fourier integrals, such as Gaussian or tanh-sinh quadrature, is likely to lead to completely incorrect results, as the quadrature sum is (for most integrands of interest) highly ill-conditioned. Special numerical methods which exploit the structure of the oscillation are required, an example of which is Ooura's method for Fourier integrals. This method attempts to evaluate the integrand at locations which asymptotically approach the zeros of the oscillation (either the sine or cosine), quickly reducing the magnitude of positive and negative terms which are summed. == See also == Discrete cosine transform Discrete sine transform List of Fourier-related transforms == Notes == == References == Whittaker, Edmund, and James Watson, A Course in Modern Analysis, Fourth Edition, Cambridge Univ. Press, 1927, pp. 189, 211.
Wikipedia/Sine_and_cosine_transforms
In signal processing, the chirplet transform is an inner product of an input signal with a family of analysis primitives called chirplets. Similar to the wavelet transform, chirplets are usually generated from (or can be expressed as being from) a single mother chirplet (analogous to the so-called mother wavelet of wavelet theory). == Definitions == The term chirplet transform was coined by Steve Mann, as the title of the first published paper on chirplets. The term chirplet itself (apart from chirplet transform) was also used by Steve Mann, Domingo Mihovilovic, and Ronald Bracewell to describe a windowed portion of a chirp function. In Mann's words: A wavelet is a piece of a wave, and a chirplet, similarly, is a piece of a chirp. More precisely, a chirplet is a windowed portion of a chirp function, where the window provides some time localization property. In terms of time–frequency space, chirplets exist as rotated, sheared, or other structures that move from the traditional parallelism with the time and frequency axes that are typical for waves (Fourier and short-time Fourier transforms) or wavelets. The chirplet transform thus represents a rotated, sheared, or otherwise transformed tiling of the time–frequency plane. Although chirp signals have been known for many years in radar, pulse compression, and the like, the first published reference to the chirplet transform described specific signal representations based on families of functions related to one another by time–varying frequency modulation or frequency varying time modulation, in addition to time and frequency shifting, and scale changes. In that paper, the Gaussian chirplet transform was presented as one such example, together with a successful application to ice fragment detection in radar (improving target detection results over previous approaches). The term chirplet (but not the term chirplet transform) was also proposed for a similar transform, apparently independently, by Mihovilovic and Bracewell later that same year. == Applications == The first practical application of the chirplet transform was in water-human-computer interaction (WaterHCI) for marine safety, to assist vessels in navigating through ice-infested waters, using marine radar to detect growlers (small iceberg fragments too small to be visible on conventional radar, yet large enough to damage a vessel). Other applications of the chirplet transform in WaterHCI include the SWIM (Sequential Wave Imprinting Machine). More recently other practical applications have been developed, including image processing (e.g. where there is periodic structure imaged through projective geometry), as well as to excise chirp-like interference in spread spectrum communications, in EEG processing, and Chirplet Time Domain Reflectometry. == Extensions == The warblet transform is a particular example of the chirplet transform introduced by Mann and Haykin in 1992 and now widely used. It provides a signal representation based on cyclically varying frequency modulated signals (warbling signals). == See also == Time–frequency representation Other time–frequency transforms Fractional Fourier transform Short-time Fourier transform Wavelet transform == References == Mann, S.; Haykin, S. (21–26 July 1991), "Adaptive chirplet: An adaptive generalized wavelet-like transform", in Haykin, Simon (ed.), Adaptive Signal Processing, vol. 1565, pp. 402–413, doi:10.1117/12.49794, S2CID 9418542 LEM, Logon Expectation Maximization Mann, S.; Haykin, S. (1992). "Adaptive chirplet transform". 
Optical Engineering. 31 (6): 1243–1256. Bibcode:1992OptEn..31.1243M. doi:10.1117/12.57676. Introduces Logon Expectation Maximization (LEM) and Radial Basis Functions (RBF) in time–frequency space. Osaka Kyoiku, "Gabor, wavelet and chirplet transforms..." (PDF). J. "Richard" Cui et al., "Time–frequency analysis of visual evoked potentials using chirplet transform", Archived 2011-07-16 at the Wayback Machine, IEE Electronics Letters, vol. 41, no. 4, pp. 217–218, 2005. Florian Bossmann, Jianwei Ma, "Asymmetric chirplet transform—Part 2: phase, frequency, and chirp rate", Geophysics, 2016, 81 (6), V425–V439. Florian Bossmann, Jianwei Ma, "Asymmetric chirplet transform for sparse representation of seismic data", Geophysics, 2015, 80 (6), WD89–WD100. == External links == DiscreteTFDs - software for computing chirplet decompositions and time–frequency distributions The Chirplet Transform (web tutorial and info).
Wikipedia/Chirplet_transform
In algebraic geometry, the Fourier–Deligne transform, or ℓ-adic Fourier transform, or geometric Fourier transform, is an operation on objects of the derived category of ℓ-adic sheaves over the affine line. It was introduced by Pierre Deligne on November 29, 1976 in a letter to David Kazhdan as an analogue of the usual Fourier transform. It was used by Gérard Laumon to simplify Deligne's proof of the Weil conjectures. == References == Katz, Nicholas M.; Laumon, Gérard (1985), "Transformation de Fourier et majoration de sommes exponentielles", Publications Mathématiques de l'IHÉS, 62 (62): 361–418, doi:10.1007/BF02698808, ISSN 1618-1913, MR 0823177, S2CID 189775634, erratum Kiehl, Reinhardt; Weissauer, Rainer (2001), Weil conjectures, perverse sheaves and l'adic Fourier transform, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics, vol. 42, Berlin, New York: Springer-Verlag, ISBN 978-3-540-41457-5, MR 1855066 Laumon, Gérard (1987), "Transformation de Fourier, constantes d'équations fonctionnelles et conjecture de Weil", Publications Mathématiques de l'IHÉS, 65 (65): 131–210, doi:10.1007/BF02698937, ISSN 1618-1913, MR 0908218, S2CID 119951352
Wikipedia/Fourier–Deligne_transform
The Cauchy distribution, named after Augustin-Louis Cauchy, is a continuous probability distribution. It is also known, especially among physicists, as the Lorentz distribution (after Hendrik Lorentz), Cauchy–Lorentz distribution, Lorentz(ian) function, or Breit–Wigner distribution. The Cauchy distribution f ( x ; x 0 , γ ) {\displaystyle f(x;x_{0},\gamma )} is the distribution of the x-intercept of a ray issuing from ( x 0 , γ ) {\displaystyle (x_{0},\gamma )} with a uniformly distributed angle. It is also the distribution of the ratio of two independent normally distributed random variables with mean zero. The Cauchy distribution is often used in statistics as the canonical example of a "pathological" distribution since both its expected value and its variance are undefined (but see § Moments below). The Cauchy distribution does not have finite moments of order greater than or equal to one; only fractional absolute moments exist. The Cauchy distribution has no moment generating function. In mathematics, it is closely related to the Poisson kernel, which is the fundamental solution for the Laplace equation in the upper half-plane. It is one of the few stable distributions with a probability density function that can be expressed analytically, the others being the normal distribution and the Lévy distribution. == Definitions == Here are the most important constructions. === Rotational symmetry === If one stands in front of a line and kicks a ball at a uniformly distributed random angle towards the line, then the distribution of the point where the ball hits the line is a Cauchy distribution. For example, consider a point at ( x 0 , γ ) {\displaystyle (x_{0},\gamma )} in the x-y plane, and select a line passing through the point, with its direction (angle with the x {\displaystyle x} -axis) chosen uniformly (between −180° and 0°) at random. The intersection of the line with the x-axis follows a Cauchy distribution with location x 0 {\displaystyle x_{0}} and scale γ {\displaystyle \gamma } . This definition gives a simple way to sample from the standard Cauchy distribution. Let u {\displaystyle u} be a sample from a uniform distribution on [ 0 , 1 ] {\displaystyle [0,1]} ; then we can generate a sample x {\displaystyle x} from the standard Cauchy distribution using x = tan ⁡ ( π ( u − 1 2 ) ) {\displaystyle x=\tan \left(\pi (u-{\tfrac {1}{2}})\right)} When U {\displaystyle U} and V {\displaystyle V} are two independent normally distributed random variables with expected value 0 and variance 1, then the ratio U / V {\displaystyle U/V} has the standard Cauchy distribution. More generally, if ( U , V ) {\displaystyle (U,V)} is a rotationally symmetric distribution on the plane, then the ratio U / V {\displaystyle U/V} has the standard Cauchy distribution. === Probability density function (PDF) === The Cauchy distribution is the probability distribution with the following probability density function (PDF) f ( x ; x 0 , γ ) = 1 π γ [ 1 + ( x − x 0 γ ) 2 ] = 1 π [ γ ( x − x 0 ) 2 + γ 2 ] , {\displaystyle f(x;x_{0},\gamma )={\frac {1}{\pi \gamma \left[1+\left({\frac {x-x_{0}}{\gamma }}\right)^{2}\right]}}={1 \over \pi }\left[{\gamma \over (x-x_{0})^{2}+\gamma ^{2}}\right],} where x 0 {\displaystyle x_{0}} is the location parameter, specifying the location of the peak of the distribution, and γ {\displaystyle \gamma } is the scale parameter which specifies the half-width at half-maximum (HWHM), alternatively 2 γ {\displaystyle 2\gamma } is full width at half maximum (FWHM).
γ {\displaystyle \gamma } is also equal to half the interquartile range and is sometimes called the probable error. This function is also known as a Lorentzian function, and an example of a nascent delta function, and therefore approaches a Dirac delta function in the limit as γ → 0 {\displaystyle \gamma \to 0} . Augustin-Louis Cauchy exploited such a density function in 1827 with an infinitesimal scale parameter, defining this Dirac delta function. ==== Properties of PDF ==== The maximum value or amplitude of the Cauchy PDF is 1 π γ {\displaystyle {\frac {1}{\pi \gamma }}} , located at x = x 0 {\displaystyle x=x_{0}} . It is sometimes convenient to express the PDF in terms of the complex parameter ψ = x 0 + i γ {\displaystyle \psi =x_{0}+i\gamma } f ( x ; ψ ) = 1 π Im ( 1 x − ψ ) = 1 π Re ( − i x − ψ ) {\displaystyle f(x;\psi )={\frac {1}{\pi }}\,{\textrm {Im}}\left({\frac {1}{x-\psi }}\right)={\frac {1}{\pi }}\,{\textrm {Re}}\left({\frac {-i}{x-\psi }}\right)} The special case when x 0 = 0 {\displaystyle x_{0}=0} and γ = 1 {\displaystyle \gamma =1} is called the standard Cauchy distribution with the probability density function f ( x ; 0 , 1 ) = 1 π ( 1 + x 2 ) . {\displaystyle f(x;0,1)={\frac {1}{\pi \left(1+x^{2}\right)}}.} In physics, a three-parameter Lorentzian function is often used: f ( x ; x 0 , γ , I ) = I [ 1 + ( x − x 0 γ ) 2 ] = I [ γ 2 ( x − x 0 ) 2 + γ 2 ] , {\displaystyle f(x;x_{0},\gamma ,I)={\frac {I}{\left[1+{\left({\frac {x-x_{0}}{\gamma }}\right)}^{2}\right]}}=I\left[{\frac {\gamma ^{2}}{{\left(x-x_{0}\right)}^{2}+\gamma ^{2}}}\right],} where I {\displaystyle I} is the height of the peak. The three-parameter Lorentzian function indicated is not, in general, a probability density function, since it does not integrate to 1, except in the special case where I = 1 π γ . {\displaystyle I={\frac {1}{\pi \gamma }}.\!} === Cumulative distribution function (CDF) === The Cauchy distribution is the probability distribution with the following cumulative distribution function (CDF): F ( x ; x 0 , γ ) = 1 π arctan ⁡ ( x − x 0 γ ) + 1 2 {\displaystyle F(x;x_{0},\gamma )={\frac {1}{\pi }}\arctan \left({\frac {x-x_{0}}{\gamma }}\right)+{\frac {1}{2}}} and the quantile function (inverse cdf) of the Cauchy distribution is Q ( p ; x 0 , γ ) = x 0 + γ tan ⁡ [ π ( p − 1 2 ) ] . {\displaystyle Q(p;x_{0},\gamma )=x_{0}+\gamma \,\tan \left[\pi \left(p-{\tfrac {1}{2}}\right)\right].} It follows that the first and third quartiles are ( x 0 − γ , x 0 + γ ) {\displaystyle (x_{0}-\gamma ,x_{0}+\gamma )} , and hence the interquartile range is 2 γ {\displaystyle 2\gamma } . For the standard distribution, the cumulative distribution function simplifies to arctangent function arctan ⁡ ( x ) {\displaystyle \arctan(x)} : F ( x ; 0 , 1 ) = 1 π arctan ⁡ ( x ) + 1 2 {\displaystyle F(x;0,1)={\frac {1}{\pi }}\arctan \left(x\right)+{\frac {1}{2}}} === Other constructions === The standard Cauchy distribution is the Student's t-distribution with one degree of freedom, and so it may be constructed by any method that constructs the Student's t-distribution. 
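For instance, the quantile function above gives a direct sampling recipe; a minimal sketch (not from the original article; the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def cauchy_sample(x0, gamma, size):
    """Inverse-CDF sampling: Q(p) = x0 + gamma * tan(pi * (p - 1/2))."""
    u = rng.uniform(size=size)
    return x0 + gamma * np.tan(np.pi * (u - 0.5))

s = cauchy_sample(x0=0.0, gamma=1.0, size=100_000)
# The quartiles should sit near x0 - gamma and x0 + gamma,
# i.e. the interquartile range is about 2 * gamma.
print(np.percentile(s, [25, 50, 75]))
```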
If Σ {\displaystyle \Sigma } is a p × p {\displaystyle p\times p} positive-semidefinite covariance matrix with strictly positive diagonal entries, then for independent and identically distributed X , Y ∼ N ( 0 , Σ ) {\displaystyle X,Y\sim N(0,\Sigma )} and any random p {\displaystyle p} -vector w {\displaystyle w} independent of X {\displaystyle X} and Y {\displaystyle Y} such that w 1 + ⋯ + w p = 1 {\displaystyle w_{1}+\cdots +w_{p}=1} and w i ≥ 0 , i = 1 , … , p , {\displaystyle w_{i}\geq 0,i=1,\ldots ,p,} (defining a categorical distribution) it holds that ∑ j = 1 p w j X j Y j ∼ C a u c h y ( 0 , 1 ) . {\displaystyle \sum _{j=1}^{p}w_{j}{\frac {X_{j}}{Y_{j}}}\sim \mathrm {Cauchy} (0,1).} == Properties == The Cauchy distribution is an example of a distribution which has no mean, variance or higher moments defined. Its mode and median are well defined and are both equal to x 0 {\displaystyle x_{0}} . The Cauchy distribution is an infinitely divisible probability distribution. It is also a strictly stable distribution. Like all stable distributions, the location-scale family to which the Cauchy distribution belongs is closed under linear transformations with real coefficients. In addition, the family of Cauchy-distributed random variables is closed under linear fractional transformations with real coefficients. In this connection, see also McCullagh's parametrization of the Cauchy distributions. === Sum of Cauchy-distributed random variables === If X 1 , X 2 , … , X n {\displaystyle X_{1},X_{2},\ldots ,X_{n}} are an IID sample from the standard Cauchy distribution, then their sample mean X ¯ = 1 n ∑ i X i {\textstyle {\bar {X}}={\frac {1}{n}}\sum _{i}X_{i}} is also standard Cauchy distributed. In particular, the average does not converge to the mean, and so the standard Cauchy distribution does not follow the law of large numbers. This can be proved by repeated integration with the PDF, or more conveniently, by using the characteristic function of the standard Cauchy distribution (see below): φ X ( t ) = E ⁡ [ e i X t ] = e − | t | . {\displaystyle \varphi _{X}(t)=\operatorname {E} \left[e^{iXt}\right]=e^{-|t|}.} With this, we have φ ∑ i X i ( t ) = e − n | t | {\displaystyle \varphi _{\sum _{i}X_{i}}(t)=e^{-n|t|}} , and so X ¯ {\displaystyle {\bar {X}}} has a standard Cauchy distribution. More generally, if X 1 , X 2 , … , X n {\displaystyle X_{1},X_{2},\ldots ,X_{n}} are independent and Cauchy distributed with location parameters x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} and scales γ 1 , … , γ n {\displaystyle \gamma _{1},\ldots ,\gamma _{n}} , and a 1 , … , a n {\displaystyle a_{1},\ldots ,a_{n}} are real numbers, then ∑ i a i X i {\textstyle \sum _{i}a_{i}X_{i}} is Cauchy distributed with location ∑ i a i x i {\textstyle \sum _{i}a_{i}x_{i}} and scale ∑ i | a i | γ i {\textstyle \sum _{i}|a_{i}|\gamma _{i}} . We see that there is no law of large numbers for any weighted sum of independent Cauchy distributions. This shows that the condition of finite variance in the central limit theorem cannot be dropped. It is also an example of a more generalized version of the central limit theorem that is characteristic of all stable distributions, of which the Cauchy distribution is a special case. 
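A quick simulation (an illustrative sketch, not from the original article) of the failure of averaging just described: the sample mean of many standard Cauchy draws is spread out exactly like a single draw.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 1_000, 1_000

draws = rng.standard_cauchy((trials, n))
means = draws.mean(axis=1)        # one sample mean per trial

# Averaging 1,000 Cauchy variables does not concentrate them:
# the spread of the sample means matches the spread of single draws.
print(np.percentile(means, [25, 75]))        # ~[-1, 1]
print(np.percentile(draws[:, 0], [25, 75]))  # ~[-1, 1] as well
```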
=== Central limit theorem === If X 1 , X 2 , … {\displaystyle X_{1},X_{2},\ldots } are an IID sample with PDF ρ {\displaystyle \rho } such that lim c → ∞ 1 c ∫ − c c x 2 ρ ( x ) d x = 2 γ π {\textstyle \lim _{c\to \infty }{\frac {1}{c}}\int _{-c}^{c}x^{2}\rho (x)\,dx={\frac {2\gamma }{\pi }}} is finite, but nonzero, then 1 n ∑ i = 1 n X i {\textstyle {\frac {1}{n}}\sum _{i=1}^{n}X_{i}} converges in distribution to a Cauchy distribution with scale γ {\displaystyle \gamma } . === Characteristic function === Let X {\displaystyle X} denote a Cauchy distributed random variable. The characteristic function of the Cauchy distribution is given by φ X ( t ) = E ⁡ [ e i X t ] = ∫ − ∞ ∞ f ( x ; x 0 , γ ) e i x t d x = e i x 0 t − γ | t | . {\displaystyle \varphi _{X}(t)=\operatorname {E} \left[e^{iXt}\right]=\int _{-\infty }^{\infty }f(x;x_{0},\gamma )e^{ixt}\,dx=e^{ix_{0}t-\gamma |t|}.} which is just the Fourier transform of the probability density. The original probability density may be expressed in terms of the characteristic function, essentially by using the inverse Fourier transform: f ( x ; x 0 , γ ) = 1 2 π ∫ − ∞ ∞ φ X ( t ; x 0 , γ ) e − i x t d t {\displaystyle f(x;x_{0},\gamma )={\frac {1}{2\pi }}\int _{-\infty }^{\infty }\varphi _{X}(t;x_{0},\gamma )e^{-ixt}\,dt\!} The nth moment of a distribution is the nth derivative of the characteristic function evaluated at t = 0 {\displaystyle t=0} . Observe that the characteristic function is not differentiable at the origin: this corresponds to the fact that the Cauchy distribution does not have well-defined moments higher than the zeroth moment. === Kullback–Leibler divergence === The Kullback–Leibler divergence between two Cauchy distributions has the following symmetric closed-form formula: K L ( p x 0 , 1 , γ 1 : p x 0 , 2 , γ 2 ) = log ⁡ ( γ 1 + γ 2 ) 2 + ( x 0 , 1 − x 0 , 2 ) 2 4 γ 1 γ 2 . {\displaystyle \mathrm {KL} \left(p_{x_{0,1},\gamma _{1}}:p_{x_{0,2},\gamma _{2}}\right)=\log {\frac {{\left(\gamma _{1}+\gamma _{2}\right)}^{2}+{\left(x_{0,1}-x_{0,2}\right)}^{2}}{4\gamma _{1}\gamma _{2}}}.} Any f-divergence between two Cauchy distributions is symmetric and can be expressed as a function of the chi-squared divergence. Closed-form expression for the total variation, Jensen–Shannon divergence, Hellinger distance, etc. are available. === Entropy === The entropy of the Cauchy distribution is given by: H ( γ ) = − ∫ − ∞ ∞ f ( x ; x 0 , γ ) log ⁡ ( f ( x ; x 0 , γ ) ) d x = log ⁡ ( 4 π γ ) {\displaystyle {\begin{aligned}H(\gamma )&=-\int _{-\infty }^{\infty }f(x;x_{0},\gamma )\log(f(x;x_{0},\gamma ))\,dx\\[6pt]&=\log(4\pi \gamma )\end{aligned}}} The derivative of the quantile function, the quantile density function, for the Cauchy distribution is: Q ′ ( p ; γ ) = γ π sec 2 ⁡ [ π ( p − 1 2 ) ] . 
{\displaystyle Q'(p;\gamma )=\gamma \pi \,\sec ^{2}\left[\pi \left(p-{\tfrac {1}{2}}\right)\right].} The differential entropy of a distribution can be defined in terms of its quantile density, specifically: H ( γ ) = ∫ 0 1 log ( Q ′ ( p ; γ ) ) d p = log ⁡ ( 4 π γ ) {\displaystyle H(\gamma )=\int _{0}^{1}\log \,(Q'(p;\gamma ))\,\mathrm {d} p=\log(4\pi \gamma )} The Cauchy distribution is the maximum entropy probability distribution for a random variate X {\displaystyle X} for which E ⁡ [ log ⁡ ( 1 + ( X − x 0 γ ) 2 ) ] = log ⁡ 4 {\displaystyle \operatorname {E} \left[\log \left(1+{\left({\frac {X-x_{0}}{\gamma }}\right)}^{2}\right)\right]=\log 4} === Moments === The Cauchy distribution is usually used as an illustrative counterexample in elementary probability courses, as a distribution with no well-defined (or "indefinite") moments. ==== Sample moments ==== If we take an IID sample X 1 , X 2 , … {\displaystyle X_{1},X_{2},\ldots } from the standard Cauchy distribution, then the sequence of their sample mean is S n = 1 n ∑ i = 1 n X i {\textstyle S_{n}={\frac {1}{n}}\sum _{i=1}^{n}X_{i}} , which also has the standard Cauchy distribution. Consequently, no matter how many terms we take, the sample average does not converge. Similarly, the sample variance V n = 1 n ∑ i = 1 n ( X i − S n ) 2 {\textstyle V_{n}={\frac {1}{n}}\sum _{i=1}^{n}{\left(X_{i}-S_{n}\right)}^{2}} also does not converge. A typical trajectory of S 1 , S 2 , . . . {\displaystyle S_{1},S_{2},...} looks like long periods of slow convergence to zero, punctuated by large jumps away from zero, but never getting too far away. A typical trajectory of V 1 , V 2 , . . . {\displaystyle V_{1},V_{2},...} looks similar, but the jumps accumulate faster than the decay, diverging to infinity. These two kinds of trajectories are plotted in the figure. Sample moments of order lower than 1 converge to zero, while sample moments of order higher than 2 diverge to infinity even faster than the sample variance. ==== Mean ==== If a probability distribution has a density function f ( x ) {\displaystyle f(x)} , then the mean, if it exists, is given by ∫ − ∞ ∞ x f ( x ) d x . ( 1 ) {\displaystyle \int _{-\infty }^{\infty }xf(x)\,dx.\qquad (1)} We may evaluate this two-sided improper integral by computing the sum of two one-sided improper integrals. That is, ∫ − ∞ a x f ( x ) d x + ∫ a ∞ x f ( x ) d x ( 2 ) {\displaystyle \int _{-\infty }^{a}xf(x)\,dx+\int _{a}^{\infty }xf(x)\,dx\qquad (2)} for an arbitrary real number a {\displaystyle a} . For the integral to exist (even as an infinite value), at least one of the terms in this sum should be finite, or both should be infinite and have the same sign. But in the case of the Cauchy distribution, both the terms in this sum (2) are infinite and have opposite sign. Hence (1) is undefined, and thus so is the mean. When the mean of a distribution is undefined, no reliable average can be computed from the experimental data points, regardless of the sample size. Note that the Cauchy principal value of the mean of the Cauchy distribution is lim a → ∞ ∫ − a a x f ( x ) d x {\displaystyle \lim _{a\to \infty }\int _{-a}^{a}xf(x)\,dx} which is zero. On the other hand, the related integral lim a → ∞ ∫ − 2 a a x f ( x ) d x {\displaystyle \lim _{a\to \infty }\int _{-2a}^{a}xf(x)\,dx} is not zero, as can be seen by computing the integral. This again shows that the mean (1) cannot exist. Various results in probability theory about expected values, such as the strong law of large numbers, fail to hold for the Cauchy distribution. ==== Smaller moments ==== The absolute moments for p ∈ ( − 1 , 1 ) {\displaystyle p\in (-1,1)} are defined.
For X ∼ C a u c h y ( 0 , γ ) {\displaystyle X\sim \mathrm {Cauchy} (0,\gamma )} we have E ⁡ [ | X | p ] = γ p s e c ( π p / 2 ) . {\displaystyle \operatorname {E} [|X|^{p}]=\gamma ^{p}\mathrm {sec} (\pi p/2).} ==== Higher moments ==== The Cauchy distribution does not have finite moments of any order. Some of the higher raw moments do exist and have a value of infinity, for example, the raw second moment: E ⁡ [ X 2 ] ∝ ∫ − ∞ ∞ x 2 1 + x 2 d x = ∫ − ∞ ∞ 1 − 1 1 + x 2 d x = ∫ − ∞ ∞ d x − ∫ − ∞ ∞ 1 1 + x 2 d x = ∫ − ∞ ∞ d x − π = ∞ . {\displaystyle {\begin{aligned}\operatorname {E} [X^{2}]&\propto \int _{-\infty }^{\infty }{\frac {x^{2}}{1+x^{2}}}\,dx=\int _{-\infty }^{\infty }1-{\frac {1}{1+x^{2}}}\,dx\\[8pt]&=\int _{-\infty }^{\infty }dx-\int _{-\infty }^{\infty }{\frac {1}{1+x^{2}}}\,dx=\int _{-\infty }^{\infty }dx-\pi =\infty .\end{aligned}}} By re-arranging the formula, one can see that the second moment is essentially the infinite integral of a constant (here 1). Higher even-powered raw moments will also evaluate to infinity. Odd-powered raw moments, however, are undefined, which is distinctly different from existing with the value of infinity. The odd-powered raw moments are undefined because their values are essentially equivalent to ∞ − ∞ {\displaystyle \infty -\infty } since the two halves of the integral both diverge and have opposite signs. The first raw moment is the mean, which, being odd, does not exist. (See also the discussion above about this.) This in turn means that all of the central moments and standardized moments are undefined since they are all based on the mean. The variance—which is the second central moment—is likewise non-existent (despite the fact that the raw second moment exists with the value infinity). The results for higher moments follow from Hölder's inequality, which implies that higher moments (or halves of moments) diverge if lower ones do. ==== Moments of truncated distributions ==== Consider the truncated distribution defined by restricting the standard Cauchy distribution to the interval [−10^100, 10^100]. Such a truncated distribution has all moments (and the central limit theorem applies for i.i.d. observations from it); yet for almost all practical purposes it behaves like a Cauchy distribution. === Transformation properties === If X ∼ Cauchy ⁡ ( x 0 , γ ) {\displaystyle X\sim \operatorname {Cauchy} (x_{0},\gamma )} then k X + ℓ ∼ Cauchy ( x 0 k + ℓ , γ | k | ) {\displaystyle kX+\ell \sim {\textrm {Cauchy}}(x_{0}k+\ell ,\gamma |k|)} If X ∼ Cauchy ⁡ ( x 0 , γ 0 ) {\displaystyle X\sim \operatorname {Cauchy} (x_{0},\gamma _{0})} and Y ∼ Cauchy ⁡ ( x 1 , γ 1 ) {\displaystyle Y\sim \operatorname {Cauchy} (x_{1},\gamma _{1})} are independent, then X + Y ∼ Cauchy ⁡ ( x 0 + x 1 , γ 0 + γ 1 ) {\displaystyle X+Y\sim \operatorname {Cauchy} (x_{0}+x_{1},\gamma _{0}+\gamma _{1})} and X − Y ∼ Cauchy ⁡ ( x 0 − x 1 , γ 0 + γ 1 ) {\displaystyle X-Y\sim \operatorname {Cauchy} (x_{0}-x_{1},\gamma _{0}+\gamma _{1})} If X ∼ Cauchy ⁡ ( 0 , γ ) {\displaystyle X\sim \operatorname {Cauchy} (0,\gamma )} then 1 X ∼ Cauchy ⁡ ( 0 , 1 γ ) {\displaystyle {\tfrac {1}{X}}\sim \operatorname {Cauchy} (0,{\tfrac {1}{\gamma }})} McCullagh's parametrization of the Cauchy distributions: Expressing a Cauchy distribution in terms of one complex parameter ψ = x 0 + i γ {\displaystyle \psi =x_{0}+i\gamma } , define X ∼ Cauchy ⁡ ( ψ ) {\displaystyle X\sim \operatorname {Cauchy} (\psi )} to mean X ∼ Cauchy ⁡ ( x 0 , | γ | ) {\displaystyle X\sim \operatorname {Cauchy} (x_{0},|\gamma |)} .
If X ∼ Cauchy ⁡ ( ψ ) {\displaystyle X\sim \operatorname {Cauchy} (\psi )} then: a X + b c X + d ∼ Cauchy ⁡ ( a ψ + b c ψ + d ) {\displaystyle {\frac {aX+b}{cX+d}}\sim \operatorname {Cauchy} \left({\frac {a\psi +b}{c\psi +d}}\right)} where a {\displaystyle a} , b {\displaystyle b} , c {\displaystyle c} and d {\displaystyle d} are real numbers. Using the same convention as above, if X ∼ Cauchy ⁡ ( ψ ) {\displaystyle X\sim \operatorname {Cauchy} (\psi )} then: X − i X + i ∼ CCauchy ⁡ ( ψ − i ψ + i ) {\displaystyle {\frac {X-i}{X+i}}\sim \operatorname {CCauchy} \left({\frac {\psi -i}{\psi +i}}\right)} where CCauchy {\displaystyle \operatorname {CCauchy} } is the circular Cauchy distribution. == Statistical inference == === Estimation of parameters === Because the parameters of the Cauchy distribution do not correspond to a mean and variance, attempting to estimate them by using a sample mean and a sample variance will not succeed. For example, if an i.i.d. sample of size n is taken from a Cauchy distribution, one may calculate the sample mean as: x ¯ = 1 n ∑ i = 1 n x i {\displaystyle {\bar {x}}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}} Although the sample values x i {\displaystyle x_{i}} will be concentrated about the central value x 0 {\displaystyle x_{0}} , the sample mean will become increasingly variable as more observations are taken, because of the increased probability of encountering sample points with a large absolute value. In fact, the distribution of the sample mean will be equal to the distribution of the observations themselves; i.e., the sample mean of a large sample is no better (or worse) an estimator of x 0 {\displaystyle x_{0}} than any single observation from the sample. Similarly, calculating the sample variance will result in values that grow larger as more observations are taken. Therefore, more robust means of estimating the central value x 0 {\displaystyle x_{0}} and the scaling parameter γ {\displaystyle \gamma } are needed. One simple method is to take the median value of the sample as an estimator of x 0 {\displaystyle x_{0}} and half the sample interquartile range as an estimator of γ {\displaystyle \gamma } . Other, more precise and robust methods have been developed. For example, the truncated mean of the middle 24% of the sample order statistics produces an estimate for x 0 {\displaystyle x_{0}} that is more efficient than using either the sample median or the full sample mean. However, because of the fat tails of the Cauchy distribution, the efficiency of the estimator decreases if more than 24% of the sample is used. Maximum likelihood can also be used to estimate the parameters x 0 {\displaystyle x_{0}} and γ {\displaystyle \gamma } . However, this is complicated by the fact that it requires finding the roots of a high-degree polynomial, which can have multiple roots that represent local maxima. Also, while the maximum likelihood estimator is asymptotically efficient, it is relatively inefficient for small samples.
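To make the preceding discussion concrete, here is a minimal Python sketch (using only NumPy; the true parameter values, sample size, and seed are arbitrary illustrative choices). It draws Cauchy samples through the quantile function and contrasts the erratic sample mean with the median and half-interquartile-range estimators described above.

```python
import numpy as np

rng = np.random.default_rng(0)          # seed is an arbitrary choice
x0, gamma = 2.0, 0.5                    # illustrative "true" parameters

# Draw Cauchy samples via the quantile function Q(p) = x0 + gamma*tan(pi*(p - 1/2)).
u = rng.uniform(size=100_000)
x = x0 + gamma * np.tan(np.pi * (u - 0.5))

# The sample mean is itself Cauchy-distributed and never settles down,
# while the median and half the interquartile range are consistent estimators.
print("sample mean      :", x.mean())        # erratic, regardless of sample size
print("median -> x0     :", np.median(x))    # close to 2.0
q75, q25 = np.percentile(x, [75, 25])
print("half IQR -> gamma:", (q75 - q25) / 2) # close to 0.5
```

Half the interquartile range recovers γ because the quartiles of a Cauchy(x0, γ) distribution lie at x0 ± γ.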
The log-likelihood function for the Cauchy distribution for sample size n {\displaystyle n} is: ℓ ^ ( x 1 , … , x n ∣ x 0 , γ ) = − n log ⁡ ( γ π ) − ∑ i = 1 n log ⁡ ( 1 + ( x i − x 0 γ ) 2 ) {\displaystyle {\hat {\ell }}(x_{1},\dotsc ,x_{n}\mid \!x_{0},\gamma )=-n\log(\gamma \pi )-\sum _{i=1}^{n}\log \left(1+\left({\frac {x_{i}-x_{0}}{\gamma }}\right)^{2}\right)} Maximizing the log likelihood function with respect to x 0 {\displaystyle x_{0}} and γ {\displaystyle \gamma } by taking the first derivative produces the following system of equations: d ℓ d x 0 = ∑ i = 1 n 2 ( x i − x 0 ) γ 2 + ( x i − x 0 ) 2 = 0 {\displaystyle {\frac {d\ell }{dx_{0}}}=\sum _{i=1}^{n}{\frac {2(x_{i}-x_{0})}{\gamma ^{2}+\left(x_{i}-\!x_{0}\right)^{2}}}=0} d ℓ d γ = ∑ i = 1 n 2 ( x i − x 0 ) 2 γ ( γ 2 + ( x i − x 0 ) 2 ) − n γ = 0 {\displaystyle {\frac {d\ell }{d\gamma }}=\sum _{i=1}^{n}{\frac {2\left(x_{i}-x_{0}\right)^{2}}{\gamma (\gamma ^{2}+\left(x_{i}-x_{0}\right)^{2})}}-{\frac {n}{\gamma }}=0} Note that ∑ i = 1 n ( x i − x 0 ) 2 γ 2 + ( x i − x 0 ) 2 {\displaystyle \sum _{i=1}^{n}{\frac {\left(x_{i}-x_{0}\right)^{2}}{\gamma ^{2}+\left(x_{i}-x_{0}\right)^{2}}}} is a monotone function in γ {\displaystyle \gamma } and that the solution γ {\displaystyle \gamma } must satisfy min | x i − x 0 | ≤ γ ≤ max | x i − x 0 | . {\displaystyle \min |x_{i}-x_{0}|\leq \gamma \leq \max |x_{i}-x_{0}|.} Solving just for x 0 {\displaystyle x_{0}} requires solving a polynomial of degree 2 n − 1 {\displaystyle 2n-1} , and solving just for γ {\displaystyle \,\!\gamma } requires solving a polynomial of degree 2 n {\displaystyle 2n} . Therefore, whether solving for one parameter or for both parameters simultaneously, a numerical solution on a computer is typically required. The benefit of maximum likelihood estimation is asymptotic efficiency; estimating x 0 {\displaystyle x_{0}} using the sample median is only about 81% as asymptotically efficient as estimating x 0 {\displaystyle x_{0}} by maximum likelihood. The truncated sample mean using the middle 24% order statistics is about 88% as asymptotically efficient an estimator of x 0 {\displaystyle x_{0}} as the maximum likelihood estimate. When Newton's method is used to find the solution for the maximum likelihood estimate, the middle 24% order statistics can be used as an initial solution for x 0 {\displaystyle x_{0}} . The shape parameter can be estimated using the median of absolute values, since for Cauchy variables X ∼ C a u c h y ( 0 , γ ) {\displaystyle X\sim \mathrm {Cauchy} (0,\gamma )} with location 0, the median of absolute values equals the shape parameter: median ⁡ ( | X | ) = γ {\displaystyle \operatorname {median} (|X|)=\gamma } .
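The maximization can be sketched numerically as follows. This is an illustrative Python sketch, not a reference implementation: it uses SciPy's derivative-free Nelder–Mead optimizer instead of Newton's method, and the log-parametrization of γ is an assumption made here simply to keep the scale positive.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, x):
    """Negative of the Cauchy log-likelihood given above (log-gamma parametrization)."""
    x0, log_gamma = params
    gamma = np.exp(log_gamma)           # keeps gamma > 0 without constraints
    return len(x) * np.log(gamma * np.pi) + np.sum(np.log1p(((x - x0) / gamma) ** 2))

rng = np.random.default_rng(1)
x = 2.0 + 0.5 * np.tan(np.pi * (rng.uniform(size=5000) - 0.5))   # Cauchy(2.0, 0.5)

# Robust starting point: sample median and half the interquartile range.
q75, q25 = np.percentile(x, [75, 25])
start = np.array([np.median(x), np.log((q75 - q25) / 2)])

res = minimize(neg_log_likelihood, start, args=(x,), method="Nelder-Mead")
x0_hat, gamma_hat = res.x[0], np.exp(res.x[1])
print(x0_hat, gamma_hat)               # close to (2.0, 0.5)
```

Starting from the robust estimates, as recommended above for Newton's method, reduces the chance of landing on one of the spurious local maxima of the likelihood.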
== Related distributions == === General === Cauchy ⁡ ( 0 , 1 ) ∼ t ( d f = 1 ) {\displaystyle \operatorname {Cauchy} (0,1)\sim {\textrm {t}}(\mathrm {df} =1)\,} , the Student's t distribution. Cauchy ⁡ ( μ , σ ) ∼ t ( d f = 1 ) ( μ , σ ) {\displaystyle \operatorname {Cauchy} (\mu ,\sigma )\sim {\textrm {t}}_{(\mathrm {df} =1)}(\mu ,\sigma )\,} , the non-standardized Student's t distribution. If X , Y ∼ N ( 0 , 1 ) {\displaystyle X,Y\sim {\textrm {N}}(0,1)\,} are independent, then X Y ∼ Cauchy ( 0 , 1 ) {\displaystyle {\tfrac {X}{Y}}\sim {\textrm {Cauchy}}(0,1)\,} If X ∼ U ( 0 , 1 ) {\displaystyle X\sim {\textrm {U}}(0,1)\,} then tan ⁡ ( π ( X − 1 2 ) ) ∼ Cauchy ( 0 , 1 ) {\displaystyle \tan \left(\pi \left(X-{\tfrac {1}{2}}\right)\right)\sim {\textrm {Cauchy}}(0,1)\,} If X ∼ L o g - C a u c h y ⁡ ( 0 , 1 ) {\displaystyle X\sim \operatorname {Log-Cauchy} (0,1)} then ln ⁡ ( X ) ∼ Cauchy ( 0 , 1 ) {\displaystyle \ln(X)\sim {\textrm {Cauchy}}(0,1)} If X ∼ Cauchy ⁡ ( x 0 , γ ) {\displaystyle X\sim \operatorname {Cauchy} (x_{0},\gamma )} then 1 X ∼ Cauchy ⁡ ( x 0 x 0 2 + γ 2 , γ x 0 2 + γ 2 ) {\displaystyle {\tfrac {1}{X}}\sim \operatorname {Cauchy} \left({\tfrac {x_{0}}{x_{0}^{2}+\gamma ^{2}}},{\tfrac {\gamma }{x_{0}^{2}+\gamma ^{2}}}\right)} The Cauchy distribution is a limiting case of a Pearson distribution of type 4. The Cauchy distribution is a special case of a Pearson distribution of type 7. The Cauchy distribution is a stable distribution: if X ∼ Stable ( 1 , 0 , γ , μ ) {\displaystyle X\sim {\textrm {Stable}}(1,0,\gamma ,\mu )} , then X ∼ Cauchy ⁡ ( μ , γ ) {\displaystyle X\sim \operatorname {Cauchy} (\mu ,\gamma )} . The Cauchy distribution is a singular limit of a hyperbolic distribution. The wrapped Cauchy distribution, taking values on a circle, is derived from the Cauchy distribution by wrapping it around the circle. If X ∼ N ( 0 , 1 ) {\displaystyle X\sim {\textrm {N}}(0,1)} , Z ∼ I n v e r s e - G a m m a ⁡ ( 1 / 2 , s 2 / 2 ) {\displaystyle Z\sim \operatorname {Inverse-Gamma} (1/2,s^{2}/2)} , then Y = μ + X Z ∼ Cauchy ⁡ ( μ , s ) {\displaystyle Y=\mu +X{\sqrt {Z}}\sim \operatorname {Cauchy} (\mu ,s)} . For half-Cauchy distributions, the relation holds by setting X ∼ N ( 0 , 1 ) I { X ≥ 0 } {\displaystyle X\sim {\textrm {N}}(0,1)I\{X\geq 0\}} . === Lévy measure === The Cauchy distribution is the stable distribution of index 1. The Lévy–Khintchine representation of such a stable distribution of parameter γ {\displaystyle \gamma } is given, for X ∼ Stable ⁡ ( γ , 0 , 0 ) {\displaystyle X\sim \operatorname {Stable} (\gamma ,0,0)\,} , by: E ⁡ ( e i x X ) = exp ⁡ ( ∫ R ( e i x y − 1 ) Π γ ( d y ) ) {\displaystyle \operatorname {E} \left(e^{ixX}\right)=\exp \left(\int _{\mathbb {R} }(e^{ixy}-1)\Pi _{\gamma }(dy)\right)} where Π γ ( d y ) = ( c 1 , γ 1 y 1 + γ 1 { y > 0 } + c 2 , γ 1 | y | 1 + γ 1 { y < 0 } ) d y {\displaystyle \Pi _{\gamma }(dy)=\left(c_{1,\gamma }{\frac {1}{y^{1+\gamma }}}1_{\left\{y>0\right\}}+c_{2,\gamma }{\frac {1}{|y|^{1+\gamma }}}1_{\left\{y<0\right\}}\right)\,dy} and c 1 , γ , c 2 , γ {\displaystyle c_{1,\gamma },c_{2,\gamma }} can be expressed explicitly. In the case γ = 1 {\displaystyle \gamma =1} of the Cauchy distribution, one has c 1 , γ = c 2 , γ {\displaystyle c_{1,\gamma }=c_{2,\gamma }} .
This last representation is a consequence of the formula π | x | = PV ⁡ ∫ R ∖ { 0 } ( 1 − e i x y ) d y y 2 {\displaystyle \pi |x|=\operatorname {PV} \int _{\mathbb {R} \smallsetminus \lbrace 0\rbrace }(1-e^{ixy})\,{\frac {dy}{y^{2}}}} === Multivariate Cauchy distribution === A random vector X = ( X 1 , … , X k ) T {\displaystyle X=(X_{1},\ldots ,X_{k})^{T}} is said to have the multivariate Cauchy distribution if every linear combination of its components Y = a 1 X 1 + ⋯ + a k X k {\displaystyle Y=a_{1}X_{1}+\cdots +a_{k}X_{k}} has a Cauchy distribution. That is, for any constant vector a ∈ R k {\displaystyle a\in \mathbb {R} ^{k}} , the random variable Y = a T X {\displaystyle Y=a^{T}X} should have a univariate Cauchy distribution. The characteristic function of a multivariate Cauchy distribution is given by: φ X ( t ) = e i x 0 ( t ) − γ ( t ) , {\displaystyle \varphi _{X}(t)=e^{ix_{0}(t)-\gamma (t)},\!} where x 0 ( t ) {\displaystyle x_{0}(t)} and γ ( t ) {\displaystyle \gamma (t)} are real functions with x 0 ( t ) {\displaystyle x_{0}(t)} a homogeneous function of degree one and γ ( t ) {\displaystyle \gamma (t)} a positive homogeneous function of degree one. More formally: x 0 ( a t ) = a x 0 ( t ) , γ ( a t ) = | a | γ ( t ) , {\displaystyle {\begin{aligned}x_{0}(at)&=ax_{0}(t),\\\gamma (at)&=|a|\gamma (t),\end{aligned}}} for all t {\displaystyle t} . An example of a bivariate Cauchy distribution can be given by: f ( x , y ; x 0 , y 0 , γ ) = 1 2 π γ ( ( x − x 0 ) 2 + ( y − y 0 ) 2 + γ 2 ) 3 / 2 . {\displaystyle f(x,y;x_{0},y_{0},\gamma )={\frac {1}{2\pi }}\,{\frac {\gamma }{{\left({\left(x-x_{0}\right)}^{2}+{\left(y-y_{0}\right)}^{2}+\gamma ^{2}\right)}^{3/2}}}.} Note that in this example, even though the covariance between x {\displaystyle x} and y {\displaystyle y} is 0, x {\displaystyle x} and y {\displaystyle y} are not statistically independent. This formula can also be written for a complex variable. The probability density function of the complex Cauchy distribution is then: f ( z ; z 0 , γ ) = 1 2 π γ ( | z − z 0 | 2 + γ 2 ) 3 / 2 . {\displaystyle f(z;z_{0},\gamma )={\frac {1}{2\pi }}\,{\frac {\gamma }{{\left({\left|z-z_{0}\right|}^{2}+\gamma ^{2}\right)}^{3/2}}}.} Just as the standard Cauchy distribution is the Student t-distribution with one degree of freedom, the multidimensional Cauchy density is the multivariate Student distribution with one degree of freedom. The density of the k {\displaystyle k} -dimensional Student distribution with one degree of freedom is: f ( x ; μ , Σ , k ) = Γ ( 1 + k 2 ) Γ ( 1 2 ) π k 2 | Σ | 1 2 [ 1 + ( x − μ ) T Σ − 1 ( x − μ ) ] 1 + k 2 . {\displaystyle f(\mathbf {x} ;{\boldsymbol {\mu }},\mathbf {\Sigma } ,k)={\frac {\Gamma {\left({\frac {1+k}{2}}\right)}}{\Gamma ({\frac {1}{2}})\pi ^{\frac {k}{2}}\left|\mathbf {\Sigma } \right|^{\frac {1}{2}}\left[1+({\mathbf {x} }-{\boldsymbol {\mu }})^{\mathsf {T}}{\mathbf {\Sigma } }^{-1}({\mathbf {x} }-{\boldsymbol {\mu }})\right]^{\frac {1+k}{2}}}}.} The properties of the multidimensional Cauchy distribution are then special cases of those of the multivariate Student distribution. == Occurrence and applications == === In general === In spectroscopy, the Cauchy distribution describes the shape of spectral lines which are subject to homogeneous broadening, in which all atoms interact in the same way with the frequency range contained in the line shape. Many mechanisms cause homogeneous broadening, most notably collision broadening. Lifetime or natural broadening also gives rise to a line shape described by the Cauchy distribution.
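Because these line shapes are Cauchy (Lorentzian) densities, the half-width at half maximum of the profile is exactly the scale parameter γ. A short Python check (the density formula is the standard Cauchy density; the parameter value is an arbitrary illustrative choice):

```python
import numpy as np

def lorentzian(x, x0=0.0, gamma=1.0):
    """Cauchy (Lorentzian) density with location x0 and scale gamma."""
    return 1.0 / (np.pi * gamma * (1.0 + ((x - x0) / gamma) ** 2))

gamma = 1.5                             # arbitrary illustrative scale
x = np.linspace(-20.0, 20.0, 400001)
y = lorentzian(x, gamma=gamma)

above = x[y >= y.max() / 2]             # points at or above half maximum
print(above[-1] - above[0], 2 * gamma)  # FWHM ≈ 3.0 = 2 * gamma
```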
Applications of the Cauchy distribution or its transformation can be found in fields working with exponential growth. A 1958 paper by White derived the test statistic for estimators of β ^ {\displaystyle {\hat {\beta }}} in the equation x t + 1 = β x t + ε t + 1 , β > 1 {\displaystyle x_{t+1}=\beta {x}_{t}+\varepsilon _{t+1},\beta >1} , where the maximum likelihood estimator is found using ordinary least squares, and showed that the sampling distribution of the statistic is the Cauchy distribution. The Cauchy distribution is often the distribution of observations for objects that are spinning. The classic reference for this is Gull's lighthouse problem; as in the section above, it also appears as the Breit–Wigner distribution in particle physics. In hydrology the Cauchy distribution is applied to extreme events such as annual maximum one-day rainfalls and river discharges. The blue picture illustrates an example of fitting the Cauchy distribution to ranked monthly maximum one-day rainfalls, also showing the 90% confidence belt based on the binomial distribution. The rainfall data are represented by plotting positions as part of the cumulative frequency analysis. The expression for the imaginary part of complex electrical permittivity, according to the Lorentz model, is a Cauchy distribution. As an additional distribution to model fat tails in computational finance, Cauchy distributions can be used to model VAR (value at risk), producing a much larger probability of extreme risk than the Gaussian distribution. === Relativistic Breit–Wigner distribution === In nuclear and particle physics, the energy profile of a resonance is described by the relativistic Breit–Wigner distribution, while the Cauchy distribution is the (non-relativistic) Breit–Wigner distribution. == History == A function with the form of the density function of the Cauchy distribution was studied geometrically by Fermat in 1659, and later was known as the witch of Agnesi, after Maria Gaetana Agnesi included it as an example in her 1748 calculus textbook. Despite its name, the first explicit analysis of the properties of the Cauchy distribution was published by the French mathematician Poisson in 1824, with Cauchy only becoming associated with it during an academic controversy in 1853. Poisson noted that if the mean of observations following such a distribution were taken, the standard deviation did not converge to any finite number. As such, Laplace's use of the central limit theorem with such a distribution was inappropriate, as it assumed a finite mean and variance. Despite this, Poisson did not regard the issue as important, in contrast to Bienaymé, who was to engage Cauchy in a long dispute over the matter. == See also == Lévy flight and Lévy process Laplace distribution, the Fourier transform of the Cauchy distribution Cauchy process Stable process Slash distribution == References == == External links == "Cauchy distribution", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Earliest Uses: The entry on Cauchy distribution has some historical information. Weisstein, Eric W. "Cauchy Distribution". MathWorld. GNU Scientific Library – Reference Manual Ratios of Normal Variables by George Marsaglia
Wikipedia/Lorentzian_function
In mathematics, an absolutely integrable function is a function whose absolute value is integrable, meaning that the integral of the absolute value over the whole domain is finite. For a real-valued function, since ∫ | f ( x ) | d x = ∫ f + ( x ) d x + ∫ f − ( x ) d x {\displaystyle \int |f(x)|\,dx=\int f^{+}(x)\,dx+\int f^{-}(x)\,dx} where f + ( x ) = max ( f ( x ) , 0 ) , f − ( x ) = max ( − f ( x ) , 0 ) , {\displaystyle f^{+}(x)=\max(f(x),0),\ \ \ f^{-}(x)=\max(-f(x),0),} both ∫ f + ( x ) d x {\textstyle \int f^{+}(x)\,dx} and ∫ f − ( x ) d x {\textstyle \int f^{-}(x)\,dx} must be finite. In Lebesgue integration, this is exactly the requirement for any measurable function f to be considered integrable, with the integral then equaling ∫ f + ( x ) d x − ∫ f − ( x ) d x {\textstyle \int f^{+}(x)\,dx-\int f^{-}(x)\,dx} , so that in fact "absolutely integrable" means the same thing as "Lebesgue integrable" for measurable functions. The same applies to a complex-valued function. Let us define f + ( x ) = max ( ℜ f ( x ) , 0 ) {\displaystyle f^{+}(x)=\max(\Re f(x),0)} f − ( x ) = max ( − ℜ f ( x ) , 0 ) {\displaystyle f^{-}(x)=\max(-\Re f(x),0)} f + i ( x ) = max ( ℑ f ( x ) , 0 ) {\displaystyle f^{+i}(x)=\max(\Im f(x),0)} f − i ( x ) = max ( − ℑ f ( x ) , 0 ) {\displaystyle f^{-i}(x)=\max(-\Im f(x),0)} where ℜ f ( x ) {\displaystyle \Re f(x)} and ℑ f ( x ) {\displaystyle \Im f(x)} are the real and imaginary parts of f ( x ) {\displaystyle f(x)} . Then | f ( x ) | ≤ f + ( x ) + f − ( x ) + f + i ( x ) + f − i ( x ) ≤ √ 2 | f ( x ) | {\displaystyle |f(x)|\leq f^{+}(x)+f^{-}(x)+f^{+i}(x)+f^{-i}(x)\leq {\sqrt {2}}\,|f(x)|} so ∫ | f ( x ) | d x ≤ ∫ f + ( x ) d x + ∫ f − ( x ) d x + ∫ f + i ( x ) d x + ∫ f − i ( x ) d x ≤ √ 2 ∫ | f ( x ) | d x {\displaystyle \int |f(x)|\,dx\leq \int f^{+}(x)\,dx+\int f^{-}(x)\,dx+\int f^{+i}(x)\,dx+\int f^{-i}(x)\,dx\leq {\sqrt {2}}\int |f(x)|\,dx} This shows that the sum of the four integrals (in the middle) is finite if and only if the integral of the absolute value is finite, and the function is Lebesgue integrable if and only if all four integrals are finite. So having a finite integral of the absolute value is equivalent to the conditions for the function to be "Lebesgue integrable". == External links == "Absolutely integrable function – Encyclopedia of Mathematics". Retrieved 9 October 2015. == References == Tao, Terence, Analysis 2, 3rd ed., Texts and Readings in Mathematics, Hindustan Book Agency, New Delhi.
Wikipedia/Absolutely_integrable_function
A triangular function (also known as a triangle function, hat function, or tent function) is a function whose graph takes the shape of a triangle. Often this is an isosceles triangle of height 1 and base 2, in which case it is referred to as the triangular function. Triangular functions are useful in signal processing and communication systems engineering as representations of idealized signals, and the triangular function specifically as an integral transform kernel function from which more realistic signals can be derived, for example in kernel density estimation. It also has applications in pulse-code modulation as a pulse shape for transmitting digital signals and as a matched filter for receiving the signals. It is also used to define the triangular window, sometimes called the Bartlett window. == Definitions == The most common definition is as a piecewise function: tri ⁡ ( x ) = Λ ( x ) = def max ( 1 − | x | , 0 ) = { 1 − | x | , | x | < 1 ; 0 otherwise . {\displaystyle {\begin{aligned}\operatorname {tri} (x)=\Lambda (x)\ &{\overset {\underset {\text{def}}{}}{=}}\ \max {\big (}1-|x|,0{\big )}\\&={\begin{cases}1-|x|,&|x|<1;\\0&{\text{otherwise}}.\\\end{cases}}\end{aligned}}} Equivalently, it may be defined as the convolution of two identical unit rectangular functions: tri ⁡ ( x ) = rect ⁡ ( x ) ∗ rect ⁡ ( x ) = ∫ − ∞ ∞ rect ⁡ ( x − τ ) ⋅ rect ⁡ ( τ ) d τ . {\displaystyle {\begin{aligned}\operatorname {tri} (x)&=\operatorname {rect} (x)*\operatorname {rect} (x)\\&=\int _{-\infty }^{\infty }\operatorname {rect} (x-\tau )\cdot \operatorname {rect} (\tau )\,d\tau .\\\end{aligned}}} The triangular function can also be represented as the product of the rectangular and absolute value functions: tri ⁡ ( x ) = rect ⁡ ( x / 2 ) ( 1 − | x | ) . {\displaystyle \operatorname {tri} (x)=\operatorname {rect} (x/2){\big (}1-|x|{\big )}.} Note that some authors instead define the triangle function to have a base of width 1 instead of width 2: tri ⁡ ( 2 x ) = Λ ( 2 x ) = def max ( 1 − 2 | x | , 0 ) = { 1 − 2 | x | , | x | < 1 2 ; 0 otherwise . {\displaystyle {\begin{aligned}\operatorname {tri} (2x)=\Lambda (2x)\ &{\overset {\underset {\text{def}}{}}{=}}\ \max {\big (}1-2|x|,0{\big )}\\&={\begin{cases}1-2|x|,&|x|<{\tfrac {1}{2}};\\0&{\text{otherwise}}.\\\end{cases}}\end{aligned}}} In its most general form a triangular function is any linear B-spline: tri j ⁡ ( x ) = { ( x − x j − 1 ) / ( x j − x j − 1 ) , x j − 1 ≤ x < x j ; ( x j + 1 − x ) / ( x j + 1 − x j ) , x j ≤ x < x j + 1 ; 0 otherwise . {\displaystyle \operatorname {tri} _{j}(x)={\begin{cases}(x-x_{j-1})/(x_{j}-x_{j-1}),&x_{j-1}\leq x<x_{j};\\(x_{j+1}-x)/(x_{j+1}-x_{j}),&x_{j}\leq x<x_{j+1};\\0&{\text{otherwise}}.\end{cases}}} The definition at the top is the special case Λ ( x ) = tri j ⁡ ( x ) , {\displaystyle \Lambda (x)=\operatorname {tri} _{j}(x),} where x j − 1 = − 1 {\displaystyle x_{j-1}=-1} , x j = 0 {\displaystyle x_{j}=0} , and x j + 1 = 1 {\displaystyle x_{j+1}=1} . A linear B-spline is the same as a continuous piecewise linear function f ( x ) {\displaystyle f(x)} , and this general triangle function is useful to formally define f ( x ) {\displaystyle f(x)} as f ( x ) = ∑ j y j ⋅ tri j ⁡ ( x ) , {\displaystyle f(x)=\sum _{j}y_{j}\cdot \operatorname {tri} _{j}(x),} where x j < x j + 1 {\displaystyle x_{j}<x_{j+1}} for all integer j {\displaystyle j} .
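A minimal Python sketch checks these definitions numerically (grid spacing, knot locations, and sample values are arbitrary choices, and np.convolve serves as a discrete stand-in for the continuous convolution): it confirms that the piecewise and convolution definitions agree, and that uniform-knot hat functions reproduce piecewise linear interpolation as just stated.

```python
import numpy as np

def rect(x):
    """Unit rectangular function (edge values taken as 0 here)."""
    return np.where(np.abs(x) < 0.5, 1.0, 0.0)

def tri(x):
    """Triangular function: max(1 - |x|, 0)."""
    return np.maximum(1.0 - np.abs(x), 0.0)

# 1) tri = rect * rect, with np.convolve approximating the integral.
dx = 1e-3
t = np.arange(-2.0, 2.0, dx)
conv = np.convolve(rect(t), rect(t), mode="same") * dx
print(np.max(np.abs(conv - tri(t))))    # ~1e-3, discretization error only

# 2) Uniform-knot hats tri_j(x) = tri((x - x_j)/h) reproduce linear interpolation.
h = 0.5                                  # knot spacing (arbitrary)
knots = np.arange(0.0, 3.0 + h, h)
ys = np.sin(knots)                       # arbitrary sample values y_j
xs = np.linspace(0.0, 3.0, 301)
f = sum(y * tri((xs - k) / h) for y, k in zip(ys, knots))
print(np.max(np.abs(f - np.interp(xs, knots, ys))))   # ~0
```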
The piecewise linear function passes through every point expressed as coordinates with ordered pair ( x j , y j ) {\displaystyle (x_{j},y_{j})} , that is, f ( x j ) = y j {\displaystyle f(x_{j})=y_{j}} . == Scaling == For any parameter a ≠ 0 {\displaystyle a\neq 0} : tri ⁡ ( t a ) = ( 1 a ) rect ⁡ ( t a ) ∗ ( 1 a ) rect ⁡ ( t a ) = ∫ − ∞ ∞ 1 | a | rect ⁡ ( τ a ) ⋅ rect ⁡ ( t − τ a ) d τ = { 1 − | t / a | , | t | < | a | ; 0 otherwise . {\displaystyle {\begin{aligned}\operatorname {tri} \left({\tfrac {t}{a}}\right)&=\left({\tfrac {1}{\sqrt {a}}}\right)\operatorname {rect} \left({\tfrac {t}{a}}\right)*\left({\tfrac {1}{\sqrt {a}}}\right)\operatorname {rect} \left({\tfrac {t}{a}}\right)=\int _{-\infty }^{\infty }{\tfrac {1}{|a|}}\operatorname {rect} \left({\tfrac {\tau }{a}}\right)\cdot \operatorname {rect} \left({\tfrac {t-\tau }{a}}\right)\,d\tau \\&={\begin{cases}1-|t/a|,&|t|<|a|;\\0&{\text{otherwise}}.\end{cases}}\end{aligned}}} == Fourier transform == The transform is easily determined using the convolution property of Fourier transforms and the Fourier transform of the rectangular function: F { tri ⁡ ( t ) } = F { rect ⁡ ( t ) ∗ rect ⁡ ( t ) } = F { rect ⁡ ( t ) } ⋅ F { rect ⁡ ( t ) } = F { rect ⁡ ( t ) } 2 = s i n c 2 ( f ) , {\displaystyle {\begin{aligned}{\mathcal {F}}\{\operatorname {tri} (t)\}&={\mathcal {F}}\{\operatorname {rect} (t)*\operatorname {rect} (t)\}\\&={\mathcal {F}}\{\operatorname {rect} (t)\}\cdot {\mathcal {F}}\{\operatorname {rect} (t)\}\\&={\mathcal {F}}\{\operatorname {rect} (t)\}^{2}\\&=\mathrm {sinc} ^{2}(f),\end{aligned}}} where sinc ⁡ ( x ) = sin ⁡ ( π x ) / ( π x ) {\displaystyle \operatorname {sinc} (x)=\sin(\pi x)/(\pi x)} is the normalized sinc function. For the general form, we have: F { tri ⁡ ( t a ) } = F { 1 a rect ⁡ ( t a ) ∗ 1 a rect ⁡ ( t a ) } = 1 a F { rect ⁡ ( t a ) } ⋅ F { rect ⁡ ( t a ) } = 1 a F { rect ⁡ ( t a ) } 2 = 1 a a 2 s i n c 2 ( a ⋅ f ) = a s i n c 2 ( a ⋅ f ) . {\displaystyle {\begin{aligned}{\mathcal {F}}\{\operatorname {tri} \left({\tfrac {t}{a}}\right)\}&={\mathcal {F}}\{{\tfrac {1}{\sqrt {a}}}\operatorname {rect} \left({\tfrac {t}{a}}\right)*{\tfrac {1}{\sqrt {a}}}\operatorname {rect} \left({\tfrac {t}{a}}\right)\}\\&={\tfrac {1}{a}}\ {\mathcal {F}}\{\operatorname {rect} \left({\tfrac {t}{a}}\right)\}\cdot {\mathcal {F}}\{\operatorname {rect} \left({\tfrac {t}{a}}\right)\}\\&={\tfrac {1}{a}}\ {\mathcal {F}}\{\operatorname {rect} \left({\tfrac {t}{a}}\right)\}^{2}\\&={\tfrac {1}{a}}\ {a}^{2}\ \mathrm {sinc} ^{2}(a\cdot f)={a}\ \mathrm {sinc} ^{2}(a\cdot f).\end{aligned}}} == See also == Källén function, also known as triangle function Tent map Triangular distribution Triangle wave, a piecewise linear periodic function Trigonometric functions == References ==
Wikipedia/Triangular_function
QuickTime Graphics is a lossy video compression and decompression algorithm (codec) developed by Apple Inc. and first released as part of QuickTime 1.x in the early 1990s. The codec is also known by the name Apple Graphics and its FourCC SMC. The codec operates on 8-bit palettized RGB data. The bit-stream format of QuickTime Graphics has been reverse-engineered and a decoder has been implemented in the projects XAnim and libavcodec. == Technical details == The input video that the codec operates on is in an 8-bit palettized RGB colorspace. Compression is achieved by conditional replenishment and by reducing the palette from 256 colors to a per-4×4 block adaptive palette of 1-16 colors. Because QuickTime Graphics operates in the image domain without motion compensation, decoding is much faster than MPEG-style codecs which use motion compensation and perform coding in a transform domain. As a tradeoff, the compression performance of QuickTime Graphics is lower. The decoding complexity is approximately 50% that of the QuickTime Animation codec. Each frame is segmented into 4×4 blocks in raster-scan order. Each block can be coded in one of the following coding modes: skip mode, single color, 2-, 4-, and 8-color palette modes, two repeat modes, and PCM. === Skip mode === The skip mode realizes conditional replenishment. If a block is coded in skip mode, the content of the block at the same location in the previous frame is copied to the current frame. Runs of skip blocks are coded in a run-length encoding scheme, enabling a high compression ratio in static areas of the picture. === Single color === In single color mode, the entire 4×4 block is painted with a single color. This mode can also be considered as a 1-color palette mode. === Palette (2, 4, or 8-color) modes === In the palette modes, each 4×4 block is coded with a 2-, 4-, or 8-color palette. To select one of the colors from the palette, 1, 2, or 3 bits per pixel are used, respectively. The palette can be written to the bitstream either explicitly or as a reference to an entry in the palette cache. The palette cache is a set of three circular buffers which store the 256 most recently used palettes, one each for the 2-, 4-, and 8-color modes. Interpreted as vector quantization, three-dimensional vectors with components red, green, and blue are quantized using a forward adaptive codebook with between 1 and 8 entries. === Repeat modes === There are two different repeat modes. In the single block repeat mode, the previous block is repeated a specified number of times. In the two block repeat mode, the previous two blocks are repeated a specified number of times. === PCM (16 color) mode === In 16-color mode, the color of each pixel in a block is explicitly written to the bit-stream. This mode is lossless and equivalent to raw PCM without any compression. == See also == Indexed color Color quantization Block truncation coding, a similar coding technique for grayscale content Color Cell Compression, a similar coding technique for color content, based on block truncation coding Apple Video, a codec based on a similar design Microsoft Video 1, a codec based on a similar design Smacker video, a codec based on a similar design S3 Texture Compression, a texture compression format based on a similar design == References == == External links == QuickTime Graphics decoder - FFmpeg
Wikipedia/QuickTime_Graphics
Multiple-image Network Graphics (MNG) is a graphics file format published in 2001 for animated images. Its specification is publicly documented and there are free software reference implementations available. MNG is closely related to the PNG image format. When PNG development started in early 1995, developers decided not to incorporate support for animation, because the majority of the PNG developers felt that overloading a single file type with both still and animation features is a bad design, both for users (who have no simple way of determining to which class a given image file belongs) and for web servers (which should use a MIME type starting with image/ for stills and video/ for animations, GIF notwithstanding). Work soon started, however, on MNG as an animation-supporting version of PNG. Version 1.0 of the MNG specification was released on 31 January 2001. == File support == === Support === Gwenview has native MNG support. GIMP can export images as MNG files. ImageMagick can create an MNG file from a series of PNG files. With the MNG plugin, IrfanView can read an MNG file. If MPlayer is linked against libmng, it and all its graphical front-ends like Gnome MPlayer can display MNG files. Mozilla browsers and Netscape 6.0, 6.01 and 7.0 included native support for MNG until the code was removed in 2003 due to code size and little actual usage, causing complaints on the Mozilla development site. Mozilla later added support for APNG as a simpler alternative. Similarly, early versions of the Konqueror browser included MNG support but it was later dropped. MNG support was never included in Google Chrome, Internet Explorer, Opera, or Safari. === Server support === Web servers generally do not come pre-configured to support MNG files. The MNG developers had hoped that MNG would replace GIF for animated images on the World Wide Web, just as PNG had done for still images. However, with the expiration of LZW patents and existence of alternative file formats such as APNG, Flash and SVG, combined with lack of MNG-supporting viewers and services, web usage was far less than expected. == Technical details == The structure of MNG files is essentially the same as that of PNG files, differing only in the slightly different signature (8A 4D 4E 47 0D 0A 1A 0A in hexadecimal, where 4D 4E 47 is ASCII for "MNG" – see Portable Network Graphics: File header) and the use of a much greater variety of chunks to support all the animation features that it provides. Images to be used in the animation are stored in the MNG file as encapsulated PNG or JNG images. Two versions of MNG of reduced complexity are also defined: MNG-LC (low complexity) and MNG-VLC (very low complexity). These allow applications to include some level of MNG support without having to implement the entire MNG specification, just as the SVG standard offers the "SVG Basic" and "SVG Tiny" subsets. MNG does not have a registered MIME media type, but video/x-mng or image/x-mng can be used. MNG animations may be included in HTML pages using the <embed> or <object> tag. MNG can either be lossy or lossless, depending whether the frames are encoded in PNG (lossless) or JNG (lossy). == Alternatives == Most modern web browsers support animations in APNG, SVG, WebP, and WebM. As of February 2024 only Apple Safari supports HEIF and JPEG XL. The most common alternatives have been Animated GIF and – up until its deprecation in 2017 – Adobe Flash.
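Formats in this family are easy to tell apart by their magic numbers. A minimal Python sniffer (the function name is an illustrative choice; it uses the MNG signature given in the technical details above and the well-known PNG signature, and returns the unregistered video/x-mng type suggested there):

```python
# 8-byte magic numbers: PNG is 89 50 4E 47 0D 0A 1A 0A; MNG substitutes 8A 4D 4E 47.
SIGNATURES = {
    bytes.fromhex("89504E470D0A1A0A"): "image/png",
    bytes.fromhex("8A4D4E470D0A1A0A"): "video/x-mng",   # unregistered, as noted above
}

def sniff(path: str) -> str:
    """Best-guess media type from a file's first eight bytes."""
    with open(path, "rb") as f:
        return SIGNATURES.get(f.read(8), "application/octet-stream")
```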
GIF images are restricted to 256 colors with limited compression, but the format is supported in all graphical web browsers and is still widely used. Animations can be generated ad hoc in a browser with the CSS 3 features of animations, transitions, and sprites, or with the JavaScript Web Animations API, by specifying frames or motions of still images or rendered shapes. This can be resource-intensive, and the animation generally cannot be saved in a portable image file or posted on imageboards. Internet Explorer only supported GIF, CSS, and Flash animations. == See also == Animated Portable Network Graphics (APNG) JPEG Network Graphics (JNG) == References == == External links == MNG Home Page List of applications that support MNG images MNGzilla - A Mozilla variant with MNG support, dormant since 2007 MNG test cases (archive copy)
Wikipedia/Multiple-image_Network_Graphics
The Adaptive Multi-Rate (AMR, AMR-NB or GSM-AMR) audio codec is an audio compression format optimized for speech coding. AMR is a multi-rate narrowband speech codec that encodes narrowband (200–3400 Hz) signals at variable bit rates ranging from 4.75 to 12.2 kbit/s with toll quality speech starting at 7.4 kbit/s. AMR was adopted as the standard speech codec by 3GPP in October 1999 and is now widely used in GSM and UMTS. It uses link adaptation to select from one of eight different bit rates based on link conditions. AMR is also a file format for storing spoken audio using the AMR codec. Many modern mobile telephone handsets can store short audio recordings in the AMR format, and both free and proprietary programs exist (see Software support) to convert between this and other formats, although AMR is a speech format and is unlikely to give ideal results for other audio. The common filename extension is .amr. There also exists another storage format for AMR that is suitable for applications with more advanced demands on the storage format, like random access or synchronization with video. This format is the 3GPP-specified 3GP container format based on ISO base media file format. == Usage == The frames contain 160 samples and are 20 milliseconds long. AMR uses various techniques, such as ACELP, DTX, VAD and CNG. The usage of AMR requires optimized link adaptation that selects the best codec mode to meet the local radio channel and capacity requirements. If the radio conditions are bad, source coding is reduced and channel coding is increased. This improves the quality and robustness of the network connection while sacrificing some voice clarity. In the particular case of AMR this improvement amounts to roughly 4–6 dB of S/N for usable communication. This link adaptation allows the network operator to prioritize capacity or quality per base station. There are a total of 14 modes of the AMR codec; eight are available in a full-rate channel (FR) and six in a half-rate channel (HR). == Features == Sampling frequency 8 kHz/13-bit (160 samples for 20 ms frames), filtered to 200–3400 Hz. The AMR codec uses eight source codecs with bit-rates of 12.2, 10.2, 7.95, 7.40, 6.70, 5.90, 5.15 and 4.75 kbit/s. Generates frame length of 95, 103, 118, 134, 148, 159, 204, or 244 bits for AMR FR bit rates 4.75, 5.15, 5.90, 6.70, 7.40, 7.95, 10.2, or 12.2 kbit/s, respectively. AMR HR frame lengths are different. AMR utilizes discontinuous transmission (DTX), with voice activity detection (VAD) and comfort noise generation (CNG) to reduce bandwidth usage during silence periods. Algorithmic delay is 20 ms per frame. At 12.2 kbit/s there is no algorithmic look-ahead delay; for the other rates, the look-ahead delay is 5 ms. A 5 ms "dummy" look-ahead is nevertheless applied at 12.2 kbit/s, to allow seamless frame-wise mode switching with the rest of the rates. AMR is a hybrid speech coder, and as such transmits both speech parameters and a waveform signal. Linear predictive coding (LPC) is used to synthesize the speech from a residual waveform. The LPC parameters are encoded as line spectral pairs (LSP). The residual waveform is coded using algebraic code-excited linear prediction (ACELP). The complexity of the algorithm is rated at 5, using a relative scale where G.711 is 1 and G.729a is 15.
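The frame lengths listed above follow directly from the 20 ms frame duration, as a quick Python check confirms:

```python
# A 20 ms frame at bitrate r kbit/s carries r * 1000 * 0.020 bits.
rates_kbps = [4.75, 5.15, 5.90, 6.70, 7.40, 7.95, 10.2, 12.2]
print([round(r * 1000 * 0.020) for r in rates_kbps])
# -> [95, 103, 118, 134, 148, 159, 204, 244]
```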
PSQM testing under ideal conditions yields mean opinion scores of 4.14 for AMR (12.2 kbit/s), compared to 4.45 for G.711 (μ-law). PSQM testing under network stress yields mean opinion scores of 3.79 for AMR (12.2 kbit/s), compared to 4.13 for G.711 (μ-law). == Licensing and patent issues == AMR codecs incorporate several patents of Nokia, Ericsson, NTT and VoiceAge, the last one being the License Administrator for the AMR patent pools. VoiceAge also accepts submission of patents for determination of their possible essentiality to these standards. The initial fee for professional content creation tools and "real-time channel" products is US$6,500. The minimum annual royalty is $10,000, which, in the first year, excludes the initial fee. Per-channel license fees fall from $0.99 to $0.50 with volume, up to a maximum of $2 million annually. In the category of personal computer products, e.g., media players, the AMR decoder is licensed for free. The license fee for a sold encoder falls from $0.40 to $0.30 with volume, up to a maximum of $300,000 annually. The minimum annual royalty is not applied to licensed products that fall under the category of personal computer products and use only the free decoder. More information: VoiceAge licensing information, including pricing to license the AMR codecs 3GPP legal issues The 3G Patent Platform and its licensing policy AMR Codecs as Shared Libraries – legal notices for usage of amrnb and amrwb libraries based on the reference implementation == Software support == 3GPP TS 26.073 – AMR speech Codec (C source code) – reference implementation Audacity (beta version 1.3) via the FFmpeg integration libraries (both input and output format) FFmpeg with OpenCORE AMR libraries Android – used for the voice recorder. AMR Codecs as Shared Libraries – amrnb and amrwb libraries development site. These libraries are based on the reference implementation and were created to prevent embedding of possibly patented source code into many open source projects. Open-source software to convert the .amr format: RetroCode and Amr2Wav, both in an early stage of development. AMR Player is freeware to play AMR audio files, and can convert AMR from/to MP3/WAV audio format. Nokia Multimedia Converter 2.0 can convert (create) samples; Nokia's conversion tool can create both .amr and .awb files. It works in Windows 7 as well if the setup is run in XP compatibility mode.
MPlayer (SMPlayer, KMPlayer) Parole Media Player 0.8.1 (in Ubuntu 16.04) QuickTime Player and multimedia framework RealPlayer version 11 and later VLC media player version 1.1.0 and later (input format only, not output format) ffdshow Apple iPhone (can play back AMR files) iOS & macOS (iMessage) BlackBerry smartphones (used for voice recorder file format, while BlackBerry 10 cannot play AMR format) K-Lite Codec Pack Media Player Classic Home Cinema, around 1.7.1 foobar2000 with the component foo_input_amr == See also == Adaptive Multi-Rate Wideband (AMR-WB) Extended Adaptive Multi-Rate – Wideband (AMR-WB+) Half Rate Full Rate Enhanced Full Rate (EFR) Sampling rate IS-641 3GP Comparison of audio coding formats RTP audio video profile == References == == External links == 3GPP TS 26.090 – Mandatory Speech Codec speech processing functions; Adaptive Multi-Rate (AMR) speech codec; Transcoding functions 3GPP TS 26.071 – Mandatory Speech Codec speech processing functions; AMR Speech Codec; General Description 3GPP codecs specifications; 3G and beyond / GSM, 26 series RFC 4867 – RTP Payload Format and File Storage Format for the Adaptive Multi-Rate (AMR) and Adaptive Multi-Rate Wideband (AMR-WB) Audio Codecs RFC 4281 – The Codecs Parameter for "Bucket" Media Types
Wikipedia/Adaptive_Multi-Rate_audio_codec
Variable-Rate Multimode Wideband (VMR-WB) is a source-controlled variable-rate multimode codec designed for robust encoding/decoding of wideband/narrowband speech. The operation of VMR-WB is controlled by speech signal characteristics (i.e., source-controlled) and by the traffic conditions of the network (i.e., network-controlled mode switching). Depending on the traffic conditions and the desired quality of service (QoS), one of the 4 operational modes is used. All operating modes of the existing VMR-WB standard are fully compliant with cdma2000 rate-set II. VMR-WB modes 0, 1, and 2 are cdma2000 native modes, with mode 0 providing the highest quality and mode 2 the lowest ADR. VMR-WB mode 3 is the AMR-WB interoperable mode, operating at an ADR slightly higher than mode 0 and providing a quality equal to or better than that of AMR-WB at 12.65 kbit/s when in an interoperable interconnection with AMR-WB at 12.65 kbit/s. A cdma2000 rate-set I compliant mode has also been implemented in the coder as mode 4. The average bitrate of the mode is 6.1 kbit/s (the maximum is 8.55 kbit/s). Source coding bitrates are: Rate-Set I - 8.55, 4.0, 2.0, 0.8 kbit/s; Rate-Set II - 13.3, 6.2, 2.7, 1.0 kbit/s. VMR-WB uses a 16 kHz sampling frequency. Algorithmic delay is 33.75 ms. VMR-WB can also be used in the 3GPP2 container file format, 3G2. VMR-WB was designed by Nokia and VoiceAge. It is based on AMR-WB. == References == == External links == 3GPP2 specification RFC 4424 - Real-Time Transport Protocol (RTP) Payload Format for the Variable-Rate Multimode Wideband (VMR-WB) Extension Audio Codec RFC 4348 - Real-Time Transport Protocol (RTP) Payload Format for the Variable-Rate Multimode Wideband (VMR-WB) Audio Codec VoiceAge website: VMR-WB — Source-controlled Variable Bit Rate Wideband Compression (archived) VoiceAge website
Wikipedia/Variable-Rate_Multimode_Wideband
Adaptive Multi-Rate Wideband (AMR-WB) is a patented wideband speech audio coding standard developed based on Adaptive Multi-Rate encoding, using a similar methodology to algebraic code-excited linear prediction (ACELP). AMR-WB provides improved speech quality due to a wider speech bandwidth of 50–7000 Hz compared to narrowband speech coders which in general are optimized for POTS wireline quality of 300–3400 Hz. AMR-WB was developed by Nokia and VoiceAge and it was first specified by 3GPP. AMR-WB is codified as G.722.2, an ITU-T standard speech codec, formally known as Wideband coding of speech at around 16 kbit/s using Adaptive Multi-Rate Wideband (AMR-WB). G.722.2 AMR-WB is the same codec as the 3GPP AMR-WB. The corresponding 3GPP specifications are TS 26.190 for the speech codec and TS 26.194 for the Voice Activity Detector. The AMR-WB format has the following parameters: Frequency bands processed: 50–6400 Hz (all modes) plus 6400–7000 Hz (23.85 kbit/s mode only) Delay frame size: 20 ms Look ahead: 5 ms AMR-WB codec employs a bandsplitting filter; the one-way delay of this filter is 0.9375 ms Complexity: 38 WMOPS, RAM 5.3 kilowords Voice activity detection, discontinuous transmission, comfort noise generator Fixed point: bit-exact C code Floating point: under work A common file extension for the AMR-WB file format is .awb. There also exists another storage format for AMR-WB that is suitable for applications with more advanced demands on the storage format, like random access or synchronization with video. This format is the 3GPP-specified 3GP container format, based on the ISO base media file format. 3GP also allows use of AMR-WB bit streams for stereo sound. == AMR modes == AMR-WB operates, like AMR, with nine different bit rates. The lowest bit rate providing excellent speech quality in a clean environment is 12.65 kbit/s. Higher bit rates are useful in background noise conditions and for music. Also, lower bit rates of 6.60 and 8.85 kbit/s provide reasonable quality, especially when compared to narrow-band codecs. The frequencies from 6.4 kHz to 7 kHz are only transmitted in the highest bitrate mode (23.85 kbit/s), while in the rest of the modes the decoder generates sounds by using the lower frequency data (75–6400 Hz) along with random noise (in order to simulate the high frequency band). All modes are sampled at 16 kHz (using 14-bit resolution) and processed at 12.8 kHz. The bit rates are the following: Mandatory multi-rate configuration 6.60 kbit/s (used for circuit switched GSM and UMTS connections; should only be used temporarily during bad radio connections and is not considered wideband speech) 8.85 kbit/s (used for circuit switched GSM and UMTS connections; should only be used temporarily during bad radio connections and is not considered wideband speech; provides quality equal to G.722 at 48 kbit/s for clean speech) 12.65 kbit/s (main anchor bitrate; used for circuit switched GSM and UMTS connections; offers superior audio quality to AMR at and above this bit rate; provides quality equal to or better than G722 at 56 kbit/s for clean speech) Higher bitrates for speech in adverse background noise environments, combined speech and music, and multi-party conferencing. 
14.25 kbit/s 15.85 kbit/s 18.25 kbit/s 19.85 kbit/s 23.05 kbit/s (not targeted for full-rate GSM channels) 23.85 kbit/s (provides quality equal to G.722 at 64 kbit/s for clean speech; not targeted for full-rate GSM channels) Notes: "The codec mode can be changed every 20 ms in 3G WCDMA channels and every 40 ms in GSM/GERAN channels. (For Tandem Free Operation interoperability with GSM/GERAN, mode change rate is restricted in 3G to 40 ms in AMR-WB encoders.)" == Configurations for 3GPP == When used in mobile phone networks, there are three different configurations (combinations of bitrates) that may be used for voice channels: Configuration A (Config-WB-Code 0): 6.6, 8.85, and 12.65 kbit/s (Mandatory multi-rate configuration) Configuration B (Config-WB-Code 2): 6.6, 8.85, 12.65, and 15.85 kbit/s Configuration C (Config-WB-Code 4): 6.6, 8.85, 12.65, and 23.85 kbit/s This limitation was designed to simplify the negotiation of bitrate between the handset and the base station, thus vastly simplifying the implementation and testing. All other bitrates can still be used for other purposes in mobile phone networks, including multimedia messaging, streaming audio, etc. == Deployment == AMR-WB has been standardized by a mobile phone manufacturer consortium for future usage in networks such as UMTS. Its speech quality is high, but older networks will have to be upgraded to support a wideband codec. In October 2006, the first AMR-WB tests were conducted in a deployed network by T-Mobile in Germany, in cooperation with Ericsson. In 2007 an end-to-end AMR-WB TrFO capable 3G & VoIP product line was commercially released by NSN (M13.6 MSS, U3C MGW). AMR-WB TFO support was commercially released in 2008 (M14.2, U4.0). End-to-end TFO/TrFO negotiation and mid-call optimization (e.g. on handover, CF or CT events) was released in 2009 (M14.3, U4.1). In late 2009, Orange UK announced that it would be introducing AMR-WB on its network in 2010. In France Orange S.A. and SFR are using AMR-WB format on their 3G+ networks since the end of summer 2010. WIND Mobile in Canada launched HD Voice (AMR-WB) on its 3G+ network in February, 2011. WIND Mobile also announced that several handsets will support HD Voice (AMR-WB) in the first half of 2011, with the first one being Alcatel Tribe. In January 2013, T-Mobile became the first GSM/UMTS based network in the US to enable AMR-WB. In Feb 2013, Chunghwa Telecom became the first GSM/UMTS based network in Taiwan to enable AMR-WB. In August 2013 the AMR-WB standard was introduced in Ukraine by Kyivstar. Nokia developed the VMR-WB format for CDMA2000 networks, which is fully interoperable with 3GPP AMR-WB. AMR-WB is also a widely adapted format in mobile handsets for ringtones. The AMR wideband speech format shall be supported in 3G multimedia services when wideband speech working at 16 kHz sampling frequency is supported. This requirement is defined in 3GPP technical specifications for IP Multimedia Subsystem (IMS), Multimedia Messaging Service (MMS) and Transparent end-to-end Packet-switched Streaming Service (PSS). In 3GPP specifications is AMR-WB format also used in 3GP container format. == Licensing == The patent for AMR expired in 2024. Previously G.722.2 was licensed by VoiceAge Corporation. == Tools == For encoding and decoding AMR-WB, an open-source library named OpenCORE exists. The OpenCORE codec can be used in ffmpeg. For encoding, another open-source library exists as well, provided by VisualOn. It is included in the Android mobile operating system. 
== See also == Enhanced Voice Services (EVS) Adaptive Multi-Rate (AMR) Extended Adaptive Multi-Rate – Wideband (AMR-WB+) Half Rate Full Rate Enhanced Full Rate (EFR) G.722 G.722.1 3GP Comparison of audio coding formats RTP audio video profile Wideband audio == References == == External links == ITU-T Recommendation G.722.2 (AMR-WB) – technical specification Adaptive Multi-Rate – Wideband (AMR-WB) speech codec; Transcoding functions; 3GPP TS 26.190 – 3GPP technical specification Adaptive Multi-Rate – Wideband (AMR-WB) speech codec; Voice Activity Detector (VAD); 3GPP TS 26.194 – 3GPP technical specification Adaptive Multi-Rate – Wideband (AMR-WB) speech codec; General description; 3GPP TS 26.171 – 3GPP technical specification 3GPP codecs specifications; 3G and beyond / GSM, 26 series RFC 4867 – RTP Payload Format and File Storage Format for the Adaptive Multi-Rate (AMR) and Adaptive Multi-Rate Wideband (AMR-WB) Audio Codecs RFC 4281 – The Codecs Parameter for "Bucket" Media Types Deep Inside the Network, Episode 2: AMR-WB – Skype-like Audio Quality for Mobile Networks Wideband Speech Coding Standards and Applications 3GPP – Technical Specification Group Services and System Aspects ITU-T Implementors' Guide for G.722.2 Report on Mobile HD Voice using AMR Wideband, as of 20th of Feb 2012
Wikipedia/Adaptive_Multi-Rate_Wideband
In mathematics, the Fourier sine and cosine transforms are integral transforms that decompose arbitrary functions into a sum of sine waves representing the odd component of the function plus cosine waves representing the even component of the function. The modern Fourier transform concisely contains both the sine and cosine transforms. Since the sine and cosine transforms use sine and cosine waves instead of complex exponentials and do not require complex numbers or negative frequency, they correspond more closely to Joseph Fourier's original transform equations. They are still preferred in some signal processing and statistics applications, and may be better suited as an introduction to Fourier analysis. == Definition == The Fourier sine transform of f ( t ) {\displaystyle f(t)} is: f ^ s ( ξ ) = ∫ − ∞ ∞ f ( t ) sin ⁡ ( 2 π ξ t ) d t . {\displaystyle {\hat {f}}^{s}(\xi )=\int _{-\infty }^{\infty }f(t)\sin(2\pi \xi t)\,dt.} If t {\displaystyle t} means time, then ξ {\displaystyle \xi } is frequency in cycles per unit time, but in the abstract, they can be any dual pair of variables (e.g. position and spatial frequency). The sine transform is necessarily an odd function of frequency, i.e. for all ξ {\displaystyle \xi } : f ^ s ( − ξ ) = − f ^ s ( ξ ) . {\displaystyle {\hat {f}}^{s}(-\xi )=-{\hat {f}}^{s}(\xi ).} The Fourier cosine transform of f ( t ) {\displaystyle f(t)} is: f ^ c ( ξ ) = ∫ − ∞ ∞ f ( t ) cos ⁡ ( 2 π ξ t ) d t . {\displaystyle {\hat {f}}^{c}(\xi )=\int _{-\infty }^{\infty }f(t)\cos(2\pi \xi t)\,dt.} The cosine transform is necessarily an even function of frequency, i.e. for all ξ {\displaystyle \xi } : f ^ c ( − ξ ) = f ^ c ( ξ ) . {\displaystyle {\hat {f}}^{c}(-\xi )={\hat {f}}^{c}(\xi ).} === Odd and even simplification === The multiplication rules for even and odd functions shown in the overbraces in the following equations dramatically simplify the integrands when transforming even and odd functions. Some authors define the cosine transform only for even functions f even ( t ) {\displaystyle f_{\text{even}}(t)} . Since cosine is an even function and because the integral of an even function from − ∞ {\displaystyle {-}\infty } to ∞ {\displaystyle \infty } is twice its integral from 0 {\displaystyle 0} to ∞ {\displaystyle \infty } , the cosine transform of any even function can be simplified to avoid negative t {\displaystyle t} : f ^ c ( ξ ) = ∫ − ∞ ∞ f even ( t ) ⋅ cos ⁡ ( 2 π ξ t ) ⏞ even·even=even d t = 2 ∫ 0 ∞ f even ( t ) cos ⁡ ( 2 π ξ t ) d t . {\displaystyle {\hat {f}}^{c}(\xi )=\int _{-\infty }^{\infty }\overbrace {f_{\text{even}}(t)\cdot \cos(2\pi \xi t)} ^{\text{even·even=even}}\,dt=2\int _{0}^{\infty }f_{\text{even}}(t)\cos(2\pi \xi t)\,dt.} And because the integral from − ∞ {\displaystyle {-}\infty } to ∞ {\displaystyle \infty } of any odd function is zero, the cosine transform of any odd function is simply zero: f ^ c ( ξ ) = ∫ − ∞ ∞ f odd ( t ) ⋅ cos ⁡ ( 2 π ξ t ) ⏞ odd·even=odd d t = 0. {\displaystyle {\hat {f}}^{c}(\xi )=\int _{-\infty }^{\infty }\overbrace {f_{\text{odd}}(t)\cdot \cos(2\pi \xi t)} ^{\text{odd·even=odd}}\,dt=0.} Similarly, because sin is odd, the sine transform of any odd function f odd ( t ) {\displaystyle f_{\text{odd}}(t)} also simplifies to avoid negative t {\displaystyle t} : f ^ s ( ξ ) = ∫ − ∞ ∞ f odd ( t ) ⋅ sin ⁡ ( 2 π ξ t ) ⏞ odd·odd=even d t = 2 ∫ 0 ∞ f odd ( t ) sin ⁡ ( 2 π ξ t ) d t {\displaystyle {\hat {f}}^{s}(\xi )=\int _{-\infty }^{\infty }\overbrace {f_{\text{odd}}(t)\cdot \sin(2\pi \xi t)} ^{\text{odd·odd=even}}\,dt=2\int _{0}^{\infty }f_{\text{odd}}(t)\sin(2\pi \xi t)\,dt} and the sine transform of any even function is simply zero: f ^ s ( ξ ) = ∫ − ∞ ∞ f even ( t ) ⋅ sin ⁡ ( 2 π ξ t ) ⏞ even·odd=odd d t = 0.
{\displaystyle {\hat {f}}^{s}(\xi )=\int _{-\infty }^{\infty }\overbrace {f_{\text{even}}(t)\cdot \sin(2\pi \xi t)} ^{\text{even·odd=odd}}\,dt=0.} The sine transform represents the odd part of a function, while the cosine transform represents the even part of a function. === Other conventions === Just as the Fourier transform takes the form of different equations with different constant factors (see Fourier transform § Unitarity and definition for square integrable functions for discussion), other authors also define the cosine transform as f ^ c ( ξ ) = 2 π ∫ 0 ∞ f ( t ) cos ⁡ ( 2 π ξ t ) d t {\displaystyle {\hat {f}}^{c}(\xi )={\sqrt {\frac {2}{\pi }}}\int _{0}^{\infty }f(t)\cos(2\pi \xi t)\,dt} and the sine transform as f ^ s ( ξ ) = 2 π ∫ 0 ∞ f ( t ) sin ⁡ ( 2 π ξ t ) d t . {\displaystyle {\hat {f}}^{s}(\xi )={\sqrt {\frac {2}{\pi }}}\int _{0}^{\infty }f(t)\sin(2\pi \xi t)\,dt.} Another convention defines the cosine transform as F c ( α ) = 2 π ∫ 0 ∞ f ( x ) cos ⁡ ( α x ) d x {\displaystyle F_{c}(\alpha )={\frac {2}{\pi }}\int _{0}^{\infty }f(x)\cos(\alpha x)\,dx} and the sine transform as F s ( α ) = 2 π ∫ 0 ∞ f ( x ) sin ⁡ ( α x ) d x {\displaystyle F_{s}(\alpha )={\frac {2}{\pi }}\int _{0}^{\infty }f(x)\sin(\alpha x)\,dx} using α {\displaystyle \alpha } as the transformation variable. And while t {\displaystyle t} is typically used to represent the time domain, x {\displaystyle x} is often instead used to represent a spatial domain when transforming to spatial frequencies. == Fourier inversion == The original function f {\displaystyle f} can be recovered from its sine and cosine transforms under the usual hypotheses using the inversion formula: f ( t ) = ∫ − ∞ ∞ f ^ s ( ξ ) sin ⁡ ( 2 π ξ t ) d ξ + ∫ − ∞ ∞ f ^ c ( ξ ) cos ⁡ ( 2 π ξ t ) d ξ . {\displaystyle f(t)=\int _{-\infty }^{\infty }{\hat {f}}^{s}(\xi )\sin(2\pi \xi t)\,d\xi +\int _{-\infty }^{\infty }{\hat {f}}^{c}(\xi )\cos(2\pi \xi t)\,d\xi .} === Simplifications === Note that since both integrands are even functions of ξ {\displaystyle \xi } , the concept of negative frequency can be avoided by doubling the result of integrating over non-negative frequencies: f ( t ) = 2 ∫ 0 ∞ f ^ s ( ξ ) sin ⁡ ( 2 π ξ t ) d ξ + 2 ∫ 0 ∞ f ^ c ( ξ ) cos ⁡ ( 2 π ξ t ) d ξ . {\displaystyle f(t)=2\int _{0}^{\infty }{\hat {f}}^{s}(\xi )\sin(2\pi \xi t)\,d\xi \,+2\int _{0}^{\infty }{\hat {f}}^{c}(\xi )\cos(2\pi \xi t)\,d\xi \,.} Also, if f {\displaystyle f} is an odd function, then the cosine transform is zero, so its inversion simplifies to: f ( t ) = ∫ − ∞ ∞ f ^ s ( ξ ) sin ⁡ ( 2 π ξ t ) d ξ , only if f ( t ) is odd. {\displaystyle f(t)=\int _{-\infty }^{\infty }{\hat {f}}^{s}(\xi )\sin(2\pi \xi t)\,d\xi ,{\text{ only if }}f(t){\text{ is odd.}}} Likewise, if the original function f {\displaystyle f} is an even function, then the sine transform is zero, so its inversion also simplifies to: f ( t ) = ∫ − ∞ ∞ f ^ c ( ξ ) cos ⁡ ( 2 π ξ t ) d ξ , only if f ( t ) is even. {\displaystyle f(t)=\int _{-\infty }^{\infty }{\hat {f}}^{c}(\xi )\cos(2\pi \xi t)\,d\xi ,{\text{ only if }}f(t){\text{ is even.}}} Remarkably, these last two simplified inversion formulas look identical to the original sine and cosine transforms, respectively, though with t {\displaystyle t} swapped with ξ {\displaystyle \xi } (and with f {\displaystyle f} swapped with f ^ s {\displaystyle {\hat {f}}^{s}} or f ^ c {\displaystyle {\hat {f}}^{c}} ). A consequence of this symmetry is that their inversion and transform processes still work when the two functions are swapped. Two such functions are called transform pairs.
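As a numerical illustration of the definitions and simplifications above, consider the Gaussian e^(−πt²), which under this convention is its own Fourier transform: being even, its cosine transform is e^(−πξ²) and its sine transform vanishes. The following Python sketch uses SciPy quadrature with an arbitrary test frequency; naive quadrature is adequate here only because the integrand decays quickly (see the caveats on numerical evaluation later in this article).

```python
import numpy as np
from scipy.integrate import quad

def cosine_transform(f, xi):
    """Cosine transform: integral of f(t) cos(2*pi*xi*t) over the real line."""
    return quad(lambda t: f(t) * np.cos(2 * np.pi * xi * t), -np.inf, np.inf)[0]

def sine_transform(f, xi):
    """Sine transform: integral of f(t) sin(2*pi*xi*t) over the real line."""
    return quad(lambda t: f(t) * np.sin(2 * np.pi * xi * t), -np.inf, np.inf)[0]

gaussian = lambda t: np.exp(-np.pi * t * t)     # an even test function

xi = 0.7                                        # arbitrary test frequency
print(cosine_transform(gaussian, xi))           # ≈ exp(-pi * xi**2) ≈ 0.2145
print(np.exp(-np.pi * xi ** 2))
print(sine_transform(gaussian, xi))             # ≈ 0: sine transform of an even function
```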
=== Overview of inversion proof === Using the addition formula for cosine, the full inversion formula can also be rewritten as Fourier's integral formula: f ( t ) = ∫ − ∞ ∞ ∫ − ∞ ∞ f ( x ) cos ⁡ ( 2 π ξ ( x − t ) ) d x d ξ . {\displaystyle f(t)=\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }f(x)\cos(2\pi \xi (x-t))\,dx\,d\xi .} This theorem is often stated under different hypotheses: that f {\displaystyle f} is integrable and of bounded variation on an open interval containing the point t {\displaystyle t} , in which case 1 2 lim h → 0 ( f ( t + h ) + f ( t − h ) ) = 2 ∫ 0 ∞ ∫ − ∞ ∞ f ( x ) cos ⁡ ( 2 π ξ ( x − t ) ) d x d ξ . {\displaystyle {\tfrac {1}{2}}\lim _{h\to 0}\left(f(t+h)+f(t-h)\right)=2\int _{0}^{\infty }\int _{-\infty }^{\infty }f(x)\cos(2\pi \xi (x-t))\,dx\,d\xi .} This latter form is a useful intermediate step in proving the inverse formulae for the sine and cosine transforms. One method of deriving it, due to Cauchy, is to insert a factor e − δ ξ {\displaystyle e^{-\delta \xi }} into the integral, where δ > 0 {\displaystyle \delta >0} is fixed. Then 2 ∫ − ∞ ∞ ∫ 0 ∞ e − δ ξ cos ⁡ ( 2 π ξ ( x − t ) ) d ξ f ( x ) d x = ∫ − ∞ ∞ f ( x ) 2 δ δ 2 + 4 π 2 ( x − t ) 2 d x . {\displaystyle 2\int _{-\infty }^{\infty }\int _{0}^{\infty }e^{-\delta \xi }\cos(2\pi \xi (x-t))\,d\xi \,f(x)\,dx=\int _{-\infty }^{\infty }f(x){\frac {2\delta }{\delta ^{2}+4\pi ^{2}(x-t)^{2}}}\,dx.} Now when δ → 0 {\displaystyle \delta \to 0} , the integrand tends to zero except at x = t {\displaystyle x=t} , so that formally the above is f ( t ) ∫ − ∞ ∞ 2 δ δ 2 + 4 π 2 ( x − t ) 2 d x = f ( t ) . {\displaystyle f(t)\int _{-\infty }^{\infty }{\frac {2\delta }{\delta ^{2}+4\pi ^{2}(x-t)^{2}}}\,dx=f(t).} == Relation with complex exponentials == The complex exponential form of the Fourier transform used more often today is f ^ ( ξ ) = ∫ − ∞ ∞ f ( t ) e − 2 π i ξ t d t {\displaystyle {\begin{aligned}{\hat {f}}(\xi )&=\int _{-\infty }^{\infty }f(t)e^{-2\pi i\xi t}\,dt\\\end{aligned}}\,} where i {\displaystyle i} is the square root of negative one. By applying Euler's formula ( e i x = cos ⁡ x + i sin ⁡ x ) , {\textstyle (e^{ix}=\cos x+i\sin x),} it can be shown (for real-valued functions) that the Fourier transform's real component is the cosine transform (representing the even component of the original function) and the Fourier transform's imaginary component is the negative of the sine transform (representing the odd component of the original function): f ^ ( ξ ) = ∫ − ∞ ∞ f ( t ) ( cos ⁡ ( 2 π ξ t ) − i sin ⁡ ( 2 π ξ t ) ) d t Euler's Formula = ( ∫ − ∞ ∞ f ( t ) cos ⁡ ( 2 π ξ t ) d t ) − i ( ∫ − ∞ ∞ f ( t ) sin ⁡ ( 2 π ξ t ) d t ) = f ^ c ( ξ ) − i f ^ s ( ξ ) . {\displaystyle {\begin{aligned}{\hat {f}}(\xi )&=\int _{-\infty }^{\infty }f(t)\left(\cos(2\pi \xi t)-i\,\sin(2\pi \xi t)\right)dt&&{\text{Euler's Formula}}\\&=\left(\int _{-\infty }^{\infty }f(t)\cos(2\pi \xi t)\,dt\right)-i\left(\int _{-\infty }^{\infty }f(t)\sin(2\pi \xi t)\,dt\right)\\&={\hat {f}}^{c}(\xi )-i\,{\hat {f}}^{s}(\xi )\,.\end{aligned}}} Because of this relationship, the cosine transform of functions whose Fourier transform is known (e.g. in Fourier transform § Tables of important Fourier transforms) can be simply found by taking the real part of the Fourier transform: f ^ c ( ξ ) = R e [ f ^ ( ξ ) ] {\displaystyle {\hat {f}}^{c}(\xi )=\mathrm {Re} {[\;{\hat {f}}(\xi )\;]}} while the sine transform is simply the negative of the imaginary part of the Fourier transform: f ^ s ( ξ ) = − I m [ f ^ ( ξ ) ] .
{\displaystyle {\hat {f}}^{s}(\xi )=-\mathrm {Im} {[\;{\hat {f}}(\xi )\;]}\,.} === Pros and cons === An advantage of the modern Fourier transform is that, while the sine and cosine transforms together are required to extract the phase information of a frequency, the modern Fourier transform compactly packs both phase and amplitude information inside its complex-valued result. A disadvantage is that it requires an understanding of complex numbers, complex exponentials, and negative frequency. The sine and cosine transforms, meanwhile, have the advantage that all quantities are real. Since they can be fully expressed in terms of positive frequencies, the non-trivial concept of negative frequency needed in the ordinary Fourier transform can be avoided. They may also be convenient when the original function is already even or odd, or can be made even or odd, in which case only the cosine or the sine transform, respectively, is needed. For instance, even though an input may not be even or odd, a discrete cosine transform may start by assuming an even extension of its input, while a discrete sine transform may start by assuming an odd extension of its input, to avoid having to compute the entire discrete Fourier transform. == Numerical evaluation == Using standard methods of numerical evaluation for Fourier integrals, such as Gaussian or tanh-sinh quadrature, is likely to lead to completely incorrect results, as the quadrature sum is (for most integrands of interest) highly ill-conditioned. Special numerical methods which exploit the structure of the oscillation are required, an example of which is Ooura's method for Fourier integrals. This method attempts to evaluate the integrand at locations which asymptotically approach the zeros of the oscillation (either the sine or cosine), quickly reducing the magnitude of the positive and negative terms which are summed. == See also == Discrete cosine transform Discrete sine transform List of Fourier-related transforms == Notes == == References == Whittaker, Edmund, and George Neville Watson, A Course of Modern Analysis, Fourth Edition, Cambridge Univ. Press, 1927, pp. 189, 211
Wikipedia/Cosine_transform
Enhanced Variable Rate Codec B (EVRC-B) is a speech codec used by CDMA networks. EVRC-B is an enhancement to EVRC and compresses each 20 milliseconds of 8000 Hz, 16-bit sampled speech input into output frames of one of four different sizes: Rate 1 - 171 bits, Rate 1/2 - 80 bits, Rate 1/4 - 40 bits, Rate 1/8 - 16 bits. In addition, there are two zero-bit codec frame types: null frames and erasure frames, similar to EVRC. One significant enhancement in EVRC-B is the use of 1/4 rate frames, which were not used in EVRC. This provides lower average data rates (ADRs) compared to EVRC, for a given voice quality. The new 4GV codecs used in CDMA2000 are based on EVRC-B. 4GV is designed to allow service providers to dynamically prioritize voice capacity on their network as required. The Enhanced Variable Rate Codec (EVRC) is a speech codec used for cellular telephony in cdma2000 systems. EVRC provides excellent speech quality using variable rate coding with three possible rates: 8.55, 4.0 and 0.8 kbit/s. However, the Quality of Service (QoS) in cdma2000 systems can significantly benefit from a codec which allows tradeoffs between voice quality and network capacity, which cannot be achieved efficiently with EVRC. An upgrade of the EVRC vocoder, known as EVRC-B, was later introduced by 3GPP2. The EVRC-B speech codec is based on the 4GV concept and is among the most advanced speech codecs for cellular applications. In addition to the Relaxed Code Excitation Linear Prediction (RCELP) used by EVRC, EVRC-B uses the Prototype Pitch Period (PPP) approach for coding of stationary voiced frames and Noise Excitation Linear Prediction (NELP) for efficient coding of unvoiced or noise frames. Using NELP and PPP coding at 2.0 kbit/s provides EVRC-B with superior flexibility in rate assignment, allowing it to operate at several operating points, each with a different trade-off between speech quality and system capacity. EVRC-B replaced EVRC as the main speech codec for cdma2000, and its first commercial network deployment started in 2007. A wideband extension, EVRC-WB, provides speech quality that exceeds regular wireline telephony; its standardization process was completed in the summer of 2007. EVRC-WB uses a modified discrete cosine transform (MDCT) audio coding algorithm. EVRC-B can also be used in the 3GPP2 container file format, 3G2. == References == == External links == RFC 4788 - Enhancements to RTP Payload Formats for EVRC Family Codecs RFC 5188 - RTP Payload Format for the Enhanced Variable Rate Wideband Codec (EVRC-WB) and the Media Subtype Updates for EVRC-B Codec
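The per-rate bitrates follow directly from the frame sizes above, since each frame covers 20 ms: Rate 1 gives 171 bits / 20 ms = 8.55 kbit/s, matching the EVRC full rate quoted above, and Rate 1/4 gives the 2.0 kbit/s figure cited for NELP and PPP coding. A minimal sketch in Python; the frame-type mix in the example is hypothetical, chosen only to illustrate how an average data rate arises:

```python
FRAME_SECONDS = 0.020  # every EVRC-B frame covers 20 ms of speech
FRAME_BITS = {"rate 1": 171, "rate 1/2": 80, "rate 1/4": 40, "rate 1/8": 16}

def bitrate_kbps(bits_per_frame):
    """Bitrate if every frame used this size: bits / 20 ms, in kbit/s."""
    return bits_per_frame / FRAME_SECONDS / 1000.0

for name, bits in FRAME_BITS.items():
    print(name, bitrate_kbps(bits))   # 8.55, 4.0, 2.0, 0.8 kbit/s

def average_data_rate(mix):
    """ADR in kbit/s for a frame-type mix given as {rate: fraction}.
    The fractions used below are illustrative, not from the standard."""
    assert abs(sum(mix.values()) - 1.0) < 1e-9
    mean_bits = sum(FRAME_BITS[k] * frac for k, frac in mix.items())
    return mean_bits / FRAME_SECONDS / 1000.0

print(average_data_rate({"rate 1": 0.4, "rate 1/2": 0.2,
                         "rate 1/4": 0.3, "rate 1/8": 0.1}))  # 4.9 kbit/s
```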
Wikipedia/Enhanced_Variable_Rate_Codec_B
Better Portable Graphics (BPG) is a file format for coding digital images, which was created by programmer Fabrice Bellard in 2014. He proposed it as a replacement for the JPEG image format, as a more compression-efficient alternative in terms of image quality for a given file size. It is based on the intra-frame encoding of the High Efficiency Video Coding (HEVC) video compression standard. Tests on photographic images in July 2014 found that BPG produced smaller files for a given quality than JPEG, JPEG XR and WebP. The format has been designed to be portable, to work in low-memory environments, and to be usable in portable handheld and IoT devices, where those properties are particularly important. In 2015, researchers were working on designing and developing more energy-efficient BPG hardware, which could then potentially be integrated into portable devices such as digital cameras. While there is no built-in native support for BPG in any mainstream browsers, websites can still deliver BPG images to all browsers by including a JavaScript library written by Bellard. Others followed Bellard's idea and created the AVIF image format based on the AV1 video codec, which is royalty-free and was therefore implemented in browsers. == Origin in HEVC == HEVC has several profiles defined for extending its intra-frame encoding to still images at various bit depths and color formats, including the "Main Still Picture," "Main 4:4:4 Still Picture," and "Main 4:4:4 16 Still Picture" profiles. BPG is a wrapper for the "Main 4:4:4 16 Still Picture" profile up to 14 bits per sample. == Specifications == BPG's container format is intended to be more suited to a generic image format than the raw bitstream format used in HEVC (which is otherwise ordinarily used within some other wrapper format, such as the .mp4 file format). BPG supports the color formats known as 4:4:4, 4:2:2, and 4:2:0. Support for a separately coded extra channel is also included for an alpha channel or the fourth channel of a CMYK image. Metadata support is included for Exif, ICC profiles, and XMP. Color space support is included for YCbCr with ITU-R BT.601, BT.709, and BT.2020 (non-constant luminance) definitions, YCgCo, RGB, CMYK, and grayscale. Support for HEVC's lossy and lossless data compression is included. BPG supports animation. == Patents == According to Bellard's site, BPG may be covered by some of the patents on HEVC, but any device licensed to support HEVC will also be covered for BPG. Patent issues may prevent JPEG replacement by BPG despite BPG's better technical performance. == Other proposed JPEG replacements == Several other image formats have also been proposed as JPEG replacements, including: AVIF, an image format based on the AV1 video codec FLIF HEIF, another container for HEVC intra-frames JPEG 2000 JPEG XL JPEG XR WebP, an image format based on VP8 == References == == External links == Official website BPG – image comparison
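Because BPG is a thin wrapper around HEVC intra-frame data, an application typically only needs to recognize the container before handing the payload to an HEVC decoder. A minimal sketch in Python; the four-byte magic number 0x42 0x50 0x47 0xFB comes from Bellard's published specification, and everything past the magic is ignored here, so this is a sniffer, not a parser:

```python
BPG_MAGIC = b"BPG\xfb"  # bytes 0x42 0x50 0x47 0xFB at the start of a .bpg file

def looks_like_bpg(path):
    """Cheap container sniff: check only the leading magic number.

    Does not validate the header fields or the HEVC payload; an
    application might use this to decide whether to load a full
    decoder (e.g. Bellard's JavaScript library) for the file.
    """
    with open(path, "rb") as fh:
        return fh.read(4) == BPG_MAGIC
```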
Wikipedia/Better_Portable_Graphics
PGF (Progressive Graphics File) is a wavelet-based bitmapped image format that employs lossless and lossy data compression. PGF was created to improve upon and replace the JPEG format. It was developed at the same time as JPEG 2000, but with a focus on speed over compression ratio. PGF can operate at higher compression ratios without taking more encoding/decoding time and without generating the characteristic "blocky and blurry" artifacts of the original DCT-based JPEG standard. It also allows more sophisticated progressive downloads. == Color models == PGF supports a wide variety of color models: Grayscale with 1, 8, 16, or 31 bits per pixel Indexed color with palette size of 256 RGB color image with 12, 16 (red: 5 bits, green: 6 bits, blue: 5 bits), 24, or 48 bits per pixel ARGB color image with 32 bits per pixel L*a*b color image with 24 or 48 bits per pixel CMYK color image with 32 or 64 bits per pixel == Technical discussion == PGF claims to achieve improved compression quality over JPEG, adding or improving features such as scalability. Its compression performance is similar to that of the original JPEG standard. Very low and very high compression rates (including lossless compression) are also supported in PGF. The ability of the design to handle a very large range of effective bit rates is one of the strengths of PGF. For example, to reduce the number of bits for a picture below a certain amount, the advisable thing to do with the first JPEG standard is to reduce the resolution of the input image before encoding it — something that is ordinarily not necessary for that purpose when using PGF, because of its wavelet scalability properties. The PGF process chain contains the following four steps: Color space transform (in case of color images) Discrete Wavelet Transform Quantization (in case of lossy data compression) Hierarchical bit-plane run-length encoding === Color components transformation === Initially, images have to be transformed from the RGB color space to another color space, leading to three components that are handled separately. PGF uses a fully reversible modified YUV color transform. The transformation matrices are: [ Y r U r V r ] = [ 1 4 1 2 1 4 1 − 1 0 0 − 1 1 ] [ R G B ] ; [ R G B ] = [ 1 3 4 − 1 4 1 − 1 4 − 1 4 1 − 1 4 3 4 ] [ Y r U r V r ] {\displaystyle {\begin{bmatrix}Y_{r}\\U_{r}\\V_{r}\end{bmatrix}}={\begin{bmatrix}{\frac {1}{4}}&{\frac {1}{2}}&{\frac {1}{4}}\\1&-1&0\\0&-1&1\end{bmatrix}}{\begin{bmatrix}R\\G\\B\end{bmatrix}};\qquad \qquad {\begin{bmatrix}R\\G\\B\end{bmatrix}}={\begin{bmatrix}1&{\frac {3}{4}}&-{\frac {1}{4}}\\1&-{\frac {1}{4}}&-{\frac {1}{4}}\\1&-{\frac {1}{4}}&{\frac {3}{4}}\end{bmatrix}}{\begin{bmatrix}Y_{r}\\U_{r}\\V_{r}\end{bmatrix}}} The chrominance components can be, but do not necessarily have to be, down-scaled in resolution. === Wavelet transform === The color components are then wavelet transformed to an arbitrary depth. In contrast to JPEG 1992, which uses an 8x8 block-size discrete cosine transform, PGF uses a single reversible wavelet transform: a rounded version of the biorthogonal CDF 5/3 wavelet transform. This wavelet filter bank is exactly the same as the reversible wavelet used in JPEG 2000. It uses only integer coefficients, so the output does not require rounding (quantization) and therefore introduces no quantization noise. === Quantization === After the wavelet transform, the coefficients are scalar-quantized to reduce the number of bits needed to represent them, at the expense of a loss of quality.
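The forward matrix above amounts to Yr = (R + 2G + B)/4, Ur = R − G, Vr = B − G. If the division by four is floored, the transform can still be inverted exactly, because the floor term cancels when reconstructing G; this is the sense in which the color transform is "fully reversible". A minimal sketch in Python; the floored-division detail is an assumption about the integer implementation, while the matrices themselves are the ones given above:

```python
def rgb_to_yuv_r(r, g, b):
    """Forward reversible transform matching the matrices above:
    Yr = floor((R + 2G + B) / 4), Ur = R - G, Vr = B - G."""
    u = r - g
    v = b - g
    y = (r + 2 * g + b) >> 2      # equals G + floor((Ur + Vr) / 4)
    return y, u, v

def yuv_r_to_rgb(y, u, v):
    """Exact inverse: the floor term cancels, so no information is lost."""
    g = y - ((u + v) >> 2)        # recovers G exactly
    r = u + g
    b = v + g
    return r, g, b

# Round-trip check over a few sample pixels.
for rgb in [(0, 0, 0), (255, 0, 0), (12, 200, 99), (255, 255, 255)]:
    assert yuv_r_to_rgb(*rgb_to_yuv_r(*rgb)) == rgb
```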
The output is a set of integers which have to be encoded bit by bit. The parameter that can be changed to set the final quality is the quantization step: the greater the step, the greater the compression and the loss of quality. With a quantization step equal to 1, no quantization is performed (this is used in lossless compression). In contrast to JPEG 2000, PGF uses only powers of two, so the parameter value i represents a quantization step of 2^i. Using only powers of two removes the need for integer multiplication and division operations, since quantization and dequantization reduce to bit shifts (see the sketch at the end of this article). === Coding === The result of the previous process is a collection of sub-bands which represent several approximation scales. A sub-band is a set of coefficients — integers which represent aspects of the image associated with a certain frequency range as well as a spatial area of the image. The quantized sub-bands are split further into blocks, rectangular regions in the wavelet domain. They are typically selected in a way that the coefficients within them across the sub-bands form approximately spatial blocks in the (reconstructed) image domain, and they are collected into a fixed-size macroblock. The encoder has to encode the bits of all quantized coefficients of a macroblock, starting with the most significant bits and progressing to less significant bits. In this encoding process, each bit-plane of the macroblock gets encoded in two so-called coding passes: first the bits of coefficients that become significant are encoded, then the refinement bits of already-significant coefficients. Clearly, in lossless mode all bit-planes have to be encoded, and no bit-planes can be dropped. Only significant coefficients are compressed with an adaptive run-length/Rice (RLR) coder, because they contain long runs of zeros. The RLR coder with parameter k (the logarithmic length of a run of zeros) is also known as the elementary Golomb code of order 2^k. === Comparison with other file formats === JPEG 2000 is slightly more space-efficient in handling natural images. Its PSNR for the same compression ratio is on average 3% better than the PSNR of PGF. It has a small advantage in compression ratio, but longer encoding and decoding times. PNG (Portable Network Graphics) is more space-efficient in handling images with many pixels of the same color. There are several self-proclaimed advantages of PGF over the ordinary JPEG standard: Superior compression performance: The image quality (measured in PSNR) for the same compression ratio is on average 3% better than the PSNR of JPEG. At lower bit rates (e.g. less than 0.25 bits/pixel for gray-scale images), PGF has a much more significant advantage over certain modes of JPEG: artifacts are less visible and there is almost no blocking. The compression gains over JPEG are attributed to the use of the DWT. Multiple resolution representation: PGF provides seamless compression of multiple image components, with each component carrying from 1 to 31 bits per component sample. With this feature there is no need for separately stored preview images (thumbnails). Progressive transmission by resolution accuracy, commonly referred to as progressive decoding: PGF provides efficient code-stream organizations which are progressive by resolution. This way, after a small part of the whole file has been received, it is possible to see a lower-quality version of the final picture, and the quality can then be improved monotonically by getting more data from the source. Lossless and lossy compression: PGF provides both lossless and lossy compression in a single compression architecture.
Both lossy and lossless compression are provided by the use of a reversible (integer) wavelet transform. Side channel spatial information: Transparency and alpha planes are fully supported. ROI extraction: Since version 5, PGF supports extraction of regions of interest (ROI) without decoding the whole image. == Available software == The author published libPGF via SourceForge, under the GNU Lesser General Public License version 2.0. Xeraina offers a free Windows console encoder and decoder, and PGF viewers based on WIC for 32-bit and 64-bit Windows platforms. Other WIC applications, including File Explorer, are able to display PGF images after installing this viewer. digiKam is popular open-source image editing and cataloging software that uses libPGF for its thumbnails. It makes use of the progressive decoding feature of PGF images to store a single version of each thumbnail, which can then be decoded to different resolutions without loss, allowing users to dynamically change the size of the thumbnails without having to recalculate them. == See also == Comparison of graphics file formats Related graphics file formats: ECW, JPEG, JPEG 2000, JPEG XR Image file formats Image compression === File extension === The file extension .pgf and the abbreviation PGF are also used for unrelated purposes: Adobe Illustrator used a Progressive Graphics Format before Encapsulated PostScript. PGF/TikZ uses a Portable Graphics Format in the SourceForge project PGF. XnView and Konvertor associate the file extension .pgf with Portfolio Graphics. == References ==
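Returning to the quantization step described earlier: restricting quantization steps to powers of two (step 2^i for parameter i) means quantization and dequantization reduce to bit shifts. A minimal sketch in Python; the floor-toward-negative-infinity rounding below is an illustrative choice, and an actual codec fixes its own rounding rule:

```python
def quantize(coeff, i):
    """Uniform quantization with step 2**i using only a shift.
    Python's >> floors toward negative infinity; treat the exact
    rounding behavior as a sketch, not the codec's specification."""
    return coeff >> i

def dequantize(q, i):
    """Reconstruction: scale back by 2**i (midpoint offsets omitted)."""
    return q << i

# With i = 0 the step is 2**0 = 1 and the round trip is exact,
# which corresponds to the lossless mode mentioned above.
assert all(dequantize(quantize(c, 0), 0) == c for c in (-5, 0, 7))
# With i = 2 the step is 4 and at most the two low bits are lost.
print([dequantize(quantize(c, 2), 2) for c in (-5, 0, 7)])  # [-8, 0, 4]
```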
Wikipedia/Progressive_Graphics_File
Variable bitrate (VBR) is a term used in telecommunications and computing that relates to the bitrate used in sound or video encoding. As opposed to constant bitrate (CBR), VBR files vary the amount of output data per time segment. VBR allows a higher bitrate (and therefore more storage space) to be allocated to the more complex segments of media files, while less space is allocated to less complex segments. The average of these rates can be calculated to produce an average bitrate for the file. MP3, WMA and AAC audio files can optionally be encoded in VBR, while Opus and Vorbis are encoded in VBR by default. Variable bitrate encoding is also commonly used on MPEG-2 video, MPEG-4 Part 2 video (Xvid, DivX, etc.), MPEG-4 Part 10/H.264 video, Theora, Dirac and other video compression formats. Additionally, variable rate encoding is inherent in lossless compression schemes such as FLAC and Apple Lossless. == Advantages and disadvantages of VBR == The main advantage of VBR is that it produces a better quality-to-space ratio compared to a CBR file of the same data. The bits available are used more flexibly to encode the sound or video data more accurately, with fewer bits used in less demanding passages and more bits used in difficult-to-encode passages. The disadvantages are that it may take more time to encode, as the process is more complex, and that some hardware might not be compatible with VBR files. == Methods of VBR encoding == === Multi-pass encoding and single-pass encoding === VBR is created using so-called single-pass encoding or multi-pass encoding. Single-pass encoding analyzes and encodes the data "on the fly"; it is also used in constant bitrate encoding. Single-pass encoding is used when the encoding speed is most important — e.g. for real-time encoding. Single-pass VBR encoding is usually controlled by a fixed quality setting, by the bitrate range (minimum and maximum allowed bitrate), or by the average bitrate setting. Multi-pass encoding is used when the encoding quality is most important. Multi-pass encoding cannot be used in real-time encoding, live broadcast or live streaming. Multi-pass encoding takes much longer than single-pass encoding, because every pass means one pass through the input data (usually through the whole input file). Multi-pass encoding is used only for VBR encoding, because CBR encoding doesn't offer any flexibility to change the bitrate. The most common multi-pass encoding is two-pass encoding. In the first pass of two-pass encoding, the input data is analyzed and the result is stored in a log file. In the second pass, the collected data from the first pass is used to achieve the best encoding quality. In video encoding, two-pass encoding is usually controlled by the average bitrate setting, by the bitrate range setting (minimum and maximum allowed bitrate), or by the target video file size setting. === Bitrate range === This VBR encoding method allows the user to specify a bitrate range — a minimum and/or maximum allowed bitrate. Some encoders extend this method with an average bitrate. The minimum and maximum allowed bitrates set the bounds within which the bitrate may vary. The disadvantage of this method is that the average bitrate (and hence file size) will not be known ahead of time. The bitrate range is also used in some fixed-quality encoding methods, but usually without the option to force a particular bitrate.
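A toy sketch of the two-pass idea described above, in Python: pass one is assumed to have produced a complexity score per segment, and pass two spends a fixed bit budget in proportion to those scores, so the total size is met while harder segments receive higher bitrates. The proportional rule and the numbers are illustrative assumptions, not any particular encoder's algorithm.

```python
def two_pass_allocate(complexities, target_total_bits):
    """Pass 2 of a toy two-pass VBR scheme.

    `complexities` stands in for the log file written by the first
    pass: one score per segment. The fixed budget is then spent in
    proportion to those scores, so the total size is met exactly
    while harder segments receive more bits.
    """
    total = sum(complexities)
    return [target_total_bits * c / total for c in complexities]

# Hypothetical first-pass log: segment 1 (say, an action scene) is
# four times harder to encode than segments 0 and 3.
log = [1.0, 4.0, 2.0, 1.0]
print(two_pass_allocate(log, 8_000_000))
# [1000000.0, 4000000.0, 2000000.0, 1000000.0] -- constant total size,
# variable per-segment bitrate.
```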
=== Average bitrate === The disadvantage of single-pass ABR encoding (with or without constrained variable bitrate) is the opposite of that of fixed-quantizer VBR — the size of the output is known ahead of time, but the resulting quality is unknown, although it is still better than that of CBR. Multi-pass ABR encoding is more similar to fixed-quantizer VBR, because a higher average bitrate genuinely increases the quality. === File size === VBR encoding using the file size setting is usually multi-pass encoding. It allows the user to specify a specific target file size. In the first pass, the encoder analyzes the input file and automatically calculates a possible bitrate range and/or an average bitrate. In the last pass, the encoder distributes the available bits across the entire video to achieve uniform quality. == See also == Bitrate Average bitrate Constant bitrate Adaptive bitrate streaming == References ==
Wikipedia/Variable_bitrate_encoding
Extended Adaptive Multi-Rate – Wideband (AMR-WB+) is an audio codec that extends AMR-WB. It adds support for stereo signals and higher sampling rates. Another main improvement is the use of transform coding (transform coded excitation – TCX) in addition to ACELP, which greatly improves the coding of generic audio. Automatic switching between transform coding and ACELP provides both good speech and audio quality at moderate bit rates. While AMR-WB operates at an internal sampling rate of 12.8 kHz, AMR-WB+ supports internal sampling frequencies ranging from 12.8 kHz to 38.4 kHz. AMR-WB uses a 16 kHz sampling frequency with a resolution of 14 bits left justified in a 16-bit word. AMR-WB+ uses 16/24/32/48 kHz sampling frequencies with a resolution of 16 bits in a 16-bit word. 3GPP originally developed the AMR-WB+ audio codec for streaming and messaging services in Global System for Mobile communications (GSM) and Third Generation (3G) cellular systems. Its primary target applications are Packet-Switched Streaming service (PSS), Multimedia Messaging Service (MMS) and Multimedia Broadcast and Multicast Service (MBMS). File storage of AMR-WB+ encoded audio is specified within the 3GP container format, a 3GPP-defined ISO-based multimedia file format defined in 3GPP TS 26.244. The AMR-WB+ codec has a wide bit-rate range, from 5.2 to 48 kbit/s. Mono rates are scalable from 5.2 to 36 kbit/s, and stereo rates are scalable from 6.2 to 48 kbit/s, reproducing bandwidth up to 20 kHz (approaching CD quality). Moreover, it provides backward compatibility with AMR-WB. == Software support == In September 2005, VoiceAge Corporation announced availability of an AMR-WB+ decoder in the Helix DNA Client. == Licensing and patent issues == AMR-WB+ compression incorporates several patents of Nokia Corporation, Telefonaktiebolaget L. M. Ericsson and VoiceAge Corporation. VoiceAge Corporation is the license administrator for the AMR and AMR-WB+ patent pools. VoiceAge also accepts submission of patents for determination of their possible essentiality to these standards. The initial fee for applications using "real-time channels" with AMR-WB+ is $6,500. The minimum annual royalty is $10,000, excluding the initial fee in year 1 of the license agreement. The AMR-WB+ monaural decoder in the personal computer product category is licensed for free. The stereo AMR-WB+ decoder for personal computer products is licensed for $0.30. == See also == Adaptive Multi-Rate (AMR) Adaptive Multi-Rate Wideband (AMR-WB) 3GP Comparison of audio coding formats RTP audio video profile == References == == External links == 3GPP codecs specifications; 3G and beyond / GSM, 26 series 3GPP TS 26.290; Audio codec processing functions; Extended Adaptive Multi-Rate – Wideband (AMR-WB+) codec; Transcoding functions RFC 4352 – RTP Payload Format for the Extended Adaptive Multi-Rate Wideband (AMR-WB+) Audio Codec RFC 4281 – The Codecs Parameter for "Bucket" Media Types
Wikipedia/Extended_Adaptive_Multi-Rate_–_Wideband
Full Rate (FR), also known as GSM-FR or GSM 06.10 (sometimes simply GSM), was the first digital speech coding standard used in the GSM digital mobile phone system. It uses linear predictive coding (LPC). The bit rate of the codec is 13 kbit/s, or 1.625 bits per audio sample (often padded out to 33 bytes/20 ms, or 13.2 kbit/s). The quality of the coded speech is quite poor by modern standards, but at the time of development (early 1990s) it was a good compromise between computational complexity and quality, requiring only on the order of a million additions and multiplications per second. The codec is still widely used in networks around the world. Gradually, FR is being replaced by the Enhanced Full Rate (EFR) and Adaptive Multi-Rate (AMR) standards, which provide much higher speech quality at lower bit rates. == Technology == GSM-FR is specified in ETSI 06.10 (ETS 300 961) and is based on the RPE-LTP (Regular Pulse Excitation – Long Term Prediction) speech coding paradigm. Like many other linear predictive coding (LPC) speech codecs, linear prediction is used in the synthesis filter. However, unlike most modern speech codecs, the order of the linear prediction is only 8. In modern narrowband speech codecs the order is usually 10, and in wideband speech codecs the order is usually 16. The speech encoder accepts 13-bit linear PCM at an 8 kHz sample rate. This can come directly from an analog-to-digital converter in a phone or computer, or be converted from G.711 8-bit nonlinear A-law or μ-law PCM from the PSTN with a lookup table. In GSM, the encoded speech is passed to the channel encoder specified in GSM 05.03. In the receive direction, the inverse operations take place. The codec operates on 160-sample frames that span 20 ms, so this is the minimum transcoder delay possible even with infinitely fast CPUs and zero network latency. The operational requirement is that the transcoder delay should be less than 30 ms. The transcoder delay is defined as the time interval between the instant a speech frame of 160 samples has been received at the encoder input and the instant the corresponding 160 reconstructed speech samples have been output by the speech decoder at an 8 kHz sample rate. == Implementations == The free libgsm codec can encode and decode GSM Full Rate audio. "libgsm" was developed 1992–1994 by Jutta Degener and Carsten Bormann, then at Technische Universität Berlin. Since a GSM speech frame is 32.5 bytes, this implementation also defined a 33-byte nibble-padded representation of a GSM frame (which, at a frame rate of 50/s, is the basis for the incorrect claim that the GSM bit rate is 13.2 kbit/s). This codec can also be compiled into Wine to provide GSM audio support. There is also a Winamp plugin for raw GSM 06.10 based on libgsm. GSM 06.10 is also used in VoIP software, for example in Ekiga, QuteCom, Linphone, Asterisk (PBX), Ventrilo and others. == See also == Half Rate Enhanced Full Rate (EFR) Adaptive Multi-Rate (AMR) Adaptive Multi-Rate Wideband (AMR-WB) Extended Adaptive Multi-Rate - Wideband (AMR-WB+) Comparison of audio coding formats RTP audio video profile == References == == External links == RFC 3551 - RTP payload format for GSM (GSM 06.10) ETS 300 961 (GSM 06.10) - European Standard ETS 300 580-2 (GSM 06.10) - legacy specifications 3GPP TS06.10 - Technical Specification Libgsm homepage
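The figures in this article are easy to cross-check: a frame holds 260 bits (32.5 bytes) for 20 ms of speech, and libgsm's padded frame holds 33 bytes. A minimal sketch in Python reproducing the 13 kbit/s, 13.2 kbit/s and 1.625 bits-per-sample numbers:

```python
SAMPLES_PER_FRAME = 160        # 20 ms of speech at 8 kHz
FRAME_SECONDS = 0.020
FRAME_BITS = 260               # 32.5 bytes per encoded GSM-FR frame
PADDED_FRAME_BYTES = 33        # libgsm's nibble-padded representation

net_rate = FRAME_BITS / FRAME_SECONDS                   # 13000.0 bit/s
padded_rate = PADDED_FRAME_BYTES * 8 / FRAME_SECONDS    # 13200.0 bit/s
bits_per_sample = FRAME_BITS / SAMPLES_PER_FRAME        # 1.625

print(net_rate, padded_rate, bits_per_sample)
```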
Wikipedia/Full_Rate
In the context of fast Fourier transform algorithms, a butterfly is a portion of the computation that combines the results of smaller discrete Fourier transforms (DFTs) into a larger DFT, or vice versa (breaking a larger DFT up into subtransforms). The name "butterfly" comes from the shape of the data-flow diagram in the radix-2 case, as described below. The earliest occurrence in print of the term is thought to be in a 1969 MIT technical report. The same structure can also be found in the Viterbi algorithm, used for finding the most likely sequence of hidden states. Most commonly, the term "butterfly" appears in the context of the Cooley–Tukey FFT algorithm, which recursively breaks down a DFT of composite size n = rm into r smaller transforms of size m, where r is the "radix" of the transform. These smaller DFTs are then combined via size-r butterflies, which themselves are DFTs of size r (performed m times on corresponding outputs of the sub-transforms) pre-multiplied by roots of unity (known as twiddle factors). (This is the "decimation in time" case; one can also perform the steps in reverse, known as "decimation in frequency", where the butterflies come first and are post-multiplied by twiddle factors. See also the Cooley–Tukey FFT article.) == Radix-2 butterfly diagram == In the case of the radix-2 Cooley–Tukey algorithm, the butterfly is simply a size-2 DFT that takes two inputs (x0, x1) (corresponding outputs of the two sub-transforms) and gives two outputs (y0, y1) by the formula (not including twiddle factors): y 0 = x 0 + x 1 {\displaystyle y_{0}=x_{0}+x_{1}\,} y 1 = x 0 − x 1 . {\displaystyle y_{1}=x_{0}-x_{1}.\,} If one draws the data-flow diagram for this pair of operations, the (x0, x1) to (y0, y1) lines cross and resemble the wings of a butterfly, hence the name. More specifically, a radix-2 decimation-in-time FFT algorithm on n = 2^p inputs with respect to a primitive n-th root of unity ω n k = e − 2 π i k n {\displaystyle \omega _{n}^{k}=e^{-{\frac {2\pi ik}{n}}}} relies on O(n log2 n) butterflies of the form: y 0 = x 0 + x 1 ω n k {\displaystyle y_{0}=x_{0}+x_{1}\omega _{n}^{k}\,} y 1 = x 0 − x 1 ω n k , {\displaystyle y_{1}=x_{0}-x_{1}\omega _{n}^{k},\,} where k is an integer depending on the part of the transform being computed. Whereas the corresponding inverse transform can mathematically be performed by replacing ω with ω^−1 (and possibly multiplying by an overall scale factor, depending on the normalization convention), one may also directly invert the butterflies: x 0 = 1 2 ( y 0 + y 1 ) {\displaystyle x_{0}={\frac {1}{2}}(y_{0}+y_{1})\,} x 1 = ω n − k 2 ( y 0 − y 1 ) , {\displaystyle x_{1}={\frac {\omega _{n}^{-k}}{2}}(y_{0}-y_{1}),\,} corresponding to a decimation-in-frequency FFT algorithm. == Other uses == The butterfly can also be used to improve the randomness of large arrays of partially random numbers, by bringing every 32- or 64-bit word into causal contact with every other word through a desired hashing algorithm, so that a change in any one bit has the possibility of changing all the bits in the large array. == See also == Mathematical diagram Zassenhaus lemma Signal-flow graph == References == == External links == Explanation of the FFT and butterfly diagrams. Butterfly diagrams of various FFT implementations (Radix-2, Radix-4, Split-Radix).
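The butterfly equations above are the entire inner loop of a radix-2 decimation-in-time FFT. A minimal recursive sketch in Python (standard library cmath only; the input length must be a power of two, and the result is checked against a direct O(n^2) DFT):

```python
import cmath

def fft_radix2(x):
    """Recursive radix-2 decimation-in-time FFT (len(x) a power of 2).

    The two lines inside the loop are the butterfly described above:
    y0 = e + t and y1 = e - t, where t is the odd-half output
    pre-multiplied by the twiddle factor w_n^k = exp(-2*pi*i*k/n).
    """
    n = len(x)
    if n == 1:
        return list(x)
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    y = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle * odd output
        y[k] = even[k] + t            # butterfly top output
        y[k + n // 2] = even[k] - t   # butterfly bottom output
    return y

def dft(x):
    """Direct O(n^2) DFT used only as a correctness reference."""
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * m * k / n) for m in range(n))
            for k in range(n)]

x = [0.5, 1.0, -1.0, 2.0, 0.0, -0.5, 0.25, 1.5]
assert all(abs(a - b) < 1e-9 for a, b in zip(fft_radix2(x), dft(x)))
```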
Wikipedia/Butterfly_(FFT_algorithm)
Enhanced Full Rate, also known as EFR, GSM-EFR or GSM 06.60, is a speech coding standard that was developed to improve the speech quality of GSM. Enhanced Full Rate was developed by Nokia and the Université de Sherbrooke (Canada). In 1995, ETSI selected the Enhanced Full Rate voice codec as the industry standard codec for GSM/DCS. == Technology == The sampling rate is 8000 samples/s, and the bit rate of the encoded bit stream is 12.2 kbit/s. The coding scheme is the so-called Algebraic Code Excited Linear Prediction (ACELP) coder. The encoder is fed data consisting of samples with a resolution of 13 bits left justified in a 16-bit word; the three least significant bits are set to 0. The decoder outputs data in the same format. The Enhanced Full Rate (GSM 06.60) technical specification describes the detailed mapping from input blocks of 160 speech samples in 13-bit uniform PCM format to encoded blocks of 244 bits, and from encoded blocks of 244 bits to output blocks of 160 reconstructed speech samples. It also specifies the conversion between A-law or μ-law (PCS 1900) 8-bit PCM and 13-bit uniform PCM. This part of the specification also describes the codec down to the bit level, enabling compliance to be verified to a high degree of confidence by use of a set of digital test sequences. These test sequences are described in GSM 06.54 and are available on disks. This standard is defined in ETSI ETS 300 726 (GSM 06.60). The packing is specified in ETSI Technical Specification TS 101 318. Enhanced Full Rate was also chosen as the industry standard in the US market for the PCS 1900 GSM frequency band. == Licensing and patent issues == Enhanced Full Rate incorporates several patents. It uses the patented ACELP technology, which is licensed by the VoiceAge Corporation. == See also == Half Rate Full Rate Adaptive Multi-Rate (AMR) Adaptive Multi-Rate Wideband (AMR-WB) Extended Adaptive Multi-Rate - Wideband (AMR-WB+) Comparison of audio coding formats == References == == External links == RFC 3551 - GSM-EFR (GSM 06.60) ETS 300 726 (GSM 06.60) 3GPP TS06.60 - technical specification Summary of GSM Codecs
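The 13-bit left-justified input format described above can be produced from ordinary 16-bit PCM by zeroing the three least significant bits of each word. A minimal sketch in Python (it assumes the input is already signed 16-bit PCM sampled at 8 kHz; the helper name is ours, not from the specification):

```python
import array

def prepare_for_efr(pcm16):
    """Zero the three LSBs of each 16-bit sample, leaving 13 significant
    bits left-justified in the 16-bit word, as the EFR encoder expects."""
    return array.array("h", [s & ~0x7 for s in pcm16])

samples = array.array("h", [-32768, -1234, -1, 0, 7, 12345, 32767])
print(list(prepare_for_efr(samples)))
# [-32768, -1240, -8, 0, 0, 12344, 32760]
```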
Wikipedia/Enhanced_full_rate
Half Rate (HR, also known as GSM-HR or GSM 06.20) is a speech coding system for GSM, developed in the early 1990s. Since the codec, operating at 5.6 kbit/s, requires half the bandwidth of the Full Rate codec, network capacity for voice traffic is doubled, at the expense of audio quality. The sampling rate is 8 kHz with a resolution of 13 bits, a frame length of 160 samples (20 ms) and a subframe length of 40 samples (5 ms). GSM Half Rate is specified in ETSI EN 300 969 (GSM 06.20), and uses a form of the VSELP algorithm. A previous specification was ETSI ETS 300 581-2, whose first edition was published in December 1995. On some Nokia phones, use of this codec can be configured: to activate the HR codec, enter the code *4720#; to deactivate the HR codec, enter the code #4720#. == See also == Full Rate Enhanced Full Rate (EFR) Adaptive Multi-Rate (AMR) Adaptive Multi-Rate Wideband (AMR-WB) Extended Adaptive Multi-Rate - Wideband (AMR-WB+) == References == == External links == ETSI EN 300 969 - Half rate speech transcoding (GSM 06.20 version 8.0.1 Release 1999) - technical specification ETSI ETS 300 581-2 - Half rate speech transcoding (GSM 06.20 version 4.3.1) - obsoleted 3GPP TS06.20 - technical specification RFC 5993 - RTP Payload format for GSM-HR
Wikipedia/Half_Rate
Engineering is the practice of using natural science, mathematics, and the engineering design process to solve problems within technology, increase efficiency and productivity, and improve systems. Modern engineering comprises many subfields, including designing and improving infrastructure, machinery, vehicles, electronics, materials, and energy systems. The discipline of engineering encompasses a broad range of more specialized fields of engineering, each with a more specific emphasis on particular applications of mathematics and science. See glossary of engineering. The word engineering is derived from the Latin ingenium. == Definition == The American Engineers' Council for Professional Development (the predecessor of the Accreditation Board for Engineering and Technology, ABET) has defined "engineering" as: The creative application of scientific principles to design or develop structures, machines, apparatus, or manufacturing processes, or works utilizing them singly or in combination; or to construct or operate the same with full cognizance of their design; or to forecast their behavior under specific operating conditions; all as respects an intended function, economics of operation and safety to life and property. == History == Engineering has existed since ancient times, when humans devised inventions such as the wedge, lever, wheel, and pulley. The term engineering is derived from the word engineer, which itself dates back to the 14th century, when an engine'er (literally, one who builds or operates a siege engine) referred to "a constructor of military engines". In this context, now obsolete, an "engine" referred to a military machine, i.e., a mechanical contraption used in war (for example, a catapult). Notable examples of the obsolete usage which have survived to the present day are military engineering corps, e.g., the U.S. Army Corps of Engineers. The word "engine" itself is of even older origin, ultimately deriving from the Latin ingenium (c. 1250), meaning "innate quality, especially mental power, hence a clever invention." Later, as the design of civilian structures, such as bridges and buildings, matured as a technical discipline, the term civil engineering entered the lexicon as a way to distinguish between those specializing in the construction of such non-military projects and those involved in the discipline of military engineering. === Ancient era === The pyramids in ancient Egypt, ziggurats of Mesopotamia, the Acropolis and Parthenon in Greece, the Roman aqueducts, Via Appia and Colosseum, Teotihuacán, and the Brihadeeswarar Temple of Thanjavur, among many others, stand as a testament to the ingenuity and skill of ancient civil and military engineers. Other monuments, no longer standing, such as the Hanging Gardens of Babylon and the Pharos of Alexandria, were important engineering achievements of their time and were considered among the Seven Wonders of the Ancient World. The six classic simple machines were known in the ancient Near East. The wedge and the inclined plane (ramp) have been known since prehistoric times. The wheel, along with the wheel and axle mechanism, was invented in Mesopotamia (modern Iraq) during the 5th millennium BC. The lever mechanism first appeared around 5,000 years ago in the Near East, where it was used in a simple balance scale, and to move large objects in ancient Egyptian technology. The lever was also used in the shadoof water-lifting device, the first crane machine, which appeared in Mesopotamia c.
3000 BC, and then in ancient Egyptian technology c. 2000 BC. The earliest evidence of pulleys dates back to Mesopotamia in the early 2nd millennium BC, and ancient Egypt during the Twelfth Dynasty (1991–1802 BC). The screw, the last of the simple machines to be invented, first appeared in Mesopotamia during the Neo-Assyrian period (911–609 BC). The Egyptian pyramids were built using three of the six simple machines, the inclined plane, the wedge, and the lever, to create structures like the Great Pyramid of Giza. The earliest civil engineer known by name is Imhotep. As one of the officials of the Pharaoh Djoser, he probably designed and supervised the construction of the Pyramid of Djoser (the Step Pyramid) at Saqqara in Egypt around 2630–2611 BC. The earliest practical water-powered machines, the water wheel and watermill, first appeared in the Persian Empire, in what are now Iraq and Iran, by the early 4th century BC. Kush developed the Sakia during the 4th century BC, which relied on animal power instead of human energy. Hafirs were developed as a type of reservoir in Kush to store and contain water as well as boost irrigation. Sappers were employed to build causeways during military campaigns. Kushite ancestors built speos during the Bronze Age between 3700 and 3250 BC. Bloomeries and blast furnaces were also created during the 7th century BC in Kush. Ancient Greece developed machines in both civilian and military domains. The Antikythera mechanism, an early mechanical analog computer, and the mechanical inventions of Archimedes are examples of early Greek mechanical engineering. Some of Archimedes' inventions, as well as the Antikythera mechanism, required sophisticated knowledge of differential gearing or epicyclic gearing, two key principles in machine theory that helped design the gear trains of the Industrial Revolution, and are widely used in fields such as robotics and automotive engineering. Ancient Chinese, Greek, Roman and Hunnic armies employed military machines and inventions such as artillery, developed by the Greeks around the 4th century BC; the trireme, the ballista and the catapult; and the trebuchet, developed by the Chinese circa the 6th–5th century BC. === Middle Ages === The earliest practical wind-powered machines, the windmill and wind pump, first appeared in the Muslim world during the Islamic Golden Age, in what are now Iran, Afghanistan, and Pakistan, by the 9th century AD. The earliest practical steam-powered machine was a steam jack driven by a steam turbine, described in 1551 by Taqi al-Din Muhammad ibn Ma'ruf in Ottoman Egypt. The cotton gin was invented in India by the 6th century AD, and the spinning wheel was invented in the Islamic world by the early 11th century; both were fundamental to the growth of the cotton industry. The spinning wheel was also a precursor to the spinning jenny, which was a key development during the early Industrial Revolution in the 18th century. The earliest programmable machines were developed in the Muslim world. A music sequencer, a programmable musical instrument, was the earliest type of programmable machine. The first music sequencer was an automated flute player invented by the Banu Musa brothers, described in their Book of Ingenious Devices, in the 9th century. In 1206, Al-Jazari invented programmable automata/robots. He described four automaton musicians, including drummers operated by a programmable drum machine that could be made to play different rhythms and different drum patterns.
Before the development of modern engineering, mathematics was used by artisans and craftsmen, such as millwrights, clockmakers, instrument makers and surveyors. Aside from these professions, universities were not believed to have had much practical significance to technology.: 32  A standard reference for the state of the mechanical arts during the Renaissance is given in the mining engineering treatise De re metallica (1556), which also contains sections on geology, mining, and chemistry. De re metallica was the standard chemistry reference for the next 180 years. === Modern era === The science of classical mechanics, sometimes called Newtonian mechanics, formed the scientific basis of much of modern engineering. With the rise of engineering as a profession in the 18th century, the term became more narrowly applied to fields in which mathematics and science were applied to these ends. Similarly, in addition to military and civil engineering, the fields then known as the mechanic arts became incorporated into engineering. Canal building was an important engineering work during the early phases of the Industrial Revolution. John Smeaton was the first self-proclaimed civil engineer and is often regarded as the "father" of civil engineering. He was an English civil engineer responsible for the design of bridges, canals, harbors, and lighthouses. He was also a capable mechanical engineer and an eminent physicist. Using a model water wheel, Smeaton conducted experiments for seven years, determining ways to increase efficiency.: 127  Smeaton introduced iron axles and gears to water wheels.: 69  Smeaton also made mechanical improvements to the Newcomen steam engine. Smeaton designed the third Eddystone Lighthouse (1755–59), where he pioneered the use of 'hydraulic lime' (a form of mortar which will set under water) and developed a technique involving dovetailed blocks of granite in the building of the lighthouse. He is important in the history, rediscovery, and development of modern cement, because he identified the compositional requirements needed to obtain "hydraulicity" in lime, work which led ultimately to the invention of Portland cement. Applied science led to the development of the steam engine. The sequence of events began with the invention of the barometer and the measurement of atmospheric pressure by Evangelista Torricelli in 1643, the demonstration of the force of atmospheric pressure by Otto von Guericke using the Magdeburg hemispheres in 1656, and laboratory experiments by Denis Papin, who built experimental model steam engines and demonstrated the use of a piston, which he published in 1707. Edward Somerset, 2nd Marquess of Worcester, published a book of 100 inventions containing a method for raising waters similar to a coffee percolator. Samuel Morland, a mathematician and inventor who worked on pumps, left notes at the Vauxhall Ordnance Office on a steam pump design that Thomas Savery read. In 1698 Savery built a steam pump called "The Miner's Friend". It employed both vacuum and pressure. Iron merchant Thomas Newcomen, who built the first commercial piston steam engine in 1712, was not known to have any scientific training.: 32  The application of steam-powered cast iron blowing cylinders for providing pressurized air for blast furnaces led to a large increase in iron production in the late 18th century. The higher furnace temperatures made possible with steam-powered blast allowed for the use of more lime in blast furnaces, which enabled the transition from charcoal to coke.
These innovations lowered the cost of iron, making horse railways and iron bridges practical. The puddling process, patented by Henry Cort in 1784, produced wrought iron in large quantities. Hot blast, patented by James Beaumont Neilson in 1828, greatly lowered the amount of fuel needed to smelt iron. With the development of the high-pressure steam engine, the power-to-weight ratio of steam engines made practical steamboats and locomotives possible. New steel-making processes, such as the Bessemer process and the open hearth furnace, ushered in an era of heavy engineering in the late 19th century. One of the most famous engineers of the mid-19th century was Isambard Kingdom Brunel, who built railroads, dockyards and steamships. The Industrial Revolution created a demand for machinery with metal parts, which led to the development of several machine tools. Boring cast iron cylinders with precision was not possible until John Wilkinson invented his boring machine, which is considered the first machine tool. Other machine tools included the screw-cutting lathe, milling machine, turret lathe and the metal planer. Precision machining techniques were developed in the first half of the 19th century. These included the use of jigs to guide the machining tool over the work and fixtures to hold the work in the proper position. Machine tools and machining techniques capable of producing interchangeable parts led to large-scale factory production by the late 19th century. The United States Census of 1850 listed the occupation of "engineer" for the first time, with a count of 2,000. There were fewer than 50 engineering graduates in the U.S. before 1865. The first PhD in engineering (technically, applied science and engineering) awarded in the United States went to Josiah Willard Gibbs at Yale University in 1863; it was also the second PhD awarded in science in the U.S. In 1870 there were a dozen U.S. mechanical engineering graduates, with that number increasing to 43 per year in 1875. In 1890, there were 6,000 engineers in civil, mining, mechanical and electrical fields. There was no chair of applied mechanism and applied mechanics at Cambridge until 1875, and no chair of engineering at Oxford until 1907. Germany established technical universities earlier. The foundations of electrical engineering in the 1800s included the experiments of Alessandro Volta, Michael Faraday, Georg Ohm and others, and the invention of the electric telegraph in 1816 and the electric motor in 1872. The theoretical work of James Clerk Maxwell (see Maxwell's equations) and Heinrich Hertz in the late 19th century gave rise to the field of electronics. The later inventions of the vacuum tube and the transistor further accelerated the development of electronics, to such an extent that electrical and electronics engineers currently outnumber their colleagues of any other engineering specialty. Chemical engineering developed in the late nineteenth century. Industrial-scale manufacturing demanded new materials and new processes, and by 1880 the need for large-scale production of chemicals was such that a new industry was created, dedicated to the development and large-scale manufacturing of chemicals in new industrial plants. The role of the chemical engineer was the design of these chemical plants and processes. Originally deriving from the manufacture of ceramics and its putative derivative metallurgy, materials science is one of the oldest forms of engineering.
Modern materials science evolved directly from metallurgy, which itself evolved from the use of fire. Important elements of modern materials science were products of the Space Race: the understanding and engineering of the metallic alloys and of the silica and carbon materials used in building space vehicles enabled the exploration of space. Materials science has driven, and been driven by, the development of revolutionary technologies such as rubbers, plastics, semiconductors, and biomaterials. Aeronautical engineering deals with aircraft design, while aerospace engineering is a more modern term that expands the reach of the discipline by including spacecraft design. Its origins can be traced back to the aviation pioneers around the start of the 20th century, although the work of Sir George Cayley has recently been dated as being from the last decade of the 18th century. Early knowledge of aeronautical engineering was largely empirical, with some concepts and skills imported from other branches of engineering. Only a decade after the successful flights by the Wright brothers, there was extensive development of aeronautical engineering through the development of military aircraft that were used in World War I. Meanwhile, research to provide fundamental background science continued by combining theoretical physics with experiments. == Branches of engineering == Engineering is a broad discipline that is often broken down into several sub-disciplines. Although most engineers will usually be trained in a specific discipline, some engineers become multi-disciplined through experience. Engineering is often characterized as having five main branches: chemical engineering, civil engineering, electrical engineering, materials science and engineering, and mechanical engineering. There are many other recognized branches of engineering, with additional sub-disciplines within each. == Interdisciplinary engineering == Interdisciplinary engineering draws from more than one of the principal branches of the practice. Historically, naval engineering and mining engineering were major branches. Other engineering fields are manufacturing engineering, acoustical engineering, corrosion engineering, instrumentation and control, automotive, information engineering, petroleum, systems, audio, software, architectural, biosystems, and textile engineering. These and other branches of engineering are represented in the 36 licensed member institutions of the UK Engineering Council. New specialties sometimes combine with the traditional fields and form new branches – for example, Earth systems engineering and management involves a wide range of subject areas including engineering studies, environmental science, engineering ethics and philosophy of engineering. == Practice == One who practices engineering is called an engineer, and those licensed to do so may have more formal designations such as Professional Engineer, Chartered Engineer, Incorporated Engineer, Ingenieur, European Engineer, or Designated Engineering Representative. == Methodology == In the engineering design process, engineers apply mathematics and sciences such as physics to find novel solutions to problems or to improve existing solutions. Engineers need proficient knowledge of relevant sciences for their design projects. As a result, many engineers continue to learn new material throughout their careers. If multiple solutions exist, engineers weigh each design choice on its merits and choose the solution that best matches the requirements.
The task of the engineer is to identify, understand, and interpret the constraints on a design in order to yield a successful result. It is generally insufficient to build a technically successful product; rather, it must also meet further requirements. Constraints may include available resources; physical, imaginative or technical limitations; flexibility for future modifications and additions; and other factors, such as requirements for cost, safety, marketability, productivity, and serviceability. By understanding the constraints, engineers derive specifications for the limits within which a viable object or system may be produced and operated. === Problem solving === Engineers use their knowledge of science, mathematics, logic, economics, and appropriate experience or tacit knowledge to find suitable solutions to a particular problem. Creating an appropriate mathematical model of a problem often allows them to analyze it (sometimes definitively), and to test potential solutions. More than one solution to a design problem usually exists, so the different design choices have to be evaluated on their merits before the one judged most suitable is chosen. Genrich Altshuller, after gathering statistics on a large number of patents, suggested that compromises are at the heart of "low-level" engineering designs, while at a higher level the best design is one which eliminates the core contradiction causing the problem. Engineers typically attempt to predict how well their designs will perform to their specifications prior to full-scale production. They use, among other things: prototypes, scale models, simulations, destructive tests, nondestructive tests, and stress tests. Testing ensures that products will perform as expected, but only insofar as the testing has been representative of use in service. For products, such as aircraft, that are used differently by different users, failures and unexpected shortcomings (and necessary design changes) can be expected throughout the operational life of the product. Engineers take on the responsibility of producing designs that will perform as well as expected and, except for those employed in specific areas of the arms industry, will not harm people. Engineers typically include a factor of safety in their designs to reduce the risk of unexpected failure. The study of failed products is known as forensic engineering. It attempts to identify the cause of failure to allow a redesign of the product and so prevent a re-occurrence. Careful analysis is needed to establish the cause of failure of a product. The consequences of a failure may vary in severity from the minor cost of a machine breakdown to large loss of life in the case of accidents involving aircraft and large stationary structures like buildings and dams. === Computer use === As with all modern scientific and technological endeavors, computers and software play an increasingly important role. In addition to typical business application software, there are a number of computer-aided applications (computer-aided technologies) specifically for engineering. Computers can be used to generate models of fundamental physical processes, which can be solved using numerical methods. One of the most widely used design tools in the profession is computer-aided design (CAD) software. It enables engineers to create 3D models, 2D drawings, and schematics of their designs.
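As a small illustration of the factor of safety mentioned above, consider a rod in tension: the axial stress is sigma = F/A, and the factor of safety is the ratio of the material's strength to that working stress. A minimal sketch in Python; all numbers are illustrative, not taken from any design code:

```python
import math

# Illustrative numbers only: a steel tie rod under a known axial load.
load_newtons = 25_000.0
diameter_m = 0.02
yield_strength_pa = 250e6        # a typical mild-steel figure, assumed here

area = math.pi * (diameter_m / 2) ** 2    # cross-sectional area A
stress = load_newtons / area              # axial stress sigma = F / A
factor_of_safety = yield_strength_pa / stress

print(f"stress = {stress / 1e6:.1f} MPa, "
      f"factor of safety = {factor_of_safety:.2f}")   # ~79.6 MPa, ~3.14
```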
CAD, together with digital mockup (DMU) and CAE software such as finite element analysis or the analytic element method, allows engineers to create models of designs that can be analyzed without having to make expensive and time-consuming physical prototypes. These allow products and components to be checked for flaws, to assess fit and assembly, to study ergonomics, and to analyze static and dynamic characteristics of systems such as stresses, temperatures, electromagnetic emissions, electrical currents and voltages, digital logic levels, fluid flows, and kinematics. Access to and distribution of all this information is generally organized with the use of product data management software. There are also many tools to support specific engineering tasks, such as computer-aided manufacturing (CAM) software to generate CNC machining instructions; manufacturing process management software for production engineering; EDA for printed circuit board (PCB) and circuit schematics for electronic engineers; MRO applications for maintenance management; and architecture, engineering and construction (AEC) software for civil engineering. In recent years the use of computer software to aid the development of goods has collectively come to be known as product lifecycle management (PLM). == Social context == The engineering profession engages in a range of activities, from collaboration at the societal level to smaller individual projects. Almost all engineering projects are obligated to a funding source: a company, a set of investors, or a government. The types of engineering that are less constrained by such a funding source are pro bono engineering and open-design engineering. Engineering has interconnections with society, culture and human behavior. Most products and constructions used by modern society are influenced by engineering. Engineering activities have an impact on the environment, society, economies, and public safety. Engineering projects can be controversial. Examples from different engineering disciplines include the development of nuclear weapons, the Three Gorges Dam, the design and use of sport utility vehicles, and the extraction of oil. In response, some engineering companies have enacted serious corporate and social responsibility policies. The attainment of many of the Millennium Development Goals requires the achievement of sufficient engineering capacity to develop infrastructure and sustainable technological development. Overseas development and relief NGOs make considerable use of engineers to apply solutions in disaster and development scenarios. Some charitable organizations use engineering directly for development: Engineers Without Borders Engineers Against Poverty Registered Engineers for Disaster Relief Engineers for a Sustainable World Engineering for Change Engineering Ministries International Engineering companies in more developed economies face challenges with regard to the number of engineers being trained, compared with those retiring. This problem is prominent in the UK, where engineering has a poor image and low status. This can cause negative economic and political issues, as well as ethical ones. It is widely agreed that the engineering profession faces an "image crisis". The UK, together with the United States, holds the most engineering companies compared to other European countries. === Code of ethics === Many engineering societies have established codes of practice and codes of ethics to guide members and inform the public at large.
The National Society of Professional Engineers code of ethics states: Engineering is an important and learned profession. As members of this profession, engineers are expected to exhibit the highest standards of honesty and integrity. Engineering has a direct and vital impact on the quality of life for all people. Accordingly, the services provided by engineers require honesty, impartiality, fairness, and equity, and must be dedicated to the protection of the public health, safety, and welfare. Engineers must perform under a standard of professional behavior that requires adherence to the highest principles of ethical conduct. In Canada, engineers wear the Iron Ring as a symbol and reminder of the obligations and ethics associated with their profession. == Relationships with other disciplines == === Science === Scientists study the world as it is; engineers create the world that has never been. There exists an overlap between the sciences and engineering practice; in engineering, one applies science. Both areas of endeavor rely on accurate observation of materials and phenomena. Both use mathematics and classification criteria to analyze and communicate observations. Scientists may also have to complete engineering tasks, such as designing experimental apparatus or building prototypes. Conversely, in the process of developing technology, engineers sometimes find themselves exploring new phenomena, thus becoming, for the moment, scientists or more precisely "engineering scientists". In the book What Engineers Know and How They Know It, Walter Vincenti asserts that engineering research has a character different from that of scientific research. First, it often deals with areas in which the basic physics or chemistry are well understood, but the problems themselves are too complex to solve in an exact manner. There is a "real and important" difference between engineering and physics, similar to the difference between any field of science and its associated technology. Physics is an exploratory science that seeks knowledge of principles, while engineering uses knowledge for practical applications of principles. The former casts an understanding into mathematical principles, while the latter measures the variables involved and creates technology. For technology, physics is an auxiliary discipline; in a sense, technology can be considered applied physics. Though physics and engineering are interrelated, this does not mean that a physicist is trained to do an engineer's job. A physicist would typically require additional and relevant training. Physicists and engineers engage in different lines of work. However, PhD physicists who specialize in engineering physics and applied physics may hold titles such as technology officer, R&D engineer, or systems engineer. An example of this is the use of numerical approximations to the Navier–Stokes equations to describe aerodynamic flow over an aircraft, or the use of the finite element method to calculate the stresses in complex components. Second, engineering research employs many semi-empirical methods that are foreign to pure scientific research, one example being the method of parameter variation. As stated by Fung et al. in the revision to the classic engineering text Foundations of Solid Mechanics: Engineering is quite different from science. Scientists try to understand nature. Engineers try to make things that do not exist in nature. Engineers stress innovation and invention. To embody an invention the engineer must put his idea in concrete terms, and design something that people can use.
That something can be a complex system, a device, a gadget, a material, a method, a computing program, an innovative experiment, a new solution to a problem, or an improvement on what already exists. Since a design has to be realistic and functional, it must have its geometry, dimensions, and characteristics data defined. In the past engineers working on new designs found that they did not have all the required information to make design decisions. Most often, they were limited by insufficient scientific knowledge. Thus they studied mathematics, physics, chemistry, biology and mechanics. Often they had to add to the sciences relevant to their profession. Thus engineering sciences were born. Although engineering solutions make use of scientific principles, engineers must also take into account safety, efficiency, economy, reliability, and constructability or ease of fabrication, as well as environmental, ethical and legal considerations such as patent infringement or liability in the case of failure of the solution. === Medicine and biology === The study of the human body, albeit from different directions and for different purposes, is an important common link between medicine and some engineering disciplines. Medicine aims to sustain, repair, enhance and even replace functions of the human body, if necessary, through the use of technology. Modern medicine can replace several of the body's functions through the use of artificial organs and can significantly alter the function of the human body through artificial devices such as brain implants and pacemakers. The fields of bionics and medical bionics are dedicated to the study of synthetic implants pertaining to natural systems. Conversely, some engineering disciplines view the human body as a biological machine worth studying and are dedicated to emulating many of its functions by replacing biology with technology. This has led to fields such as artificial intelligence, neural networks, fuzzy logic, and robotics. There are also substantial interdisciplinary interactions between engineering and medicine. Both fields provide solutions to real world problems. This often requires moving forward before phenomena are completely understood in a more rigorous scientific sense; therefore, experimentation and empirical knowledge are an integral part of both. Medicine, in part, studies the function of the human body. The human body, as a biological machine, has many functions that can be modeled using engineering methods. The heart, for example, functions much like a pump; the skeleton is like a linked structure with levers; the brain produces electrical signals, etc. These similarities, as well as the increasing importance and application of engineering principles in medicine, led to the development of the field of biomedical engineering, which uses concepts developed in both disciplines. Newly emerging branches of science, such as systems biology, are adapting analytical tools traditionally used for engineering, such as systems modeling and computational analysis, to the description of biological systems. === Art === There are connections between engineering and art, for example, architecture, landscape architecture and industrial design (even to the extent that these disciplines may sometimes be included in a university's Faculty of Engineering). The Art Institute of Chicago, for instance, held an exhibition about the art of NASA's aerospace design. Robert Maillart's bridge design is perceived by some to have been deliberately artistic.
At the University of South Florida, an engineering professor, through a grant with the National Science Foundation, has developed a course that connects art and engineering. Among famous historical figures, Leonardo da Vinci is a well-known Renaissance artist and engineer, and a prime example of the nexus between art and engineering. === Business === Business engineering deals with the relationship between professional engineering, IT systems, business administration and change management. Engineering management or "management engineering" is a specialized field of management concerned with engineering practice or the engineering industry sector. The demand for management-focused engineers (or, from the opposite perspective, managers with an understanding of engineering) has resulted in the development of specialized engineering management degrees that develop the knowledge and skills needed for these roles. During an engineering management course, students will develop industrial engineering skills, knowledge, and expertise, alongside knowledge of business administration, management techniques, and strategic thinking. Engineers specializing in change management must have in-depth knowledge of the application of industrial and organizational psychology principles and methods. Professional engineers often train as certified management consultants in the very specialized field of management consulting applied to engineering practice or the engineering sector. This work often deals with large-scale, complex business transformation or business process management initiatives in aerospace and defence, automotive, oil and gas, machinery, pharmaceutical, food and beverage, electrical and electronics, power distribution and generation, utilities and transportation systems. This combination of technical engineering practice, management consulting practice, industry sector knowledge, and change management expertise enables professional engineers who are also qualified as management consultants to lead major business transformation initiatives. These initiatives are typically sponsored by C-level executives. === Other fields === In political science, the term engineering has been borrowed for the study of the subjects of social engineering and political engineering, which deal with forming political and social structures using engineering methodology coupled with political science principles. Marketing engineering and financial engineering have similarly borrowed the term.
Wikipedia/Science_and_engineering
In the theory of stochastic processes, the Karhunen–Loève theorem (named after Kari Karhunen and Michel Loève), also known as the Kosambi–Karhunen–Loève theorem, states that a stochastic process can be represented as an infinite linear combination of orthogonal functions, analogous to a Fourier series representation of a function on a bounded interval. The transformation is also known as the Hotelling transform and the eigenvector transform, and is closely related to the principal component analysis (PCA) technique widely used in image processing and in data analysis in many fields. There exist many such expansions of a stochastic process: if the process is indexed over [a, b], any orthonormal basis of L2([a, b]) yields an expansion thereof in that form. The importance of the Karhunen–Loève theorem is that it yields the best such basis in the sense that it minimizes the total mean squared error. In contrast to a Fourier series, where the coefficients are fixed numbers and the expansion basis consists of sinusoidal functions (that is, sine and cosine functions), the coefficients in the Karhunen–Loève theorem are random variables and the expansion basis depends on the process. In fact, the orthogonal basis functions used in this representation are determined by the covariance function of the process. One can think of the Karhunen–Loève transform as adapting to the process in order to produce the best possible basis for its expansion. In the case of a centered stochastic process {Xt}t ∈ [a, b] (centered means E[Xt] = 0 for all t ∈ [a, b]) satisfying a technical continuity condition, X admits a decomposition X t = ∑ k = 1 ∞ Z k e k ( t ) {\displaystyle X_{t}=\sum _{k=1}^{\infty }Z_{k}e_{k}(t)} where Zk are pairwise uncorrelated random variables and the functions ek are continuous real-valued functions on [a, b] that are pairwise orthogonal in L2([a, b]). It is therefore sometimes said that the expansion is bi-orthogonal since the random coefficients Zk are orthogonal in the probability space while the deterministic functions ek are orthogonal in the time domain. The general case of a process Xt that is not centered can be brought back to the case of a centered process by considering Xt − E[Xt] which is a centered process. Moreover, if the process is Gaussian, then the random variables Zk are Gaussian and stochastically independent. This result generalizes the Karhunen–Loève transform. An important example of a centered real stochastic process on [0, 1] is the Wiener process; the Karhunen–Loève theorem can be used to provide a canonical orthogonal representation for it. In this case the expansion consists of sinusoidal functions. The above expansion into uncorrelated random variables is also known as the Karhunen–Loève expansion or Karhunen–Loève decomposition. The empirical version (i.e., with the coefficients computed from a sample) is known as the Karhunen–Loève transform (KLT), principal component analysis, proper orthogonal decomposition (POD), empirical orthogonal functions (a term used in meteorology and geophysics), or the Hotelling transform. == Formulation == Throughout this article, we will consider a random process Xt defined over a probability space (Ω, F, P) and indexed over a closed interval [a, b], which is square-integrable, has zero mean, and has covariance function KX(s, t). In other words, we have: ∀ t ∈ [ a , b ] X t ∈ L 2 ( Ω , F , P ) , i.e. E [ X t 2 ] < ∞ , {\displaystyle \forall t\in [a,b]\qquad X_{t}\in L^{2}(\Omega ,F,\mathbf {P} ),\quad {\text{i.e. 
}}\mathbf {E} [X_{t}^{2}]<\infty ,} ∀ t ∈ [ a , b ] E [ X t ] = 0 , {\displaystyle \forall t\in [a,b]\qquad \mathbf {E} [X_{t}]=0,} ∀ t , s ∈ [ a , b ] K X ( s , t ) = E [ X s X t ] . {\displaystyle \forall t,s\in [a,b]\qquad K_{X}(s,t)=\mathbf {E} [X_{s}X_{t}].} The square-integrable condition E [ X t 2 ] < ∞ {\displaystyle \mathbf {E} [X_{t}^{2}]<\infty } is logically equivalent to K X ( s , t ) {\displaystyle K_{X}(s,t)} being finite for all s , t ∈ [ a , b ] {\displaystyle s,t\in [a,b]} . We associate to KX a linear operator (more specifically a Hilbert–Schmidt integral operator) TKX defined in the following way: T K X : L 2 ( [ a , b ] ) → L 2 ( [ a , b ] ) : f ↦ T K X f = ∫ a b K X ( s , ⋅ ) f ( s ) d s {\displaystyle {\begin{aligned}&T_{K_{X}}&:L^{2}([a,b])&\to L^{2}([a,b])\\&&:f\mapsto T_{K_{X}}f&=\int _{a}^{b}K_{X}(s,\cdot )f(s)\,ds\end{aligned}}} Since TKX is a linear operator, it makes sense to talk about its eigenvalues λk and eigenfunctions ek, which are found by solving the homogeneous Fredholm integral equation of the second kind ∫ a b K X ( s , t ) e k ( s ) d s = λ k e k ( t ) {\displaystyle \int _{a}^{b}K_{X}(s,t)e_{k}(s)\,ds=\lambda _{k}e_{k}(t)} == Statement of the theorem == Theorem. Let Xt be a zero-mean square-integrable stochastic process defined over a probability space (Ω, F, P) and indexed over a closed and bounded interval [a, b], with continuous covariance function KX(s, t). Then KX(s,t) is a Mercer kernel and letting ek be an orthonormal basis on L2([a, b]) formed by the eigenfunctions of TKX with respective eigenvalues λk, Xt admits the following representation X t = ∑ k = 1 ∞ Z k e k ( t ) {\displaystyle X_{t}=\sum _{k=1}^{\infty }Z_{k}e_{k}(t)} where the convergence is in L2, uniform in t and Z k = ∫ a b X t e k ( t ) d t {\displaystyle Z_{k}=\int _{a}^{b}X_{t}e_{k}(t)\,dt} Furthermore, the random variables Zk have zero-mean, are uncorrelated and have variance λk E [ Z k ] = 0 , ∀ k ∈ N and E [ Z i Z j ] = δ i j λ j , ∀ i , j ∈ N {\displaystyle \mathbf {E} [Z_{k}]=0,~\forall k\in \mathbb {N} \qquad {\mbox{and}}\qquad \mathbf {E} [Z_{i}Z_{j}]=\delta _{ij}\lambda _{j},~\forall i,j\in \mathbb {N} } Note that by generalizations of Mercer's theorem we can replace the interval [a, b] with other compact spaces C and the Lebesgue measure on [a, b] with a Borel measure whose support is C. == Proof == The covariance function KX satisfies the definition of a Mercer kernel. 
By Mercer's theorem, there consequently exists a set λk, ek(t) of eigenvalues and eigenfunctions of TKX forming an orthonormal basis of L2([a,b]), and KX can be expressed as K X ( s , t ) = ∑ k = 1 ∞ λ k e k ( s ) e k ( t ) {\displaystyle K_{X}(s,t)=\sum _{k=1}^{\infty }\lambda _{k}e_{k}(s)e_{k}(t)} The process Xt can be expanded in terms of the eigenfunctions ek as: X t = ∑ k = 1 ∞ Z k e k ( t ) {\displaystyle X_{t}=\sum _{k=1}^{\infty }Z_{k}e_{k}(t)} where the coefficients (random variables) Zk are given by the projection of Xt on the respective eigenfunctions Z k = ∫ a b X t e k ( t ) d t {\displaystyle Z_{k}=\int _{a}^{b}X_{t}e_{k}(t)\,dt} We may then derive E [ Z k ] = E [ ∫ a b X t e k ( t ) d t ] = ∫ a b E [ X t ] e k ( t ) d t = 0 E [ Z i Z j ] = E [ ∫ a b ∫ a b X t X s e j ( t ) e i ( s ) d t d s ] = ∫ a b ∫ a b E [ X t X s ] e j ( t ) e i ( s ) d t d s = ∫ a b ∫ a b K X ( s , t ) e j ( t ) e i ( s ) d t d s = ∫ a b e i ( s ) ( ∫ a b K X ( s , t ) e j ( t ) d t ) d s = λ j ∫ a b e i ( s ) e j ( s ) d s = δ i j λ j {\displaystyle {\begin{aligned}\mathbf {E} [Z_{k}]&=\mathbf {E} \left[\int _{a}^{b}X_{t}e_{k}(t)\,dt\right]=\int _{a}^{b}\mathbf {E} [X_{t}]e_{k}(t)dt=0\\[8pt]\mathbf {E} [Z_{i}Z_{j}]&=\mathbf {E} \left[\int _{a}^{b}\int _{a}^{b}X_{t}X_{s}e_{j}(t)e_{i}(s)\,dt\,ds\right]\\&=\int _{a}^{b}\int _{a}^{b}\mathbf {E} \left[X_{t}X_{s}\right]e_{j}(t)e_{i}(s)\,dt\,ds\\&=\int _{a}^{b}\int _{a}^{b}K_{X}(s,t)e_{j}(t)e_{i}(s)\,dt\,ds\\&=\int _{a}^{b}e_{i}(s)\left(\int _{a}^{b}K_{X}(s,t)e_{j}(t)\,dt\right)\,ds\\&=\lambda _{j}\int _{a}^{b}e_{i}(s)e_{j}(s)\,ds\\&=\delta _{ij}\lambda _{j}\end{aligned}}} where we have used the fact that the ek are eigenfunctions of TKX and are orthonormal. Let us now show that the convergence is in L2. Let S N = ∑ k = 1 N Z k e k ( t ) . {\displaystyle S_{N}=\sum _{k=1}^{N}Z_{k}e_{k}(t).} Then: E [ | X t − S N | 2 ] = E [ X t 2 ] + E [ S N 2 ] − 2 E [ X t S N ] = K X ( t , t ) + E [ ∑ k = 1 N ∑ l = 1 N Z k Z ℓ e k ( t ) e ℓ ( t ) ] − 2 E [ X t ∑ k = 1 N Z k e k ( t ) ] = K X ( t , t ) + ∑ k = 1 N λ k e k ( t ) 2 − 2 E [ ∑ k = 1 N ∫ a b X t X s e k ( s ) e k ( t ) d s ] = K X ( t , t ) − ∑ k = 1 N λ k e k ( t ) 2 {\displaystyle {\begin{aligned}\mathbf {E} \left[\left|X_{t}-S_{N}\right|^{2}\right]&=\mathbf {E} \left[X_{t}^{2}\right]+\mathbf {E} \left[S_{N}^{2}\right]-2\mathbf {E} \left[X_{t}S_{N}\right]\\&=K_{X}(t,t)+\mathbf {E} \left[\sum _{k=1}^{N}\sum _{l=1}^{N}Z_{k}Z_{\ell }e_{k}(t)e_{\ell }(t)\right]-2\mathbf {E} \left[X_{t}\sum _{k=1}^{N}Z_{k}e_{k}(t)\right]\\&=K_{X}(t,t)+\sum _{k=1}^{N}\lambda _{k}e_{k}(t)^{2}-2\mathbf {E} \left[\sum _{k=1}^{N}\int _{a}^{b}X_{t}X_{s}e_{k}(s)e_{k}(t)\,ds\right]\\&=K_{X}(t,t)-\sum _{k=1}^{N}\lambda _{k}e_{k}(t)^{2}\end{aligned}}} which goes to 0 by Mercer's theorem. == Properties of the Karhunen–Loève transform == === Special case: Gaussian distribution === Since the limit in the mean of jointly Gaussian random variables is jointly Gaussian, and jointly Gaussian random (centered) variables are independent if and only if they are orthogonal, we can also conclude: Theorem. The variables Zi have a joint Gaussian distribution and are stochastically independent if the original process {Xt}t is Gaussian. In the Gaussian case, since the variables Zi are independent, we can say more: lim N → ∞ ∑ i = 1 N e i ( t ) Z i ( ω ) = X t ( ω ) {\displaystyle \lim _{N\to \infty }\sum _{i=1}^{N}e_{i}(t)Z_{i}(\omega )=X_{t}(\omega )} almost surely. 
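In practice the eigenpairs of the Fredholm equation rarely have closed forms, but discretizing the kernel on a quadrature grid (the Nyström method) reduces the equation to a matrix eigenvalue problem. The following minimal sketch (Python with NumPy; the grid size is an arbitrary choice) uses the Wiener covariance KX(s, t) = min(s, t) on [0, 1], whose exact eigenvalues are given in closed form in the Examples section below, and also checks numerically the Mercer identity KX(t, t) = Σ λk ek(t)² used at the end of the proof:

import numpy as np

# Nystrom discretization of  ∫_0^1 K_X(s,t) e_k(s) ds = λ_k e_k(t)
# for the Wiener covariance K_X(s,t) = min(s,t) (illustrative kernel).
n = 400
t = (np.arange(n) + 0.5) / n          # midpoint quadrature nodes
dt = 1.0 / n
K = np.minimum.outer(t, t)            # kernel matrix K[i, j] = min(t_i, t_j)

lam, vec = np.linalg.eigh(K * dt)     # the operator becomes a symmetric matrix
lam, vec = lam[::-1], vec[:, ::-1]    # decreasing eigenvalues

# Exact eigenvalues 1/((k - 1/2)^2 π^2) from the Examples section below:
k = np.arange(1, 6)
print(lam[:5])                        # numerical approximations
print(1.0 / ((k - 0.5) ** 2 * np.pi ** 2))

# Mercer check: partial sums Σ_{k<=N} λ_k e_k(t)^2 approach K(t,t) = t,
# so the residual E|X_t - S_N|^2 computed in the proof goes to 0.
e = vec / np.sqrt(dt)                 # quadrature-normalized eigenfunctions
partial = (lam[:50][None, :] * e[:, :50] ** 2).sum(axis=1)
print(np.max(np.abs(partial - t)))    # small residual

The eigenvectors, rescaled by the quadrature weight, approximate the continuous eigenfunctions √2 sin((k − 1/2)πt) of this kernel.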
=== The Karhunen–Loève transform decorrelates the process === This is a consequence of the Zk being uncorrelated. === The Karhunen–Loève expansion minimizes the total mean square error === In the introduction, we mentioned that the truncated Karhunen–Loève expansion was the best approximation of the original process in the sense that it reduces the total mean-square error resulting from its truncation. Because of this property, it is often said that the KL transform optimally compacts the energy. More specifically, given any orthonormal basis {fk} of L2([a, b]), we may decompose the process Xt as: X t ( ω ) = ∑ k = 1 ∞ A k ( ω ) f k ( t ) {\displaystyle X_{t}(\omega )=\sum _{k=1}^{\infty }A_{k}(\omega )f_{k}(t)} where A k ( ω ) = ∫ a b X t ( ω ) f k ( t ) d t {\displaystyle A_{k}(\omega )=\int _{a}^{b}X_{t}(\omega )f_{k}(t)\,dt} and we may approximate Xt by the finite sum X ^ t ( ω ) = ∑ k = 1 N A k ( ω ) f k ( t ) {\displaystyle {\hat {X}}_{t}(\omega )=\sum _{k=1}^{N}A_{k}(\omega )f_{k}(t)} for some integer N. Claim. Of all such approximations, the KL approximation is the one that minimizes the total mean square error (provided we have arranged the eigenvalues in decreasing order). === Explained variance === An important observation is that since the random coefficients Zk of the KL expansion are uncorrelated, the Bienaymé formula asserts that the variance of Xt is simply the sum of the variances of the individual components of the sum: var ⁡ [ X t ] = ∑ k = 1 ∞ e k ( t ) 2 var ⁡ [ Z k ] = ∑ k = 1 ∞ λ k e k ( t ) 2 {\displaystyle \operatorname {var} [X_{t}]=\sum _{k=1}^{\infty }e_{k}(t)^{2}\operatorname {var} [Z_{k}]=\sum _{k=1}^{\infty }\lambda _{k}e_{k}(t)^{2}} Integrating over [a, b] and using the orthonormality of the ek, we obtain that the total variance of the process is: ∫ a b var ⁡ [ X t ] d t = ∑ k = 1 ∞ λ k {\displaystyle \int _{a}^{b}\operatorname {var} [X_{t}]\,dt=\sum _{k=1}^{\infty }\lambda _{k}} In particular, the total variance of the N-truncated approximation is ∑ k = 1 N λ k . {\displaystyle \sum _{k=1}^{N}\lambda _{k}.} As a result, the N-truncated expansion explains ∑ k = 1 N λ k ∑ k = 1 ∞ λ k {\displaystyle {\frac {\sum _{k=1}^{N}\lambda _{k}}{\sum _{k=1}^{\infty }\lambda _{k}}}} of the variance; and if we are content with an approximation that explains, say, 95% of the variance, then we just have to determine an N ∈ N {\displaystyle N\in \mathbb {N} } such that ∑ k = 1 N λ k ∑ k = 1 ∞ λ k ≥ 0.95. {\displaystyle {\frac {\sum _{k=1}^{N}\lambda _{k}}{\sum _{k=1}^{\infty }\lambda _{k}}}\geq 0.95.} === The Karhunen–Loève expansion has the minimum representation entropy property === Given a representation of X t = ∑ k = 1 ∞ W k φ k ( t ) {\displaystyle X_{t}=\sum _{k=1}^{\infty }W_{k}\varphi _{k}(t)} , for some orthonormal basis φ k ( t ) {\displaystyle \varphi _{k}(t)} and random W k {\displaystyle W_{k}} , we let p k = E [ | W k | 2 ] / E [ | X t | L 2 2 ] {\displaystyle p_{k}=\mathbb {E} [|W_{k}|^{2}]/\mathbb {E} [|X_{t}|_{L^{2}}^{2}]} , so that ∑ k = 1 ∞ p k = 1 {\displaystyle \sum _{k=1}^{\infty }p_{k}=1} . We may then define the representation entropy to be H ( { φ k } ) = − ∑ k p k log ⁡ ( p k ) {\displaystyle H(\{\varphi _{k}\})=-\sum _{k}p_{k}\log(p_{k})} . Then we have H ( { φ k } ) ≥ H ( { e k } ) {\displaystyle H(\{\varphi _{k}\})\geq H(\{e_{k}\})} , for all choices of φ k {\displaystyle \varphi _{k}} . That is, the KL-expansion has minimal representation entropy. 
Proof: Denote the coefficients obtained for the basis e k ( t ) {\displaystyle e_{k}(t)} as p k {\displaystyle p_{k}} , and for φ k ( t ) {\displaystyle \varphi _{k}(t)} as q k {\displaystyle q_{k}} . Choose N ≥ 1 {\displaystyle N\geq 1} . Note that since the e k {\displaystyle e_{k}} minimize the mean squared error, we have that E | ∑ k = 1 N Z k e k ( t ) − X t | L 2 2 ≤ E | ∑ k = 1 N W k φ k ( t ) − X t | L 2 2 {\displaystyle \mathbb {E} \left|\sum _{k=1}^{N}Z_{k}e_{k}(t)-X_{t}\right|_{L^{2}}^{2}\leq \mathbb {E} \left|\sum _{k=1}^{N}W_{k}\varphi _{k}(t)-X_{t}\right|_{L^{2}}^{2}} Expanding the right-hand side, we get: E | ∑ k = 1 N W k φ k ( t ) − X t | L 2 2 = E | X t 2 | L 2 + ∑ k = 1 N ∑ ℓ = 1 N E [ W ℓ φ ℓ ( t ) W k ∗ φ k ∗ ( t ) ] L 2 − ∑ k = 1 N E [ W k φ k X t ∗ ] L 2 − ∑ k = 1 N E [ X t W k ∗ φ k ∗ ( t ) ] L 2 {\displaystyle \mathbb {E} \left|\sum _{k=1}^{N}W_{k}\varphi _{k}(t)-X_{t}\right|_{L^{2}}^{2}=\mathbb {E} |X_{t}^{2}|_{L^{2}}+\sum _{k=1}^{N}\sum _{\ell =1}^{N}\mathbb {E} [W_{\ell }\varphi _{\ell }(t)W_{k}^{*}\varphi _{k}^{*}(t)]_{L^{2}}-\sum _{k=1}^{N}\mathbb {E} [W_{k}\varphi _{k}X_{t}^{*}]_{L^{2}}-\sum _{k=1}^{N}\mathbb {E} [X_{t}W_{k}^{*}\varphi _{k}^{*}(t)]_{L^{2}}} Using the orthonormality of φ k ( t ) {\displaystyle \varphi _{k}(t)} , and expanding X t {\displaystyle X_{t}} in the φ k ( t ) {\displaystyle \varphi _{k}(t)} basis, we get that the right-hand side is equal to: E [ X t ] L 2 2 − ∑ k = 1 N E [ | W k | 2 ] {\displaystyle \mathbb {E} [X_{t}]_{L^{2}}^{2}-\sum _{k=1}^{N}\mathbb {E} [|W_{k}|^{2}]} We may perform an identical analysis for the e k ( t ) {\displaystyle e_{k}(t)} , and so rewrite the above inequality as: E [ X t ] L 2 2 − ∑ k = 1 N E [ | Z k | 2 ] ≤ E [ X t ] L 2 2 − ∑ k = 1 N E [ | W k | 2 ] {\displaystyle {\displaystyle \mathbb {E} [X_{t}]_{L^{2}}^{2}-\sum _{k=1}^{N}\mathbb {E} [|Z_{k}|^{2}]}\leq {\displaystyle \mathbb {E} [X_{t}]_{L^{2}}^{2}-\sum _{k=1}^{N}\mathbb {E} [|W_{k}|^{2}]}} Subtracting the common first term, and dividing by E [ | X t | L 2 2 ] {\displaystyle \mathbb {E} [|X_{t}|_{L^{2}}^{2}]} , we obtain that: ∑ k = 1 N p k ≥ ∑ k = 1 N q k {\displaystyle \sum _{k=1}^{N}p_{k}\geq \sum _{k=1}^{N}q_{k}} This implies that: − ∑ k = 1 ∞ p k log ⁡ ( p k ) ≤ − ∑ k = 1 ∞ q k log ⁡ ( q k ) {\displaystyle -\sum _{k=1}^{\infty }p_{k}\log(p_{k})\leq -\sum _{k=1}^{\infty }q_{k}\log(q_{k})} == Linear Karhunen–Loève approximations == Consider a whole class of signals we want to approximate over the first M vectors of a basis. These signals are modeled as realizations of a random vector Y[n] of size N. To optimize the approximation we design a basis that minimizes the average approximation error. This section proves that optimal bases are Karhunen–Loève bases that diagonalize the covariance matrix of Y. The random vector Y can be decomposed in an orthogonal basis { g m } 0 ≤ m < N {\displaystyle \left\{g_{m}\right\}_{0\leq m<N}} as follows: Y = ∑ m = 0 N − 1 ⟨ Y , g m ⟩ g m , {\displaystyle Y=\sum _{m=0}^{N-1}\left\langle Y,g_{m}\right\rangle g_{m},} where each ⟨ Y , g m ⟩ = ∑ n = 0 N − 1 Y [ n ] g m ∗ [ n ] {\displaystyle \left\langle Y,g_{m}\right\rangle =\sum _{n=0}^{N-1}{Y[n]}g_{m}^{*}[n]} is a random variable. 
The approximation from the first M ≤ N vectors of the basis is Y M = ∑ m = 0 M − 1 ⟨ Y , g m ⟩ g m {\displaystyle Y_{M}=\sum _{m=0}^{M-1}\left\langle Y,g_{m}\right\rangle g_{m}} The energy conservation in an orthogonal basis implies ε [ M ] = E { ‖ Y − Y M ‖ 2 } = ∑ m = M N − 1 E { | ⟨ Y , g m ⟩ | 2 } {\displaystyle \varepsilon [M]=\mathbf {E} \left\{\left\|Y-Y_{M}\right\|^{2}\right\}=\sum _{m=M}^{N-1}\mathbf {E} \left\{\left|\left\langle Y,g_{m}\right\rangle \right|^{2}\right\}} This error is related to the covariance of Y defined by R [ n , m ] = E { Y [ n ] Y ∗ [ m ] } {\displaystyle R[n,m]=\mathbf {E} \left\{Y[n]Y^{*}[m]\right\}} For any vector x[n] we denote by K the covariance operator represented by this matrix, E { | ⟨ Y , x ⟩ | 2 } = ⟨ K x , x ⟩ = ∑ n = 0 N − 1 ∑ m = 0 N − 1 R [ n , m ] x [ n ] x ∗ [ m ] {\displaystyle \mathbf {E} \left\{\left|\langle Y,x\rangle \right|^{2}\right\}=\langle Kx,x\rangle =\sum _{n=0}^{N-1}\sum _{m=0}^{N-1}R[n,m]x[n]x^{*}[m]} The error ε[M] is therefore a sum of the last N − M coefficients of the covariance operator ε [ M ] = ∑ m = M N − 1 ⟨ K g m , g m ⟩ {\displaystyle \varepsilon [M]=\sum _{m=M}^{N-1}{\left\langle Kg_{m},g_{m}\right\rangle }} The covariance operator K is Hermitian and positive and is thus diagonalized in an orthogonal basis called a Karhunen–Loève basis. The following theorem states that a Karhunen–Loève basis is optimal for linear approximations. Theorem (Optimality of Karhunen–Loève basis). Let K be a covariance operator. For all M ≥ 1, the approximation error ε [ M ] = ∑ m = M N − 1 ⟨ K g m , g m ⟩ {\displaystyle \varepsilon [M]=\sum _{m=M}^{N-1}\left\langle Kg_{m},g_{m}\right\rangle } is minimum if and only if { g m } 0 ≤ m < N {\displaystyle \left\{g_{m}\right\}_{0\leq m<N}} is a Karhunen–Loève basis ordered by decreasing eigenvalues. ⟨ K g m , g m ⟩ ≥ ⟨ K g m + 1 , g m + 1 ⟩ , 0 ≤ m < N − 1. {\displaystyle \left\langle Kg_{m},g_{m}\right\rangle \geq \left\langle Kg_{m+1},g_{m+1}\right\rangle ,\qquad 0\leq m<N-1.} == Non-Linear approximation in bases == Linear approximations project the signal on M vectors a priori. The approximation can be made more precise by choosing the M orthogonal vectors depending on the signal properties. This section analyzes the general performance of these non-linear approximations. A signal f ∈ H {\displaystyle f\in \mathrm {H} } is approximated with M vectors selected adaptively in an orthonormal basis for H {\displaystyle \mathrm {H} } B = { g m } m ∈ N {\displaystyle \mathrm {B} =\left\{g_{m}\right\}_{m\in \mathbb {N} }} Let f M {\displaystyle f_{M}} be the projection of f over M vectors whose indices are in IM: f M = ∑ m ∈ I M ⟨ f , g m ⟩ g m {\displaystyle f_{M}=\sum _{m\in I_{M}}\left\langle f,g_{m}\right\rangle g_{m}} The approximation error is the energy of the remaining coefficients ε [ M ] = ‖ f − f M ‖ 2 = ∑ m ∉ I M | ⟨ f , g m ⟩ | 2 {\displaystyle \varepsilon [M]=\left\|f-f_{M}\right\|^{2}=\sum _{m\notin I_{M}}\left|\left\langle f,g_{m}\right\rangle \right|^{2}} To minimize this error, the indices in IM must correspond to the M vectors having the largest inner product amplitude | ⟨ f , g m ⟩ | . {\displaystyle \left|\left\langle f,g_{m}\right\rangle \right|.} These are the vectors that are best correlated with f. They can thus be interpreted as the main features of f. The resulting error is necessarily smaller than the error of a linear approximation which selects the M approximation vectors independently of f, as the sketch below illustrates. 
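The gap between the two selection rules is easy to exhibit numerically. A minimal sketch (Python with NumPy; the test signal, a randomly shifted copy of the two-point signal f[n] = δ[n] − δ[n−1] studied in the non-optimality example below, and all sizes are illustrative choices) compares the linear error, which keeps the first M coefficients of a basis, with the non-linear error, which keeps the M largest:

import numpy as np

# Linear (first M vectors, fixed a priori) versus non-linear (M largest
# coefficients, chosen from the signal) approximation in the Dirac basis.
rng = np.random.default_rng(0)
N, M = 64, 2
f = np.zeros(N); f[0], f[1] = 1.0, -1.0        # f[n] = δ[n] - δ[n-1]
y = np.roll(f, rng.integers(N))                # random circular shift

coeffs = y                                     # Dirac-basis coefficients
linear_err = (np.abs(coeffs[M:]) ** 2).sum()   # drop all but the first M
sorted_amp = np.sort(np.abs(coeffs))[::-1]
nonlinear_err = (sorted_amp[M:] ** 2).sum()    # drop all but the M largest
print(linear_err, nonlinear_err)               # typically 2.0 versus 0.0

The non-linear error is never larger than the linear one, as stated above.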
Let us sort { | ⟨ f , g m ⟩ | } m ∈ N {\displaystyle \left\{\left|\left\langle f,g_{m}\right\rangle \right|\right\}_{m\in \mathbb {N} }} in decreasing order | ⟨ f , g m k ⟩ | ≥ | ⟨ f , g m k + 1 ⟩ | . {\displaystyle \left|\left\langle f,g_{m_{k}}\right\rangle \right|\geq \left|\left\langle f,g_{m_{k+1}}\right\rangle \right|.} The best non-linear approximation is f M = ∑ k = 1 M ⟨ f , g m k ⟩ g m k {\displaystyle f_{M}=\sum _{k=1}^{M}\left\langle f,g_{m_{k}}\right\rangle g_{m_{k}}} It can also be written as inner product thresholding: f M = ∑ m = 0 ∞ θ T ( ⟨ f , g m ⟩ ) g m {\displaystyle f_{M}=\sum _{m=0}^{\infty }\theta _{T}\left(\left\langle f,g_{m}\right\rangle \right)g_{m}} with T = | ⟨ f , g m M ⟩ | , θ T ( x ) = { x | x | ≥ T 0 | x | < T {\displaystyle T=\left|\left\langle f,g_{m_{M}}\right\rangle \right|,\qquad \theta _{T}(x)={\begin{cases}x&|x|\geq T\\0&|x|<T\end{cases}}} The non-linear error is ε [ M ] = ‖ f − f M ‖ 2 = ∑ k = M + 1 ∞ | ⟨ f , g m k ⟩ | 2 {\displaystyle \varepsilon [M]=\left\|f-f_{M}\right\|^{2}=\sum _{k=M+1}^{\infty }\left|\left\langle f,g_{m_{k}}\right\rangle \right|^{2}} This error goes quickly to zero as M increases, if the sorted values of | ⟨ f , g m k ⟩ | {\displaystyle \left|\left\langle f,g_{m_{k}}\right\rangle \right|} have a fast decay as k increases. This decay is quantified by computing the ℓ p {\displaystyle \ell ^{p}} norm of the signal inner products in B: ‖ f ‖ B , p = ( ∑ m = 0 ∞ | ⟨ f , g m ⟩ | p ) 1 p {\displaystyle \|f\|_{\mathrm {B} ,p}=\left(\sum _{m=0}^{\infty }\left|\left\langle f,g_{m}\right\rangle \right|^{p}\right)^{\frac {1}{p}}} The following theorem relates the decay of ε[M] to ‖ f ‖ B , p {\displaystyle \|f\|_{\mathrm {B} ,p}} Theorem (decay of error). If ‖ f ‖ B , p < ∞ {\displaystyle \|f\|_{\mathrm {B} ,p}<\infty } with p < 2 then ε [ M ] ≤ ‖ f ‖ B , p 2 2 p − 1 M 1 − 2 p {\displaystyle \varepsilon [M]\leq {\frac {\|f\|_{\mathrm {B} ,p}^{2}}{{\frac {2}{p}}-1}}M^{1-{\frac {2}{p}}}} and ε [ M ] = o ( M 1 − 2 p ) . {\displaystyle \varepsilon [M]=o\left(M^{1-{\frac {2}{p}}}\right).} Conversely, if ε [ M ] = o ( M 1 − 2 p ) {\displaystyle \varepsilon [M]=o\left(M^{1-{\frac {2}{p}}}\right)} then ‖ f ‖ B , q < ∞ {\displaystyle \|f\|_{\mathrm {B} ,q}<\infty } for any q > p. === Non-optimality of Karhunen–Loève bases === To further illustrate the differences between linear and non-linear approximations, we study the decomposition of a simple non-Gaussian random vector in a Karhunen–Loève basis. Processes whose realizations have a random translation are stationary. The Karhunen–Loève basis is then a Fourier basis and we study its performance. 
To simplify the analysis, consider a random vector Y[n] of size N that is a random shift modulo N of a deterministic signal f[n] of zero mean ∑ n = 0 N − 1 f [ n ] = 0 {\displaystyle \sum _{n=0}^{N-1}f[n]=0} Y [ n ] = f [ ( n − p ) mod N ] {\displaystyle Y[n]=f[(n-p){\bmod {N}}]} The random shift P is uniformly distributed on [0, N − 1]: Pr ( P = p ) = 1 N , 0 ≤ p < N {\displaystyle \Pr(P=p)={\frac {1}{N}},\qquad 0\leq p<N} Clearly E { Y [ n ] } = 1 N ∑ p = 0 N − 1 f [ ( n − p ) mod N ] = 0 {\displaystyle \mathbf {E} \{Y[n]\}={\frac {1}{N}}\sum _{p=0}^{N-1}f[(n-p){\bmod {N}}]=0} and R [ n , k ] = E { Y [ n ] Y [ k ] } = 1 N ∑ p = 0 N − 1 f [ ( n − p ) mod N ] f [ ( k − p ) mod N ] = 1 N f Θ f ¯ [ n − k ] , f ¯ [ n ] = f [ − n ] {\displaystyle R[n,k]=\mathbf {E} \{Y[n]Y[k]\}={\frac {1}{N}}\sum _{p=0}^{N-1}f[(n-p){\bmod {N}}]f[(k-p){\bmod {N}}]={\frac {1}{N}}f\Theta {\bar {f}}[n-k],\quad {\bar {f}}[n]=f[-n]} (here Θ denotes circular convolution). Hence R [ n , k ] = R Y [ n − k ] , R Y [ k ] = 1 N f Θ f ¯ [ k ] {\displaystyle R[n,k]=R_{Y}[n-k],\qquad R_{Y}[k]={\frac {1}{N}}f\Theta {\bar {f}}[k]} Since RY is N-periodic, Y is a circular stationary random vector. The covariance operator is a circular convolution with RY and is therefore diagonalized in the discrete Fourier Karhunen–Loève basis { 1 N e i 2 π m n / N } 0 ≤ m < N . {\displaystyle \left\{{\frac {1}{\sqrt {N}}}e^{i2\pi mn/N}\right\}_{0\leq m<N}.} The power spectrum is the Fourier transform of RY: P Y [ m ] = R ^ Y [ m ] = 1 N | f ^ [ m ] | 2 {\displaystyle P_{Y}[m]={\hat {R}}_{Y}[m]={\frac {1}{N}}\left|{\hat {f}}[m]\right|^{2}} Example: Consider an extreme case where f [ n ] = δ [ n ] − δ [ n − 1 ] {\displaystyle f[n]=\delta [n]-\delta [n-1]} . A theorem stated above guarantees that the Fourier Karhunen–Loève basis produces a smaller expected approximation error than a canonical basis of Diracs { g m [ n ] = δ [ n − m ] } 0 ≤ m < N {\displaystyle \left\{g_{m}[n]=\delta [n-m]\right\}_{0\leq m<N}} . Indeed, we do not know a priori the abscissa of the non-zero coefficients of Y, so there is no particular Dirac that is better adapted to perform the approximation. But the Fourier vectors cover the whole support of Y and thus absorb a part of the signal energy. E { | ⟨ Y [ n ] , 1 N e i 2 π m n / N ⟩ | 2 } = P Y [ m ] = 4 N sin 2 ⁡ ( π m N ) {\displaystyle \mathbf {E} \left\{\left|\left\langle Y[n],{\frac {1}{\sqrt {N}}}e^{i2\pi mn/N}\right\rangle \right|^{2}\right\}=P_{Y}[m]={\frac {4}{N}}\sin ^{2}\left({\frac {\pi m}{N}}\right)} Selecting higher-frequency Fourier coefficients thus yields a better mean-square approximation than choosing a priori a few Dirac vectors to perform the approximation. The situation is totally different for non-linear approximations. If f [ n ] = δ [ n ] − δ [ n − 1 ] {\displaystyle f[n]=\delta [n]-\delta [n-1]} then the discrete Fourier basis is extremely inefficient because f and hence Y have an energy that is almost uniformly spread among all Fourier vectors. In contrast, since f has only two non-zero coefficients in the Dirac basis, a non-linear approximation of Y with M ≥ 2 gives zero error. == Principal component analysis == We have established the Karhunen–Loève theorem and derived a few properties thereof. We also noted that one hurdle in its application was the numerical cost of determining the eigenvalues and eigenfunctions of its covariance operator through the Fredholm integral equation of the second kind ∫ a b K X ( s , t ) e k ( s ) d s = λ k e k ( t ) . 
{\displaystyle \int _{a}^{b}K_{X}(s,t)e_{k}(s)\,ds=\lambda _{k}e_{k}(t).} However, when applied to a discrete and finite process ( X n ) n ∈ { 1 , … , N } {\displaystyle \left(X_{n}\right)_{n\in \{1,\ldots ,N\}}} , the problem takes a much simpler form and standard linear algebra can be used to carry out the calculations. Note that a continuous process can also be sampled at N points in time in order to reduce the problem to a finite version. We henceforth consider a random N-dimensional vector X = ( X 1 X 2 … X N ) T {\displaystyle X=\left(X_{1}~X_{2}~\ldots ~X_{N}\right)^{T}} . As mentioned above, X could contain N samples of a signal, but it can hold many other kinds of data, depending on the field of application. For instance, it could be the answers to a survey or economic data in an econometric analysis. As in the continuous version, we assume that X is centered; otherwise we can let X := X − μ X {\displaystyle X:=X-\mu _{X}} (where μ X {\displaystyle \mu _{X}} is the mean vector of X), which is centered. Let us adapt the procedure to the discrete case. === Covariance matrix === Recall that the main implication and difficulty of the KL transformation is computing the eigenvectors of the linear operator associated to the covariance function, which are given by the solutions to the integral equation written above. Define Σ, the covariance matrix of X, as an N × N matrix whose elements are given by: Σ i j = E [ X i X j ] , ∀ i , j ∈ { 1 , … , N } {\displaystyle \Sigma _{ij}=\mathbf {E} [X_{i}X_{j}],\qquad \forall i,j\in \{1,\ldots ,N\}} Rewriting the above integral equation to suit the discrete case, we observe that it turns into: ∑ j = 1 N Σ i j e j = λ e i ⇔ Σ e = λ e {\displaystyle \sum _{j=1}^{N}\Sigma _{ij}e_{j}=\lambda e_{i}\quad \Leftrightarrow \quad \Sigma e=\lambda e} where e = ( e 1 e 2 … e N ) T {\displaystyle e=(e_{1}~e_{2}~\ldots ~e_{N})^{T}} is an N-dimensional vector. The integral equation thus reduces to a simple matrix eigenvalue problem, which explains why the PCA has such a broad domain of applications. Since Σ is a symmetric positive-semidefinite matrix, it possesses a set of orthonormal eigenvectors forming a basis of R N {\displaystyle \mathbb {R} ^{N}} , and we write { λ i , φ i } i ∈ { 1 , … , N } {\displaystyle \{\lambda _{i},\varphi _{i}\}_{i\in \{1,\ldots ,N\}}} for this set of eigenvalues and corresponding eigenvectors, listed in decreasing values of λi. Let also Φ be the orthonormal matrix consisting of these eigenvectors: Φ := ( φ 1 φ 2 … φ N ) T Φ T Φ = I {\displaystyle {\begin{aligned}\Phi &:=\left(\varphi _{1}~\varphi _{2}~\ldots ~\varphi _{N}\right)^{T}\\\Phi ^{T}\Phi &=I\end{aligned}}} === Principal component transform === It remains to perform the actual KL transformation, called the principal component transform in this case. Recall that the transform was found by expanding the process with respect to the basis spanned by the eigenvectors of the covariance function. 
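The eigenvalue problem Σe = λe above is solved with standard numerical linear algebra. A minimal sketch (Python with NumPy; the synthetic data set and the 95% threshold are illustrative choices) estimates Σ from centered samples, diagonalizes it, and truncates at the explained-variance level discussed after the transform below:

import numpy as np

# Estimate Σ_ij = E[X_i X_j] from centered samples and diagonalize it.
rng = np.random.default_rng(1)
n_samples, N = 5000, 6
X = rng.standard_normal((n_samples, N)) @ rng.standard_normal((N, N))
X = X - X.mean(axis=0)                       # center, as assumed in the text

Sigma = X.T @ X / n_samples                  # sample covariance matrix
lam, Phi = np.linalg.eigh(Sigma)
lam, Phi = lam[::-1], Phi[:, ::-1]           # decreasing eigenvalues
print(np.allclose(Phi.T @ Phi, np.eye(N)))   # orthonormal eigenvectors: True

alpha = 0.95                                 # explained-variance threshold
ratio = np.cumsum(lam) / lam.sum()
K = int(np.searchsorted(ratio, alpha)) + 1
print(K, ratio[K - 1])                       # smallest K with ratio >= alpha

The columns of Φ obtained this way define the principal component transform Y = ΦᵀX written out next.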
In this case, we hence have: X = ∑ i = 1 N ⟨ φ i , X ⟩ φ i = ∑ i = 1 N φ i T X φ i {\displaystyle X=\sum _{i=1}^{N}\langle \varphi _{i},X\rangle \varphi _{i}=\sum _{i=1}^{N}\varphi _{i}^{T}X\varphi _{i}} In a more compact form, the principal component transform of X is defined by: { Y = Φ T X X = Φ Y {\displaystyle {\begin{cases}Y=\Phi ^{T}X\\X=\Phi Y\end{cases}}} The i-th component of Y is Y i = φ i T X {\displaystyle Y_{i}=\varphi _{i}^{T}X} , the projection of X on φ i {\displaystyle \varphi _{i}} , and the inverse transform X = ΦY yields the expansion of X on the space spanned by the φ i {\displaystyle \varphi _{i}} : X = ∑ i = 1 N Y i φ i = ∑ i = 1 N ⟨ φ i , X ⟩ φ i {\displaystyle X=\sum _{i=1}^{N}Y_{i}\varphi _{i}=\sum _{i=1}^{N}\langle \varphi _{i},X\rangle \varphi _{i}} As in the continuous case, we may reduce the dimensionality of the problem by truncating the sum at some K ∈ { 1 , … , N } {\displaystyle K\in \{1,\ldots ,N\}} such that ∑ i = 1 K λ i ∑ i = 1 N λ i ≥ α {\displaystyle {\frac {\sum _{i=1}^{K}\lambda _{i}}{\sum _{i=1}^{N}\lambda _{i}}}\geq \alpha } where α is the explained variance threshold we wish to set. We can also reduce the dimensionality through the use of multilevel dominant eigenvector estimation (MDEE). == Examples == === The Wiener process === There are numerous equivalent characterizations of the Wiener process, which is a mathematical formalization of Brownian motion. Here we regard it as the centered standard Gaussian process Wt with covariance function K W ( t , s ) = cov ⁡ ( W t , W s ) = min ( s , t ) . {\displaystyle K_{W}(t,s)=\operatorname {cov} (W_{t},W_{s})=\min(s,t).} We restrict the time domain to [a, b]=[0,1] without loss of generality. The eigenfunctions of the covariance kernel are easily determined. These are e k ( t ) = 2 sin ⁡ ( ( k − 1 2 ) π t ) {\displaystyle e_{k}(t)={\sqrt {2}}\sin \left(\left(k-{\tfrac {1}{2}}\right)\pi t\right)} and the corresponding eigenvalues are λ k = 1 ( k − 1 2 ) 2 π 2 . {\displaystyle \lambda _{k}={\frac {1}{(k-{\frac {1}{2}})^{2}\pi ^{2}}}.} This gives the following representation of the Wiener process: Theorem. There is a sequence {Zi}i of independent Gaussian random variables with mean zero and variance 1 such that W t = 2 ∑ k = 1 ∞ Z k sin ⁡ ( ( k − 1 2 ) π t ) ( k − 1 2 ) π . {\displaystyle W_{t}={\sqrt {2}}\sum _{k=1}^{\infty }Z_{k}{\frac {\sin \left(\left(k-{\frac {1}{2}}\right)\pi t\right)}{\left(k-{\frac {1}{2}}\right)\pi }}.} Note that this representation is only valid for t ∈ [ 0 , 1 ] . {\displaystyle t\in [0,1].} On larger intervals, the increments are not independent. As stated in the theorem, convergence is in the L2 norm and uniform in t. === The Brownian bridge === Similarly, the Brownian bridge B t = W t − t W 1 {\displaystyle B_{t}=W_{t}-tW_{1}} , which is a stochastic process with covariance function K B ( t , s ) = min ( t , s ) − t s {\displaystyle K_{B}(t,s)=\min(t,s)-ts} , can be represented as the series B t = ∑ k = 1 ∞ Z k 2 sin ⁡ ( k π t ) k π {\displaystyle B_{t}=\sum _{k=1}^{\infty }Z_{k}{\frac {{\sqrt {2}}\sin(k\pi t)}{k\pi }}} == Applications == Adaptive optics systems sometimes use K–L functions to reconstruct wave-front phase information (Dai 1996, JOSA A). Karhunen–Loève expansion is closely related to the singular value decomposition (SVD). The latter has myriad applications in image processing, radar, seismology, and the like. 
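The Wiener representation above is directly usable for simulation: truncating the series and drawing the Zk gives approximate Brownian sample paths. A minimal sketch (Python with NumPy; the truncation order, grid, and path count are arbitrary illustrative choices):

import numpy as np

# Sample paths of the Wiener process from its truncated KL series
#   W_t ≈ √2 Σ_{k<=K} Z_k sin((k - 1/2) π t) / ((k - 1/2) π),  Z_k ~ N(0,1).
rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 513)
K = 200                                   # truncation order (illustrative)
k = np.arange(1, K + 1)[:, None]
basis = np.sin((k - 0.5) * np.pi * t) / ((k - 0.5) * np.pi)

n_paths = 4000
Z = rng.standard_normal((n_paths, K))
W = np.sqrt(2.0) * Z @ basis              # shape (n_paths, len(t))

# Sanity check against the covariance min(s, t): Var[W_t] should be ≈ t.
print(W.var(axis=0)[::128])               # ≈ 0, 0.25, 0.5, 0.75, 1
                                          # (up to truncation and MC error)

Replacing the basis by √2 sin(kπt)/(kπ) yields the Brownian bridge series in the same way.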
If one has independent vector observations from a vector-valued stochastic process then the left singular vectors are maximum likelihood estimates of the ensemble KL expansion. === Applications in signal estimation and detection === ==== Detection of a known continuous signal S(t) ==== In communication, we usually have to decide whether a signal from a noisy channel contains valuable information. The following hypothesis test is used for detecting a continuous signal s(t) from the channel output X(t), where N(t) is the channel noise, usually assumed to be a zero-mean Gaussian process with correlation function R N ( t , s ) = E [ N ( t ) N ( s ) ] {\displaystyle R_{N}(t,s)=E[N(t)N(s)]} H : X ( t ) = N ( t ) , {\displaystyle H:X(t)=N(t),} K : X ( t ) = N ( t ) + s ( t ) , t ∈ ( 0 , T ) {\displaystyle K:X(t)=N(t)+s(t),\quad t\in (0,T)} ==== Signal detection in white noise ==== When the channel noise is white, its correlation function is R N ( t ) = 1 2 N 0 δ ( t ) , {\displaystyle R_{N}(t)={\tfrac {1}{2}}N_{0}\delta (t),} and it has constant power spectral density. In a physically practical channel, the noise power is finite, so: S N ( f ) = { N 0 2 | f | < w 0 | f | > w {\displaystyle S_{N}(f)={\begin{cases}{\frac {N_{0}}{2}}&|f|<w\\0&|f|>w\end{cases}}} Then the noise correlation function is a sinc function with zeros at n 2 ω , n ∈ Z . {\displaystyle {\frac {n}{2\omega }},n\in \mathbf {Z} .} Since samples of the noise taken at these zeros are uncorrelated and Gaussian, they are independent. Thus we can take samples from X(t) with time spacing Δ t = 1 2 ω within ( 0 , T ) . {\displaystyle \Delta t={\frac {1}{2\omega }}{\text{ within }}(0,T).} Let X i = X ( i Δ t ) {\displaystyle X_{i}=X(i\,\Delta t)} . We have a total of n = T Δ t = T ( 2 ω ) = 2 ω T {\displaystyle n={\frac {T}{\Delta t}}=T(2\omega )=2\omega T} i.i.d. observations { X 1 , X 2 , … , X n } {\displaystyle \{X_{1},X_{2},\ldots ,X_{n}\}} to develop the likelihood-ratio test. Define the signal samples S i = S ( i Δ t ) {\displaystyle S_{i}=S(i\,\Delta t)} ; the problem becomes: H : X i = N i , {\displaystyle H:X_{i}=N_{i},} K : X i = N i + S i , i = 1 , 2 , … , n . {\displaystyle K:X_{i}=N_{i}+S_{i},i=1,2,\ldots ,n.} The log-likelihood ratio is L ( x _ ) = ∑ i = 1 n 2 S i x i − S i 2 2 σ 2 , {\displaystyle {\mathcal {L}}({\underline {x}})=\sum _{i=1}^{n}{\frac {2S_{i}x_{i}-S_{i}^{2}}{2\sigma ^{2}}},} so the test reduces to thresholding the correlation Δ t ∑ i = 1 n S i x i = ∑ i = 1 n S ( i Δ t ) x ( i Δ t ) Δ t ≷ λ {\displaystyle \Delta t\sum _{i=1}^{n}S_{i}x_{i}=\sum _{i=1}^{n}S(i\,\Delta t)x(i\,\Delta t)\,\Delta t\gtrless \lambda } As Δt → 0, let: G = ∫ 0 T S ( t ) x ( t ) d t . {\displaystyle G=\int _{0}^{T}S(t)x(t)\,dt.} Then G is the test statistic and the Neyman–Pearson optimum detector is G ( x _ ) > G 0 ⇒ K , G ( x _ ) < G 0 ⇒ H . {\displaystyle G({\underline {x}})>G_{0}\Rightarrow K,\quad G({\underline {x}})<G_{0}\Rightarrow H.} As G is Gaussian, we can characterize it by finding its mean and variance. Then we get H : G ∼ N ( 0 , 1 2 N 0 E ) {\displaystyle H:G\sim N\left(0,{\tfrac {1}{2}}N_{0}E\right)} K : G ∼ N ( E , 1 2 N 0 E ) {\displaystyle K:G\sim N\left(E,{\tfrac {1}{2}}N_{0}E\right)} where E = ∫ 0 T S 2 ( t ) d t {\displaystyle \mathbf {E} =\int _{0}^{T}S^{2}(t)\,dt} is the signal energy. 
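These distributions of G can be checked by Monte Carlo simulation, and the empirical rejection rates anticipate the false-alarm and detection probabilities derived next. In the minimal sketch below (Python with NumPy), the signal S(t) = sin(2πt), N0, α, and all sizes are illustrative choices; Φ⁻¹(0.95) ≈ 1.6449 is hardcoded to avoid a statistics-library dependency, and white noise of two-sided density N0/2 is approximated by independent samples of variance N0/(2Δt):

import numpy as np

# Monte Carlo check of the matched-filter detector
#   G = ∫_0^T S(t) x(t) dt,   G_0 = sqrt(N0 E / 2) * Φ^{-1}(1 - α).
rng = np.random.default_rng(5)
T, n = 1.0, 256
dt = T / n
t = (np.arange(n) + 0.5) * dt
S = np.sin(2 * np.pi * t)                    # illustrative signal
E = (S ** 2).sum() * dt                      # signal energy, here ≈ 1/2
N0, alpha = 0.2, 0.05
z = 1.6449                                   # Φ^{-1}(0.95), standard value
G0 = np.sqrt(0.5 * N0 * E) * z

trials = 20000
noise = rng.standard_normal((trials, n)) * np.sqrt(N0 / (2 * dt))
G_H = noise @ S * dt                         # statistic under H (noise only)
G_K = (noise + S) @ S * dt                   # statistic under K (signal present)
print((G_H > G0).mean())                     # ≈ α = 0.05
print((G_K > G0).mean())                     # ≈ β = Φ(√(2E/N0) - z) ≈ 0.72 here
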
The false alarm error α = ∫ G 0 ∞ N ( 0 , 1 2 N 0 E ) d G ⇒ G 0 = 1 2 N 0 E Φ − 1 ( 1 − α ) {\displaystyle \alpha =\int _{G_{0}}^{\infty }N\left(0,{\tfrac {1}{2}}N_{0}E\right)\,dG\Rightarrow G_{0}={\sqrt {{\tfrac {1}{2}}N_{0}E}}\Phi ^{-1}(1-\alpha )} And the probability of detection: β = ∫ G 0 ∞ N ( E , 1 2 N 0 E ) d G = 1 − Φ ( G 0 − E 1 2 N 0 E ) = Φ ( 2 E N 0 − Φ − 1 ( 1 − α ) ) , {\displaystyle \beta =\int _{G_{0}}^{\infty }N\left(E,{\tfrac {1}{2}}N_{0}E\right)\,dG=1-\Phi \left({\frac {G_{0}-E}{\sqrt {{\tfrac {1}{2}}N_{0}E}}}\right)=\Phi \left({\sqrt {\frac {2E}{N_{0}}}}-\Phi ^{-1}(1-\alpha )\right),} where Φ is the cdf of the standard normal (Gaussian) variable. ==== Signal detection in colored noise ==== When N(t) is colored (correlated in time) Gaussian noise with zero mean and covariance function R N ( t , s ) = E [ N ( t ) N ( s ) ] , {\displaystyle R_{N}(t,s)=E[N(t)N(s)],} we cannot sample independent discrete observations by evenly spacing the time. Instead, we can use the K–L expansion to decorrelate the noise process and get independent Gaussian observation 'samples'. The K–L expansion of N(t): N ( t ) = ∑ i = 1 ∞ N i Φ i ( t ) , 0 < t < T , {\displaystyle N(t)=\sum _{i=1}^{\infty }N_{i}\Phi _{i}(t),\quad 0<t<T,} where N i = ∫ N ( t ) Φ i ( t ) d t {\displaystyle N_{i}=\int N(t)\Phi _{i}(t)\,dt} and the orthonormal basis functions { Φ i ( t ) } {\displaystyle \{\Phi _{i}(t)\}} are generated by the kernel R N ( t , s ) {\displaystyle R_{N}(t,s)} , i.e., the solutions to ∫ 0 T R N ( t , s ) Φ i ( s ) d s = λ i Φ i ( t ) , var ⁡ [ N i ] = λ i . {\displaystyle \int _{0}^{T}R_{N}(t,s)\Phi _{i}(s)\,ds=\lambda _{i}\Phi _{i}(t),\quad \operatorname {var} [N_{i}]=\lambda _{i}.} Do the expansion: S ( t ) = ∑ i = 1 ∞ S i Φ i ( t ) , {\displaystyle S(t)=\sum _{i=1}^{\infty }S_{i}\Phi _{i}(t),} where S i = ∫ 0 T S ( t ) Φ i ( t ) d t {\displaystyle S_{i}=\int _{0}^{T}S(t)\Phi _{i}(t)\,dt} , then X i = ∫ 0 T X ( t ) Φ i ( t ) d t = N i {\displaystyle X_{i}=\int _{0}^{T}X(t)\Phi _{i}(t)\,dt=N_{i}} under H and N i + S i {\displaystyle N_{i}+S_{i}} under K. Let X ¯ = { X 1 , X 2 , … } {\displaystyle {\overline {X}}=\{X_{1},X_{2},\dots \}} ; the N i {\displaystyle N_{i}} are independent Gaussian r.v.'s with variance λ i {\displaystyle \lambda _{i}} under H: { X i } {\displaystyle \{X_{i}\}} are independent Gaussian r.v.'s. f H [ x ( t ) | 0 < t < T ] = f H ( x _ ) = ∏ i = 1 ∞ 1 2 π λ i exp ⁡ ( − x i 2 2 λ i ) {\displaystyle f_{H}[x(t)|0<t<T]=f_{H}({\underline {x}})=\prod _{i=1}^{\infty }{\frac {1}{\sqrt {2\pi \lambda _{i}}}}\exp \left(-{\frac {x_{i}^{2}}{2\lambda _{i}}}\right)} under K: { X i − S i } {\displaystyle \{X_{i}-S_{i}\}} are independent Gaussian r.v.'s. f K [ x ( t ) ∣ 0 < t < T ] = f K ( x _ ) = ∏ i = 1 ∞ 1 2 π λ i exp ⁡ ( − ( x i − S i ) 2 2 λ i ) {\displaystyle f_{K}[x(t)\mid 0<t<T]=f_{K}({\underline {x}})=\prod _{i=1}^{\infty }{\frac {1}{\sqrt {2\pi \lambda _{i}}}}\exp \left(-{\frac {(x_{i}-S_{i})^{2}}{2\lambda _{i}}}\right)} Hence, the log-LR is given by L ( x _ ) = ∑ i = 1 ∞ 2 S i x i − S i 2 2 λ i {\displaystyle {\mathcal {L}}({\underline {x}})=\sum _{i=1}^{\infty }{\frac {2S_{i}x_{i}-S_{i}^{2}}{2\lambda _{i}}}} and the optimum detector is G = ∑ i = 1 ∞ S i x i λ i > G 0 ⇒ K , G < G 0 ⇒ H . {\displaystyle G=\sum _{i=1}^{\infty }{\frac {S_{i}x_{i}}{\lambda _{i}}}>G_{0}\Rightarrow K,\quad G<G_{0}\Rightarrow H.} Define k ( t ) = ∑ i = 1 ∞ S i λ i Φ i ( t ) , 0 < t < T , {\displaystyle k(t)=\sum _{i=1}^{\infty }{\frac {S_{i}}{\lambda _{i}}}\Phi _{i}(t),0<t<T,} then G = ∫ 0 T k ( t ) x ( t ) d t . 
{\displaystyle G=\int _{0}^{T}k(t)x(t)\,dt.} ===== How to find k(t) ===== Since ∫ 0 T R N ( t , s ) k ( s ) d s = ∑ i = 1 ∞ S i λ i ∫ 0 T R N ( t , s ) Φ i ( s ) d s = ∑ i = 1 ∞ S i Φ i ( t ) = S ( t ) , {\displaystyle \int _{0}^{T}R_{N}(t,s)k(s)\,ds=\sum _{i=1}^{\infty }{\frac {S_{i}}{\lambda _{i}}}\int _{0}^{T}R_{N}(t,s)\Phi _{i}(s)\,ds=\sum _{i=1}^{\infty }S_{i}\Phi _{i}(t)=S(t),} k(t) is the solution to ∫ 0 T R N ( t , s ) k ( s ) d s = S ( t ) . {\displaystyle \int _{0}^{T}R_{N}(t,s)k(s)\,ds=S(t).} If N(t) is wide-sense stationary, ∫ 0 T R N ( t − s ) k ( s ) d s = S ( t ) , {\displaystyle \int _{0}^{T}R_{N}(t-s)k(s)\,ds=S(t),} which is known as the Wiener–Hopf equation. The equation can be solved by taking the Fourier transform, but this is not practically realizable, since an infinite spectrum requires spectral factorization. A special case in which k(t) is easy to calculate is white Gaussian noise. ∫ 0 T N 0 2 δ ( t − s ) k ( s ) d s = S ( t ) ⇒ k ( t ) = C S ( t ) , 0 < t < T . {\displaystyle \int _{0}^{T}{\frac {N_{0}}{2}}\delta (t-s)k(s)\,ds=S(t)\Rightarrow k(t)=CS(t),\quad 0<t<T.} The corresponding impulse response is h(t) = k(T − t) = CS(T − t). Setting C = 1, this is just the result we arrived at in the previous section for detecting a signal in white noise. ===== Test threshold for Neyman–Pearson detector ===== Since X(t) is a Gaussian process, G = ∫ 0 T k ( t ) x ( t ) d t {\displaystyle G=\int _{0}^{T}k(t)x(t)\,dt,} is a Gaussian random variable that can be characterized by its mean and variance. E [ G ∣ H ] = ∫ 0 T k ( t ) E [ x ( t ) ∣ H ] d t = 0 E [ G ∣ K ] = ∫ 0 T k ( t ) E [ x ( t ) ∣ K ] d t = ∫ 0 T k ( t ) S ( t ) d t ≡ ρ E [ G 2 ∣ H ] = ∫ 0 T ∫ 0 T k ( t ) k ( s ) R N ( t , s ) d t d s = ∫ 0 T k ( t ) ( ∫ 0 T k ( s ) R N ( t , s ) d s ) d t = ∫ 0 T k ( t ) S ( t ) d t = ρ var ⁡ [ G ∣ H ] = E [ G 2 ∣ H ] − ( E [ G ∣ H ] ) 2 = ρ E [ G 2 ∣ K ] = ∫ 0 T ∫ 0 T k ( t ) k ( s ) E [ x ( t ) x ( s ) ] d t d s = ∫ 0 T ∫ 0 T k ( t ) k ( s ) ( R N ( t , s ) + S ( t ) S ( s ) ) d t d s = ρ + ρ 2 var ⁡ [ G ∣ K ] = E [ G 2 | K ] − ( E [ G | K ] ) 2 = ρ + ρ 2 − ρ 2 = ρ {\displaystyle {\begin{aligned}\mathbf {E} [G\mid H]&=\int _{0}^{T}k(t)\mathbf {E} [x(t)\mid H]\,dt=0\\\mathbf {E} [G\mid K]&=\int _{0}^{T}k(t)\mathbf {E} [x(t)\mid K]\,dt=\int _{0}^{T}k(t)S(t)\,dt\equiv \rho \\\mathbf {E} [G^{2}\mid H]&=\int _{0}^{T}\int _{0}^{T}k(t)k(s)R_{N}(t,s)\,dt\,ds=\int _{0}^{T}k(t)\left(\int _{0}^{T}k(s)R_{N}(t,s)\,ds\right)\,dt=\int _{0}^{T}k(t)S(t)\,dt=\rho \\\operatorname {var} [G\mid H]&=\mathbf {E} [G^{2}\mid H]-(\mathbf {E} [G\mid H])^{2}=\rho \\\mathbf {E} [G^{2}\mid K]&=\int _{0}^{T}\int _{0}^{T}k(t)k(s)\mathbf {E} [x(t)x(s)]\,dt\,ds=\int _{0}^{T}\int _{0}^{T}k(t)k(s)(R_{N}(t,s)+S(t)S(s))\,dt\,ds=\rho +\rho ^{2}\\\operatorname {var} [G\mid K]&=\mathbf {E} [G^{2}|K]-(\mathbf {E} [G|K])^{2}=\rho +\rho ^{2}-\rho ^{2}=\rho \end{aligned}}} Hence, we obtain the distributions of G under H and K: H : G ∼ N ( 0 , ρ ) {\displaystyle H:G\sim N(0,\rho )} K : G ∼ N ( ρ , ρ ) {\displaystyle K:G\sim N(\rho ,\rho )} The false alarm error is α = ∫ G 0 ∞ N ( 0 , ρ ) d G = 1 − Φ ( G 0 ρ ) . {\displaystyle \alpha =\int _{G_{0}}^{\infty }N(0,\rho )\,dG=1-\Phi \left({\frac {G_{0}}{\sqrt {\rho }}}\right).} So the test threshold for the Neyman–Pearson optimum detector is G 0 = ρ Φ − 1 ( 1 − α ) . 
{\displaystyle G_{0}={\sqrt {\rho }}\Phi ^{-1}(1-\alpha ).} Its power of detection is β = ∫ G 0 ∞ N ( ρ , ρ ) d G = Φ ( ρ − Φ − 1 ( 1 − α ) ) {\displaystyle \beta =\int _{G_{0}}^{\infty }N(\rho ,\rho )\,dG=\Phi \left({\sqrt {\rho }}-\Phi ^{-1}(1-\alpha )\right)} When the noise is a white Gaussian process, the signal power is ρ = ∫ 0 T k ( t ) S ( t ) d t = ∫ 0 T S ( t ) 2 d t = E . {\displaystyle \rho =\int _{0}^{T}k(t)S(t)\,dt=\int _{0}^{T}S(t)^{2}\,dt=E.} ===== Prewhitening ===== For some types of colored noise, a typical practice is to add a prewhitening filter before the matched filter to transform the colored noise into white noise. For example, N(t) is a wide-sense stationary colored noise with correlation function R N ( τ ) = B N 0 4 e − B | τ | {\displaystyle R_{N}(\tau )={\frac {BN_{0}}{4}}e^{-B|\tau |}} S N ( f ) = N 0 2 ( 1 + ( w B ) 2 ) {\displaystyle S_{N}(f)={\frac {N_{0}}{2(1+({\frac {w}{B}})^{2})}}} The transfer function of the prewhitening filter is H ( f ) = 1 + j w B . {\displaystyle H(f)=1+j{\frac {w}{B}}.} ==== Detection of a Gaussian random signal in Additive white Gaussian noise (AWGN) ==== When the signal we want to detect from the noisy channel is also random, for example, a white Gaussian process X(t), we can still implement the K–L expansion to get an independent sequence of observations. In this case, the detection problem is described as follows: H 0 : Y ( t ) = N ( t ) {\displaystyle H_{0}:Y(t)=N(t)} H 1 : Y ( t ) = N ( t ) + X ( t ) , 0 < t < T . {\displaystyle H_{1}:Y(t)=N(t)+X(t),\quad 0<t<T.} X(t) is a random process with correlation function R X ( t , s ) = E { X ( t ) X ( s ) } {\displaystyle R_{X}(t,s)=E\{X(t)X(s)\}} The K–L expansion of X(t) is X ( t ) = ∑ i = 1 ∞ X i Φ i ( t ) , {\displaystyle X(t)=\sum _{i=1}^{\infty }X_{i}\Phi _{i}(t),} where X i = ∫ 0 T X ( t ) Φ i ( t ) d t {\displaystyle X_{i}=\int _{0}^{T}X(t)\Phi _{i}(t)\,dt} and Φ i ( t ) {\displaystyle \Phi _{i}(t)} are solutions to ∫ 0 T R X ( t , s ) Φ i ( s ) d s = λ i Φ i ( t ) . {\displaystyle \int _{0}^{T}R_{X}(t,s)\Phi _{i}(s)ds=\lambda _{i}\Phi _{i}(t).} So the X i {\displaystyle X_{i}} are an independent sequence of r.v.'s with zero mean and variance λ i {\displaystyle \lambda _{i}} . Expanding Y(t) and N(t) by Φ i ( t ) {\displaystyle \Phi _{i}(t)} , we get Y i = ∫ 0 T Y ( t ) Φ i ( t ) d t = ∫ 0 T [ N ( t ) + X ( t ) ] Φ i ( t ) d t = N i + X i , {\displaystyle Y_{i}=\int _{0}^{T}Y(t)\Phi _{i}(t)\,dt=\int _{0}^{T}[N(t)+X(t)]\Phi _{i}(t)\,dt=N_{i}+X_{i},} where N i = ∫ 0 T N ( t ) Φ i ( t ) d t . {\displaystyle N_{i}=\int _{0}^{T}N(t)\Phi _{i}(t)\,dt.} As N(t) is Gaussian white noise, the N i {\displaystyle N_{i}} are an i.i.d. sequence of r.v.'s with zero mean and variance 1 2 N 0 {\displaystyle {\tfrac {1}{2}}N_{0}} , and the problem simplifies as follows: H 0 : Y i = N i {\displaystyle H_{0}:Y_{i}=N_{i}} H 1 : Y i = N i + X i {\displaystyle H_{1}:Y_{i}=N_{i}+X_{i}} The Neyman–Pearson optimal test: Λ = f Y ∣ H 1 f Y ∣ H 0 = C e ∑ i = 1 ∞ y i 2 2 λ i 1 2 N 0 ( 1 2 N 0 + λ i ) , {\displaystyle \Lambda ={\frac {f_{Y}\mid H_{1}}{f_{Y}\mid H_{0}}}=Ce^{\sum _{i=1}^{\infty }{\frac {y_{i}^{2}}{2}}{\frac {\lambda _{i}}{{\tfrac {1}{2}}N_{0}({\tfrac {1}{2}}N_{0}+\lambda _{i})}}},} so the log-likelihood ratio is L = ln ⁡ ( Λ ) = K + ∑ i = 1 ∞ 1 2 y i 2 λ i N 0 2 ( N 0 2 + λ i ) . 
{\displaystyle {\mathcal {L}}=\ln(\Lambda )=K+\sum _{i=1}^{\infty }{\tfrac {1}{2}}y_{i}^{2}{\frac {\lambda _{i}}{{\frac {N_{0}}{2}}\left({\frac {N_{0}}{2}}+\lambda _{i}\right)}}.} Since X ^ i = λ i N 0 2 + λ i Y i {\displaystyle {\widehat {X}}_{i}={\frac {\lambda _{i}}{{\frac {N_{0}}{2}}+\lambda _{i}}}Y_{i}} is just the minimum-mean-square estimate of X i {\displaystyle X_{i}} given Y i {\displaystyle Y_{i}} , L = K + 1 N 0 ∑ i = 1 ∞ Y i X ^ i . {\displaystyle {\mathcal {L}}=K+{\frac {1}{N_{0}}}\sum _{i=1}^{\infty }Y_{i}{\widehat {X}}_{i}.} The K–L expansion has the following property: If f ( t ) = ∑ f i Φ i ( t ) , g ( t ) = ∑ g i Φ i ( t ) , {\displaystyle f(t)=\sum f_{i}\Phi _{i}(t),g(t)=\sum g_{i}\Phi _{i}(t),} where f i = ∫ 0 T f ( t ) Φ i ( t ) d t , g i = ∫ 0 T g ( t ) Φ i ( t ) d t . {\displaystyle f_{i}=\int _{0}^{T}f(t)\Phi _{i}(t)\,dt,\quad g_{i}=\int _{0}^{T}g(t)\Phi _{i}(t)\,dt.} then ∑ i = 1 ∞ f i g i = ∫ 0 T g ( t ) f ( t ) d t . {\displaystyle \sum _{i=1}^{\infty }f_{i}g_{i}=\int _{0}^{T}g(t)f(t)\,dt.} So let X ^ ( t ∣ T ) = ∑ i = 1 ∞ X ^ i Φ i ( t ) , L = K + 1 N 0 ∫ 0 T Y ( t ) X ^ ( t ∣ T ) d t . {\displaystyle {\widehat {X}}(t\mid T)=\sum _{i=1}^{\infty }{\widehat {X}}_{i}\Phi _{i}(t),\quad {\mathcal {L}}=K+{\frac {1}{N_{0}}}\int _{0}^{T}Y(t){\widehat {X}}(t\mid T)\,dt.} A noncausal filter Q(t,s) can be used to get the estimate through X ^ ( t ∣ T ) = ∫ 0 T Q ( t , s ) Y ( s ) d s . {\displaystyle {\widehat {X}}(t\mid T)=\int _{0}^{T}Q(t,s)Y(s)\,ds.} By the orthogonality principle, Q(t,s) satisfies ∫ 0 T Q ( t , s ) R X ( s , λ ) d s + N 0 2 Q ( t , λ ) = R X ( t , λ ) , 0 < λ < T , 0 < t < T . {\displaystyle \int _{0}^{T}Q(t,s)R_{X}(s,\lambda )\,ds+{\tfrac {N_{0}}{2}}Q(t,\lambda )=R_{X}(t,\lambda ),0<\lambda <T,0<t<T.} However, for practical reasons, it is necessary to further derive the causal filter h(t,s), where h(t,s) = 0 for s > t, to get the estimate X ^ ( t ∣ t ) {\displaystyle {\widehat {X}}(t\mid t)} . Specifically, Q ( t , s ) = h ( t , s ) + h ( s , t ) − ∫ 0 T h ( λ , t ) h ( s , λ ) d λ {\displaystyle Q(t,s)=h(t,s)+h(s,t)-\int _{0}^{T}h(\lambda ,t)h(s,\lambda )\,d\lambda } == See also == Principal component analysis Polynomial chaos Reproducing kernel Hilbert space Mercer's theorem == Notes == == References == Stark, Henry; Woods, John W. (1986). Probability, Random Processes, and Estimation Theory for Engineers. Prentice-Hall, Inc. ISBN 978-0-13-711706-2. OL 21138080M. Ghanem, Roger; Spanos, Pol (1991). Stochastic finite elements: a spectral approach. Springer-Verlag. ISBN 978-0-387-97456-9. OL 1865197M. Guikhman, I.; Skorokhod, A. (1977). Introduction a la Théorie des Processus Aléatoires. Éditions MIR. Simon, B. (1979). Functional Integration and Quantum Physics. Academic Press. Karhunen, Kari (1947). "Über lineare Methoden in der Wahrscheinlichkeitsrechnung". Ann. Acad. Sci. Fennicae. Ser. A I. Math.-Phys. 37: 1–79. Loève, M. (1978). Probability theory Vol. II. Graduate Texts in Mathematics. Vol. 46 (4 ed.). Springer-Verlag. ISBN 978-0-387-90262-3. Dai, G. (1996). "Modal wave-front reconstruction with Zernike polynomials and Karhunen–Loeve functions". JOSA A. 13 (6): 1218. Bibcode:1996JOSAA..13.1218D. doi:10.1364/JOSAA.13.001218. Wu, B.; Zhu, J.; Najm, F. (2005). "A Non-parametric Approach for Dynamic Range Estimation of Nonlinear Systems". In Proceedings of the Design Automation Conference, 2005, pp. 841–844. Wu, B.; Zhu, J.; Najm, F. (2006). "Dynamic Range Estimation". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol. 
== See also == Principal component analysis Polynomial chaos Reproducing kernel Hilbert space Mercer's theorem == Notes == == References == Stark, Henry; Woods, John W. (1986). Probability, Random Processes, and Estimation Theory for Engineers. Prentice-Hall, Inc. ISBN 978-0-13-711706-2. OL 21138080M. Ghanem, Roger; Spanos, Pol (1991). Stochastic finite elements: a spectral approach. Springer-Verlag. ISBN 978-0-387-97456-9. OL 1865197M. Guikhman, I.; Skorokhod, A. (1977). Introduction a la Théorie des Processus Aléatoires. Éditions MIR. Simon, B. (1979). Functional Integration and Quantum Physics. Academic Press. Karhunen, Kari (1947). "Über lineare Methoden in der Wahrscheinlichkeitsrechnung". Ann. Acad. Sci. Fennicae. Ser. A I. Math.-Phys. 37: 1–79. Loève, M. (1978). Probability theory Vol. II. Graduate Texts in Mathematics. Vol. 46 (4 ed.). Springer-Verlag. ISBN 978-0-387-90262-3. Dai, G. (1996). "Modal wave-front reconstruction with Zernike polynomials and Karhunen–Loeve functions". JOSA A. 13 (6): 1218. Bibcode:1996JOSAA..13.1218D. doi:10.1364/JOSAA.13.001218. Wu, B.; Zhu, J.; Najm, F. (2005). "A Non-parametric Approach for Dynamic Range Estimation of Nonlinear Systems". In Proceedings of the Design Automation Conference, pp. 841–844. Wu, B.; Zhu, J.; Najm, F. (2006). "Dynamic Range Estimation". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 25 (9): 1618–1636. Jorgensen, Palle E. T.; Song, Myung-Sin (2007). "Entropy Encoding, Hilbert Space and Karhunen–Loeve Transforms". Journal of Mathematical Physics. 48 (10): 103503. arXiv:math-ph/0701056. Bibcode:2007JMP....48j3503J. doi:10.1063/1.2793569. S2CID 17039075. == External links == Mathematica KarhunenLoeveDecomposition function. E161: Computer Image Processing and Analysis notes by Prof. Ruye Wang at Harvey Mudd College
Wikipedia/Karhunen-Loève_transform
Adaptive Transform Acoustic Coding (ATRAC) is a family of proprietary audio compression algorithms developed by Sony. MiniDisc was the first commercial product to incorporate ATRAC, in 1992. ATRAC allowed a relatively small disc like MiniDisc to have the same running time as a CD while storing audio information with minimal perceptible loss in quality. Improvements to the codec in the form of ATRAC3, ATRAC3plus, and ATRAC Advanced Lossless followed in 1999, 2002, and 2006 respectively. Files in ATRAC3 format originally had the .aa3 extension; however, in most cases, the files would be stored in an OpenMG Audio container using the extension .oma. Previously, files that were encrypted with OpenMG had the .omg extension, which was replaced by .oma starting in SonicStage v2.1. Encryption is no longer compulsory as of v3.2. Other MiniDisc manufacturers such as Sharp and Panasonic also implemented their own versions of the ATRAC codec. == History == ATRAC was developed for Sony's MiniDisc format. It was updated with version 2, then version 3, version 4, version 4.5, and Type R and Type S. The first major update was ATRAC3 (not to be confused with version 3 of the original ATRAC) in 1999. ATRAC3 was used on MiniDisc as well as the Network Walkman and Vaio Music Clip. ATRAC3plus was launched in 2003 for Hi-MD, but was also compatible with some PlayStation, VAIO and Xplod devices. On 31 March 2008 Sony all but dropped the ATRAC-related codecs in the United States and Europe, and in its SonicStage-powered Connect Music Store (Sony's equivalent of iTunes and the iTunes Music Store). This was partly due to low adoption of the format, with a source claiming that 90% of European Walkman users did not use ATRAC. Walkman digital players outside Japan no longer worked with ATRAC after September 2007. ATRAC was the only codec available for downloading music from mora until 1 October 2012, when the store transitioned to a DRM-free model; it began offering FLAC files the following year. ATRAC9 was designed for PlayStation audio and debuted with the PlayStation Vita. == Bitrate quality == ATRAC's 292 kbit/s bitrate used on the original MiniDiscs was designed to be close to CD audio quality. Later versions of ATRAC improved on earlier ones at similar bitrates. For comparison, CDs are encoded at 1411.2 kbit/s, and lossless encoders can encode most CDs below 1000 kbit/s, with further bitrate reduction for easier-to-encode content such as voice. == Performance == ATRAC algorithms were developed in close cooperation with LSI integrated circuit development engineers within Sony in order to deliver a product that could encode at high speeds and with minimal power consumption. This contrasts with other codecs developed on computers without regard for the constraints of portable hardware. This is reflected in the design of the ATRAC codecs, which emphasize processing smaller groups of samples at a time to save memory, at the cost of compression efficiency and additional multiplies. These trade-offs are logical for DSP systems, where memory was often at a premium compared to multiplier performance. Sony Walkmans offer better battery life when playing ATRAC files than when playing MP3 files. However, as Sony only pushed ATRAC compatibility in Sony Ericsson Walkman series phones in the Japanese market, it is not supported in GSM/UMTS market phones. Sony's Xplod series of car audio CD players support ATRAC CDs. MiniDiscs with ATRAC-format songs have, in the past, been supported on Eclipse-brand car stereos.
== Formats == === ATRAC (1) (versions 1.0–4.5, Type R/S) === ATRAC1 was first used in Sony's own theater format SDDS system in the 1990s, and in this context is a direct competitor to Dolby Digital (AC3) and DTS. SDDS uses ATRAC1 with 8-channel encoding, and with a total encoding rate over all the channels of 1168 kbit/s. Two stacked quadrature mirror filters (QMFs) split the signal into 3 parts (see the sketch following this subsection): 0 to 5.5125 kHz 5.5125 to 11.025 kHz 11.025 to 22.05 kHz Full stereo (i.e., independent channel) encoding is used, with a data rate of 292 kbit/s. The high-frequency lowpass cutoff depends on the complexity of the material; some encodings retain content up to 22.05 kHz. ATRAC1 can also be used in mono (one-channel) mode, doubling recording time. FFmpeg has an implementation of an ATRAC1 decoder.
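The stacked-QMF analysis described above is easy to prototype. The following sketch is illustrative only: it uses a generic FIR half-band prototype rather than the codec's actual 48-tap filters, and it omits the synthesis stage needed for perfect reconstruction.

import numpy as np
from scipy.signal import firwin, lfilter

# ATRAC1-style band splitting with stacked two-band QMFs (illustrative).
h = firwin(48, 0.5)                    # lowpass prototype (assumed design)
g = h * (-1) ** np.arange(len(h))      # mirror highpass: g[n] = (-1)^n h[n]

def qmf_split(x):
    """One QMF stage: (low band, high band), each decimated by 2."""
    return lfilter(h, 1.0, x)[::2], lfilter(g, 1.0, x)[::2]

def atrac1_bands(x):
    """Split into 0-fs/8, fs/8-fs/4 and fs/4-fs/2 bands
    (0-5.5125, 5.5125-11.025 and 11.025-22.05 kHz at fs = 44.1 kHz)."""
    low, high = qmf_split(x)           # first split at fs/4
    low2, mid = qmf_split(low)         # second split of the low band at fs/8
    return low2, mid, high

x = np.sin(2 * np.pi * 3000 * np.arange(4410) / 44100)   # 3 kHz test tone
print([float(np.sqrt(np.mean(b**2))) for b in atrac1_bands(x)])
# the energy concentrates in the first (lowest) band, as expected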
=== ATRAC3 (LP2 and LP4 Modes) === Like ATRAC1 and MP3, ATRAC3 is also a hybrid subband-MDCT encoder, but with several differences. In ATRAC3, three stacked QMFs split the signal into 4 parts: 0 to 2.75625 kHz (DC to f/16) 2.75625 to 5.5125 kHz (f/16 to f/8) 5.5125 to 11.025 kHz (f/8 to f/4) 11.025 to 22.05 kHz (f/4 to f/2) The four subbands are then MDCT encoded using a fixed-length transform. Unlike nearly all modern formats, the transform length cannot be varied to optimize the coding of transients. Instead, a simpler transient encoding technique called gain control is used, in which the gain of different subbands is varied during a transient prior to MDCT and then restored during decoding after the inverse MDCT, to try to smooth over transients. Additionally, prior to quantization, tonal components are subtracted from the signal and independently quantized. During decoding, they are separately reconstructed and added back to reform the original MDCT coefficients. Sony claims the major advantage of ATRAC3 is its coding efficiency, which was tuned for portable DSPs, which provide less computing power and battery life. However, as ATRAC is a hybrid subband-MDCT codec that is algorithmically very similar to MP3, any advantage is probably exaggerated. Compared to newer formats such as Ogg Vorbis, which use a simple MDCT rather than a hybrid, ATRAC3 must perform an additional computationally expensive inverse QMF, although the hybrid system significantly reduces memory usage, which was likely a factor given the limited memory available when ATRAC was first developed. LP2 Mode This uses a 132 kbit/s data rate, the quality of which is advertised to be similar to that of MP3 encoded at a similar bit rate. However, in an independent double-blind listening test (2004/05) against Ogg Vorbis, AAC, and LAME VBR MP3, ATRAC3 came last. LP4 Mode This reduces the data rate to 66 kbit/s (half that of LP2), partly by using joint stereo coding and a lowpass filter around 13.5 kHz. It allows 324 minutes to be recorded on an 80-minute MiniDisc, with the same padding required as LP2. Notes FFmpeg has an implementation of an ATRAC3 decoder, which was converted to fixed precision and implemented in the Rockbox series of firmware for ARM, Coldfire and MIPS processors. RealAudio 8 is a high-bitrate implementation of ATRAC3 (up to 352.8 kbit/s). atracdenc is an open-source ATRAC3-compatible encoder, which can also use the RealAudio container. The PlayStation 3 video game Race Driver: Grid uses 224 simultaneous streams of ATRAC3-compressed audio, with between one and eight channels per stream at sample rates between 24 and 48 kHz. Each stream is filtered using 512 frequency bands of adaptive equalisation and routed via six reverb units running on the same SPU co-processor (one of eight on the PS3's Cell chip), alongside 7.1-channel hybrid third-order Ambisonic mixing. === ATRAC3plus === This codec is used in Sony Hi-MD Walkman devices (e.g., "Hi-LP" and "Hi-SP"), Network Walkman players, Memory Stick players, the VAIO Pocket, the PS3 and PSP consoles, and ATRAC CD players. It is a hybrid subband/MDCT codec based on a 16-channel QMF bank followed by a 128-point MDCT. Prior to MDCT coding, Generalized Harmonic Analysis (GHA) is used to extract tonal components, an improved version of the process used in ATRAC3. As in previous ATRAC versions, gain control is used to control pre-echo rather than variable-sized transforms, although different MDCT windows are apparently possible. SonicStage version 3.4, released in February 2006, introduced CD ripping at bitrates of 320 and 352 kbit/s. The available bitrates are: 48, 64, 96, 128, 160, 192, 256, 320 and 352 kbit/s. The newer bitrates are not always compatible with older hardware decoders, although some older hardware has been found to be compatible with certain newer ATRAC3plus bitrates. MiniDiscs recorded in this format are incompatible with older players. In a test conducted by an independent firm, but financed by Sony, it was concluded that ATRAC3plus at 64 kbit/s is equal in subjective sound quality to an obsolete MP3 encoder at 128 kbit/s. Performance against modern high-quality MP3 encoders was not evaluated. === ATRAC Advanced Lossless === ATRAC Advanced Lossless is a "scalable" lossless audio codec that records a lossy ATRAC3 or ATRAC3plus stream and supplements it with a stream of correction information stored within the file itself that allows the original signal to be reproduced, if desired. A player/decoder can extract and use just the ATRAC3 or ATRAC3plus data, or it can combine that with the correction stream to perfectly reproduce the original audio information. This allows the file to be decoded as either lossless or lossy, while keeping a single file smaller than storing separate lossy and lossless versions of the same audio. Compression is approximately 30–80% of the original file. Benefits of scalable compression include backward compatibility, such that older devices that are not AAL-aware can still play the ATRAC3 stream without understanding the AAL format, and faster transfer speeds between portable audio devices and a PC. ATRAC Advanced Lossless is widely supported in older Walkman players and SonicStage version 4 or later. SonicStage 4 allows download of ATRAC Advanced Lossless to MiniDisc players, the PlayStation Portable, and the PlayStation 3. Recent Walkman players do not support ATRAC Advanced Lossless/ATRAC. AAL's use of a "core" (lossy) and "residual" (correction) stream is similar to the idea behind Opus, MPEG-4 SLS, DTS-HD Master Audio, Dolby TrueHD and Ogg Vorbis bitrate peeling. In fact, AAL was the first to be released in the commercial market with this scheme for backward compatibility. WavPack hybrid mode and OptimFROG DualStream are in the same category, but store the correction stream in a separate file.
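The core-plus-residual idea behind scalable lossless coding can be illustrated with a toy example in which coarse quantization stands in for the lossy codec. This is a conceptual sketch, not the AAL bitstream format:

import numpy as np

# Toy "core + residual" scheme: the core alone is a lossy decode; adding the
# correction stream reproduces the original samples exactly.
def encode(pcm, step=64):
    core = np.round(pcm / step).astype(np.int32)   # lossy core stream
    residual = pcm - core * step                   # correction stream
    return core, residual

def decode_lossy(core, step=64):
    return core * step                             # legacy players: core only

def decode_lossless(core, residual, step=64):
    return core * step + residual                  # core + correction = exact

pcm = np.random.default_rng(1).integers(-2**15, 2**15, 1024, dtype=np.int32)
core, res = encode(pcm)
assert np.array_equal(decode_lossless(core, res), pcm)  # bit-exact reconstruction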
=== ATRAC9 === According to Sony, ATRAC9 is a high-compression audio codec optimized for games, offering low delay (granularity) and low CPU and memory usage. It is used in the PS5, PS4 and PS Vita consoles. Audio middleware such as FMOD and Audiokinetic Wwise support it. FFmpeg has an implementation of an ATRAC9 decoder. == See also == Lossy compression OpenMG SonicStage Walkman == References == == External links == Sony.net, ATRAC technology page.
Wikipedia/Adaptive_Transform_Acoustic_Coding
The Joint Photographic Experts Group (JPEG) is the joint committee between ISO/IEC JTC 1/SC 29 and ITU-T Study Group 16 that created and maintains the JPEG, JPEG 2000, JPEG XR, JPEG XT, JPEG XS, JPEG XL, and related digital image standards. It also has responsibility for maintenance of the JBIG and JBIG2 standards that were developed by the former Joint Bi-level Image Experts Group. Within ISO/IEC JTC 1, JPEG is Working Group 1 (WG 1) of Subcommittee 29 (SC 29) and has the formal title JPEG Coding of digital representations of images, where it is one of eight working groups in SC 29. In the ITU-T (formerly called the CCITT), its work falls in the domain of the ITU-T Visual Coding Experts Group (VCEG), which is Question 6 of Study Group 16. JPEG has typically held meetings three or four times annually in North America, Asia and Europe. The chairman of JPEG (termed its Convenor in ISO/IEC terminology) is Prof. Touradj Ebrahimi of École Polytechnique Fédérale de Lausanne, who previously had led JPEG 2000 development within the JPEG committee and also had a leading role in MPEG-4 standardization. == History == In April 1983, ISO began work on adding photo-quality graphics to text terminals. In the mid-1980s, both the CCITT (now ITU-T) and ISO had standardization groups for image coding: CCITT Study Group VIII (SG8) – Telematic Services and ISO TC97 SC2 WG8 – Coding of Audio and Picture Information. Both were historically focused on image communication. The JPEG committee was created in 1986 and the Joint (CCITT/ISO) Bi-level Image Group (JBIG) was created in 1988. Former chairs of JPEG include Greg Wallace of Digital Equipment Corporation and Daniel Lee of Yahoo. Fumitaka Ono of Tokyo Polytechnic University was chair of the former JBIG group that has since been merged into JPEG. == Standards published and under development == JPEG (Joint Photographic Experts Group) is Working Group 1 of ISO/IEC JTC 1/SC 29, titled JPEG Coding of digital representations of images (working as a joint team with ITU-T SG 16). It has developed various standards, which have been published by ITU-T and/or ISO/IEC. The standards developed by the JPEG (and former JBIG) sub-groups are referred to as a joint development of ISO/IEC JTC 1/SC 29/WG 1 and ITU-T SG16. The JPEG standards typically consist of different Parts in ISO/IEC terminology. Each Part is a separate document that covers a certain aspect of a suite of standards that share a project number, and the Parts can be adopted separately as individual standards or used together. For the JPEG standards that are published jointly with ITU-T, each ISO/IEC Part corresponds to a separate ITU-T Recommendation (i.e., a separate standard). Once published, JPEG standards have also often been revised by later amendments and/or new editions – e.g., to add optional extended capabilities or improve the editorial quality of the specifications. The standards developed and under development by JPEG include the JPEG, JPEG 2000, JPEG XR, JPEG XT, JPEG XS and JPEG XL families named above. == See also == Moving Picture Experts Group (MPEG) Joint Bi-level Image Experts Group (JBIG) == References == == External links == Official website
Wikipedia/Joint_Photographic_Experts_Group
Enhanced Variable Rate CODEC (EVRC) is a speech codec used in CDMA networks. It was developed in 1995 to replace the QCELP vocoder, which used more bandwidth on the carrier's network; EVRC's primary goal was thus to offer mobile carriers more capacity on their networks without increasing the amount of bandwidth or wireless spectrum needed. EVRC uses RCELP technology. EVRC compresses each 20 milliseconds of 8000 Hz, 16-bit sampled speech input into output frames of one of three different sizes: full rate – 171 bits (8.55 kbit/s), half rate – 80 bits (4.0 kbit/s), eighth rate – 16 bits (0.8 kbit/s). A quarter rate was not included in the original EVRC specification and eventually became part of EVRC-B.
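The bitrates follow directly from the frame sizes: at 8000 Hz, a 20 ms frame holds 160 samples, and the rate in kbit/s is the frame size in bits divided by 0.020 s. A quick check:

# EVRC frame arithmetic: bitrate = frame bits / 0.020 s per 20 ms frame.
FRAME_BITS = {"full": 171, "half": 80, "eighth": 16}   # bits per 20 ms frame
for rate, bits in FRAME_BITS.items():
    print(f"{rate}: {bits} bits -> {bits / 0.020 / 1000:.2f} kbit/s")
# full: 171 bits -> 8.55 kbit/s, half: 4.00 kbit/s, eighth: 0.80 kbit/s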
EVRC was replaced by SMV. Recently, however, SMV itself has been replaced by the new CDMA2000 4GV codecs. 4GV is the next-generation 3GPP2 standards-based EVRC-B codec. 4GV is designed to allow service providers to dynamically prioritize voice capacity on their network as required. EVRC can also be used in the 3GPP2 container file format, 3G2. == References == == External links == 3GPP2 specification EVRC – The Savior of CDMA? RFC 4788 - Enhancements to RTP Payload Formats for EVRC Family Codecs
Wikipedia/Enhanced_Variable_Rate_Codec
Dynamic Resolution Adaptation (DRA) was an audio encoding specification developed by DigiRise Technology. It had been selected as the Chinese national audio coding standard, and declared suitable for China Multimedia Mobile Broadcasting and DVB-H, as addressed in the International Journal of Digital Multimedia Broadcasting. The format was recognised by the CTA EDID Timing Extension Block standard (used by many A/V interfaces) and by the Blu-ray Disc specification, beginning with Blu-ray Disc 2.3. No discs were released with DRA audio; such discs were expected for the Chinese market, but they never materialized. == References == == External links == Yu-Li You and Wenhua Ma, "DRA Audio Coding Standard: An Overview". Fa-Long Luo (ed.), Mobile Multimedia Broadcasting Standards, Springer US, 2009. ISBN 978-0-387-78262-1 (Print), ISBN 978-0-387-78263-8 (Online).
Wikipedia/Dynamic_Resolution_Adaptation
Advanced Systems Format (formerly Advanced Streaming Format, Active Streaming Format) is Microsoft's proprietary digital audio/digital video container format, especially meant for streaming media. ASF is part of the Media Foundation framework. == Overview and features == ASF is based on serialized objects, which are essentially byte sequences identified by a GUID marker (a minimal parsing sketch appears at the end of this section). The format does not specify how (i.e. with which codec) the video or audio should be encoded; it just specifies the structure of the video/audio stream. This is similar to the function performed by the QuickTime File Format, AVI, or Ogg formats. One of the objectives of ASF was to support playback from digital media servers, HTTP servers, and local storage devices such as hard disk drives. The most common media contained within an ASF file are Windows Media Audio (WMA) and Windows Media Video (WMV). The most common file extensions for ASF files are .WMA (audio-only files using Windows Media Audio, with MIME-type audio/x-ms-wma) and .WMV (files containing video, using the Windows Media Audio and Video codecs, with MIME-type video/x-ms-asf). These files are identical to the old .ASF files but for their extension and MIME-type. The different extensions are used to make it easier to identify the content of a media file. ASF files can also contain objects representing metadata, such as the artist, title, album and genre for an audio track, or the director of a video track, much like the ID3 tags of MP3 files. It supports scalable media types and stream prioritization; as such, it is a format optimized for streaming. The ASF container provides the framework for digital rights management in Windows Media Audio and Windows Media Video. An analysis of an older scheme used in WMA revealed that it used a combination of elliptic-curve cryptography key exchange, the DES block cipher, a custom block cipher, the RC4 stream cipher and the SHA-1 hash function. ASF container-based media are sometimes still streamed on the internet either through the MMS protocol or the RTSP protocol. Mostly, however, they contain material encoded for 'progressive download', which can be distributed by any webserver and then offers the same advantages as streaming: the file starts playing as soon as a minimum number of bytes is received and the rest of the download continues in the background while one is watching or listening. The Library of Congress Digital Preservation project considers ASF to be the de facto successor of RIFF. In 2010 Google picked RIFF as the container format for WebP.
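A minimal parser for the serialized-object layout mentioned above can be sketched as follows. Each top-level object is a 16-byte GUID followed by a 64-bit little-endian size that counts the 24-byte header itself; the demo object and its GUID below are fabricated purely for illustration.

import struct, uuid

def read_objects(data: bytes):
    """Walk a buffer of ASF-style objects: GUID (16 bytes, little-endian
    field order) + QWORD size (includes the 24-byte header) + body."""
    pos = 0
    while pos + 24 <= len(data):
        guid = uuid.UUID(bytes_le=data[pos:pos + 16])
        (size,) = struct.unpack_from("<Q", data, pos + 16)
        yield guid, data[pos + 24:pos + size]
        pos += size

# Build one fake object to demonstrate parsing (GUID chosen arbitrarily).
demo_guid = uuid.uuid4()
body = b"hello"
blob = demo_guid.bytes_le + struct.pack("<Q", 24 + len(body)) + body
for guid, payload in read_objects(blob):
    print(guid, payload)        # -> the demo GUID and b'hello'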
== License == The specification is downloadable from the Microsoft website, and the format can be implemented under a license from Microsoft that, however, does not allow distribution of source code and is not compatible with open-source licenses. The author of the free software project VirtualDub reported that a Microsoft employee informed him that his software violated a Microsoft patent regarding ASF playback. Certain error-correcting techniques related to ASF were covered in the United States by Microsoft's Patent 6,041,345 (Levi et al., granted March 21, 2000), which expired on August 10, 2019. == See also == Audio Video Interleave (AVI) Comparison of container formats == References == == External links == An Overview of Advanced Systems Format Overview of the ASF Format Library of Congress analysis of ASF format sustainability ASF Container Format - v2.0 (freely available but unused) and v1.0 (reconstructed) MSDN How To Embed Windows Media Player in a HTML Web Page (For Webmasters) Creating A Windows Media Custom Experience (For Webmasters)
Wikipedia/Advanced_Systems_Format
Spectral methods are a class of techniques used in applied mathematics and scientific computing to numerically solve certain differential equations. The idea is to write the solution of the differential equation as a sum of certain "basis functions" (for example, as a Fourier series which is a sum of sinusoids) and then to choose the coefficients in the sum in order to satisfy the differential equation as well as possible. Spectral methods and finite-element methods are closely related and built on the same ideas; the main difference between them is that spectral methods use basis functions that are generally nonzero over the whole domain, while finite element methods use basis functions that are nonzero only on small subdomains (compact support). Consequently, spectral methods connect variables globally while finite elements do so locally. Partially for this reason, spectral methods have excellent error properties, with the so-called "exponential convergence" being the fastest possible, when the solution is smooth. However, there are no known three-dimensional single-domain spectral shock-capturing results (shock waves are not smooth). In the finite-element community, a method where the degree of the elements is very high or increases as the grid parameter h decreases is sometimes called a spectral-element method. Spectral methods can be used to solve differential equations (PDEs, ODEs, eigenvalue problems, etc.) and optimization problems. When applying spectral methods to time-dependent PDEs, the solution is typically written as a sum of basis functions with time-dependent coefficients; substituting this in the PDE yields a system of ODEs in the coefficients, which can be solved using any numerical method for ODEs. Eigenvalue problems for ODEs are similarly converted to matrix eigenvalue problems. Spectral methods were developed in a long series of papers by Steven Orszag starting in 1969 including, but not limited to, Fourier series methods for periodic geometry problems, polynomial spectral methods for finite and unbounded geometry problems, pseudospectral methods for highly nonlinear problems, and spectral iteration methods for fast solution of steady-state problems. The implementation of the spectral method is normally accomplished either with collocation or with a Galerkin or a tau approach. For very small problems, the spectral method is unique in that solutions may be written out symbolically, yielding a practical alternative to series solutions for differential equations. Spectral methods can be computationally less expensive and easier to implement than finite element methods; they shine best when high accuracy is sought in simple domains with smooth solutions. However, because of their global nature, the matrices associated with each computational step are dense, and computational efficiency will quickly suffer when there are many degrees of freedom (with some exceptions, for example if matrix applications can be written as Fourier transforms). For larger problems and nonsmooth solutions, finite elements will generally work better due to sparse matrices and better modelling of discontinuities and sharp bends. == Examples of spectral methods == === A concrete, linear example === Here we presume an understanding of basic multivariate calculus and Fourier series.
If g ( x , y ) {\displaystyle g(x,y)} is a known, complex-valued function of two real variables, and g is periodic in x and y (that is, g ( x , y ) = g ( x + 2 π , y ) = g ( x , y + 2 π ) {\displaystyle g(x,y)=g(x+2\pi ,y)=g(x,y+2\pi )} ) then we are interested in finding a function f(x,y) so that ( ∂ 2 ∂ x 2 + ∂ 2 ∂ y 2 ) f ( x , y ) = g ( x , y ) for all x , y {\displaystyle \left({\frac {\partial ^{2}}{\partial x^{2}}}+{\frac {\partial ^{2}}{\partial y^{2}}}\right)f(x,y)=g(x,y)\quad {\text{for all }}x,y} where the expression on the left denotes the sum of the second partial derivatives of f in x and y. This is the Poisson equation, and can be physically interpreted as some sort of heat conduction problem, or a problem in potential theory, among other possibilities. If we write f and g in Fourier series: f =: ∑ a j , k e i ( j x + k y ) , g =: ∑ b j , k e i ( j x + k y ) , {\displaystyle {\begin{aligned}f&=:\sum a_{j,k}e^{i(jx+ky)},\\[5mu]g&=:\sum b_{j,k}e^{i(jx+ky)},\end{aligned}}} and substitute into the differential equation, we obtain this equation: ∑ − a j , k ( j 2 + k 2 ) e i ( j x + k y ) = ∑ b j , k e i ( j x + k y ) . {\displaystyle \sum -a_{j,k}(j^{2}+k^{2})e^{i(jx+ky)}=\sum b_{j,k}e^{i(jx+ky)}.} We have exchanged partial differentiation with an infinite sum, which is legitimate if we assume for instance that f has a continuous second derivative. By the uniqueness theorem for Fourier expansions, we must then equate the Fourier coefficients term by term, giving a j , k = − b j , k j 2 + k 2 ( ∗ ) {\displaystyle a_{j,k}=-{\frac {b_{j,k}}{j^{2}+k^{2}}}\qquad (*)} which is an explicit formula for the Fourier coefficients aj,k. With periodic boundary conditions, the Poisson equation possesses a solution only if b0,0 = 0. Therefore, we can freely choose a0,0, which will be equal to the mean of the solution. This corresponds to choosing the integration constant. To turn this into an algorithm, only finitely many frequencies are solved for. This introduces an error which can be shown to be proportional to h n {\displaystyle h^{n}} , where h := 1 / n {\displaystyle h:=1/n} and n {\displaystyle n} is the highest frequency treated. ==== Algorithm ==== Compute the Fourier transform (bj,k) of g. Compute the Fourier transform (aj,k) of f via the formula (*). Compute f by taking an inverse Fourier transform of (aj,k). Since we are only interested in a finite window of frequencies (of size n, say) this can be done using a fast Fourier transform algorithm. Therefore, globally the algorithm runs in time O(n log n).
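The three-step algorithm above translates almost line for line into code. The sketch below (the grid size and right-hand side are arbitrary choices for illustration) solves the periodic Poisson problem with the FFT and verifies the result by applying the Laplacian spectrally:

import numpy as np

# Fourier spectral Poisson solver on the periodic domain [0, 2*pi)^2.
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
g = np.sin(X) * np.cos(2 * Y)              # sample zero-mean source term

b = np.fft.fft2(g)                          # step 1: coefficients b_{j,k}
j = np.fft.fftfreq(n, d=1.0 / n)            # integer wavenumbers
J, K = np.meshgrid(j, j, indexing="ij")
denom = J**2 + K**2
denom[0, 0] = 1.0                           # avoid division by zero at (0,0)
a = -b / denom                              # step 2: a_{j,k} = -b_{j,k}/(j^2+k^2), i.e. (*)
a[0, 0] = 0.0                               # free constant: choose a zero-mean solution
f = np.real(np.fft.ifft2(a))                # step 3: inverse transform

# Check: the Laplacian of f, computed spectrally, reproduces g.
lap = np.real(np.fft.ifft2(-(J**2 + K**2) * np.fft.fft2(f)))
print(np.max(np.abs(lap - g)))              # on the order of machine precision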
=== Nonlinear example === We wish to solve the forced, transient, nonlinear Burgers' equation using a spectral approach. Given u ( x , 0 ) {\displaystyle u(x,0)} on the periodic domain x ∈ [ 0 , 2 π ) {\displaystyle x\in \left[0,2\pi \right)} , find u ∈ U {\displaystyle u\in {\mathcal {U}}} such that ∂ t u + u ∂ x u = ρ ∂ x x u + f ∀ x ∈ [ 0 , 2 π ) , ∀ t > 0 {\displaystyle \partial _{t}u+u\partial _{x}u=\rho \partial _{xx}u+f\quad \forall x\in \left[0,2\pi \right),\forall t>0} where ρ is the viscosity coefficient. In weak conservative form this becomes ⟨ ∂ t u , v ⟩ = ⟨ ∂ x ( − 1 2 u 2 + ρ ∂ x u ) , v ⟩ + ⟨ f , v ⟩ ∀ v ∈ V , ∀ t > 0 {\displaystyle \left\langle \partial _{t}u,v\right\rangle ={\Bigl \langle }\partial _{x}\left(-{\tfrac {1}{2}}u^{2}+\rho \partial _{x}u\right),v{\Bigr \rangle }+\left\langle f,v\right\rangle \quad \forall v\in {\mathcal {V}},\forall t>0} where ⟨ f , g ⟩ := ∫ 0 2 π f ( x ) g ( x ) ¯ d x {\displaystyle \langle f,g\rangle :=\int _{0}^{2\pi }f(x)\,{\overline {g(x)}}\,dx} denotes the inner product used in the following. Integrating by parts and using periodicity grants ⟨ ∂ t u , v ⟩ = ⟨ 1 2 u 2 − ρ ∂ x u , ∂ x v ⟩ + ⟨ f , v ⟩ ∀ v ∈ V , ∀ t > 0. {\displaystyle \langle \partial _{t}u,v\rangle =\left\langle {\tfrac {1}{2}}u^{2}-\rho \partial _{x}u,\partial _{x}v\right\rangle +\left\langle f,v\right\rangle \quad \forall v\in {\mathcal {V}},\forall t>0.} To apply the Fourier–Galerkin method, choose both U N := { u : u ( x , t ) = ∑ k = − N / 2 N / 2 − 1 u ^ k ( t ) e i k x } {\displaystyle {\mathcal {U}}^{N}:={\biggl \{}u:u(x,t)=\sum _{k=-N/2}^{N/2-1}{\hat {u}}_{k}(t)e^{ikx}{\biggr \}}} and V N := span ⁡ { e i k x : k ∈ − 1 2 N , … , 1 2 N − 1 } {\displaystyle {\mathcal {V}}^{N}:=\operatorname {span} \left\{e^{ikx}:k\in -{\tfrac {1}{2}}N,\dots ,{\tfrac {1}{2}}N-1\right\}} where u ^ k ( t ) := 1 2 π ⟨ u ( x , t ) , e i k x ⟩ {\displaystyle {\hat {u}}_{k}(t):={\frac {1}{2\pi }}\langle u(x,t),e^{ikx}\rangle } . This reduces the problem to finding u ∈ U N {\displaystyle u\in {\mathcal {U}}^{N}} such that ⟨ ∂ t u , e i k x ⟩ = ⟨ 1 2 u 2 − ρ ∂ x u , ∂ x e i k x ⟩ + ⟨ f , e i k x ⟩ ∀ k ∈ { − 1 2 N , … , 1 2 N − 1 } , ∀ t > 0. {\displaystyle \langle \partial _{t}u,e^{ikx}\rangle =\left\langle {\tfrac {1}{2}}u^{2}-\rho \partial _{x}u,\partial _{x}e^{ikx}\right\rangle +\left\langle f,e^{ikx}\right\rangle \quad \forall k\in \left\{-{\tfrac {1}{2}}N,\dots ,{\tfrac {1}{2}}N-1\right\},\forall t>0.} Using the orthogonality relation ⟨ e i l x , e i k x ⟩ = 2 π δ l k {\displaystyle \langle e^{ilx},e^{ikx}\rangle =2\pi \delta _{lk}} where δ l k {\displaystyle \delta _{lk}} is the Kronecker delta, we simplify the above three terms for each k {\displaystyle k} to see ⟨ ∂ t u , e i k x ⟩ = ⟨ ∂ t ∑ l u ^ l e i l x , e i k x ⟩ = ⟨ ∑ l ∂ t u ^ l e i l x , e i k x ⟩ = 2 π ∂ t u ^ k , ⟨ f , e i k x ⟩ = ⟨ ∑ l f ^ l e i l x , e i k x ⟩ = 2 π f ^ k , and ⟨ 1 2 u 2 − ρ ∂ x u , ∂ x e i k x ⟩ = ⟨ 1 2 ( ∑ p u ^ p e i p x ) ( ∑ q u ^ q e i q x ) − ρ ∂ x ∑ l u ^ l e i l x , ∂ x e i k x ⟩ = ⟨ 1 2 ∑ p ∑ q u ^ p u ^ q e i ( p + q ) x , i k e i k x ⟩ − ⟨ ρ i ∑ l l u ^ l e i l x , i k e i k x ⟩ = − 1 2 i k ⟨ ∑ p ∑ q u ^ p u ^ q e i ( p + q ) x , e i k x ⟩ − ρ k ⟨ ∑ l l u ^ l e i l x , e i k x ⟩ = − i π k ∑ p + q = k u ^ p u ^ q − 2 π ρ k 2 u ^ k . {\displaystyle {\begin{aligned}\left\langle \partial _{t}u,e^{ikx}\right\rangle &={\biggl \langle }\partial _{t}\sum _{l}{\hat {u}}_{l}e^{ilx},e^{ikx}{\biggr \rangle }={\biggl \langle }\sum _{l}\partial _{t}{\hat {u}}_{l}e^{ilx},e^{ikx}{\biggr \rangle }=2\pi \partial _{t}{\hat {u}}_{k},\\\left\langle f,e^{ikx}\right\rangle &={\biggl \langle }\sum _{l}{\hat {f}}_{l}e^{ilx},e^{ikx}{\biggr \rangle }=2\pi {\hat {f}}_{k},{\text{ and}}\\\left\langle {\tfrac {1}{2}}u^{2}-\rho \partial _{x}u,\partial _{x}e^{ikx}\right\rangle &={\biggl \langle }{\tfrac {1}{2}}{\Bigl (}\sum _{p}{\hat {u}}_{p}e^{ipx}{\Bigr )}{\Bigl (}\sum _{q}{\hat {u}}_{q}e^{iqx}{\Bigr )}-\rho \partial _{x}\sum _{l}{\hat {u}}_{l}e^{ilx},\partial _{x}e^{ikx}{\biggr \rangle }\\&={\biggl \langle }{\tfrac {1}{2}}\sum _{p}\sum _{q}{\hat {u}}_{p}{\hat {u}}_{q}e^{i\left(p+q\right)x},ike^{ikx}{\biggr \rangle }-{\biggl \langle }\rho i\sum _{l}l{\hat {u}}_{l}e^{ilx},ike^{ikx}{\biggr \rangle }\\&=-{\tfrac {1}{2}}ik{\biggl \langle }\sum _{p}\sum _{q}{\hat {u}}_{p}{\hat {u}}_{q}e^{i\left(p+q\right)x},e^{ikx}{\biggr \rangle }-\rho k{\biggl \langle }\sum _{l}l{\hat {u}}_{l}e^{ilx},e^{ikx}{\biggr \rangle }\\&=-i\pi k\sum _{p+q=k}{\hat {u}}_{p}{\hat {u}}_{q}-2\pi \rho {}k^{2}{\hat {u}}_{k}.\end{aligned}}} Assemble the three terms for each k {\displaystyle k} to obtain 2 π ∂ t u ^ k = − i π k ∑ p + q = k u ^ p u ^ q − 2 π ρ k 2 u ^ k + 2 π f ^ k k ∈ { − 1 2 N , … , 1 2 N − 1 } , ∀ t > 0.
{\displaystyle 2\pi \partial _{t}{\hat {u}}_{k}=-i\pi k\sum _{p+q=k}{\hat {u}}_{p}{\hat {u}}_{q}-2\pi \rho {}k^{2}{\hat {u}}_{k}+2\pi {\hat {f}}_{k}\quad k\in \left\{-{\tfrac {1}{2}}N,\dots ,{\tfrac {1}{2}}N-1\right\},\forall t>0.} Dividing through by 2 π {\displaystyle 2\pi } , we finally arrive at ∂ t u ^ k = − i k 2 ∑ p + q = k u ^ p u ^ q − ρ k 2 u ^ k + f ^ k k ∈ { − 1 2 N , … , 1 2 N − 1 } , ∀ t > 0. {\displaystyle \partial _{t}{\hat {u}}_{k}=-{\frac {ik}{2}}\sum _{p+q=k}{\hat {u}}_{p}{\hat {u}}_{q}-\rho {}k^{2}{\hat {u}}_{k}+{\hat {f}}_{k}\quad k\in \left\{-{\tfrac {1}{2}}N,\dots ,{\tfrac {1}{2}}N-1\right\},\forall t>0.} With Fourier transformed initial conditions u ^ k ( 0 ) {\displaystyle {\hat {u}}_{k}(0)} and forcing f ^ k ( t ) {\displaystyle {\hat {f}}_{k}(t)} , this coupled system of ordinary differential equations may be integrated in time (using, e.g., a Runge–Kutta technique) to find a solution. The nonlinear term is a convolution, and there are several transform-based techniques for evaluating it efficiently, as in the sketch below. See the references by Boyd and Canuto et al. for more details.
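A minimal pseudo-spectral implementation of this ODE system follows. The convolution over p + q = k is evaluated by transforming to physical space, squaring, and transforming back; dealiasing (e.g., the 2/3 rule) and the forcing term are omitted for brevity, and the viscosity, resolution and time step are illustrative choices:

import numpy as np

# Pseudo-spectral solver for the unforced Burgers' equation derived above.
N, rho, dt, steps = 128, 0.05, 1e-3, 2000
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)           # integer wavenumbers
u_hat = np.fft.fft(np.sin(x))              # initial condition u(x, 0) = sin(x)

def rhs(u_hat):
    u = np.real(np.fft.ifft(u_hat))
    conv = np.fft.fft(0.5 * u**2)          # FFT form of the quadratic convolution
    return -1j * k * conv - rho * k**2 * u_hat

for _ in range(steps):                     # classical fourth-order Runge-Kutta
    k1 = rhs(u_hat)
    k2 = rhs(u_hat + 0.5 * dt * k1)
    k3 = rhs(u_hat + 0.5 * dt * k2)
    k4 = rhs(u_hat + dt * k3)
    u_hat += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

u = np.real(np.fft.ifft(u_hat))            # solution at t = steps * dt
print(u.max(), u.min())                    # the sine wave has steepened and decayed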
== A relationship with the spectral element method == One can show that if g {\displaystyle g} is infinitely differentiable, then the numerical algorithm using fast Fourier transforms will converge faster than any polynomial in the grid size h. That is, for any n>0, there is a C n < ∞ {\displaystyle C_{n}<\infty } such that the error is less than C n h n {\displaystyle C_{n}h^{n}} for all sufficiently small values of h {\displaystyle h} . We say that the spectral method is of order n {\displaystyle n} , for every n>0. Because a spectral element method is a finite element method of very high order, there is a similarity in the convergence properties. However, whereas the spectral method is based on the eigendecomposition of the particular boundary value problem, the finite element method does not use that information and works for arbitrary elliptic boundary value problems. == See also == Finite element method Gaussian grid Pseudo-spectral method Spectral element method Galerkin method Collocation method == References == Bengt Fornberg (1996) A Practical Guide to Pseudospectral Methods. Cambridge University Press, Cambridge, UK. Chebyshev and Fourier Spectral Methods by John P. Boyd. Canuto, C.; Hussaini, M. Y.; Quarteroni, A.; Zang, T. A. (2006) Spectral Methods: Fundamentals in Single Domains. Springer-Verlag, Berlin Heidelberg. Javier de Frutos, Julia Novo (2000): A Spectral Element Method for the Navier–Stokes Equations with Improved Accuracy. Polynomial Approximation of Differential Equations, by Daniele Funaro, Lecture Notes in Physics, Volume 8, Springer-Verlag, Heidelberg, 1992. D. Gottlieb and S. Orszag (1977) "Numerical Analysis of Spectral Methods: Theory and Applications", SIAM, Philadelphia, PA. J. Hesthaven, S. Gottlieb and D. Gottlieb (2007) "Spectral Methods for Time-Dependent Problems", Cambridge UP, Cambridge, UK. Steven A. Orszag (1969) Numerical Methods for the Simulation of Turbulence, Phys. Fluids Supp. II, 12, 250–257. Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 20.7. Spectral Methods". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. Jie Shen, Tao Tang and Li-Lian Wang (2011) "Spectral Methods: Algorithms, Analysis and Applications" (Springer Series in Computational Mathematics, V. 41, Springer), ISBN 354071040X. Lloyd N. Trefethen (2000) Spectral Methods in MATLAB. SIAM, Philadelphia, PA. Muradova, A. D. (2008) "The spectral method and numerical continuation algorithm for the von Kármán problem with postbuckling behaviour of solutions", Advances in Computational Mathematics, 29, pp. 179–206, https://doi.org/10.1007/s10444-007-9050-7. Muradova, A. D. (2015) "A time spectral method for solving the nonlinear dynamic equations of a rectangular elastic plate", Journal of Engineering Mathematics, 92, pp. 83–101, https://doi.org/10.1007/s10665-014-9752-z.
Wikipedia/Spectral_methods
A facial recognition system is a technology potentially capable of matching a human face from a digital image or a video frame against a database of faces. Such a system is typically employed to authenticate users through ID verification services, and works by pinpointing and measuring facial features from a given image. Development of similar systems began in the 1960s as a form of computer application. Since their inception, facial recognition systems have seen increasingly wide use on smartphones and in other forms of technology, such as robotics. Because computerized facial recognition involves the measurement of a human's physiological characteristics, facial recognition systems are categorized as biometrics. Although the accuracy of facial recognition systems as a biometric technology is lower than iris recognition, fingerprint image acquisition, palm recognition or voice recognition, it is widely adopted due to its contactless process. Facial recognition systems have been deployed in advanced human–computer interaction, video surveillance, law enforcement, passenger screening, decisions on employment and housing, and automatic indexing of images. Facial recognition systems are employed throughout the world today by governments and private companies. Their effectiveness varies, and some systems have previously been scrapped because of their ineffectiveness. The use of facial recognition systems has also raised controversy, with claims that the systems violate citizens' privacy, commonly make incorrect identifications, encourage gender norms and racial profiling, and do not protect important biometric data. The appearance of synthetic media such as deepfakes has also raised concerns about its security. These claims have led to the ban of facial recognition systems in several cities in the United States. Growing societal concerns led social networking company Meta Platforms to shut down its Facebook facial recognition system in 2021, deleting the face scan data of more than one billion users. The change represented one of the largest shifts in facial recognition usage in the technology's history. IBM also stopped offering facial recognition technology due to similar concerns. == History of facial recognition technology == Automated facial recognition was pioneered in the 1960s by Woody Bledsoe, Helen Chan Wolf, and Charles Bisson, whose work focused on teaching computers to recognize human faces. Their early facial recognition project was dubbed "man-machine" because a human first needed to establish the coordinates of facial features in a photograph before they could be used by a computer for recognition. Using a graphics tablet, a human would pinpoint the coordinates of facial features, such as the pupil centers, the inside and outside corners of the eyes, and the widow's peak in the hairline. The coordinates were used to calculate 20 individual distances, including the width of the mouth and of the eyes. A human could process about 40 pictures an hour, building a database of these computed distances. A computer would then automatically compare the distances for each photograph, calculate the difference between the distances, and return the closest records as possible matches. In 1970, Takeo Kanade publicly demonstrated a face-matching system that located anatomical features such as the chin and calculated the distance ratio between facial features without human intervention. Later tests revealed that the system could not always reliably identify facial features.
Nonetheless, interest in the subject grew and in 1977 Kanade published the first detailed book on facial recognition technology. In 1993, the Defense Advanced Research Project Agency (DARPA) and the Army Research Laboratory (ARL) established the face recognition technology program FERET to develop "automatic face recognition capabilities" that could be employed in a productive real-life environment "to assist security, intelligence, and law enforcement personnel in the performance of their duties." Face recognition systems that had been trialled in research labs were evaluated. The FERET tests found that while the performance of existing automated facial recognition systems varied, a handful of existing methods could viably be used to recognize faces in still images taken in a controlled environment. The FERET tests spawned three US companies that sold automated facial recognition systems. Vision Corporation and Miros Inc were founded in 1994, by researchers who used the results of the FERET tests as a selling point. Viisage Technology was established by an identification card defense contractor in 1996 to commercially exploit the rights to the facial recognition algorithm developed by Alex Pentland at MIT. Following the 1993 FERET face-recognition vendor test, the Department of Motor Vehicles (DMV) offices in West Virginia and New Mexico became the first DMV offices to use automated facial recognition systems to prevent people from obtaining multiple driving licenses using different names. Driver's licenses in the United States were at that point a commonly accepted form of photo identification. DMV offices across the United States were undergoing a technological upgrade and were in the process of establishing databases of digital ID photographs. This enabled DMV offices to deploy the facial recognition systems on the market to search photographs for new driving licenses against the existing DMV database. DMV offices became one of the first major markets for automated facial recognition technology and introduced US citizens to facial recognition as a standard method of identification. The increase of the US prison population in the 1990s prompted U.S. states to establish connected and automated identification systems that incorporated digital biometric databases; in some instances these included facial recognition. In 1999, Minnesota incorporated the facial recognition system FaceIT by Visionics into a mug shot booking system that allowed police, judges and court officers to track criminals across the state. Until the 1990s, facial recognition systems were developed primarily by using photographic portraits of human faces. Research on face recognition to reliably locate a face in an image that contains other objects gained traction in the early 1990s with principal component analysis (PCA). The PCA method of face detection is also known as Eigenface and was developed by Matthew Turk and Alex Pentland. Turk and Pentland combined the conceptual approach of the Karhunen–Loève theorem and factor analysis to develop a linear model. Eigenfaces are determined based on global and orthogonal features in human faces. A human face is calculated as a weighted combination of a number of Eigenfaces. Because only a few Eigenfaces were needed to encode the human faces of a given population, Turk and Pentland's PCA face detection method greatly reduced the amount of data that had to be processed to detect a face.
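The eigenface construction can be demonstrated in a few lines. The sketch below uses synthetic data in place of a face database and includes Turk and Pentland's trick of diagonalizing the small matrix A Aᵀ instead of the full covariance Aᵀ A; all sizes are illustrative.

import numpy as np

# Minimal eigenfaces sketch: images are flattened to vectors, PCA keeps the
# leading eigenvectors ("eigenfaces"), and a face is encoded by a few weights.
rng = np.random.default_rng(0)
faces = rng.random((100, 32 * 32))        # 100 fake 32x32 grayscale "faces"

mean = faces.mean(axis=0)
A = faces - mean                          # centered data, shape (100, 1024)
vals, vecs = np.linalg.eigh(A @ A.T)      # small 100x100 eigenproblem
order = np.argsort(vals)[::-1][:20]       # keep the 20 leading components
eigenfaces = A.T @ vecs[:, order]         # map back to image space
eigenfaces /= np.linalg.norm(eigenfaces, axis=0)   # unit-norm columns

def encode(img):
    """Project a face onto the eigenface basis -> 20 weights."""
    return eigenfaces.T @ (img - mean)

# Recognition reduces to nearest neighbour in the low-dimensional weight space.
gallery = np.array([encode(f) for f in faces])
probe = encode(faces[7] + 0.01 * rng.standard_normal(32 * 32))
print(np.argmin(np.linalg.norm(gallery - probe, axis=1)))   # -> 7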
Pentland in 1994 defined Eigenface features, including eigen eyes, eigen mouths and eigen noses, to advance the use of PCA in facial recognition. In 1997, the PCA Eigenface method of face recognition was improved upon using linear discriminant analysis (LDA) to produce Fisherfaces. LDA Fisherfaces became the dominant technique in feature-based face recognition, while Eigenfaces were also used for face reconstruction. In these approaches no global structure of the face linking the facial features or parts is calculated. Purely feature-based approaches to facial recognition were overtaken in the late 1990s by the Bochum system, which used Gabor filters to record the face features and computed a grid of the face structure to link the features. Christoph von der Malsburg and his research team at the University of Bochum developed Elastic Bunch Graph Matching in the mid-1990s to extract a face out of an image using skin segmentation. By 1997, the face detection method developed by Malsburg outperformed most other facial detection systems on the market. The so-called "Bochum system" of face detection was sold commercially on the market as ZN-Face to operators of airports and other busy locations. The software was "robust enough to make identifications from less-than-perfect face views. It can also often see through such impediments to identification as mustaches, beards, changed hairstyles and glasses—even sunglasses". Real-time face detection in video footage became possible in 2001 with the Viola–Jones object detection framework for faces. Paul Viola and Michael Jones combined Haar-like features for object recognition in digital images with AdaBoost-based learning to build the first real-time frontal-view face detector. By 2015, the Viola–Jones algorithm had been implemented using small low-power detectors on handheld devices and embedded systems. Therefore, the Viola–Jones algorithm has not only broadened the practical application of face recognition systems but has also been used to support new features in user interfaces and teleconferencing. Ukraine is using the US-based Clearview AI facial recognition software to identify dead Russian soldiers. Ukraine has conducted 8,600 searches and identified the families of 582 deceased Russian soldiers. The IT volunteer section of the Ukrainian army using the software is subsequently contacting the families of the deceased soldiers to raise awareness of Russian activities in Ukraine. The main goal is to destabilise the Russian government. It can be seen as a form of psychological warfare. About 340 Ukrainian government officials in five government ministries are using the technology. It is used to catch spies that might try to enter Ukraine. Clearview AI's facial recognition database is only available to government agencies, which may only use the technology to assist in the course of law enforcement investigations or in connection with national security. The software was donated to Ukraine by Clearview AI. Russia is thought to be using it to find anti-war activists. Clearview AI was originally designed for US law enforcement. Using it in war raises new ethical concerns. One London-based surveillance expert, Stephen Hare, is concerned it might make the Ukrainians appear inhuman: "Is it actually working? Or is it making [Russians] say: 'Look at these lawless, cruel Ukrainians, doing this to our boys'?"
== Techniques for face recognition == While humans can recognize faces without much effort, facial recognition is a challenging pattern recognition problem in computing. Facial recognition systems attempt to identify a human face, which is three-dimensional and changes in appearance with lighting and facial expression, based on its two-dimensional image. To accomplish this computational task, facial recognition systems perform four steps. First, face detection is used to segment the face from the image background. In the second step the segmented face image is aligned to account for face pose, image size and photographic properties, such as illumination and grayscale. The purpose of the alignment process is to enable the accurate localization of facial features in the third step, the facial feature extraction. Features such as eyes, nose and mouth are pinpointed and measured in the image to represent the face. The feature vector so established is then, in the fourth step, matched against a database of faces. === Traditional === Some face recognition algorithms identify facial features by extracting landmarks, or features, from an image of the subject's face. For example, an algorithm may analyze the relative position, size, and/or shape of the eyes, nose, cheekbones, and jaw. These features are then used to search for other images with matching features. Other algorithms normalize a gallery of face images and then compress the face data, only saving the data in the image that is useful for face recognition. A probe image is then compared with the face data. One of the earliest successful systems is based on template matching techniques applied to a set of salient facial features, providing a sort of compressed face representation. Recognition algorithms can be divided into two main approaches: geometric, which looks at distinguishing features, or photometric, which is a statistical approach that distills an image into values and compares the values with templates to eliminate variances. Some classify these algorithms into two broad categories: holistic and feature-based models. The former attempt to recognize the face in its entirety, while the latter subdivide the face into components and analyze each feature as well as its spatial location with respect to the other features. Popular recognition algorithms include principal component analysis using eigenfaces, linear discriminant analysis, elastic bunch graph matching using the Fisherface algorithm, the hidden Markov model, multilinear subspace learning using tensor representations, and the neurally motivated dynamic link matching. Modern facial recognition systems make increasing use of machine learning techniques such as deep learning. === Human identification at a distance (HID) === To enable human identification at a distance (HID), low-resolution images of faces are enhanced using face hallucination. In CCTV imagery faces are often very small. But because facial recognition algorithms that identify and plot facial features require high-resolution images, resolution enhancement techniques have been developed to enable facial recognition systems to work with imagery that has been captured in environments with a low signal-to-noise ratio.
Face hallucination algorithms that are applied to images prior to those images being submitted to the facial recognition system use example-based machine learning with pixel substitution or nearest-neighbour distribution indexes that may also incorporate demographic and age-related facial characteristics. Use of face hallucination techniques improves the performance of high-resolution facial recognition algorithms and may be used to overcome the inherent limitations of super-resolution algorithms. Face hallucination techniques are also used to pre-treat imagery where faces are disguised. Here the disguise, such as sunglasses, is removed and the face hallucination algorithm is applied to the image. Such face hallucination algorithms need to be trained on similar face images with and without disguise. To fill in the area uncovered by removing the disguise, face hallucination algorithms need to correctly map the entire state of the face, which may not be possible due to the momentary facial expression captured in the low-resolution image. === 3-dimensional recognition === Three-dimensional face recognition uses 3D sensors to capture information about the shape of a face. This information is then used to identify distinctive features on the surface of a face, such as the contour of the eye sockets, nose, and chin. One advantage of 3D face recognition is that it is not affected by changes in lighting like other techniques. It can also identify a face from a range of viewing angles, including a profile view. Three-dimensional data points from a face vastly improve the precision of face recognition. Three-dimensional face recognition research is enabled by the development of sophisticated sensors that project structured light onto the face. 3D matching techniques are sensitive to expressions, therefore researchers at Technion applied tools from metric geometry to treat expressions as isometries. A new method of capturing 3D images of faces uses three tracking cameras that point at different angles: one camera points at the front of the subject, a second one to the side, and a third one at an angle. These cameras work together to track a subject's face in real time and to detect and recognize it. === Thermal cameras === A different way of acquiring input data for face recognition is to use thermal cameras; with this procedure the cameras detect only the shape of the head and ignore accessories such as glasses, hats, or makeup. Unlike conventional cameras, thermal cameras can capture facial imagery even in low-light and nighttime conditions without using a flash and exposing the position of the camera. However, databases of thermal face images for face recognition are limited. Efforts to build such databases date back to 2004. By 2016, several databases existed, including the IIITD-PSE and the Notre Dame thermal face database. Current thermal face recognition systems are not able to reliably detect a face in a thermal image that has been taken of an outdoor environment. In 2018, researchers from the U.S. Army Research Laboratory (ARL) developed a technique that would allow them to match facial imagery obtained using a thermal camera with those in databases that were captured using a conventional camera. Known as a cross-spectrum synthesis method due to how it bridges facial recognition from two different imaging modalities, this method synthesizes a single image by analyzing multiple facial regions and details.
It consists of a non-linear regression model that maps a specific thermal image into a corresponding visible facial image, and an optimization problem that projects the latent representation back into the image space. ARL scientists have noted that the approach works by combining global information (i.e. features across the entire face) with local information (i.e. features regarding the eyes, nose, and mouth). According to performance tests conducted at ARL, the multi-region cross-spectrum synthesis model demonstrated a performance improvement of about 30% over baseline methods and about 5% over state-of-the-art methods. == Application == === Social media === Founded in 2013, Looksery went on to raise money for its face modification app on Kickstarter. After successful crowdfunding, Looksery launched in October 2014. The application allows video chat with others through a special filter for faces that modifies the look of users. Image-augmenting applications already on the market, such as Facetune and Perfect365, were limited to static images, whereas Looksery brought augmented reality to live videos. In late 2015 Snapchat purchased Looksery, which would then become its landmark lenses function. Snapchat filter applications use face detection technology; on the basis of the facial features identified in an image, a 3D mesh mask is layered over the face. A variety of technologies attempt to fool facial recognition software by the use of anti-facial-recognition masks. DeepFace is a deep learning facial recognition system created by a research group at Facebook. It identifies human faces in digital images. It employs a nine-layer neural net with over 120 million connection weights, and was trained on four million images uploaded by Facebook users. The system is said to be 97% accurate, compared to 85% for the FBI's Next Generation Identification system. TikTok's algorithm has been regarded as especially effective, but many were left to wonder at the exact programming that caused the app to be so effective in guessing the user's desired content. In June 2020, TikTok released a statement regarding the "For You" page and how it recommended videos to users, which did not include facial recognition. In February 2021, however, TikTok agreed to a $92 million settlement of a US lawsuit which alleged that the app had used facial recognition in both user videos and its algorithm to identify age, gender and ethnicity. === ID verification === An emerging use of facial recognition is in ID verification services. Many companies are now working in this market, providing such services to banks, ICOs, and other e-businesses. Face recognition has been leveraged as a form of biometric authentication for various computing platforms and devices; Android 4.0 "Ice Cream Sandwich" added facial recognition using a smartphone's front camera as a means of unlocking devices, while Microsoft introduced face recognition login to its Xbox 360 video game console through its Kinect accessory, as well as Windows 10 via its "Windows Hello" platform (which requires an infrared-illuminated camera). In 2017, Apple's iPhone X smartphone introduced facial recognition to the product line with its "Face ID" platform, which uses an infrared illumination system. ==== Face ID ==== Apple introduced Face ID on the flagship iPhone X as a biometric authentication successor to Touch ID, a fingerprint-based system.
Face ID has a facial recognition sensor that consists of two parts: a "Romeo" module that projects more than 30,000 infrared dots onto the user's face, and a "Juliet" module that reads the pattern. The pattern is sent to a local "Secure Enclave" in the device's central processing unit (CPU) to confirm a match with the phone owner's face. The facial pattern is not accessible by Apple. The system will not work with eyes closed, in an effort to prevent unauthorized access. The technology learns from changes in a user's appearance, and therefore works with hats, scarves, glasses, many sunglasses, beards and makeup. It also works in the dark. This is done by using a "Flood Illuminator", a dedicated infrared flash that throws out invisible infrared light onto the user's face to get a 2D picture in addition to the 30,000 facial points. === Healthcare === Facial recognition algorithms can help in diagnosing some diseases using specific features on the nose, cheeks and other parts of the human face. Relying on developed data sets, machine learning has been used to identify genetic abnormalities based solely on facial dimensions. FRT has also been used to verify patients before surgical procedures. In March 2022, according to a publication by Forbes, FDNA, an AI development company, claimed that over the space of 10 years it had worked with geneticists to develop a database of about 5,000 diseases, 1,500 of which can be detected with facial recognition algorithms. === Deployment of FRT for availing government services === ==== India ==== In an interview, the National Health Authority chief Dr. R.S. Sharma said that facial recognition technology would be used in conjunction with Aadhaar to authenticate the identity of people seeking vaccines. Ten human rights and digital rights organizations and more than 150 individuals signed a statement by the Internet Freedom Foundation that raised alarm against the deployment of facial recognition technology in the central government's vaccination drive process. Implementation of an error-prone system without adequate legislation containing mandatory safeguards would deprive citizens of essential services, and linking this untested technology to the vaccination roll-out in India would only exclude persons from the vaccine delivery system. In July 2021, a press release by the Government of Meghalaya stated that facial recognition technology (FRT) would be used to verify the identity of pensioners to issue a Digital Life Certificate using the "Pensioner's Life Certification Verification" mobile application. The notice, according to the press release, purports to offer pensioners "a secure, easy and hassle-free interface for verifying their liveness to the Pension Disbursing Authorities from the comfort of their homes using smart phones". Mr. Jade Jeremiah Lyngdoh, a law student, sent a legal notice to the relevant authorities highlighting that "The application has been rolled out without any anchoring legislation which governs the processing of personal data and thus, lacks lawfulness and the Government is not empowered to process data." === Deployment in security services === ==== Commonwealth ==== The Australian Border Force and New Zealand Customs Service have set up an automated border processing system called SmartGate that uses face recognition, which compares the face of the traveller with the data in the e-passport microchip.
All Canadian international airports use facial recognition as part of the Primary Inspection Kiosk program, which compares a traveler's face to the photo stored on their ePassport. This program first came to Vancouver International Airport in early 2017 and was rolled out to all remaining international airports in 2018–2019. Police forces in the United Kingdom have been trialling live facial recognition technology at public events since 2015. In May 2017, a man was arrested using an automatic facial recognition (AFR) system mounted on a van operated by the South Wales Police. Ars Technica reported that "this appears to be the first time [AFR] has led to an arrest". However, a 2018 report by Big Brother Watch found that these systems were up to 98% inaccurate. The report also revealed that two UK police forces, South Wales Police and the Metropolitan Police, were using live facial recognition at public events and in public spaces. In September 2019, South Wales Police's use of facial recognition was ruled lawful. Live facial recognition has been trialled since 2016 in the streets of London and has been used on a regular basis by the Metropolitan Police from the beginning of 2020. In August 2020 the Court of Appeal ruled that the way the facial recognition system had been used by the South Wales Police in 2017 and 2018 violated human rights. However, by 2024 the Metropolitan Police were using the technique with a database of 16,000 suspects, leading to over 360 arrests, including rapists and a suspect who had been wanted for grievous bodily harm for 8 years. They claim a false positive rate of only 1 in 6,000. The photos of those not identified by the system are deleted immediately. ==== United States ==== The U.S. Department of State operates one of the largest face recognition systems in the world, with a database of 117 million American adults whose photos are typically drawn from driver's license photos. Although it is still far from completion, it is being put to use in certain cities to give clues as to who was in a photo. The FBI uses the photos as an investigative tool, not for positive identification. As of 2016, facial recognition was being used to identify people in photos taken by police in San Diego and Los Angeles (not on real-time video, and only against booking photos), and use was planned in West Virginia and Dallas. In recent years Maryland has used face recognition by comparing people's faces to their driver's license photos. The system drew controversy when it was used in Baltimore to arrest unruly protesters after the death of Freddie Gray in police custody. Many other states are using or developing a similar system; however, some states have laws prohibiting its use. The FBI has also instituted its Next Generation Identification program to include face recognition, as well as more traditional biometrics like fingerprints and iris scans, which can pull from both criminal and civil databases. The federal Government Accountability Office criticized the FBI for not addressing various concerns related to privacy and accuracy. Starting in 2018, U.S. Customs and Border Protection deployed "biometric face scanners" at U.S. airports. Passengers taking outbound international flights can complete the check-in, security and boarding process after their facial images are captured and verified against the ID photos stored in CBP's database. Images captured for travelers with U.S. citizenship will be deleted within 12 hours.
The Transportation Security Administration (TSA) had expressed its intention to adopt a similar program for domestic air travel during the security check process in the future. The American Civil Liberties Union is one of the organizations against the program, expressing concern that the program will be used for surveillance purposes. In 2019, researchers reported that Immigration and Customs Enforcement (ICE) uses facial recognition software against state driver's license databases, including for some states that provide licenses to undocumented immigrants. In December 2022, 16 major domestic airports in the US started testing facial-recognition tech in which kiosks with cameras check the photos on travelers' IDs to make sure that passengers are not impostors. In 2025, it was revealed that the New Orleans Police Department had rolled out what the ACLU's Freed Wessler called "the first known widespread effort by police in a major US city to use AI to identify people in live camera feeds for the purpose of making immediate arrests", in defiance of a 2022 city ordinance limiting the use of the technology. ==== China ==== In 2006, the "Skynet" (天網) Project was initiated by the Chinese government to implement CCTV surveillance nationwide, and as of 2018 some 20 million cameras, many of them capable of real-time facial recognition, had been deployed across the country for this project. Some officials claim that the current Skynet system can scan the entire Chinese population in one second and the world population in two seconds. In 2017, the Qingdao police were able to identify twenty-five wanted suspects using facial recognition equipment at the Qingdao International Beer Festival, one of whom had been on the run for 10 years. The equipment works by recording a 15-second video clip and taking multiple snapshots of the subject. That data is compared and analyzed against images from the police department's database, and within 20 minutes the subject can be identified with 98.1% accuracy. In 2018, Chinese police in Zhengzhou and Beijing were using smart glasses to take photos which are compared against a government database using facial recognition to identify suspects, retrieve an address, and track people moving beyond their home areas. As of late 2017, China has deployed facial recognition and artificial intelligence technology in Xinjiang. Reporters visiting the region found surveillance cameras installed every hundred meters or so in several cities, as well as facial recognition checkpoints at areas like gas stations, shopping centers, and mosque entrances. In May 2019, Human Rights Watch reported finding Face++ code in the Integrated Joint Operations Platform (IJOP), a police surveillance app used to collect data on, and track, the Uighur community in Xinjiang. Human Rights Watch released a correction to its report in June 2019 stating that the Chinese company Megvii did not appear to have collaborated on IJOP, and that the Face++ code in the app was inoperable. In February 2020, following the coronavirus outbreak, Megvii applied for a bank loan to optimize the body temperature screening system it had launched to help identify people with symptoms of a coronavirus infection in crowds. In the loan application Megvii stated that it needed to improve the accuracy of identifying masked individuals. Many public places in China are equipped with facial recognition systems, including railway stations, airports, tourist attractions, expos, and office buildings.
In October 2019, a professor at Zhejiang Sci-Tech University sued the Hangzhou Safari Park for abusing the private biometric information of customers. The safari park uses facial recognition technology to verify the identities of its Year Card holders. An estimated 300 tourist sites in China have installed facial recognition systems and use them to admit visitors. This case is reported to be the first lawsuit over the use of facial recognition systems in China. In August 2020, Radio Free Asia reported that in 2019 Geng Guanjun, a citizen of Taiyuan City who had used Tencent's WeChat app to forward a video to a friend in the United States, was subsequently convicted on the charge of "picking quarrels and provoking troubles". The court documents showed that the Chinese police used a facial recognition system to identify Geng Guanjun as an "overseas democracy activist" and that China's network management and propaganda departments directly monitor WeChat users. In 2019, protesters in Hong Kong destroyed smart lampposts amid concerns that they could contain cameras and facial recognition systems used for surveillance by Chinese authorities. Human rights groups have criticized the Chinese government for using artificial intelligence facial recognition technology in its suppression of Uyghurs, Christians and Falun Gong practitioners. ==== India ==== Even though facial recognition technology (FRT) is not fully accurate, it is being increasingly deployed for identification purposes by the police in India. FRT systems generate a probability match score, or confidence score, between the suspect who is to be identified and the database of identified criminals available with the police (a toy illustration of such scoring is sketched after this paragraph). The National Automated Facial Recognition System (AFRS) is already being developed by the National Crime Records Bureau (NCRB), a body constituted under the Ministry of Home Affairs. The project seeks to develop and deploy a national database of photographs which would work with a facial recognition technology system used by the central and state security agencies. The Internet Freedom Foundation has flagged concerns regarding the project. The NGO has highlighted that the accuracy of FRT systems is "routinely exaggerated and the real numbers leave much to be desired. The implementation of such faulty FRT systems would lead to high rates of false positives and false negatives in this recognition process." Under the Supreme Court of India's decision in Justice K.S. Puttaswamy vs Union of India ((2017) 10 SCC 1), any justifiable intrusion by the State into people's right to privacy, which is protected as a fundamental right under Article 21 of the Constitution, must conform to certain thresholds, namely: legality, necessity, proportionality and procedural safeguards. As per the Internet Freedom Foundation, the National Automated Facial Recognition System (AFRS) proposal fails to meet any of these thresholds, citing "absence of legality," "manifest arbitrariness," and "absence of safeguards and accountability." While the national level AFRS project is still in the works, police departments in various states in India are already deploying facial recognition technology systems, such as: TSCOP + CCTNS in Telangana, Punjab Artificial Intelligence System (PAIS) in Punjab, Trinetra in Uttar Pradesh, Police Artificial Intelligence System in Uttarakhand, AFRS in Delhi, Automated Multimodal Biometric Identification System (AMBIS) in Maharashtra, FaceTagr in Tamil Nadu.
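The probability match scores described above can be illustrated with a toy example. The following Python sketch is illustrative only, using random placeholder embeddings rather than any real face recognition pipeline; it ranks a gallery of enrolled templates by cosine similarity to a probe and applies an arbitrary confidence threshold:

import numpy as np

rng = np.random.default_rng(0)
gallery = rng.normal(size=(1000, 128))             # placeholder enrolled-face embeddings
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

probe = gallery[42] + 0.05 * rng.normal(size=128)  # noisy capture of enrolled identity 42
probe /= np.linalg.norm(probe)

scores = gallery @ probe                           # cosine similarities = match scores
ranked = np.argsort(scores)[::-1]
THRESHOLD = 0.8                                    # arbitrary confidence cutoff
candidates = [(int(i), float(scores[i])) for i in ranked[:5] if scores[i] >= THRESHOLD]
print(candidates)                                  # a candidate list, not a definitive ID

With a lower threshold, unrelated identities begin to enter the candidate list, which is exactly the false-positive failure mode discussed in this section.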
The Crime and Criminal Tracking Network and Systems (CCTNS), which is a Mission Mode Project under the National e-Governance Plan (NeGP), is viewed as a system which would connect police stations across India and help them "talk" to each other. The project's objective is to digitize all FIR-related information, including FIRs registered, cases investigated, charge sheets filed, and suspects and wanted persons, in all police stations. This would constitute a national database of crime and criminals in India. CCTNS is being implemented without a data protection law in place. CCTNS is proposed to be integrated with the AFRS, a repository of all crime- and criminal-related facial data which can purportedly be deployed to identify or verify a person from a variety of inputs ranging from images to videos. This has raised privacy concerns from civil society organizations and privacy experts. Both projects have been censured as instruments of "mass surveillance" at the hands of the state. In Rajasthan, 'RajCop', a police app, has recently been integrated with a facial recognition module which can match the face of a suspect against a database of known persons in real time. The Rajasthan police is currently working to widen the ambit of this module by making it mandatory to upload photographs of all arrested persons to the CCTNS database, which will "help develop a rich database of known offenders." Camera-fitted helmets have been designed and are being used by the Rajasthan police in law and order situations to capture police action and the activities of "the miscreants, which can later serve as evidence during the investigation of such cases." The PAIS (Punjab Artificial Intelligence System) app employs deep learning, machine learning, and face recognition to identify criminals and assist police personnel. The state of Telangana has installed 8 lakh CCTV cameras, with its capital city Hyderabad slowly turning into a surveillance capital. A false positive happens when facial recognition technology misidentifies a person as someone they are not; that is, it yields an incorrect positive result. False positives often result in discrimination and the strengthening of existing biases. For example, in 2018, Delhi Police reported that its FRT system had an accuracy rate of 2%, which sank to 1% in 2019. The FRT system even failed to distinguish accurately between different sexes. The government of Delhi, in collaboration with the Indian Space Research Organisation (ISRO), is developing a new technology called Crime Mapping Analytics and Predictive System (CMAPS). The project aims to deploy space technology for "controlling crime and maintaining law and order." The system will be connected to a database containing data on criminals. The technology is envisaged to be deployed to collect real-time data at the crime scene. In a reply dated November 25, 2020 to a Right to Information request filed by the Internet Freedom Foundation seeking information about the facial recognition system being used by the Delhi Police (with reference number DEPOL/R/E/20/07128), the Office of the Deputy Commissioner of Police cum Public Information Officer: Crime stated that it could not provide the information under section 8(d) of the Right to Information Act, 2005. A Right to Information (RTI) request dated July 30, 2020 was filed with the Office of the Commissioner, Kolkata Police, seeking information about the facial recognition technology that the department was using.
The information sought was denied, the department stating that it was exempted from disclosure under section 24(4) of the RTI Act. ==== Latin America ==== In the 2000 Mexican presidential election, the Mexican government employed face recognition software to prevent voter fraud. Some individuals had been registering to vote under several different names, in an attempt to place multiple votes. By comparing new face images to those already in the voter database, authorities were able to reduce duplicate registrations. In Colombia, public transport buses are fitted with a facial recognition system by FaceFirst Inc. to identify passengers who are sought by the National Police of Colombia. FaceFirst Inc. also built the facial recognition system for Tocumen International Airport in Panama. The face recognition system is deployed to identify individuals among the travellers who are sought by the Panamanian National Police or Interpol. Tocumen International Airport operates an airport-wide surveillance system using hundreds of live face recognition cameras to identify wanted individuals passing through the airport. The face recognition system was initially installed as part of a US$11 million contract and included a computer cluster of sixty computers, a fiber-optic cable network for the airport buildings, as well as the installation of 150 surveillance cameras in the airport terminal and at about 30 airport gates. At the 2014 FIFA World Cup in Brazil the Federal Police of Brazil used face recognition goggles. Face recognition systems "made in China" were also deployed at the 2016 Summer Olympics in Rio de Janeiro. Nuctech Company provided 145 inspection terminals for Maracanã Stadium and 55 terminals for the Deodoro Olympic Park. ==== European Union ==== Police forces in at least 21 countries of the European Union use, or plan to use, facial recognition systems, either for administrative or criminal purposes. ===== Greece ===== Greek police signed a contract with Intracom-Telecom for the provision of at least 1,000 devices equipped with live facial recognition systems. Delivery was expected before the summer of 2021. The total value of the contract is over 4 million euros, paid for in large part by the Internal Security Fund of the European Commission. ===== Italy ===== Italian police acquired a face recognition system in 2017, Sistema Automatico Riconoscimento Immagini (SARI). In November 2020, the Interior ministry announced plans to use it in real time to identify people suspected of seeking asylum. ===== The Netherlands ===== The Netherlands has deployed facial recognition and artificial intelligence technology since 2016. The database of the Dutch police currently contains over 2.2 million pictures of 1.3 million Dutch citizens. This accounts for about 8% of the population. In the Netherlands, face recognition is not used by the police on municipal CCTV. ==== South Africa ==== In South Africa, in 2016, the city of Johannesburg announced it was rolling out smart CCTV cameras complete with automatic number plate recognition and facial recognition. === Deployment in retail stores === The US firm 3VR, now Identiv, is an example of a vendor which began offering facial recognition systems and services to retailers as early as 2007.
In 2012, the company advertised benefits such as "dwell and queue line analytics to decrease customer wait times", "facial surveillance analytic[s] to facilitate personalized customer greetings by employees" and the ability to "[c]reate loyalty programs by combining Point of sale (POS) data with facial recognition". ==== United States ==== In 2018, the National Retail Federation Loss Prevention Research Council called facial recognition technology "a promising new tool" worth evaluating. In July 2020, the Reuters news agency reported that during the 2010s the pharmacy chain Rite Aid had deployed facial recognition video surveillance systems and components from FaceFirst, DeepCam LLC, and other vendors at some retail locations in the United States. Cathy Langley, Rite Aid's vice president of asset protection, used the phrase "feature matching" to refer to the systems and said that usage of the systems resulted in less violence and organized crime in the company's stores, while former vice president of asset protection Bob Oberosler emphasized improved safety for staff and a reduced need for the involvement of law enforcement organizations. In a 2020 statement to Reuters in response to the reporting, Rite Aid said that it had ceased using the facial recognition software and switched off the cameras. According to director Read Hayes of the National Retail Federation Loss Prevention Research Council, Rite Aid's surveillance program was either the largest or one of the largest programs in retail. The Home Depot, Menards, Walmart, and 7-Eleven are among other US retailers also engaged in large-scale pilot programs or deployments of facial recognition technology. Of the Rite Aid stores examined by Reuters in 2020, those in communities where people of color made up the largest racial or ethnic group were three times as likely to have the technology installed, raising concerns related to the substantial history of racial segregation and racial profiling in the United States. Rite Aid said that the selection of locations was "data-driven", based on the theft histories of individual stores, local and national crime data, and site infrastructure. ==== Australia ==== In 2019, facial recognition to prevent theft was in use at Sydney's Star Casino and was also deployed at gaming venues in New Zealand. In June 2022, consumer group CHOICE reported facial recognition was in use in Australia at Kmart, Bunnings, and The Good Guys. The Good Guys subsequently suspended the technology pending a legal challenge by CHOICE to the Office of the Australian Information Commissioner, while Bunnings kept the technology in use and Kmart maintained its trial of the technology. === Additional uses === At the American football championship game Super Bowl XXXV in January 2001, police in Tampa Bay, Florida used Viisage face recognition software to search for potential criminals and terrorists in attendance at the event. Nineteen people with minor criminal records were potentially identified. Face recognition systems have also been used by photo management software to identify the subjects of photographs, enabling features such as searching images by person, as well as suggesting photos to be shared with a specific contact if their presence is detected in a photo. By 2008 facial recognition systems were typically used as access control in security systems. The American pop and country music celebrity Taylor Swift surreptitiously employed facial recognition technology at a concert in 2018.
The camera was embedded in a kiosk near a ticket booth and scanned concert-goers as they entered the facility for known stalkers. On August 18, 2019, The Times reported that the UAE-owned Manchester City hired a Texas-based firm, Blink Identity, to deploy facial recognition systems in a pilot program. The club planned a single super-fast lane for supporters at the Etihad Stadium. However, civil rights groups cautioned the club against the introduction of this technology, saying that it would risk "normalising a mass surveillance tool". The policy and campaigns officer at Liberty, Hannah Couchman, said that Man City's move was alarming, since fans would be obliged to share deeply sensitive personal information with a private company and could be tracked and monitored in their everyday lives. In 2019, casinos in Australia and New Zealand rolled out facial recognition to prevent theft, and a representative of Sydney's Star Casino said they would also provide 'customer service' like welcoming a patron back to a bar. In August 2020, amid the COVID-19 pandemic in the United States, American football stadiums in New York and Los Angeles announced the installation of facial recognition for upcoming games. The purpose is to make the entry process as touchless as possible. Disney's Magic Kingdom, near Orlando, Florida, likewise announced a test of facial recognition technology to create a touchless experience during the pandemic; the test was originally slated to take place between March 23 and April 23, 2021, but the limited timeframe had been removed as of late April 2021. Media companies have begun using face recognition technology to streamline the tracking, organizing, and archiving of pictures and videos. == Advantages and disadvantages == === Compared to other biometric systems === In 2006, the performance of the latest face recognition algorithms was evaluated in the Face Recognition Grand Challenge (FRGC). High-resolution face images, 3-D face scans, and iris images were used in the tests. The results indicated that the new algorithms are 10 times more accurate than the face recognition algorithms of 2002 and 100 times more accurate than those of 1995. Some of the algorithms were able to outperform human participants in recognizing faces and could uniquely identify identical twins. One key advantage of a facial recognition system is that it can perform mass identification, as it does not require the cooperation of the test subject to work. Properly designed systems installed in airports, multiplexes, and other public places can identify individuals among the crowd, without passers-by even being aware of the system. However, as compared to other biometric techniques, face recognition may not be the most reliable and efficient. Quality measures are very important in facial recognition systems, as large degrees of variation are possible in face images. Factors such as illumination, expression, pose and noise during face capture can affect the performance of facial recognition systems. Among all biometric systems, facial recognition has the highest false acceptance and rejection rates; thus, questions have been raised about the effectiveness and bias of face recognition software in cases of railway and airport security, law enforcement, and housing and employment decisions.
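The false acceptance and false rejection rates just mentioned trade off against each other through the decision threshold of a 1:1 verification system. The following Python sketch is a hedged illustration with synthetic score distributions (the means, spreads, and thresholds are arbitrary placeholders, not measurements of any real system):

import numpy as np

rng = np.random.default_rng(1)
genuine = rng.normal(0.75, 0.10, 10_000)    # similarity scores for same-person comparisons
impostor = rng.normal(0.30, 0.10, 10_000)   # similarity scores for different-person comparisons

for threshold in (0.4, 0.5, 0.6):
    far = float(np.mean(impostor >= threshold))  # false acceptance rate at this threshold
    frr = float(np.mean(genuine < threshold))    # false rejection rate at this threshold
    print(f"threshold={threshold:.1f} FAR={far:.4f} FRR={frr:.4f}")

Raising the threshold lowers the false acceptance rate but raises the false rejection rate, and vice versa; a system tuned to minimize one error rate pays for it in the other.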
=== Weaknesses === Ralph Gross, a researcher at the Carnegie Mellon Robotics Institute, described in 2008 one obstacle related to the viewing angle of the face: "Face recognition has been getting pretty good at full frontal faces and 20 degrees off, but as soon as you go towards profile, there've been problems." Besides pose variations, low-resolution face images are also very hard to recognize. This is one of the main obstacles of face recognition in surveillance systems. It has also been suggested that camera settings can favour sharper imagery of white skin than of other skin tones. Face recognition is less effective if facial expressions vary. A big smile can render the system less effective. For instance, Canada in 2009 allowed only neutral facial expressions in passport photos. There is also inconsistency in the datasets used by researchers. Researchers may use anywhere from several subjects to scores of subjects, and from a few hundred images to thousands of images. Data sets may be diverse and inclusive or mainly contain images of white males. It is important for researchers to make the datasets they used available to one another, or at least to have a standard or representative dataset. Although high degrees of accuracy have been claimed for some facial recognition systems, these outcomes are not universal. The consistently worst accuracy rate is for those who are 18 to 30 years old, Black and female. === Racial bias and skin tone === Studies have shown that facial recognition algorithms tend to perform better on individuals with lighter skin tones compared to those with darker skin tones. This disparity arises primarily because training datasets often overrepresent lighter-skinned individuals, leading to higher error rates for darker-skinned people. For example, a 2018 study found that leading commercial gender classification models, which are built on facial recognition techniques, have an error rate up to 7 times higher for those with darker skin tones compared to those with lighter skin tones. Common image compression methods, such as JPEG chroma subsampling, have been found to disproportionately degrade performance for darker-skinned individuals. These methods inadequately represent color information, which adversely affects the ability of algorithms to recognize darker-skinned individuals accurately. === Cross-race effect bias === Facial recognition systems often demonstrate lower accuracy when identifying individuals with non-Eurocentric facial features. Known as the cross-race effect, this bias occurs when systems perform better on racial or ethnic groups that are overrepresented in their training data, resulting in reduced accuracy for underrepresented groups. The overrepresented group is generally the more populous group in the location where the model is being developed. For example, models developed in Asian cultures generally perform better on Asian facial features than on Eurocentric facial features, due to overrepresentation in the developers' training datasets. The opposite is observed in models developed in Eurocentric cultures. The systems used for facial recognition often lack the training needed to fully recognize features that are not of Eurocentric descent. When the training data and databases for these machine learning (ML) models do not contain a diverse representation, the models fail to identify the missing population, adding to their racial biases; a simplified simulation of this effect is sketched after this paragraph.
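The representation effect described above can be demonstrated with a deliberately simplified simulation. The Python sketch below uses synthetic embeddings (the dimensions, noise levels, and group sizes are arbitrary assumptions, not properties of any real system): it enrolls two groups of identities in a nearest-template matcher, one group with many enrollment samples per identity and one with few, and compares identification accuracy.

import numpy as np

rng = np.random.default_rng(2)

def make_group(n_ids, n_train, dim=64, noise=1.5):
    ids = rng.normal(size=(n_ids, dim))                    # true identity vectors
    train = ids[:, None, :] + noise * rng.normal(size=(n_ids, n_train, dim))
    probes = ids + noise * rng.normal(size=(n_ids, dim))   # one probe per identity
    templates = train.mean(axis=1)                         # enrolled template = sample mean
    return templates, probes

def identification_accuracy(templates, probes):
    # nearest-template identification within the group
    d = ((probes[:, None, :] - templates[None, :, :]) ** 2).sum(axis=2)
    return float(np.mean(d.argmin(axis=1) == np.arange(len(probes))))

well_sampled = make_group(n_ids=200, n_train=50)   # richly represented group
undersampled = make_group(n_ids=200, n_train=2)    # poorly represented group
print("well sampled: ", identification_accuracy(*well_sampled))
print("undersampled: ", identification_accuracy(*undersampled))

Templates estimated from fewer samples are noisier, so the undersampled group typically shows lower identification accuracy, mirroring in miniature how underrepresentation in training data depresses accuracy for the affected group.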
The cross-race effect is not exclusive to machines; humans also experience difficulty recognizing faces from racial or ethnic groups different from their own. This is an example of inherent human biases being perpetuated in training datasets. === Challenges for individuals with disabilities === Facial recognition technologies encounter significant challenges when identifying individuals with disabilities. For instance, systems have been shown to perform worse when recognizing individuals with Down syndrome, often leading to increased false match rates. This is due to distinct facial structures associated with the condition that are not adequately represented in training datasets. More broadly, facial recognition systems tend to overlook diverse physical characteristics related to disabilities. The lack of representative data for individuals with varying disabilities further emphasizes the need for inclusive algorithmic designs to mitigate bias and improve accuracy. Additionally, facial expression recognition technologies often fail to accurately interpret the emotional states of individuals with intellectual disabilities. This shortcoming can hinder effective communication and interaction, underscoring the necessity for systems trained on diverse datasets that include individuals with intellectual disabilities. Furthermore, biases in facial recognition algorithms can lead to discriminatory outcomes for people with disabilities. For example, certain facial features or asymmetries may result in misidentification or exclusion, highlighting the importance of developing accessible and fair biometric systems. === Advancements in fairness and mitigation strategies === Efforts to address these biases include designing algorithms specifically for fairness. A notable study introduced a method to learn fair face representations by using a progressive cross-transformer model. This approach highlights the importance of balancing accuracy across demographic groups while avoiding performance drops in specific populations. Additionally, targeted dataset collection has been shown to improve racial equity in facial recognition systems. By prioritizing diverse data inputs, researchers demonstrated measurable reductions in performance disparities between racial groups. === Ineffectiveness === Critics of the technology complain that the London Borough of Newham scheme had, as of 2004, never recognized a single criminal, despite several criminals in the system's database living in the borough and the system having been running for several years. "Not once, as far as the police know, has Newham's automatic face recognition system spotted a live target." This information seems to conflict with claims that the system was credited with a 34% reduction in crime (which is why it was also rolled out to Birmingham). An experiment in 2002 by the local police department in Tampa, Florida, had similarly disappointing results. A system at Boston's Logan Airport was shut down in 2003 after failing to make any matches during a two-year test period. In 2014, Facebook stated that in a standardized two-option facial recognition test, its online system scored 97.25% accuracy, compared to the human benchmark of 97.5%. Systems are often advertised as having accuracy near 100%; this is misleading, as the outcomes are not universal. The studies often use samples that are smaller and less diverse than would be necessary for large-scale applications.
Because facial recognition is not completely accurate, it creates a list of potential matches. A human operator must then look through these potential matches, and studies show that operators pick the correct match out of the list only about half the time. This raises the risk of targeting the wrong suspect. == Controversies == === Privacy violations === Civil rights organizations and privacy campaigners such as the Electronic Frontier Foundation, Big Brother Watch and the ACLU express concern that privacy is being compromised by the use of surveillance technologies. Face recognition can be used not just to identify an individual, but also to unearth other personal data associated with an individual – such as other photos featuring the individual, blog posts, social media profiles, Internet behavior, and travel patterns. Concerns have been raised over who would have access to knowledge of one's whereabouts and of the people with them at any given time. Moreover, individuals have limited ability to avoid or thwart face recognition tracking unless they hide their faces. This fundamentally changes the dynamic of day-to-day privacy by enabling any marketer, government agency, or random stranger to secretly collect the identities and associated personal information of any individual captured by the face recognition system. Consumers may not understand or be aware of what their data is being used for, which denies them the ability to consent to how their personal information gets shared. In July 2015, the United States Government Accountability Office conducted a Report to the Ranking Member, Subcommittee on Privacy, Technology and the Law, Committee on the Judiciary, U.S. Senate. The report discussed facial recognition technology's commercial uses, privacy issues, and the applicable federal law. It noted that issues concerning facial recognition technology had been discussed before and pointed to the need to update the privacy laws of the United States so that federal law continually matches the impact of advanced technologies. The report noted that some industry, government, and private organizations were in the process of developing, or had developed, "voluntary privacy guidelines". These guidelines varied between the stakeholders, but their overall aim was to gain consent and inform citizens of the intended use of facial recognition technology. According to the report, the voluntary privacy guidelines helped to counteract the privacy concerns that arise when citizens are unaware of how their personal data gets put to use. In 2016, Russian company NtechLab caused a privacy scandal in the international media when it launched the FindFace face recognition system with the promise that Russian users could take photos of strangers in the street and link them to a social media profile on the social media platform Vkontakte (VK). In December 2017, Facebook rolled out a new feature that notifies a user when someone uploads a photo that includes what Facebook thinks is their face, even if they are not tagged. Facebook has attempted to frame the new functionality in a positive light, amidst prior backlashes. Facebook's head of privacy, Rob Sherman, addressed this new feature as one that gives people more control over their photos online. "We've thought about this as a really empowering feature," he says. "There may be photos that exist that you don't know about."
Facebook's DeepFace has become the subject of several class action lawsuits under the Biometric Information Privacy Act, with claims alleging that Facebook is collecting and storing face recognition data of its users without obtaining informed consent, in direct violation of the 2008 Biometric Information Privacy Act (BIPA). The most recent case was dismissed in January 2016 because the court lacked jurisdiction. In the US, surveillance companies such as Clearview AI are relying on the First Amendment to the United States Constitution to scrape user accounts on social media platforms for data that can be used in the development of facial recognition systems. In 2019, the Financial Times first reported that facial recognition software was in use in the King's Cross area of London. The development around London's King's Cross mainline station includes shops, offices, Google's UK HQ and part of St Martin's College. According to the UK Information Commissioner's Office: "Scanning people's faces as they lawfully go about their daily lives, in order to identify them, is a potential threat to privacy that should concern us all." The UK Information Commissioner Elizabeth Denham launched an investigation into the use of the King's Cross facial recognition system, operated by the company Argent. In September 2019 it was announced by Argent that facial recognition software would no longer be used at King's Cross. Argent claimed that the software had been deployed between May 2016 and March 2018 on two cameras covering a pedestrian street running through the centre of the development. In October 2019, a report by the deputy London mayor Sophie Linden revealed that in a secret deal the Metropolitan Police had passed photos of seven people to Argent for use in their King's Cross facial recognition system. Automated Facial Recognition was trialled by the South Wales Police on multiple occasions between 2017 and 2019. The use of the technology was challenged in court by a private individual, Edward Bridges, with support from the charity Liberty (case known as R (Bridges) v Chief Constable South Wales Police). The case was heard in the Court of Appeal and a judgement was given in August 2020. The case argued that the use of Facial Recognition was a privacy violation on the basis that there was insufficient legal framework or proportionality in the use of Facial Recognition, and that its use was in violation of the Data Protection Acts 1998 and 2018. The case was decided in favour of Bridges, though no damages were awarded; it was resolved via a declaration of wrongdoing. In response to the case, the British Government has repeatedly attempted to pass a Bill regulating the use of Facial Recognition in public spaces. The proposed Bills have attempted to appoint a Commissioner with the ability to regulate Facial Recognition use by Government Services in a similar manner to the Commissioner for CCTV. Such a Bill had yet to come into force as of September 2021. In January 2023, New York Attorney General Letitia James asked for more information on the use of facial recognition technology from Madison Square Garden Entertainment, following reports that the firm used it to block lawyers involved in litigation against the company from entering Madison Square Garden. She noted such a move could go against federal, state, and local human rights laws.
=== Imperfect technology in law enforcement === As of 2018, it was still contested whether facial recognition technology works less accurately on people of color. One study by Joy Buolamwini (MIT Media Lab) and Timnit Gebru (Microsoft Research) found that the error rate for gender recognition for women of color within three commercial facial recognition systems ranged from 23.8% to 36%, whereas for lighter-skinned men it was between 0.0 and 1.6%. Overall accuracy rates for identifying men (91.9%) were higher than for women (79.4%), and none of the systems accommodated a non-binary understanding of gender. It also showed that the datasets used to train commercial facial recognition models were unrepresentative of the broader population and skewed toward lighter-skinned males. However, another study showed that several commercial facial recognition systems sold to law enforcement offices around the country had a lower false non-match rate for black people than for white people. Experts fear that face recognition systems may actually be hurting the citizens the police claim they are trying to protect. It is considered an imperfect biometric, and Georgetown University researcher Clare Garvie concluded in a study that "there's no consensus in the scientific community that it provides a positive identification of somebody." Given such large margins of error in this technology, both legal advocates and facial recognition software companies say that the technology should supply only a portion of a case, not evidence that can lead to the arrest of an individual. The lack of regulations requiring facial recognition technology companies to test for racial bias can be a significant flaw in its adoption for law enforcement. CyberExtruder, a company that markets itself to law enforcement, said that it had not performed testing or research on bias in its software. CyberExtruder did note that some skin colors are more difficult for the software to recognize with current limitations of the technology. "Just as individuals with very dark skin are hard to identify with high significance via facial recognition, individuals with very pale skin are the same," said Blake Senftner, a senior software engineer at CyberExtruder. The United States' National Institute of Standards and Technology (NIST) carried out extensive testing of FRT systems for 1:1 verification and 1:many identification. It also tested for the differing accuracy of FRT across demographic groups. The independent study concluded that, at present, no FRT system has 100% accuracy. === Data protection === In 2010, Peru passed the Law for Personal Data Protection, which defines biometric information that can be used to identify an individual as sensitive data. In 2012, Colombia passed a comprehensive Data Protection Law which defines biometric data as sensitive information. According to Article 9(1) of the EU's 2016 General Data Protection Regulation (GDPR), the processing of biometric data for the purpose of "uniquely identifying a natural person" is sensitive, and facial recognition data processed in this way becomes sensitive personal data. In response to the GDPR passing into the law of EU member states, EU-based researchers voiced concern that if they were required under the GDPR to obtain individuals' consent for the processing of their facial recognition data, a face database on the scale of MegaFace could never be established again.
In September 2019 the Swedish Data Protection Authority (DPA) issued its first ever financial penalty for a violation of the EU's General Data Protection Regulation (GDPR) against a school that was using the technology to replace time-consuming roll calls during class. The DPA found that the school illegally obtained the biometric data of its students without completing an impact assessment. In addition the school did not make the DPA aware of the pilot scheme. A 200,000 SEK fine (€19,000/$21,000) was issued. In the United States, several states have passed laws to protect the privacy of biometric data. Examples include the Illinois Biometric Information Privacy Act (BIPA) and the California Consumer Privacy Act (CCPA). In March 2020 California residents filed a class action against Clearview AI, alleging that the company had illegally collected biometric data online and, with the help of face recognition technology, built up a database of biometric data which was sold to companies and police forces. At the time Clearview AI already faced two lawsuits under BIPA and an investigation by the Privacy Commissioner of Canada for compliance with the Personal Information Protection and Electronic Documents Act (PIPEDA). == Bans on the use of facial recognition technology == === United States of America === In May 2019, San Francisco, California became the first major United States city to ban the use of facial recognition software by police and other local government agencies. San Francisco Supervisor Aaron Peskin introduced regulations that require agencies to gain approval from the San Francisco Board of Supervisors to purchase surveillance technology. The regulations also require that agencies publicly disclose the intended use for new surveillance technology. In June 2019, Somerville, Massachusetts became the first city on the East Coast to ban face surveillance software for government use, specifically in police investigations and municipal surveillance. In July 2019, Oakland, California banned the use of facial recognition technology by city departments. The American Civil Liberties Union ("ACLU") has campaigned across the United States for transparency in surveillance technology and has supported both San Francisco's and Somerville's bans on facial recognition software, working to challenge the secrecy surrounding the use of this surveillance technology. During the George Floyd protests, use of facial recognition by city government was banned in Boston, Massachusetts. As of June 10, 2020, municipal use had been banned in Berkeley, California; Oakland, California; Boston, Massachusetts (June 30, 2020); Brookline, Massachusetts; Cambridge, Massachusetts; Northampton, Massachusetts; Springfield, Massachusetts; Somerville, Massachusetts; and Portland, Oregon (September 2020). The West Lafayette, Indiana City Council passed an ordinance banning facial recognition surveillance technology. On October 27, 2020, 22 human rights groups called upon the University of Miami to ban facial recognition technology. This came after students accused the school of using the software to identify student protesters. The allegations were, however, denied by the university. A state police reform law in Massachusetts took effect in July 2021; a ban passed by the legislature was rejected by Governor Charlie Baker.
Instead, the law requires a judicial warrant, limits the personnel who can perform the search, records data about how the technology is used, and creates a commission to make recommendations about future regulations. Reports in 2024 revealed that some police departments, including the San Francisco Police Department, had skirted bans on facial recognition technology that had been enacted in their respective cities. === European Union === In January 2020, the European Union suggested, but then quickly scrapped, a proposed moratorium on facial recognition in public spaces. The European "Reclaim Your Face" coalition launched in October 2020. The coalition calls for a ban on facial recognition and launched a European Citizens' Initiative in February 2021. More than 60 organizations call on the European Commission to strictly regulate the use of biometric surveillance technologies. == Emotion recognition == In the 18th and 19th centuries, the belief that facial expressions revealed the moral worth or true inner state of a human was widespread, and physiognomy was a respected science in the Western world. From the early 19th century onwards, photography was used in the physiognomic analysis of facial features and facial expression to detect insanity and dementia. In the 1960s and 1970s the study of human emotions and their expression was reinvented by psychologists, who tried to define a normal range of emotional responses to events. The research on automated emotion recognition has since the 1970s focused on facial expressions and speech, which are regarded as the two most important ways in which humans communicate emotions to other humans. In the 1970s the Facial Action Coding System (FACS) categorization for the physical expression of emotions was established. Its developer Paul Ekman maintains that there are six emotions that are universal to all human beings and that these can be coded in facial expressions. Research into automatic emotion-specific expression recognition has in past decades focused on frontal-view images of human faces. Facial thermography is considered a promising tool for emotion recognition. In 2016, facial feature emotion recognition algorithms were among the new technologies, alongside high-definition CCTV, high-resolution 3D face recognition and iris recognition, that found their way out of university research labs. In 2016, Facebook acquired FacioMetrics, a facial feature emotion recognition corporate spin-off of Carnegie Mellon University. In the same year Apple Inc. acquired the facial feature emotion recognition start-up Emotient. By the end of 2016, commercial vendors of facial recognition systems offered to integrate and deploy emotion recognition algorithms for facial features. The MIT Media Lab spin-off Affectiva offered, by late 2019, a facial expression emotion detection product that can recognize the emotions of humans while they drive. == Anti-facial recognition systems == The development of anti-facial recognition technology is effectively an arms race between privacy researchers and big data companies. Big data companies increasingly use convolutional AI technology to create ever more advanced facial recognition models. Solutions to block facial recognition may not work on newer software, or on different types of facial recognition models.
One popularly cited example of facial-recognition blocking is the CVDazzle makeup and haircut system, but its creators note on their website that it has been outdated for quite some time, as it was designed to combat a particular facial recognition algorithm and may no longer work. Another example is the emergence of facial recognition that can identify people wearing facemasks and sunglasses, especially after the COVID-19 pandemic. Given that big data companies have much more funding than privacy researchers, it is very difficult for anti-facial recognition systems to keep up. There is also no guarantee that obfuscation techniques that were used for images taken in the past and stored, such as masks or software obfuscation, would protect users from facial-recognition analysis of those images by future technology. In January 2013, Japanese researchers from the National Institute of Informatics created 'privacy visor' glasses that use near-infrared light to make the face underneath them unrecognizable to face recognition software that uses infrared. The latest version uses a titanium frame, light-reflective material and a mask which uses angles and patterns to disrupt facial recognition technology through both absorbing and bouncing back light sources. However, these methods are designed to prevent infrared facial recognition and would not work on AI facial recognition of plain images. Some projects use adversarial machine learning to come up with new printed patterns that confuse existing face recognition software. One method that may protect against facial recognition systems involves specific haircuts and make-up patterns that prevent the algorithms used from detecting a face, known as computer vision dazzle. Incidentally, the makeup styles popular with Juggalos may also protect against facial recognition. Facial masks that are worn to protect from contagious viruses can reduce the accuracy of facial recognition systems. A 2020 NIST study tested popular one-to-one matching systems and found a failure rate between five and fifty percent on masked individuals. The Verge speculated that mass surveillance systems, which were not included in the study, would be even less accurate than one-to-one matching systems. The facial recognition of Apple Pay can work through many barriers, including heavy makeup, thick beards and even sunglasses, but fails with masks. However, facial recognition of masked faces is becoming increasingly reliable. Another solution is the application of obfuscation to images that may fool facial recognition systems while still appearing normal to a human user. These could be used when images are posted online or on social media. However, as it is hard to remove images once they are on the internet, the obfuscation on these images may be defeated and the face of the user identified by future advances in technology. Two examples of this technique, developed in 2020, are the ANU's 'Camera Adversaria' camera app and the University of Chicago's Fawkes image cloaking software algorithm, which applies obfuscation to already taken photos. However, by 2021 the Fawkes obfuscation algorithm had already been specifically targeted by Microsoft Azure, which changed its algorithm to lower Fawkes' effectiveness.
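The adversarial-pattern and image-cloaking approaches described above share one core idea: add a small, carefully chosen perturbation that shifts a model's decision while remaining inconspicuous. The following Python sketch shows a fast-gradient-sign-style perturbation against a toy linear scorer standing in for a face matcher; it is purely illustrative (real cloaking tools such as Fawkes work against deep networks and are far more sophisticated):

import numpy as np

rng = np.random.default_rng(3)
w = rng.normal(size=256)              # toy linear "matcher": match score = w . x
x = rng.normal(size=256)              # stand-in for a face image's feature vector

epsilon = 0.05                        # per-feature perturbation budget
x_cloaked = x - epsilon * np.sign(w)  # gradient of (w . x) w.r.t. x is w; step against it

print("original score:", float(w @ x))
print("cloaked score: ", float(w @ x_cloaked))
print("max per-feature change:", float(np.max(np.abs(x_cloaked - x))))  # equals epsilon

Even though each feature moves by at most epsilon, the score drops by epsilon times the sum of the absolute weights, which is why small, structured perturbations can defeat a model that random noise of the same magnitude would barely affect.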
== See also == Lists List of computer vision topics List of emerging technologies Outline of artificial intelligence == References == == Further reading == Farokhi, Sajad; Shamsuddin, Siti Mariyam; Flusser, Jan; Sheikh, U.U; Khansari, Mohammad; Jafari-Khouzani, Kourosh (2014). "Near infrared face recognition by combining Zernike moments and undecimated discrete wavelet transform". Digital Signal Processing. 31 (1): 13–27. Bibcode:2014DSP....31...13F. doi:10.1016/j.dsp.2014.04.008. "The Face Detection Algorithm Set to Revolutionize Image Search" (Feb. 2015), MIT Technology Review. Garvie, Clare; Bedoya, Alvaro; Frankle, Jonathan (October 18, 2016). Perpetual Line Up: Unregulated Police Face Recognition in America. Center on Privacy & Technology at Georgetown Law. Retrieved October 22, 2016. "Facial Recognition Software 'Sounds Like Science Fiction,' but May Affect Half of Americans". As It Happens. Canadian Broadcasting Corporation. October 20, 2016. Retrieved October 22, 2016. Interview with Alvaro Bedoya, executive director of the Center on Privacy & Technology at Georgetown Law and co-author of Perpetual Line Up: Unregulated Police Face Recognition in America. Press, Eyal, "In Front of Their Faces: Does facial-recognition technology lead police to ignore contradictory evidence?", The New Yorker, 20 November 2023, pp. 20–26. == External links == Media related to Facial recognition system at Wikimedia Commons. A Photometric Stereo Approach to Face Recognition (master's thesis). The University of the West of England, Bristol.
Wikipedia/Facial_recognition_systems
In control theory, a proper transfer function is a transfer function in which the degree of the numerator does not exceed the degree of the denominator. A strictly proper transfer function is a transfer function where the degree of the numerator is less than the degree of the denominator. The difference between the degree of the denominator (number of poles) and the degree of the numerator (number of zeros) is the relative degree of the transfer function. == Example == The following transfer function: G ( s ) = N ( s ) D ( s ) = s 4 + n 1 s 3 + n 2 s 2 + n 3 s + n 4 s 4 + d 1 s 3 + d 2 s 2 + d 3 s + d 4 {\displaystyle {\textbf {G}}(s)={\frac {{\textbf {N}}(s)}{{\textbf {D}}(s)}}={\frac {s^{4}+n_{1}s^{3}+n_{2}s^{2}+n_{3}s+n_{4}}{s^{4}+d_{1}s^{3}+d_{2}s^{2}+d_{3}s+d_{4}}}} is proper, because deg ⁡ ( N ( s ) ) = 4 ≤ deg ⁡ ( D ( s ) ) = 4 {\displaystyle \deg({\textbf {N}}(s))=4\leq \deg({\textbf {D}}(s))=4} . It is biproper, because deg ⁡ ( N ( s ) ) = 4 = deg ⁡ ( D ( s ) ) = 4 {\displaystyle \deg({\textbf {N}}(s))=4=\deg({\textbf {D}}(s))=4} . But it is not strictly proper, because deg ⁡ ( N ( s ) ) = 4 ≮ deg ⁡ ( D ( s ) ) = 4 {\displaystyle \deg({\textbf {N}}(s))=4\nless \deg({\textbf {D}}(s))=4} . The following transfer function is not proper (or strictly proper) G ( s ) = N ( s ) D ( s ) = s 4 + n 1 s 3 + n 2 s 2 + n 3 s + n 4 d 1 s 3 + d 2 s 2 + d 3 s + d 4 {\displaystyle {\textbf {G}}(s)={\frac {{\textbf {N}}(s)}{{\textbf {D}}(s)}}={\frac {s^{4}+n_{1}s^{3}+n_{2}s^{2}+n_{3}s+n_{4}}{d_{1}s^{3}+d_{2}s^{2}+d_{3}s+d_{4}}}} because deg ⁡ ( N ( s ) ) = 4 ≰ deg ⁡ ( D ( s ) ) = 3 {\displaystyle \deg({\textbf {N}}(s))=4\nleq \deg({\textbf {D}}(s))=3} . A transfer function that is not proper can be rewritten, using polynomial long division, as the sum of a polynomial and a strictly proper transfer function. The following transfer function is strictly proper G ( s ) = N ( s ) D ( s ) = n 1 s 3 + n 2 s 2 + n 3 s + n 4 s 4 + d 1 s 3 + d 2 s 2 + d 3 s + d 4 {\displaystyle {\textbf {G}}(s)={\frac {{\textbf {N}}(s)}{{\textbf {D}}(s)}}={\frac {n_{1}s^{3}+n_{2}s^{2}+n_{3}s+n_{4}}{s^{4}+d_{1}s^{3}+d_{2}s^{2}+d_{3}s+d_{4}}}} because deg ⁡ ( N ( s ) ) = 3 < deg ⁡ ( D ( s ) ) = 4 {\displaystyle \deg({\textbf {N}}(s))=3<\deg({\textbf {D}}(s))=4} . == Implications == A proper transfer function will never grow unbounded as the frequency approaches infinity: | G ( ± j ∞ ) | < ∞ {\displaystyle |{\textbf {G}}(\pm j\infty )|<\infty } A strictly proper transfer function will approach zero as the frequency approaches infinity (which is true for all physical processes): G ( ± j ∞ ) = 0 {\displaystyle {\textbf {G}}(\pm j\infty )=0} Also, the integral of the real part of a strictly proper transfer function is zero. == References == Transfer functions - ECE 486: Control Systems Spring 2015, University of Illinois. ELEC ENG 4CL4: Control System Design Notes for Lecture #9, 2004, Dr. Ian C. Bruce, McMaster University.
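Since properness is a pure degree comparison, it is easy to check programmatically. A minimal Python sketch (the coefficient values are placeholders; it assumes coefficient lists are given highest power first, with no leading zero coefficients):

def relative_degree(num, den):
    # relative degree = deg(D) - deg(N)
    return (len(den) - 1) - (len(num) - 1)

def is_proper(num, den):
    return relative_degree(num, den) >= 0

def is_strictly_proper(num, den):
    return relative_degree(num, den) > 0

def is_biproper(num, den):
    return relative_degree(num, den) == 0

# G(s) with deg(N) = deg(D) = 4, as in the first example above
num = [1, 1, 2, 3, 4]   # s^4 + n1 s^3 + ... (placeholder coefficients)
den = [1, 5, 6, 7, 8]   # s^4 + d1 s^3 + ... (placeholder coefficients)
print(is_proper(num, den), is_biproper(num, den), is_strictly_proper(num, den))
# prints: True True False

Applied to the improper example (deg(N) = 4, deg(D) = 3), all three predicates return False and the relative degree is negative.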
Wikipedia/Proper_transfer_function
In the IEEE 802 reference model of computer networking, the logical link control (LLC) data communication protocol layer is the upper sublayer of the data link layer (layer 2) of the seven-layer OSI model. The LLC sublayer acts as an interface between the medium access control (MAC) sublayer and the network layer. The LLC sublayer provides multiplexing mechanisms that make it possible for several network protocols (e.g. IP, IPX and DECnet) to coexist within a multipoint network and to be transported over the same network medium. It can also provide flow control and automatic repeat request (ARQ) error management mechanisms. == Operation == The LLC sublayer is primarily concerned with multiplexing protocols transmitted over the MAC layer (when transmitting) and demultiplexing them (when receiving). It can also provide node-to-node flow control and error management. The flow control and error management capabilities of the LLC sublayer are used by protocols such as the NetBIOS Frames protocol. However, most protocol stacks running atop 802.2 do not use LLC sublayer flow control and error management. In these cases flow control and error management are taken care of by a transport layer protocol such as TCP or by some application layer protocol. These higher layer protocols work in an end-to-end fashion, i.e. re-transmission is done from the original source to the final destination, rather than on individual physical segments. For these protocol stacks only the multiplexing capabilities of the LLC sublayer are used. == Application examples == === X.25 and LAPB === An LLC sublayer was a key component in early packet switching networks such as X.25 networks with the LAPB data link layer protocol, where flow control and error management were carried out in a node-to-node fashion, meaning that if an error was detected in a frame, the frame was retransmitted from one switch to the next. This extensive handshaking between the nodes made the networks slow. === Local area network === The IEEE 802.2 standard specifies the LLC sublayer for all IEEE 802 local area networks, such as IEEE 802.3/Ethernet (when the Ethernet II frame format is not used), IEEE 802.5, and IEEE 802.11. IEEE 802.2 is also used in some non-IEEE 802 networks such as FDDI. ==== Ethernet ==== Since bit errors are very rare in wired networks, Ethernet does not provide flow control or automatic repeat request (ARQ), meaning that incorrect packets are detected but simply discarded, not retransmitted (except in the case of collisions detected by the CSMA/CD MAC layer protocol). Instead, retransmissions rely on higher-layer protocols. As the EtherType in an Ethernet frame using Ethernet II framing is used to multiplex different protocols on top of the Ethernet MAC header, it can be seen as an LLC identifier. However, Ethernet frames lacking an EtherType have no LLC identifier in the Ethernet header and instead use an IEEE 802.2 LLC header after the Ethernet header to provide the protocol multiplexing function. ==== Wireless LAN ==== In wireless communications, bit errors are very common. In wireless networks such as IEEE 802.11, flow control and error management is part of the CSMA/CA MAC protocol, and not part of the LLC layer. The LLC sublayer follows the IEEE 802.2 standard. === HDLC === Some non-IEEE 802 protocols can be thought of as being split into MAC and LLC layers.
For example, while HDLC specifies both MAC functions (framing of packets) and LLC functions (protocol multiplexing, flow control, detection, and error control through retransmission of dropped packets when indicated), some protocols such as Cisco HDLC can use HDLC-like packet framing and their own LLC protocol. === PPP and modems === Over telephone network modems, the PPP link layer protocol can be considered an LLC protocol: it provides multiplexing, but it does not provide flow control or error management. In a telephone network, bit errors might be common, meaning that error management is crucial, but today this is provided by modern modem protocols. Today's modem protocols have inherited LLC features from the older LAPM link layer protocol, made for modem communication in old X.25 networks. === Cellular systems === The GPRS LLC layer also does ciphering and deciphering of SN-PDU (SNDCP) packets. === Power lines === Another example of a data link layer which is split between LLC (for flow and error control) and MAC (for multiple access) is the ITU-T G.hn standard, which provides high-speed local area networking over existing home wiring (power lines, phone lines and coaxial cables). == See also == Subnetwork Access Protocol (SNAP) Virtual Circuit Multiplexing (VC-MUX) == References ==
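To illustrate the multiplexing contrast drawn above between Ethernet II framing (an EtherType after the MAC addresses) and 802.3 framing with an IEEE 802.2 LLC header, here is a minimal Python sketch. It assumes the standard 14-byte MAC header followed, in the 802.3 case, by the 3-byte LLC header (DSAP, SSAP, control); the example frame is fabricated for demonstration:

```python
import struct

def parse_frame(frame: bytes):
    """Distinguish Ethernet II (EtherType) from 802.3/802.2 (length + LLC).
    Values >= 0x0600 in the third MAC-header field are EtherTypes by convention."""
    dst, src, type_or_len = struct.unpack("!6s6sH", frame[:14])
    if type_or_len >= 0x0600:
        return {"framing": "Ethernet II", "ethertype": hex(type_or_len)}
    # 802.3 framing: an IEEE 802.2 LLC header follows the MAC header.
    dsap, ssap, control = struct.unpack("!BBB", frame[14:17])
    return {"framing": "802.3 + LLC", "length": type_or_len,
            "dsap": hex(dsap), "ssap": hex(ssap), "control": hex(control)}

# A fabricated 802.3 frame carrying spanning tree (LLC DSAP/SSAP 0x42, UI control 0x03).
frame = bytes(6) + bytes(6) + struct.pack("!H", 38) + bytes([0x42, 0x42, 0x03]) + bytes(35)
print(parse_frame(frame))
```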
Wikipedia/Logical_Link_Control
The Radio Resource Control (RRC) protocol is used in UMTS, LTE and 5G on the air interface. It is a layer 3 (network layer) protocol used between the UE and the base station. This protocol is specified by 3GPP in TS 25.331 for UMTS, in TS 36.331 for LTE and in TS 38.331 for 5G New Radio. RRC messages are transported via the PDCP protocol. The major functions of the RRC protocol include connection establishment and release functions, broadcast of system information, radio bearer establishment, reconfiguration and release, RRC connection mobility procedures, paging notification and release, and outer loop power control. By means of the signalling functions, the RRC configures the user and control planes according to the network status and allows for Radio Resource Management strategies to be implemented. The operation of the RRC is guided by a state machine which defines the specific states that a UE may be in. Each state has a different amount of radio resources associated with it, and these are the resources that the UE may use while it is in that state. Since different amounts of resources are available in different states, the state machine influences both the quality of service that the user experiences and the energy consumption of the UE. == RRC inactivity timers == The configuration of RRC inactivity timers in a W-CDMA network has considerable impact on the battery life of a phone when a packet data connection is open. The RRC idle mode (no connection) has the lowest energy consumption. The states in the RRC connected mode, in order of decreasing power consumption, are CELL_DCH (Dedicated Channel), CELL_FACH (Forward Access Channel), CELL_PCH (Cell Paging Channel) and URA_PCH (URA Paging Channel). The power consumption in CELL_FACH is roughly 50 percent of that in CELL_DCH, and the PCH states use about 1–2 percent of the power consumption of the CELL_DCH state. The transitions to lower-energy states occur when inactivity timers expire. The T1 timer controls the transition from DCH to FACH, the T2 timer controls the transition from FACH to PCH, and the T3 timer controls the transition from PCH to idle (this ladder of transitions is illustrated in the sketch below). Different operators have different configurations for the inactivity timers, which leads to differences in energy consumption. Another factor is that not all operators use the PCH states. == See also == Radio Resource Management Mobility management Radio Network Controller UMTS WCDMA == References ==
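The timer-driven ladder described above can be made concrete with a small Python sketch. The timer values here are made-up examples (real values are operator-specific); the state names and the roughly 50 percent / 1–2 percent relative power figures come from the text above:

```python
# Relative power cost per state (CELL_FACH ~ 50% of CELL_DCH,
# PCH states ~ 1-2% of CELL_DCH, per the figures quoted above).
POWER = {"CELL_DCH": 1.0, "CELL_FACH": 0.5, "CELL_PCH": 0.02, "IDLE": 0.0}
# (timer, next state) for each connected-mode state.
NEXT = {"CELL_DCH": ("T1", "CELL_FACH"),
        "CELL_FACH": ("T2", "CELL_PCH"),
        "CELL_PCH": ("T3", "IDLE")}
TIMERS = {"T1": 5.0, "T2": 12.0, "T3": 30.0}  # seconds, illustrative only

def states_during_idle(idle_seconds, state="CELL_DCH"):
    """Yield the states a UE passes through while no data is exchanged."""
    elapsed = 0.0
    yield state, elapsed
    while state in NEXT:
        timer, nxt = NEXT[state]
        elapsed += TIMERS[timer]
        if elapsed > idle_seconds:
            return  # activity resumes before this timer expires
        state = nxt
        yield state, elapsed

for state, t in states_during_idle(idle_seconds=20.0):
    print(f"after {t:4.1f}s of inactivity: {state} (relative power {POWER[state]:.2f})")
```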
Wikipedia/Radio_Resource_Control
The Signalling Connection Control Part (SCCP) is a network layer protocol that provides extended routing, flow control, segmentation, connection-orientation, and error correction facilities in Signaling System 7 telecommunications networks. SCCP relies on the services of MTP for basic routing and error detection. == Published specification == The base SCCP specification is defined by the ITU-T, in recommendations Q.711 to Q.714, with additional information for implementors provided by Q.715 and Q.716. There are, however, regional variations defined by local standards bodies. In the United States, ANSI publishes its modifications to Q.713 as ANSI T1.112. In Japan, the TTC publishes JT-Q.711 to JT-Q.714, and in Europe, ETSI publishes ETSI EN 300-009-1; both document their modifications to the ITU-T specifications. == Routing facilities beyond MTP == Although MTP provides routing capabilities based on the Point Code, SCCP allows routing using a Point Code and Subsystem number or a Global Title. A Point Code is used to address a particular node on the network, whereas a Subsystem number addresses a specific application available on that node. SCCP employs a process called Global Title Translation to determine Point Codes from Global Titles so as to instruct MTP on where to route messages. SCCP messages contain parameters which describe the type of addressing used and how the message should be routed. These are carried in the Address Indicator, whose fields include: a routing indicator (route on Global Title, or route on Point Code/Subsystem Number); a global title indicator (no Global Title; Global Title includes Translation Type (TT), Numbering Plan Indicator (NPI) and Type of Number (TON); or Global Title includes Translation Type only); a subsystem indicator (Subsystem Number present or not present); a point code indicator (Point Code present or not present); and, for Global Titles, the Address Indicator coding (the Address Indicator may be coded as national, and is treated as international if not specified). == Protocol classes == SCCP provides four classes of protocol for its applications: Class 0: Basic connectionless. Class 1: Sequenced connectionless. Class 2: Basic connection-oriented. Class 3: Flow control connection-oriented. The connectionless protocol classes provide the capabilities needed to transfer one Network Service Data Unit (NSDU) in the "data" field of an XUDT, LUDT or UDT message. When one connectionless message is not sufficient to convey the user data contained in one NSDU, a segmenting/reassembly function for protocol classes 0 and 1 is provided. In this case, the SCCP at the originating node or in a relay node provides segmentation of the information into multiple segments prior to transfer in the "data" field of XUDT (or as a network option LUDT) messages. At the destination node, the NSDU is reassembled. The connection-oriented protocol classes (protocol classes 2 and 3) provide the means to set up signalling connections in order to exchange a number of related NSDUs. The connection-oriented protocol classes also provide a segmenting and reassembling capability. If an NSDU is longer than 255 octets, it is split into multiple segments at the originating node, prior to transfer in the "data" field of DT messages. Each segment is less than or equal to 255 octets. At the destination node, the NSDU is reassembled (see the sketch below). === Class 0: Basic connectionless === The SCCP Class 0 protocol class is the most basic of the SCCP protocol classes.
Network Service Data Units passed by higher layers in the originating node are delivered by the SCCP to higher layers in the destination node. They are transferred independently of each other. Therefore, they may be delivered to the SCCP user out-of-sequence. Thus, this protocol class corresponds to a pure connectionless network service. As a connectionless protocol, no network connection is established between the sender and the receiver. === Class 1: Sequenced connectionless === SCCP Class 1 builds on the capabilities of Class 0, with the addition of a sequence control parameter in the NSDU which allows the SCCP User to instruct the SCCP that a given stream of messages should be delivered in sequence. Therefore, Protocol Class 1 corresponds to an enhanced connectionless protocol with assurances of in-sequence delivery. === Class 2: Basic connection-oriented === SCCP Class 2 provides the facilities of Class 1, but also allows for an entity to establish a two-way dialog with another entity using SCCP. === Class 3: Flow control connection oriented === Class 3 service builds upon Class 2, but also allows for expedited (urgent) messages to be sent and received, and for errors in sequencing (segment re-assembly) to be detected and for SCCP to restart a connection should this occur. == Transport over IP networks == In the SIGTRAN suite of protocols, there are two primary methods of transporting SCCP applications across Internet Protocol networks: SCCP can be transported indirectly using the MTP level 3 User Adaptation protocol (M3UA), a protocol which provides support for users of MTP-3—including SCCP. Alternatively, SCCP applications can operate directly over the SCCP User Adaptation protocol (SUA) which is a form of modified SCCP designed specifically for use in IP networking. ITU-T also provides for the transport of SCCP users over Internet Protocol using the Generic Signalling Transport service specified in Q.2150.0, the signalling transport converter for SCTP specified in Q.2150.3 and a specialized Transport-Independent Signalling Connection Control Part (TI-SCCP) specified in T-REC-Q.2220. TI-SCCP can also be used with the Generic Signalling Transport adapted for MTP3 and MTP3b as described in Q.2150.1, or adapted for SSCOP or SSCOPMCE as described in Q.2150.2. == References ==
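As an illustration of the 255-octet segmentation rule described in the protocol-classes section above, here is a minimal Python sketch; the function names are illustrative rather than from any SS7 library, and message encoding and sequence numbering are omitted:

```python
def segment_nsdu(nsdu: bytes, max_seg: int = 255):
    """Split an NSDU into segments of at most max_seg octets, as the
    connection-oriented classes do before sending DT messages."""
    return [nsdu[i:i + max_seg] for i in range(0, len(nsdu), max_seg)]

def reassemble(segments):
    """The destination node concatenates the segments back into one NSDU."""
    return b"".join(segments)

nsdu = bytes(700)  # a 700-octet NSDU
segs = segment_nsdu(nsdu)
assert [len(s) for s in segs] == [255, 255, 190]
assert reassemble(segs) == nsdu
```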
Wikipedia/Signalling_Connection_Control_Part
The Mobile Application Part (MAP) is an SS7 protocol that provides an application layer for the various nodes in GSM and UMTS mobile core networks and GPRS core networks to communicate with each other in order to provide services to users. The Mobile Application Part is the application-layer protocol used to access the Home Location Register, Visitor Location Register, Mobile Switching Center, Equipment Identity Register, Authentication Centre, Short message service center and Serving GPRS Support Node (SGSN). == Facilities provided == The primary facilities provided by MAP are: Mobility Services: location management (to support roaming), authentication, managing service subscription information, fault recovery; Operation and Maintenance: subscriber tracing, retrieving a subscriber's IMSI; Call Handling: routing, managing calls whilst roaming, checking that a subscriber is available to receive calls; Supplementary Services; Short Message Service; Packet Data Protocol (PDP) services for GPRS: providing routing information for GPRS connections; Location Service Management Services: obtaining the location of a subscriber. == Published specification == The Mobile Application Part specifications were originally defined by the GSM Association, but are now controlled by ETSI/3GPP. MAP is defined by two different standards, depending upon the mobile network type: MAP for GSM (prior to Release 4) is specified by 3GPP TS 09.02 (MAP v1, MAP v2); MAP for UMTS ("3G") and GSM (Release 99 and later) is specified by 3GPP TS 29.002 (MAP v3). In cellular networks based on ANSI standards (currently CDMA2000; in the past AMPS, IS-136 and cdmaOne), the role of MAP is played by a similar protocol usually called IS-41 or ANSI-41 (ANSI MAP). Since 2000 it has been maintained by 3GPP2 as N.S0005, and since 2004 it has been named 3GPP2 X.S0004. == Implementation == MAP is a Transaction Capabilities Application Part (TCAP) user, and as such can be transported using 'traditional' SS7 protocols, or over IP using Transport Independent Signalling Connection Control Part (TI-SCCP) or SIGTRAN. Yate is a partial open source implementation of MAP. == MAP signaling == In mobile cellular telephony networks such as GSM and UMTS, the SS7 application MAP is used. Voice connections are Circuit Switched (CS) and data connections are Packet Switched (PS) applications. Some of the GSM/UMTS Circuit Switched interfaces in the Mobile Switching Center (MSC) transported over SS7 include the following: B -> VLR (uses MAP/B). Most MSCs are associated with a Visitor Location Register (VLR), making the B interface "internal". C -> HLR (uses MAP/C) for messages between the MSC and the HLR. D -> HLR (uses MAP/D) for attaching to the CS network and location update. E -> MSC (uses MAP/E) for inter-MSC handover. F -> EIR (uses MAP/F) for equipment identity check. H -> SMS-G (uses MAP/H) for Short Message Service (SMS) over CS. I -> ME (uses MAP/I) for messages between the MSC and the ME. J -> SCF (uses MAP/J) for messages between the HLR and the gsmSCF. There are also several GSM/UMTS PS interfaces in the Serving GPRS Support Node (SGSN) transported over SS7: Gr -> HLR for attaching to the PS network and location update; Gd -> SMS-C for SMS over PS; Gs -> MSC for combined CS+PS signaling over PS; Ge -> Charging for Customised Applications for Mobile networks Enhanced Logic (CAMEL) prepaid charging; Gf -> EIR for equipment identity check. == References ==
Wikipedia/Mobile_Application_Part
In data networking, telecommunications, and computer buses, an acknowledgement (ACK) is a signal that is passed between communicating processes, computers, or devices to signify acknowledgement or receipt of a message, as part of a communications protocol. Correspondingly, a negative acknowledgement (NAK or NACK) is a signal that is sent to reject a previously received message or to indicate some kind of error. Acknowledgments and negative acknowledgments inform a sender of the receiver's state so that it can adjust its own state accordingly. == Acknowledgment signal types == The ASCII code point for ACK is 0x06 (binary 0000 0110). By convention, a receiving device sends an ACK to indicate it successfully received a message. ASCII also provides a NAK code point (0x15, binary 0001 0101) which can be used to indicate the receiving device cannot, or will not, comply with the message. Unicode provides visible symbols for these ASCII characters, U+2406 (␆) and U+2415 (␕). ACK and NAK symbols may also take the form of single bits or bit fields, depending on the protocol's data link layer definition, or even of a dedicated wire at the physical layer. == Protocol usage == Many protocols are acknowledgement-based, meaning that they positively acknowledge receipt of messages. The internet's Transmission Control Protocol (TCP) is an example of an acknowledgement-based protocol. When computers communicate via TCP, received packets are acknowledged by sending a return packet with an ACK bit set. While some protocols send an acknowledgement for each packet received, other protocols such as TCP and ZMODEM allow many packets to be transmitted before sending an acknowledgement for the set of them, a procedure necessary to fill high bandwidth-delay product links with a large number of bytes in flight. Some protocols are NAK-based, meaning that they only respond to messages if there is a problem. Examples include many reliable multicast protocols which send a NAK when the receiver detects missing packets, or protocols that use checksums to verify the integrity of the payload and header. Still other protocols make use of both NAKs and ACKs. Binary Synchronous Communications (Bisync) and Adaptive Link Rate (for Energy-Efficient Ethernet) are examples. The acknowledgement function is used in the automatic repeat request (ARQ) function. Acknowledgement frames are numbered in coordination with the frames that have been received and then sent to the transmitter. This allows the transmitter to avoid overflow or underrun at the receiver, and to become aware of any missed frames. In IBM Binary Synchronous Communications, the NAK is used to indicate that a transmission error was detected in the previously received block and that the receiver is ready to accept retransmission of that block. Bisync does not use a single ACK character but has two control sequences for alternate even/odd block acknowledgement. ACK- and NAK-based methodologies are not the only protocol design paradigms. Some protocols such as the RC-5, User Datagram Protocol (UDP), and X10 protocols perform blind transmission with no acknowledgement, often transmitting the same message multiple times in hopes that at least one copy of the message gets through. == Hardware acknowledgment == Some computer buses have a dedicated acknowledge wire in the control bus used to acknowledge bus operations: DACK, used for ISA DMA; DATACK, used in the STEbus; DTACK, the data transfer acknowledge pin of the Motorola 68000 that inspired the title of DTACK Grounded; etc.
Some computer buses do not wait for acknowledgement of every transmission; see, for instance, posted write. The I²C serial bus has a time slot for an acknowledgment bit after each byte. A simple retransmit-until-ACK scheme is sketched below. == See also == C0 and C1 control codes Flow control (data) NACK-Oriented Reliable Multicast == References == == External links == Peter Rukavina. "ACK vs. NAK". Retrieved 2020-03-04.
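To make the ACK/NAK exchange concrete, here is a minimal stop-and-wait sketch in Python using the ASCII code points quoted above (0x06 and 0x15). The framing, one-byte checksum, and the flaky channel are fabricated for illustration and do not follow any particular protocol:

```python
ACK, NAK = b"\x06", b"\x15"  # the ASCII code points described above

def checksum(payload: bytes) -> int:
    return sum(payload) % 256

def receiver(message: bytes) -> bytes:
    """Accept a frame of the form payload + 1-byte checksum; reply ACK or NAK."""
    payload, received_sum = message[:-1], message[-1]
    return ACK if checksum(payload) == received_sum else NAK

def send(payload: bytes, channel, max_tries: int = 3) -> bool:
    """Stop-and-wait: retransmit until the receiver answers with ACK."""
    frame = payload + bytes([checksum(payload)])
    for _ in range(max_tries):
        if channel(frame) == ACK:
            return True
    return False

# A channel that corrupts the first transmission, forcing one retransmit.
attempts = []
def flaky_channel(frame: bytes) -> bytes:
    attempts.append(frame)
    damaged = bytes([frame[0] ^ 0xFF]) + frame[1:] if len(attempts) == 1 else frame
    return receiver(damaged)

print(send(b"hello", flaky_channel), f"after {len(attempts)} attempts")  # True after 2
```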
Wikipedia/Acknowledgement_(data_networks)
The Broadcast/Multicast Control (BMC) protocol is a sublayer of the layer 2 protocols of the radio interface protocol architecture, as per [BMC-STD]. It exists in the user plane only. It is located above the Radio Link Control (RLC), a layer 2 protocol responsible for mapping logical channels. It is similar to the IEEE 802.2 LLC layer in that it supports multimode operation, working in three different modes: (a) transparent, (b) unacknowledged data transfer, and (c) acknowledged data transfer. Its main function is to deliver "Cell Broadcast" messages to its upper layer, such as the NAS. Other functions specified in [3GPP TS 25.301] are: storage of Cell Broadcast messages; traffic volume monitoring and radio resource requests for CBS; scheduling of BMC messages; transmission of BMC messages to the UE; delivery of Cell Broadcast messages to the upper layer (NAS). Except for broadcast/multicast, it operates in transparent mode as per [BMC-STD]. On the uplink, BMC requires the unacknowledged mode of data transfer from RLC. == References == [BMC-STD] 3GPP TS 25.324 V8.0.0 (2007–12); 3rd Generation Partnership Project; Technical Specification Group Radio Access Network; Broadcast/Multicast Control BMC;
Wikipedia/Broadcast/Multicast_Control
Network planning and design is an iterative process, encompassing topological design, network synthesis, and network realization, and is aimed at ensuring that a new telecommunications network or service meets the needs of the subscriber and operator. The process can be tailored according to each new network or service. == A network planning methodology == A traditional network planning methodology in the context of business decisions involves five layers of planning, namely: need assessment and resource assessment; short-term network planning; IT resource planning; long-term and medium-term network planning; and operations and maintenance. Each of these layers incorporates plans for different time horizons, i.e. the business planning layer determines the planning that the operator must perform to ensure that the network will perform as required for its intended life-span. The operations and maintenance layer, however, examines how the network will run on a day-to-day basis. The network planning process begins with the acquisition of external information. This includes: forecasts of how the new network or service will operate; economic information concerning costs; and technical details of the network’s capabilities. Planning a new network/service involves implementing the new system across the first four layers of the OSI Reference Model. Choices must be made for the protocols and transmission technologies. The network planning process involves three main steps: Topological design: This stage involves determining where to place the components and how to connect them. The (topological) optimization methods that can be used in this stage come from an area of mathematics called graph theory. These methods involve determining the costs of transmission and the cost of switching, and thereby determining the optimum connection matrix and location of switches and concentrators. Network synthesis: This stage involves determining the size of the components used, subject to performance criteria such as the grade of service (GoS). The method used is known as "Nonlinear Optimisation", and involves determining the topology, required GoS, cost of transmission, etc., and using this information to calculate a routing plan, and the size of the components. Network realization: This stage involves determining how to meet capacity requirements, and ensure reliability within the network. The method used is known as "Multicommodity Flow Optimisation", and involves determining all information relating to demand, costs, and reliability, and then using this information to calculate an actual physical circuit plan. These steps are performed iteratively in parallel with one another. == The role of forecasting == During the process of network planning and design, estimates are made of the expected traffic intensity and traffic load that the network must support. If a network of a similar nature already exists, traffic measurements of such a network can be used to calculate the exact traffic load. If there are no similar networks, then the network planner must use telecommunications forecasting methods to estimate the expected traffic intensity. The forecasting process involves several steps: definition of the problem; data acquisition; choice of forecasting method; analysis/forecasting; documentation and analysis of results. == Dimensioning == Dimensioning a new network determines the minimum capacity requirements that will still allow the Teletraffic Grade of Service (GoS) requirements to be met.
To do this, dimensioning involves planning for peak-hour traffic, i.e. the hour of the day during which traffic intensity is at its peak. The dimensioning process involves determining the network’s topology, routing plan, traffic matrix, and GoS requirements, and using this information to determine the maximum call handling capacity of the switches, and the maximum number of channels required between the switches. This process requires a complex model that simulates the behavior of the network equipment and routing protocols. A dimensioning rule is that the planner must ensure that the traffic load never approaches 100 percent of capacity. To calculate the correct dimensioning to comply with the above rule, the planner must take ongoing measurements of the network’s traffic, and continuously maintain and upgrade resources to meet the changing requirements. Another reason for overprovisioning is to make sure that traffic can be rerouted in case a failure occurs in the network. Because of its complexity, network dimensioning is typically done using specialized software tools. Whereas researchers typically develop custom software to study a particular problem, network operators typically make use of commercial network planning software. A classical hand calculation for circuit dimensioning is sketched below. == Traffic engineering == Compared to network engineering, which adds resources such as links, routers, and switches into the network, traffic engineering targets changing traffic paths on the existing network to alleviate traffic congestion or accommodate more traffic demand. This technology is critical when the cost of network expansion is prohibitively high and the network load is not optimally balanced. The first condition provides the financial motivation for traffic engineering, while the second makes its deployment possible. == Survivability == Network survivability enables the network to maintain maximum network connectivity and quality of service under failure conditions. It has been one of the critical requirements in network planning and design. It involves design requirements on topology, protocol, bandwidth allocation, etc. A topology requirement can be maintaining a minimum two-connected network against the failure of any single link or node. Protocol requirements include using a dynamic routing protocol to reroute traffic around equipment failures or during changes in network dimensioning. Bandwidth allocation requirements proactively allocate extra bandwidth to avoid traffic loss under failure conditions. This topic has been actively studied in conferences, such as the International Workshop on Design of Reliable Communication Networks (DRCN). == Data-driven network design == More recently, with the increasing role of artificial intelligence technologies in engineering, the idea of using data to create data-driven models of existing networks has been proposed. By analyzing large network data, the less desirable behaviors that may occur in real-world networks can also be understood, worked around, and avoided in future designs. Both the design and management of networked systems can be improved by the data-driven paradigm. Data-driven models can also be used at various phases of the service and network management life cycle, such as service instantiation, service provision, optimization, monitoring, and diagnostics. == See also == Core-and-pod Network Partition for Optimization Optimal network design - an optimization problem of constructing a network which minimizes the total travel cost. == References ==
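One classical tool for the dimensioning calculation described above is the Erlang B formula, which gives the blocking probability (a common GoS measure) for a given offered peak-hour traffic and number of circuits. The formula is standard teletraffic theory, though it is not named in the text above, and the traffic value and GoS target in this Python sketch are arbitrary:

```python
def erlang_b(traffic_erlangs: float, channels: int) -> float:
    """Blocking probability for the offered peak-hour traffic (in erlangs)
    on `channels` circuits, computed via the numerically stable recursion
    B(0) = 1, B(m) = A*B(m-1) / (m + A*B(m-1))."""
    b = 1.0
    for m in range(1, channels + 1):
        b = (traffic_erlangs * b) / (m + traffic_erlangs * b)
    return b

def channels_needed(traffic_erlangs: float, target_gos: float) -> int:
    """Smallest number of circuits meeting the GoS (blocking) target."""
    n = 1
    while erlang_b(traffic_erlangs, n) > target_gos:
        n += 1
    return n

# E.g., 20 erlangs of busy-hour traffic at a 1% blocking target:
print(channels_needed(20.0, 0.01))  # around 30 circuits
```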
Wikipedia/Network_design
The Open Systems Interconnection (OSI) model is a reference model developed by the International Organization for Standardization (ISO) that "provides a common basis for the coordination of standards development for the purpose of systems interconnection." In the OSI reference model, the components of a communication system are distinguished in seven abstraction layers: Physical, Data Link, Network, Transport, Session, Presentation, and Application. The model describes communications from the physical implementation of transmitting bits across a transmission medium to the highest-level representation of data of a distributed application. Each layer has well-defined functions and semantics and serves a class of functionality to the layer above it and is served by the layer below it. In software development, established, well-known communication protocols are decomposed into the model's hierarchy of function calls. The Internet protocol suite as defined in RFC 1122 and RFC 1123 is a model of networking developed contemporarily to the OSI model, and was funded primarily by the U.S. Department of Defense. It was the foundation for the development of the Internet. It assumed the presence of generic physical links and focused primarily on the software layers of communication, with a similar but much less rigorous structure than the OSI model. In comparison, several networking models have sought to create an intellectual framework for clarifying networking concepts and activities, but none have been as successful as the OSI reference model in becoming the standard model for discussing and teaching networking in the field of information technology. The model allows transparent communication through equivalent exchange of protocol data units (PDUs) between two parties, through what is known as peer-to-peer networking (also known as peer-to-peer communication). As a result, the OSI reference model has become an important reference among professionals and non-professionals alike, due in large part to its commonly accepted, user-friendly framework. == History == The development of the OSI model started in the late 1970s to support the emergence of the diverse computer networking methods that were competing for application in the large national networking efforts in the world (see OSI protocols and Protocol Wars). In the 1980s, the model became a working product of the Open Systems Interconnection group at the International Organization for Standardization (ISO). While attempting to provide a comprehensive description of networking, the model failed to garner reliance during the design of the Internet, which is reflected in the less prescriptive Internet Protocol Suite, principally sponsored under the auspices of the Internet Engineering Task Force (IETF). In the early- and mid-1970s, networking was largely either government-sponsored (NPL network in the UK, ARPANET in the US, CYCLADES in France) or vendor-developed with proprietary standards, such as IBM's Systems Network Architecture and Digital Equipment Corporation's DECnet. Public data networks were only just beginning to emerge, and these began to use the X.25 standard in the late 1970s. The Experimental Packet Switched System in the UK c. 1973–1975 identified the need for defining higher-level protocols.
The UK National Computing Centre publication, Why Distributed Computing, which came from considerable research into future configurations for computer systems, resulted in the UK presenting the case for an international standards committee to cover this area at the ISO meeting in Sydney in March 1977. Beginning in 1977, the ISO initiated a program to develop general standards and methods of networking. A similar process evolved at the International Telegraph and Telephone Consultative Committee (CCITT, from French: Comité Consultatif International Téléphonique et Télégraphique). Both bodies developed documents that defined similar networking models. The British Department of Trade and Industry acted as the secretariat, and universities in the United Kingdom developed prototypes of the standards. The OSI model was first defined in raw form in Washington, D.C., in February 1978 by French software engineer Hubert Zimmermann, and the refined but still draft standard was published by the ISO in 1980. The drafters of the reference model had to contend with many competing priorities and interests. The rate of technological change made it necessary to define standards that new systems could converge to rather than standardizing procedures after the fact; the reverse of the traditional approach to developing standards. Although not a standard itself, it was a framework in which future standards could be defined. In May 1983, the CCITT and ISO documents were merged to form The Basic Reference Model for Open Systems Interconnection, usually referred to as the Open Systems Interconnection Reference Model, OSI Reference Model, or simply OSI model. It was published in 1984 by both the ISO, as standard ISO 7498, and the renamed CCITT (now called the Telecommunications Standardization Sector of the International Telecommunication Union or ITU-T) as standard X.200. OSI had two major components: an abstract model of networking, called the Basic Reference Model or seven-layer model, and a set of specific protocols. The OSI reference model was a major advance in the standardisation of network concepts. It promoted the idea of a consistent model of protocol layers, defining interoperability between network devices and software. The concept of a seven-layer model was provided by the work of Charles Bachman at Honeywell Information Systems. Various aspects of OSI design evolved from experiences with the NPL network, ARPANET, CYCLADES, EIN, and the International Network Working Group (IFIP WG6.1). In this model, a networking system was divided into layers. Within each layer, one or more entities implement its functionality. Each entity interacted directly only with the layer immediately beneath it and provided facilities for use by the layer above it. The OSI standards documents are available from the ITU-T as the X.200 series of recommendations. Some of the protocol specifications were also available as part of the ITU-T X series. The equivalent ISO/IEC standards for the OSI model were available from ISO. Not all are free of charge. OSI was an industry effort, attempting to get industry participants to agree on common network standards to provide multi-vendor interoperability. It was common for large networks to support multiple network protocol suites, with many devices unable to interoperate with other devices because of a lack of common protocols. 
For a period in the late 1980s and early 1990s, engineers, organizations and nations became polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks. However, while OSI developed its networking standards in the late 1980s, TCP/IP came into widespread use on multi-vendor networks for internetworking. The OSI model is still used as a reference for teaching and documentation; however, the OSI protocols originally conceived for the model did not gain popularity. Some engineers argue the OSI reference model is still relevant to cloud computing. Others say the original OSI model does not fit today's networking protocols and have suggested instead a simplified approach. == Definitions == Communication protocols enable an entity in one host to interact with a corresponding entity at the same layer in another host. Service definitions, like the OSI model, abstractly describe the functionality provided to a layer N by a layer N−1, where N is one of the seven layers of protocols operating in the local host. At each level N, two entities at the communicating devices (layer N peers) exchange protocol data units (PDUs) by means of a layer N protocol. Each PDU contains a payload, called the service data unit (SDU), along with protocol-related headers or footers. Data processing by two communicating OSI-compatible devices proceeds as follows: The data to be transmitted is composed at the topmost layer of the transmitting device (layer N) into a protocol data unit (PDU). The PDU is passed to layer N−1, where it is known as the service data unit (SDU). At layer N−1 the SDU is concatenated with a header, a footer, or both, producing a layer N−1 PDU. It is then passed to layer N−2. The process continues until reaching the lowermost level, from which the data is transmitted to the receiving device. At the receiving device the data is passed from the lowest to the highest layer as a series of SDUs while being successively stripped from each layer's header or footer until reaching the topmost layer, where the last of the data is consumed. === Standards documents === The OSI model was defined in ISO/IEC 7498 which consists of the following parts: ISO/IEC 7498-1 The Basic Model ISO/IEC 7498-2 Security Architecture ISO/IEC 7498-3 Naming and addressing ISO/IEC 7498-4 Management framework ISO/IEC 7498-1 is also published as ITU-T Recommendation X.200. == Layer architecture == The recommendation X.200 describes seven layers, labelled 1 to 7. Layer 1 is the lowest layer in this model. === Layer 1: Physical layer === The physical layer is responsible for the transmission and reception of unstructured raw data between a device, such as a network interface controller, Ethernet hub, or network switch, and a physical transmission medium. It converts the digital bits into electrical, radio, or optical signals (analogue signals). Layer specifications define characteristics such as voltage levels, the timing of voltage changes, physical data rates, maximum transmission distances, modulation scheme, channel access method and physical connectors. This includes the layout of pins, voltages, line impedance, cable specifications, signal timing and frequency for wireless devices. Bit rate control is done at the physical layer and may define transmission mode as simplex, half duplex, and full duplex. The components of a physical layer can be described in terms of the network topology. 
Physical layer specifications are included in the specifications for the ubiquitous Bluetooth, Ethernet, and USB standards. An example of a less well-known physical layer specification would be for the CAN standard. The physical layer also specifies how encoding occurs over a physical signal, such as electrical voltage or a light pulse. For example, a 1 bit might be represented on a copper wire by the transition from a 0-volt to a 5-volt signal, whereas a 0 bit might be represented by the transition from a 5-volt to a 0-volt signal. As a result, common problems occurring at the physical layer are often related to the incorrect media termination, EMI or noise scrambling, and NICs and hubs that are misconfigured or do not work correctly. === Layer 2: Data link layer === The data link layer provides node-to-node data transfer—a link between two directly connected nodes. It detects and possibly corrects errors that may occur in the physical layer. It defines the protocol to establish and terminate a connection between two physically connected devices. It also defines the protocol for flow control between them. IEEE 802 divides the data link layer into two sublayers: Medium access control (MAC) layer – responsible for controlling how devices in a network gain access to a medium and permission to transmit data. Logical link control (LLC) layer – responsible for identifying and encapsulating network layer protocols, and controls error checking and frame synchronization. The MAC and LLC layers of IEEE 802 networks such as 802.3 Ethernet, 802.11 Wi-Fi, and 802.15.4 Zigbee operate at the data link layer. The Point-to-Point Protocol (PPP) is a data link layer protocol that can operate over several different physical layers, such as synchronous and asynchronous serial lines. The ITU-T G.hn standard, which provides high-speed local area networking over existing wires (power lines, phone lines and coaxial cables), includes a complete data link layer that provides both error correction and flow control by means of a selective-repeat sliding-window protocol. Security, specifically (authenticated) encryption, at this layer can be applied with MACsec. === Layer 3: Network layer === The network layer provides the functional and procedural means of transferring packets from one node to another connected in "different networks". A network is a medium to which many nodes can be connected, on which every node has an address and which permits nodes connected to it to transfer messages to other nodes connected to it by merely providing the content of a message and the address of the destination node and letting the network find the way to deliver the message to the destination node, possibly routing it through intermediate nodes. If the message is too large to be transmitted from one node to another on the data link layer between those nodes, the network may implement message delivery by splitting the message into several fragments at one node, sending the fragments independently, and reassembling the fragments at another node. It may, but does not need to, report delivery errors. Message delivery at the network layer is not necessarily guaranteed to be reliable; a network layer protocol may provide reliable message delivery, but it does not need to do so. A number of layer-management protocols, a function defined in the management annex, ISO 7498/4, belong to the network layer. These include routing protocols, multicast group management, network-layer information and error, and network-layer address assignment. 
It is the function of the payload that makes these belong to the network layer, not the protocol that carries them. === Layer 4: Transport layer === The transport layer provides the functional and procedural means of transferring variable-length data sequences from a source host to a destination host from one application to another across a network while maintaining the quality-of-service functions. Transport protocols may be connection-oriented or connectionless. This may require breaking large protocol data units or long data streams into smaller chunks called "segments", since the network layer imposes a maximum packet size called the maximum transmission unit (MTU), which depends on the maximum packet size imposed by all data link layers on the network path between the two hosts. The amount of data in a data segment must be small enough to allow for a network-layer header and a transport-layer header. For example, for data being transferred across Ethernet, the MTU is 1500 bytes, the minimum size of a TCP header is 20 bytes, and the minimum size of an IPv4 header is 20 bytes, so the maximum segment size is 1500−(20+20) bytes, or 1460 bytes. The process of dividing data into segments is called segmentation; it is an optional function of the transport layer. Some connection-oriented transport protocols, such as TCP and the OSI connection-oriented transport protocol (COTP), perform segmentation and reassembly of segments on the receiving side; connectionless transport protocols, such as UDP and the OSI connectionless transport protocol (CLTP), usually do not. The transport layer also controls the reliability of a given link between a source and destination host through flow control, error control, and acknowledgments of sequence and existence. Some protocols are state- and connection-oriented. This means that the transport layer can keep track of the segments and retransmit those that fail delivery through the acknowledgment hand-shake system. The transport layer will also provide the acknowledgement of the successful data transmission and sends the next data if no errors occurred. Reliability, however, is not a strict requirement within the transport layer. Protocols like UDP, for example, are used in applications that are willing to accept some packet loss, reordering, errors or duplication. Streaming media, real-time multiplayer games and voice over IP (VoIP) are examples of applications in which loss of packets is not usually a fatal problem. The OSI connection-oriented transport protocol defines five classes of connection-mode transport protocols, ranging from class 0 (which is also known as TP0 and provides the fewest features) to class 4 (TP4, designed for less reliable networks, similar to the Internet). Class 0 contains no error recovery and was designed for use on network layers that provide error-free connections. Class 4 is closest to TCP, although TCP contains functions, such as the graceful close, which OSI assigns to the session layer. Also, all OSI TP connection-mode protocol classes provide expedited data and preservation of record boundaries. Detailed characteristics of TP0–4 classes are shown in the following table: An easy way to visualize the transport layer is to compare it with a post office, which deals with the dispatch and classification of mail and parcels sent. A post office inspects only the outer envelope of mail to determine its delivery. 
Higher layers may have the equivalent of double envelopes, such as cryptographic presentation services that can be read by the addressee only. Roughly speaking, tunnelling protocols operate at the transport layer, such as carrying non-IP protocols such as IBM's SNA or Novell's IPX over an IP network, or end-to-end encryption with IPsec. While Generic Routing Encapsulation (GRE) might seem to be a network-layer protocol, if the encapsulation of the payload takes place only at the endpoint, GRE becomes closer to a transport protocol that uses IP headers but contains complete Layer 2 frames or Layer 3 packets to deliver to the endpoint. L2TP carries PPP frames inside transport segments. Although not developed under the OSI Reference Model and not strictly conforming to the OSI definition of the transport layer, the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) of the Internet Protocol Suite are commonly categorized as layer 4 protocols within OSI. Transport Layer Security (TLS) does not strictly fit inside the model either. It contains characteristics of the transport and presentation layers. === Layer 5: Session layer === The session layer sets up, controls, and tears down the connections between two or more computers; such a connection is called a "session". Common functions of the session layer include user logon (establishment) and user logoff (termination) functions. Authentication methods are also built into most client software, such as FTP Client and NFS Client for Microsoft Networks. Therefore, the session layer establishes, manages and terminates the connections between the local and remote applications. The session layer also provides for full-duplex, half-duplex, or simplex operation, and establishes procedures for checkpointing, suspending, restarting, and terminating a session between two related streams of data, such as an audio and a video stream in a web-conferencing application. Therefore, the session layer is commonly implemented explicitly in application environments that use remote procedure calls. === Layer 6: Presentation layer === The presentation layer establishes data formatting and data translation into a format specified by the application layer during the encapsulation of outgoing messages while they are passed down the protocol stack; the conversion is reversed during the deencapsulation of incoming messages as they are passed up the protocol stack. The presentation layer handles protocol conversion, data encryption, data decryption, data compression, data decompression, incompatibility of data representation between operating systems, and graphic commands. The presentation layer transforms data into the form that the application layer accepts, to be sent across a network. Since the presentation layer converts data and graphics into a display format for the application layer, the presentation layer is sometimes called the syntax layer. For this reason, the presentation layer negotiates the transfer of syntax structure through the Basic Encoding Rules of Abstract Syntax Notation One (ASN.1), with capabilities such as converting an EBCDIC-coded text file to an ASCII-coded file (illustrated in the sketch below), or serialization of objects and other data structures from and to XML.
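As a small illustration of the character-set translation just mentioned, the following Python snippet converts text between EBCDIC and ASCII using the standard library's code page 37 codec (one of several EBCDIC variants; the choice of code page here is arbitrary):

```python
# Encode a string in EBCDIC (code page 37), as a sending host might store it...
ebcdic_bytes = "HELLO, OSI".encode("cp037")
print(ebcdic_bytes.hex())  # byte values that are meaningless to an ASCII-based host

# ...then translate it to ASCII for the receiving application layer.
ascii_bytes = ebcdic_bytes.decode("cp037").encode("ascii")
print(ascii_bytes)  # b'HELLO, OSI'
```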
=== Layer 7: Application layer === The application layer is the layer of the OSI model that is closest to the end user, which means both the OSI application layer and the user interact directly with a software application that implements a component of communication between the client and server, such as File Explorer and Microsoft Word. Such application programs fall outside the scope of the OSI model unless they are directly integrated into the application layer through the functions of communication, as is the case with applications such as web browsers and email programs. Other examples of software are Microsoft Network Software for File and Printer Sharing and Unix/Linux Network File System Client for access to shared file resources. Application-layer functions typically include file sharing, message handling, and database access, through the most common protocols at the application layer, known as HTTP, FTP, SMB/CIFS, TFTP, and SMTP. When identifying communication partners, the application layer determines the identity and availability of communication partners for an application with data to transmit. The most important distinction in the application layer is the distinction between the application entity and the application. For example, a reservation website might have two application entities: one using HTTP to communicate with its users, and one for a remote database protocol to record reservations. Neither of these protocols has anything to do with reservations. That logic is in the application itself. The application layer has no means to determine the availability of resources in the network. == Cross-layer functions == Cross-layer functions are services that are not tied to a given layer, but may affect more than one layer. Some orthogonal aspects, such as management and security, involve all of the layers (see the ITU-T X.800 Recommendation). These services are aimed at improving the CIA triad (confidentiality, integrity, and availability) of the transmitted data. Cross-layer functions are the norm, in practice, because the availability of a communication service is determined by the interaction between network design and network management protocols. Specific examples of cross-layer functions include the following: Security service (telecommunication) as defined by the ITU-T X.800 recommendation. Management functions, i.e. functions that permit the configuration, instantiation, monitoring and termination of the communications of two or more entities: there is a specific application-layer protocol, the Common Management Information Protocol (CMIP), and its corresponding service, the Common Management Information Service (CMIS); these need to interact with every layer in order to deal with their instances. OSI subdivides the Network Layer into three sublayers: 3a) Subnetwork Access, 3b) Subnetwork Dependent Convergence and 3c) Subnetwork Independent Convergence; Multiprotocol Label Switching (MPLS), ATM, and X.25 are 3a protocols. MPLS was designed to provide a unified data-carrying service for both circuit-based clients and packet-switching clients which provide a datagram-based service model. It can be used to carry many different kinds of traffic, including IP packets, as well as native ATM, SONET, and Ethernet frames. Sometimes one sees reference to a Layer 2.5. Cross MAC and PHY Scheduling is essential in wireless networks because of the time-varying nature of wireless channels.
By scheduling packet transmission only in favourable channel conditions, which requires the MAC layer to obtain channel state information from the PHY layer, network throughput can be significantly improved and energy waste can be avoided. == Programming interfaces == Neither the OSI Reference Model, nor any OSI protocol specifications, outline any programming interfaces, other than deliberately abstract service descriptions. Protocol specifications define a methodology for communication between peers, but the software interfaces are implementation-specific. For example, the Network Driver Interface Specification (NDIS) and Open Data-Link Interface (ODI) are interfaces between the media (layer 2) and the network protocol (layer 3). == Comparison to other networking suites == The table below presents a list of OSI layers, the original OSI protocols, and some approximate modern matches. This correspondence is rough: the OSI model contains idiosyncrasies not found in later systems such as the IP stack in modern Internet. === Comparison with TCP/IP model === The design of protocols in the TCP/IP model of the Internet does not concern itself with strict hierarchical encapsulation and layering. RFC 3439 contains a section entitled "Layering considered harmful". TCP/IP does recognize four broad layers of functionality which are derived from the operating scope of their contained protocols: the scope of the software application; the host-to-host transport path; the internetworking range; and the scope of the direct links to other nodes on the local network. Despite using a different concept for layering than the OSI model, these layers are often compared with the OSI layering scheme in the following manner: The Internet application layer maps to the OSI application layer, presentation layer, and most of the session layer. The TCP/IP transport layer maps to the graceful close function of the OSI session layer as well as the OSI transport layer. The internet layer performs functions as those in a subset of the OSI network layer. The link layer corresponds to the OSI data link layer and may include similar functions as the physical layer, as well as some protocols of the OSI's network layer. These comparisons are based on the original seven-layer protocol model as defined in ISO 7498, rather than refinements in the internal organization of the network layer. The OSI protocol suite that was specified as part of the OSI project was considered by many as too complicated and inefficient, and to a large extent unimplementable. Taking the "forklift upgrade" approach to networking, it specified eliminating all existing networking protocols and replacing them at all layers of the stack. This made implementation difficult and was resisted by many vendors and users with significant investments in other network technologies. In addition, the protocols included so many optional features that many vendors' implementations were not interoperable. Although the OSI model is often still referenced, the Internet protocol suite has become the standard for networking. TCP/IP's pragmatic approach to computer networking and to independent implementations of simplified protocols made it a practical methodology. Some protocols and specifications in the OSI stack remain in use, one example being IS-IS, which was specified for OSI as ISO/IEC 10589:2002 and adapted for Internet use with TCP/IP as RFC 1142. == See also == == References == == Further reading == Day, John D. (2008). 
Patterns in Network Architecture: A Return to Fundamentals. Upper Saddle River, N.J.: Pearson Education. ISBN 978-0-13-225242-3. OCLC 213482801. Dickson, Gary; Lloyd, Alan (1992). Open Systems Interconnection. New York: Prentice Hall. ISBN 978-0-13-640111-7. OCLC 1245634475 – via Internet Archive. Piscitello, David M.; Chapin, A. Lyman (1993). Open systems networking : TCP/IP and OSI. Reading, Mass.: Addison-Wesley Pub. Co. ISBN 978-0-201-56334-4. OCLC 624431223 – via Internet Archive. Rose, Marshall T. (1990). The Open Book: A Practical Perspective on OSI. Englewood Cliffs, N.J.: Prentice Hall. ISBN 978-0-13-643016-2. OCLC 1415988401 – via Internet Archive. Russell, Andrew L. (2014). Open Standards and the Digital Age: History, Ideology, and Networks. Cambridge University Press. ISBN 978-1-139-91661-5. OCLC 881237495. Partial preview at Google Books. Zimmermann, Hubert (April 1980). "OSI Reference Model — The ISO Model of Architecture for Open Systems Interconnection". IEEE Transactions on Communications. 28 (4): 425–432. CiteSeerX 10.1.1.136.9497. doi:10.1109/TCOM.1980.1094702. ISSN 0090-6778. OCLC 5858668034. S2CID 16013989. == External links == "Windows network architecture and the OSI model". Microsoft Learn. 2 February 2024. Retrieved 12 July 2024. "ISO/IEC standard 7498-1:1994 - Service definition for the association control service element". ISO Standards Maintenance Portal. Retrieved 12 July 2024. (PDF document inside ZIP archive) (requires HTTP cookies in order to accept licence agreement) "ITU Recommendation X.200". International Telecommunication Union. 2 June 1998. Retrieved 12 July 2024. "INFormation CHanGe Architectures and Flow Charts powered by Google App Engine". infchg.appspot.com. Archived from the original on 26 May 2012. "Internetworking Technology Handbook". docwiki.cisco.com. 10 July 2015. Archived from the original on 6 September 2015. EdXD; Saikot, Mahmud Hasan (25 November 2021). "7 Layers of OSI Model Explained". ByteXD. Retrieved 12 July 2024.
Wikipedia/Open_Systems_Interconnection
Systems Network Architecture (SNA) is IBM's proprietary networking architecture, created in 1974. It is a complete protocol stack for interconnecting computers and their resources. SNA describes formats and protocols but, in itself, is not a piece of software. The implementation of SNA takes the form of various communications packages, most notably Virtual Telecommunications Access Method (VTAM), the mainframe software package for SNA communications. == History == SNA was made public as part of IBM's "Advanced Function for Communications" announcement in September, 1974, which included the implementation of the SNA/SDLC (Synchronous Data Link Control) protocols on new communications products: IBM 3767 communication terminal (printer) IBM 3770 data communication system They were supported by IBM 3704/3705 communication controllers and their Network Control Program (NCP), and by System/370 and their VTAM and other software such as CICS and IMS. This announcement was followed by another announcement in July, 1975, which introduced the IBM 3760 data entry station, the IBM 3790 communication system, and the new models of the IBM 3270 display system. SNA was designed in the era when the computer industry had not fully adopted the concept of layered communication. Applications, databases, and communication functions were mingled into the same protocol or product, which made it difficult to maintain and manage. SNA was mainly designed by the IBM Systems Development Division laboratory in Research Triangle Park, North Carolina, USA, helped by other laboratories that implemented SNA/SDLC. IBM later made the details public in its System Reference Library manuals and IBM Systems Journal. It is still used extensively in banks and other financial transaction networks, as well as in many government agencies. In 1999 there were an estimated 3,500 companies "with 11,000 SNA mainframes." One of the primary pieces of hardware, the 3745/3746 communications controller, has been withdrawn from the market by IBM. IBM continues to provide hardware maintenance service and microcode features to support users. A robust market of smaller companies continues to provide the 3745/3746, features, parts, and service. VTAM is also supported by IBM, as is the NCP required by the 3745/3746 controllers. In 2008 an IBM publication said: with the popularity and growth of TCP/IP, SNA is changing from being a true network architecture to being what could be termed an "application and application access architecture." In other words, there are many applications that still need to communicate in SNA, but the required SNA protocols are carried over the network by IP. == Objectives of SNA == IBM in the mid-1970s saw itself mainly as a hardware vendor and hence all its innovations in that period aimed to increase hardware sales. SNA's objective was to reduce the costs of operating large numbers of terminals and thus induce customers to develop or expand interactive terminal-based systems as opposed to batch systems. An expansion of interactive terminal-based systems would increase sales of terminals and more importantly of mainframe computers and peripherals - partly because of the simple increase in the volume of work done by the systems and partly because interactive processing requires more computing power per transaction than batch processing. Hence SNA aimed to reduce the main non-computer costs and other difficulties in operating large networks using earlier communications protocols. 
The difficulties included: Often a communications line could not be shared by terminals of different types, as they used different "dialects" of the existing communications protocols. Up to the early 1970s, computer components were so expensive and bulky that it was not feasible to include all-purpose communications interface cards in terminals. Every type of terminal had a hard-wired communications card which supported only the operation of one type of terminal, without compatibility with other types of terminal on the same line. The protocols which the primitive communications cards could handle were not efficient, so each communications line spent more time transmitting a given amount of data than modern lines do. Telecommunications lines at the time were of much lower quality. For example, it was almost impossible to run a dial-up line at more than 19,200 bits per second because of the overwhelming error rate, as compared with 56,000 bits per second today on dial-up lines; and in the early 1970s few leased lines were run at more than 2400 bits per second (these low speeds are a consequence of Shannon's law in a relatively low-technology environment). As a result, running a large number of terminals required many more communications lines than would be needed today, especially if different types of terminals needed to be supported, or the users wanted to use different types of applications (e.g., under CICS or TSO) from the same location. In purely financial terms, SNA's objectives were to increase customers' spending on terminal-based systems and at the same time to increase IBM's share of that spending, mainly at the expense of the telecommunications companies. SNA also aimed to overcome a limitation of the architecture which IBM's System/370 mainframes inherited from System/360. Each CPU could connect to at most 16 I/O channels, and each channel could handle up to 256 peripherals; i.e., there was a maximum of 4,096 peripherals per CPU. At the time when SNA was designed, each communications line counted as a peripheral, so the number of terminals with which even powerful mainframes could communicate was limited. == Principal components and technologies == Improvements in computer component technology made it feasible to build terminals that included more powerful communications cards which could operate a single standard communications protocol rather than a very stripped-down protocol which suited only a specific type of terminal. As a result, several multi-layer communications protocols were proposed in the 1970s, of which IBM's SNA and ITU-T's X.25 became dominant later. The most important elements of SNA include: IBM Network Control Program (NCP), a communications program running on the 3705 and subsequent 37xx communications processors that, among other things, implements the packet switching protocol defined by SNA. The protocol performs two main functions: It is a packet forwarding protocol, acting like a modern switch and forwarding data packets to the next node, which might be a mainframe, a terminal or another 3705. The communications processors supported only hierarchical networks with a mainframe at the center, unlike modern routers which support peer-to-peer networks in which a machine at the end of the line can be both a client and a server at the same time. It is also a multiplexer that connects multiple terminals into one communication line to the CPU, thus relieving the constraint on the maximum number of communication lines per CPU.
A 3705 could support a larger number of lines (352 initially) but counted as only one peripheral to the CPUs and channels. Since the launch of SNA, IBM has introduced improved communications processors, of which the latest is the 3745. Synchronous Data Link Control (SDLC), a protocol which greatly improved the efficiency of data transfer over a single link: It is a sliding window protocol, which enables terminals and 3705 communications processors to send frames of data one after the other without waiting for an acknowledgement of the previous frame: the communications cards had sufficient memory and processing capacity to remember the last 7 frames sent or received, request re-transmission of only those frames which contained errors, and slot the re-transmitted frames into the right place in the sequence before forwarding them to the next stage. These frames all had the same type of envelope (frame header and trailer) which contained enough information for data packets from different types of terminal to be sent along the same communications line, leaving the mainframe to deal with any differences in the formatting of the content or in the rules governing dialogs with different types of terminal. Remote terminals (e.g., those connected to the mainframe by telephone lines) and 3705 communications processors would have SDLC-capable communications cards. This is the precursor of the packet communication that eventually evolved into today's TCP/IP technology; SDLC itself evolved into HDLC, one of the base technologies for dedicated telecommunication circuits. VTAM, a software package to provide log-in, session keeping, and routing services within the mainframe. A terminal user would log in via VTAM to a specific application or application environment (e.g., CICS, IMS, DB2, or TSO/ISPF). VTAM would then route data from that terminal to the appropriate application or application environment until the user logged out and possibly logged into another application. The original versions of IBM hardware could only keep one session per terminal. In the 1980s further software (mainly from third-party vendors) made it possible for a terminal to have simultaneous sessions with different applications or application environments. == Advantages and disadvantages == SNA removed link control from the application program and placed it in the NCP. This had the following advantages and disadvantages: === Advantages === Localization of problems in the telecommunications network was easier because a relatively small amount of software actually dealt with communication links. There was a single error reporting system. Adding communication capability to an application program was much easier because the formidable area of link control software that typically requires interrupt processors and software timers was relegated to system software and NCP. With the advent of Advanced Peer-to-Peer Networking (APPN), routing functionality was the responsibility of the computer as opposed to the router (as with TCP/IP networks). Each computer maintained a list of Nodes that defined the forwarding mechanisms. A centralized node type known as a Network Node maintained global tables of all other node types. APPN removed the need to maintain Advanced Program-to-Program Communication (APPC) routing tables that explicitly defined endpoint-to-endpoint connectivity; APPN sessions would route to endpoints through other allowed node types until the destination was found.
This is similar to the way that routers for the Internet Protocol and the NetWare Internetwork Packet Exchange protocol function. (APPN is also sometimes referred to as PU2.1, or Physical Unit 2.1. APPC, also sometimes referred to as LU6.2, or Logical Unit 6.2, was the only protocol defined for APPN networks, but was originally one of many protocols supported by VTAM/NCP, along with LU0, LU1, LU2 (3270 terminal), and LU3. APPC was primarily used between CICS environments, as well as for database services, because it included protocols for two-phase commit processing. Physical Units were PU5 (VTAM), PU4 (37xx), and PU2 (cluster controller). A PU5 was the most capable and was considered the primary for all communication: other PU devices requested a connection from the PU5, and the PU5 could establish the connection or not. The other PU types could only be secondary to the PU5. PU2.1 added the ability for a node to connect to another PU2.1 node in a peer-to-peer environment.) === Disadvantages === Connection to non-SNA networks was difficult. An application that needed access to some communication scheme not supported in the current version of SNA would have faced obstacles. Before IBM included X.25 support (NPSI) in SNA, connecting to an X.25 network would have been awkward. Conversion between X.25 and SNA protocols could have been provided either by NCP software modifications or by an external protocol converter. A sheaf of alternate pathways between every pair of nodes in a network had to be predesigned and stored centrally. Choice among these pathways by SNA was rigid and did not take advantage of current link loads for optimum speed. SNA network installation and maintenance are complicated and SNA network products are (or were) expensive. Attempts to reduce SNA network complexity by adding IBM Advanced Peer-to-Peer Networking functionality were not really successful, if only because the migration from traditional SNA to SNA/APPN was very complex without providing much additional value, at least initially. SNA software licences (VTAM) cost as much as $10,000 a month for high-end systems, and IBM 3745 communications controllers typically cost over $100,000. TCP/IP was still seen as unfit for commercial applications, e.g., in the finance industry, until the late 1980s, but it rapidly took over in the 1990s due to its peer-to-peer networking and packet communication technology. SNA's connection-based architecture required extensive state-machine logic to keep track of everything, and APPN added a new dimension to that logic with its concept of differing node types. While the system was solid when everything was running correctly, there was still a need for manual intervention; simple things like watching the Control Point sessions had to be done manually. APPN was not without problems; in the early days many shops abandoned it due to defects in APPN support. Over time many of these were worked out, but not before TCP/IP became increasingly popular in the early 1990s, which marked the beginning of the end for SNA. == Security == SNA at its core was designed with the ability to wrap different layers of connections in a blanket of security. To communicate within an SNA environment, you first have to connect to a node and establish and maintain a link connection into the network; you then have to negotiate a proper session and handle the flows within the session itself. At each level there are different security controls that can govern the connections and protect the session information.
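The layered gating just described (a link first, then session negotiation, then in-session flows) can be sketched as a small state machine. The following Python sketch is purely illustrative: the class, states, and checks are invented for the example and are not a real VTAM or NCP interface.

# Minimal sketch of SNA-style layered connection gating (illustrative only;
# names are invented, not real VTAM/NCP interfaces).
class SnaStyleConnection:
    def __init__(self):
        self.state = "DISCONNECTED"   # DISCONNECTED -> LINKED -> IN_SESSION

    def establish_link(self, credentials_ok: bool):
        # First gate: a link into the network must exist before anything else.
        if self.state == "DISCONNECTED" and credentials_ok:
            self.state = "LINKED"

    def bind_session(self, session_params_ok: bool):
        # Second gate: a session is negotiated only over an established link.
        if self.state == "LINKED" and session_params_ok:
            self.state = "IN_SESSION"

    def send(self, data: bytes):
        # Third gate: data flows are honoured only inside a bound session.
        if self.state != "IN_SESSION":
            raise RuntimeError("no session: data flow rejected")
        return len(data)  # stand-in for actually transmitting

conn = SnaStyleConnection()
conn.establish_link(credentials_ok=True)
conn.bind_session(session_params_ok=True)
assert conn.send(b"payload") == 7

The point of the sketch is the ordering: each layer's controls can reject a request before the next layer is ever reached, which is the "blanket of security" idea described above.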
== Network Addressable Units == Network Addressable Units in an SNA network are any components that can be assigned an address and can send and receive information. They are distinguished further as follows: a System Services Control Point (SSCP) provides resource management and other session services (such as directory services) for users in a subarea network; a Physical Unit is a combination of hardware and software components that control the links to other nodes; and a Logical Unit acts as the intermediary between the user and the network. === Logical Unit (LU) === SNA essentially offers transparent communication: equipment specifics do not impose any constraints on LU-LU communication. In practice, however, it is useful to distinguish between LU types, as the application must take the functionality of the terminal equipment into account (e.g., screen sizes and layout). Within SNA there are three types of data stream to connect local display terminals and printers: the SNA Character String (SCS), used for LU1 terminals and for logging on to an SNA network with Unformatted System Services (USS); the 3270 data stream, mainly used by mainframes such as the System/370 and successors, including the zSeries family; and the 5250 data stream, mainly used by minicomputers/servers such as the System/34, System/36, System/38, and AS/400 and its successors, including System i and IBM Power Systems running IBM i. SNA defines several kinds of devices, called Logical Unit types: LU0 provides for undefined devices, or "build your own" protocols; it is also used for non-SNA 3270 devices supported by TCAM or VTAM. LU1 devices are printers or combinations of keyboards and printers. LU2 devices are IBM 3270 display terminals. LU3 devices are printers using 3270 protocols. LU4 devices are batch terminals. LU5 has never been defined. LU6 provides for protocols between two applications. LU7 provides for sessions with IBM 5250 terminals. The primary ones in use are LU1, LU2, and LU6.2 (an advanced protocol for application-to-application conversations). === Physical Unit (PU) === PU1 nodes are terminal controllers such as the IBM 6670 or IBM 3767. PU2 nodes are cluster controllers running configuration support programs, such as the IBM 3174, IBM 3274, or the IBM 4701 or IBM 4702 branch controllers. PU2.1 nodes are peer-to-peer (APPN) nodes. PU3 was never defined. PU4 nodes are front-end processors running the Network Control Program (NCP), such as the IBM 37xx series. PU5 nodes are host computer systems. The term 37xx refers to IBM's family of SNA communications controllers: the 3745 supports up to eight high-speed T1 circuits, the 3725 is a large-scale node and front-end processor for a host, and the 3720 is a remote node that functions as a concentrator and router. == SNA over Token-Ring == VTAM/NCP PU4 nodes attached to IBM Token Ring networks can share the same local area network infrastructure with workstations and servers. NCP encapsulates SNA packets into Token-Ring frames, allowing sessions to flow over a Token-Ring network. The actual encapsulation and decapsulation take place in the 3745. == SNA over IP == As mainframe-based entities looked for alternatives to their 37xx-based networks, IBM partnered with Cisco in the mid-1990s and together they developed Data Link Switching, or DLSw. DLSw encapsulates SNA packets into IP datagrams, allowing sessions to flow over an IP network. The actual encapsulation and decapsulation take place in Cisco routers at each end of a DLSw peer connection.
At the local, or mainframe, site, the router uses Token Ring topology to connect natively to VTAM. At the remote (user) end of the connection, a PU type 2 emulator (such as an SNA gateway server) connects to the peer router via the router's LAN interface. End-user terminals are typically PCs with 3270 emulation software that is defined to the SNA gateway. The VTAM/NCP PU type 2 definition becomes a Switched Major Node that can be local to VTAM (without an NCP), and a "Line" connection can be defined using various possible solutions (such as a Token Ring interface on the 3745, a 3172 LAN Channel Station, or a Cisco ESCON-compatible Channel Interface Processor). == Competitors == The proprietary networking architecture for Honeywell Bull mainframes is Distributed Systems Architecture (DSA); the communications package for DSA is VIP. DSA is likewise no longer supported for client access. Bull mainframes are fitted with Mainway for translating DSA to TCP/IP, and VIP devices are replaced by TNVIP terminal emulations (GLink, Winsurf). GCOS 8 supports TNVIP SE over TCP/IP. The networking architecture for Univac mainframes was the Distributed Computing Architecture (DCA), and the networking architecture for Burroughs mainframes was the Burroughs Network Architecture (BNA); after the two companies merged to form Unisys, both were provided by the merged company. Both were largely obsolete by 2012. International Computers Limited (ICL) provided its Information Processing Architecture (IPA). DECnet is a suite of network protocols created by Digital Equipment Corporation, originally released in 1975 to connect two PDP-11 minicomputers. It evolved into one of the first peer-to-peer network architectures, transforming DEC into a networking powerhouse in the 1980s. SNA also competed with ISO's Open Systems Interconnection, an attempt to create a vendor-neutral network architecture that failed due to the problems of design by committee: OSI systems are very complex, and the many parties involved demanded so much flexibility that the interoperability of OSI systems, the prime objective to start with, suffered. The TCP/IP suite for many years was not considered a serious alternative by IBM, due in part to the lack of control over the intellectual property. The 1988 publication of RFC 1041, authored by Yakov Rekhter, which defines an option to run IBM 3270 sessions over Telnet, explicitly recognizes the customer demand for interoperability in the data center. Subsequently, the IETF expanded on this work with multiple other RFCs. TN3270 (Telnet 3270), defined by those RFCs, supports direct client-server connections to the mainframe using a TN3270 server on the mainframe and a TN3270 emulation package on the computer at the end-user site. This protocol allows existing VTAM applications (CICS, TSO) to run with little or no change from traditional SNA by supporting traditional 3270 terminal protocol over the TCP/IP session. It is more widely used to replace legacy SNA connectivity than Data-Link Switching (DLSw) and other SNA replacement technologies. A similar TN5250 (Telnet 5250) variant exists for the IBM 5250. == Non-IBM SNA implementations == Non-IBM SNA software allowed systems other than IBM's to communicate with IBM's mainframes and AS/400 midrange computers using the SNA protocols.
Some Unix system vendors provided SNA software, such as Sun Microsystems with its SunLink SNA product line (including PU2.1 Server) and Hewlett-Packard/Hewlett Packard Enterprise with their SNAplus2 product. Microsoft introduced SNA Server for Windows in 1993; it is now named Microsoft Host Integration Server. Digital Equipment Corporation had VMS/SNA for VMS. Third-party SNA software packages for VMS, such as the VAX Link products from Systems Strategies, Inc., were also available. Hewlett-Packard offered SNA Server and SNA Access for its HP 3000 systems. Brixton Systems developed several SNA software packages, sold under the name "Brixton", such as Brixton BrxPU21, BrxPU5, BrxLU62, and BrxAPPC, for systems such as workstations from Hewlett-Packard and Sun Microsystems. IBM supported using several non-IBM software implementations of APPC/PU2.1/LU6.2 to communicate with z/OS, including SNAplus2 for systems from HP, Brixton 4.1 SNA for Sun Solaris, and SunLink SNA 9.1 Support for Sun Solaris. == See also == Network Data Mover Protocol Wars TN3270 TN5250 == References == Friend, George E.; Fike, John L.; Baker, H. Charles; Bellamy, John C. (1988). Understanding Data Communications (2nd ed.). Indianapolis: Howard W. Sams & Company. ISBN 0-672-27270-9. Pooch, Udo W.; Greene, William H.; Moss, Gary G. (1983). Telecommunications and Networking. Boston: Little, Brown and Company. ISBN 0-316-71498-4. Schatt, Stan (1991). Linking LANs: A Micro Manager's Guide. McGraw-Hill. ISBN 0-8306-3755-9. Systems Network Architecture General Information (PDF). First Edition. IBM. January 1975. GA27-3102-0. Systems Network Architecture Concepts and Products (PDF). Second Edition. IBM. February 1984. GC30-3072-1. Systems Network Architecture Technical Overview. Fifth Edition. IBM. January 1994. GC30-3073-04. Systems Network Architecture Guide to SNA Publications. Third Edition. IBM. July 1994. GC30-3438-02. == External links == Cisco article on SNA. APPN Implementers Workshop architecture document repository. SNA protocols (quite technical). Related whitepapers (sdsusa.com). Advanced Function for Communications System Summary (PDF). Second Edition. IBM. July 1975. GA27-3099-1. Retrieved May 22, 2014. Systems Network Architecture Formats (PDF). Twenty-first Edition. IBM. March 2004. GA27-3136-20. Systems Network Architecture - Sessions Between Logical Units (PDF). Third Edition. IBM. April 1981. GC20-1868-2. Systems Network Architecture - Introduction to Sessions between Logical Units (PDF). Third Edition. IBM. December 1979. GC20-1869-2. Systems Network Architecture Format and Protocol Reference Manual: Architectural Logic (PDF). Third Edition. IBM. November 1980. SY20-3112-2. Systems Network Architecture: Transaction Programmer's Reference Manual for LU Type 6.2. Sixth Edition. IBM. June 1993. GC30-3084-05. Systems Network Architecture Type 2.1 Node Reference. Fifth Edition. IBM. December 1996. SC30-3422-04. Systems Network Architecture LU 6.2 Reference: Peer Protocols. Third Edition. IBM. October 1996. SC31-6808-02.
Wikipedia/Systems_Network_Architecture
In a digitally modulated signal or a line code, symbol rate, modulation rate or baud is the number of symbol changes, waveform changes, or signaling events across the transmission medium per unit of time. The symbol rate is measured in baud (Bd) or symbols per second. In the case of a line code, the symbol rate is the pulse rate in pulses per second. Each symbol can represent or convey one or several bits of data. The symbol rate is related to the gross bit rate, expressed in bits per second. == Symbols == A symbol may be described as either a pulse in digital baseband transmission or a tone in passband transmission using modems. A symbol is a waveform, a state or a significant condition of the communication channel that persists for a fixed period of time. A sending device places symbols on the channel at a fixed and known symbol rate, and the receiving device has the job of detecting the sequence of symbols in order to reconstruct the transmitted data. There may be a direct correspondence between a symbol and a small unit of data. For example, each symbol may encode one or several binary digits (bits). The data may also be represented by the transitions between symbols, or even by a sequence of many symbols. The symbol duration time, also known as the unit interval, can be directly measured as the time between transitions by looking at an eye diagram on an oscilloscope. The symbol duration time Ts can be calculated as: T s = 1 f s {\displaystyle T_{s}={1 \over f_{s}}} where fs is the symbol rate. For example, a baud rate of 1 kBd = 1,000 Bd is synonymous with a symbol rate of 1,000 symbols per second. In the case of a modem, this corresponds to 1,000 tones per second, and in the case of a line code, this corresponds to 1,000 pulses per second. The symbol duration time is 1/1,000 second = 1 millisecond. === Relationship to gross bit rate === The term baud rate has sometimes incorrectly been used to mean bit rate, since these rates are the same in old modems as well as in the simplest digital communication links using only one bit per symbol, such that binary "0" is represented by one symbol, and binary "1" by another symbol. In more advanced modems and data transmission techniques, a symbol may have more than two states, so it may represent more than one binary digit (a binary digit always represents one of exactly two states). For this reason, the baud rate value will often be lower than the gross bit rate. Example of use and misuse of "baud rate": It is correct to write "the baud rate of my COM port is 9,600" if one means that the bit rate is 9,600 bit/s, since there is one bit per symbol in this case. It is not correct to write "the baud rate of Ethernet is 100 megabaud" or "the baud rate of my modem is 56,000" if one means bit rate. See below for more details on these techniques. The difference between baud (or signaling rate) and the data rate (or bit rate) is like a man using a single semaphore flag who can move his arm to a new position once each second, so his signaling rate (baud) is one symbol per second. The flag can be held in one of eight distinct positions: straight up, 45° left, 90° left, 135° left, straight down (which is the rest state, where he is sending no signal), 135° right, 90° right, and 45° right. Each signal (symbol) carries three bits of information, since it takes three binary digits to encode eight states. The data rate is three bits per second.
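As a quick check of the relations just stated (the symbol duration Ts = 1/fs, and bits per symbol = log2 of the number of distinct symbols), here is a minimal Python sketch; the numbers are taken from the 1 kBd and semaphore examples above.

import math

# Symbol duration: Ts = 1 / fs (from the text: 1 kBd -> 1 ms per symbol).
fs = 1000                      # symbol rate in Bd (symbols per second)
Ts = 1 / fs                    # symbol duration in seconds
assert Ts == 0.001             # 1 millisecond

# Semaphore example: 8 distinct flag positions encode log2(8) = 3 bits,
# so at 1 symbol per second the data rate is 3 bit/s.
positions = 8
bits_per_symbol = math.log2(positions)   # 3.0
symbol_rate = 1                          # Bd
data_rate = symbol_rate * bits_per_symbol
assert data_rate == 3.0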
In the Navy, more than one flag pattern and arm can be used at once, so the combinations of these produce many symbols, each conveying several bits, giving a higher data rate. If N bits are conveyed per symbol, and the gross bit rate is R, inclusive of channel coding overhead, the symbol rate can be calculated as: f s = R N {\displaystyle f_{s}={R \over N}} In that case M = 2^N different symbols are used. In a modem, these may be sinewave tones with unique combinations of amplitude, phase and/or frequency. For example, in a 64-QAM modem, M = 64. In a line code, these may be M different voltage levels. By taking the information per pulse N in bit/pulse to be the base-2 logarithm of the number of distinct messages M that could be sent, Hartley constructed a measure of the gross bit rate R as: R = f s log 2 ⁡ ( M ) {\displaystyle R=f_{s}\log _{2}(M)} where fs is the baud rate in symbols/second or pulses/second. (See Hartley's law). === Modems for passband transmission === Modulation is used in passband filtered channels such as telephone lines, radio channels and other frequency division multiplex (FDM) channels. In a digital modulation method provided by a modem, each symbol is typically a sine wave tone with a certain frequency, amplitude and phase. The symbol rate, or baud rate, is the number of transmitted tones per second. One symbol can carry one or several bits of information. In voiceband modems for the telephone network, it is common for one symbol to carry up to 7 bits. Conveying more than one bit per symbol or bit per pulse has advantages. It reduces the time required to send a given quantity of data over a limited bandwidth. A high spectral efficiency in (bit/s)/Hz can be achieved; i.e., a high bit rate in bit/s although the bandwidth in hertz may be low. The maximum baud rate for a passband for common modulation methods such as QAM, PSK and OFDM is approximately equal to the passband bandwidth. Voiceband modem examples: A V.22bis modem transmits 2400 bit/s using 1200 Bd (1200 symbol/s), where each quadrature amplitude modulation symbol carries two bits of information. The modem can generate M = 2^2 = 4 different symbols. It requires a bandwidth of 1200 Hz (equal to the baud rate). The carrier frequency is 1800 Hz, meaning that the lower cut-off frequency is 1,800 − 1,200/2 = 1,200 Hz, and the upper cut-off frequency is 1,800 + 1,200/2 = 2,400 Hz. A V.34 modem may transmit symbols at a baud rate of 3,420 Bd, and each symbol can carry up to ten bits, resulting in a gross bit rate of 3420 × 10 = 34,200 bit/s. However, the modem is said to operate at a net bit rate of 33,800 bit/s, excluding physical layer overhead. === Line codes for baseband transmission === In the case of a baseband channel such as a telegraph line, a serial cable or a Local Area Network twisted pair cable, data is transferred using line codes; i.e., pulses rather than sinewave tones. In this case, the baud rate is synonymous with the pulse rate in pulses/second. The maximum baud rate or pulse rate for a baseband channel is called the Nyquist rate, and is double the bandwidth (double the cut-off frequency). The simplest digital communication links (such as individual wires on a motherboard or the RS-232 serial port/COM port) typically have a symbol rate equal to the gross bit rate. Common communication links such as 10 Mbit/s Ethernet (10BASE-T), USB, and FireWire typically have a data bit rate slightly lower than the baud rate, due to the overhead of extra non-data symbols used for self-synchronizing code and error detection.
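The two formulas above, fs = R/N and Hartley's R = fs log2(M), can be checked directly against the voiceband modem figures; a small illustrative Python sketch (function names are ours, chosen for the example):

import math

def symbol_rate(gross_bit_rate, bits_per_symbol):
    # fs = R / N
    return gross_bit_rate / bits_per_symbol

def gross_bit_rate(fs, M):
    # Hartley: R = fs * log2(M), where M is the number of distinct symbols
    return fs * math.log2(M)

# V.22bis: 2400 bit/s at 1200 Bd, 2 bits per symbol, M = 2^2 = 4 symbols.
assert symbol_rate(2400, 2) == 1200
assert gross_bit_rate(1200, 4) == 2400

# V.34: 3,420 Bd with up to 10 bits per symbol -> 34,200 bit/s gross.
assert gross_bit_rate(3420, 2 ** 10) == 34200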
J. M. Emile Baudot (1845–1903) worked out a five-bit code for telegraphs which was standardized internationally and is commonly called Baudot code. More than two voltage levels are used in advanced techniques such as FDDI and 100/1,000 Mbit/s Ethernet LANs, and others, to achieve high data rates. 1,000 Mbit/s Ethernet LAN cables use four wire pairs in full duplex (250 Mbit/s per pair in both directions simultaneously), and many bits per symbol to encode their data payloads. === Digital television and OFDM example === In digital television transmission the symbol rate calculation is: symbol rate in symbols per second = (data rate in bits per second × 204) / (188 × bits per symbol) The 204 is the number of bytes in a packet including the 16 trailing Reed–Solomon error correction bytes. The 188 is the number of data bytes (187 bytes) plus the leading packet sync byte (0x47). The bits per symbol is the modulation's bits per symbol (the exponent of the power of 2) multiplied by the forward error correction rate. So, for example, in 64-QAM modulation 64 = 2^6, so the bits per symbol is 6. The Forward Error Correction (FEC) is usually expressed as a fraction; i.e., 1/2, 3/4, etc. In the case of 3/4 FEC, for every 3 bits of data, you are sending out 4 bits, one of which is for error correction. Example: given bit rate = 18096263, modulation type = 64-QAM, and FEC = 3/4, then symbol rate = 18096263 6 ⋅ 3 4 204 188 = 18096263 6 4 3 204 188 = 4363638 {\displaystyle {\text{symbol rate}}={\cfrac {18096263}{6\cdot {\frac {3}{4}}}}~{\cfrac {204}{188}}={\cfrac {18096263}{6}}~{\cfrac {4}{3}}~{\cfrac {204}{188}}=4363638} In digital terrestrial television (DVB-T, DVB-H and similar techniques) OFDM modulation is used; i.e., multi-carrier modulation. The above symbol rate should then be divided by the number of OFDM sub-carriers in order to obtain the OFDM symbol rate. See the OFDM system comparison table for further numerical details. === Relationship to chip rate === Some communication links (such as GPS transmissions, CDMA cell phones, and other spread spectrum links) have a symbol rate much higher than the data rate (they transmit many symbols called chips per data bit). Representing one bit by a chip sequence of many symbols overcomes co-channel interference from other transmitters sharing the same frequency channel, including radio jamming, and is common in military radio and cell phones. Despite the fact that using more bandwidth to carry the same bit rate gives low channel spectral efficiency in (bit/s)/Hz, it allows many simultaneous users, which results in high system spectral efficiency in (bit/s)/Hz per unit of area. In these systems, the symbol rate of the physically transmitted high-frequency signal is called the chip rate, which also is the pulse rate of the equivalent base band signal. However, in spread spectrum systems, the term symbol may also be used at a higher layer and refer to one information bit, or a block of information bits that are modulated using, for example, conventional QAM modulation, before the CDMA spreading code is applied. Using the latter definition, the symbol rate is equal to or lower than the bit rate. === Relationship to bit error rate === The disadvantage of conveying many bits per symbol is that the receiver has to distinguish many signal levels or symbols from each other, which may be difficult and cause bit errors in the case of a poor phone line that suffers from a low signal-to-noise ratio.
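The worked DVB example above can be reproduced with exact rational arithmetic; this short Python sketch (the function name is invented for illustration) recovers the 4,363,638 symbols/s figure:

from fractions import Fraction

def dvb_symbol_rate(bit_rate, bits_per_symbol, fec):
    # symbol rate = bit_rate / (bits_per_symbol * FEC) * (204 / 188);
    # 204/188 accounts for the 16 Reed-Solomon bytes added per 188-byte packet.
    return bit_rate / (bits_per_symbol * fec) * Fraction(204, 188)

rate = dvb_symbol_rate(bit_rate=Fraction(18_096_263),
                       bits_per_symbol=6,          # 64-QAM: 64 = 2**6
                       fec=Fraction(3, 4))
print(round(rate))   # 4363638, matching the worked example above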
When the channel is poor, a modem or network adapter may automatically fall back to a slower and more robust modulation scheme or line code, using fewer bits per symbol, in order to reduce the bit error rate. An optimal symbol set design takes into account channel bandwidth, desired information rate, noise characteristics of the channel and the receiver, and receiver and decoder complexity. == Modulation == Many data transmission systems operate by the modulation of a carrier signal. For example, in frequency-shift keying (FSK), the frequency of a tone is varied among a small, fixed set of possible values. In a synchronous data transmission system, the tone can only be changed from one frequency to another at regular and well-defined intervals. The presence of one particular frequency during one of these intervals constitutes a symbol. (The concept of symbols does not apply to asynchronous data transmission systems.) In a modulated system, the term modulation rate may be used synonymously with symbol rate. === Binary modulation === If the carrier signal has only two states, then only one bit of data (i.e., a 0 or 1) can be transmitted in each symbol. The bit rate is in this case equal to the symbol rate. For example, a binary FSK system would allow the carrier to have one of two frequencies, one representing a 0 and the other a 1. A more practical scheme is differential binary phase-shift keying, in which the carrier remains at the same frequency, but can be in one of two phases. During each symbol, the phase either remains the same, encoding a 0, or jumps by 180°, encoding a 1. Again, only one bit of data (i.e., a 0 or 1) is transmitted by each symbol. This is an example of data being encoded in the transitions between symbols (the change in phase), rather than the symbols themselves (the actual phase). (The reason for this in phase-shift keying is that it is impractical to know the reference phase of the transmitter.) === N-ary modulation, N greater than 2 === By increasing the number of states that the carrier signal can take, the number of bits encoded in each symbol can be greater than one. The bit rate can then be greater than the symbol rate. For example, a differential phase-shift keying system might allow four possible jumps in phase between symbols. Then two bits could be encoded at each symbol interval, achieving a data rate of double the symbol rate. In a more complex scheme such as 16-QAM, four bits of data are transmitted in each symbol, resulting in a bit rate of four times the symbol rate. === Not a power of 2 === Although it is common to choose the number of symbols to be a power of 2 and send an integer number of bits per baud, this is not required. Line codes such as bipolar encoding and MLT-3 use three carrier states to encode one bit per baud while maintaining DC balance, and the 4B3T line code uses three 3-ary modulated bits to transmit four data bits, a rate of 1.33 bits per baud (see the sketch below). === Data rate versus error rate === Modulating a carrier increases the frequency range, or bandwidth, it occupies. Transmission channels are generally limited in the bandwidth they can carry. The bandwidth depends on the symbol (modulation) rate (not directly on the bit rate). As the bit rate is the product of the symbol rate and the number of bits encoded in each symbol, it is clearly advantageous to increase the latter if the former is fixed. However, for each additional bit encoded in a symbol, the constellation of symbols (the number of states of the carrier) doubles in size.
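To make the "not a power of 2" point concrete: three ternary symbols offer 3^3 = 27 combinations, enough to carry 4 bits (16 patterns), which is where 4B3T's 4/3 ≈ 1.33 bits per baud comes from; the theoretical ceiling for a 3-state symbol is log2 3 ≈ 1.58 bits. A minimal Python sketch:

import math

# 4B3T: 4 data bits are carried by 3 ternary (3-state) symbols.
states_per_symbol = 3
symbols_per_group = 3
data_bits_per_group = 4

# 3 ternary symbols offer 27 combinations, enough for 16 bit patterns.
assert states_per_symbol ** symbols_per_group >= 2 ** data_bits_per_group

bits_per_baud = data_bits_per_group / symbols_per_group       # 1.33...
ceiling = math.log2(states_per_symbol)                        # ~1.585
print(f"{bits_per_baud:.2f} of at most {ceiling:.2f} bits per baud")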
Doubling the constellation makes the states less distinct from one another, which in turn makes it more difficult for the receiver to detect the symbol correctly in the presence of disturbances on the channel. The history of modems is one of attempts to increase the bit rate over a fixed bandwidth (and therefore a fixed maximum symbol rate), leading to increasing bits per symbol. For example, ITU-T V.29 specifies 4 bits per symbol, at a symbol rate of 2,400 baud, giving an effective bit rate of 9,600 bits per second. The history of spread spectrum goes in the opposite direction, leading to fewer and fewer data bits per symbol in order to spread the bandwidth. In the case of GPS, we have a data rate of 50 bit/s and a symbol rate of 1.023 Mchips/s. If each chip is considered a symbol, each symbol contains far less than one bit (50 bit/s ÷ 1.023 Msymbols/s ≈ 0.00005 bits per symbol). The complete collection of M possible symbols over a particular channel is called an M-ary modulation scheme. Most modulation schemes transmit some integer number of bits per symbol b, requiring the complete collection to contain M = 2^b different symbols. Most popular modulation schemes can be described by showing each point on a constellation diagram, although a few modulation schemes (such as MFSK, DTMF, pulse-position modulation, spread spectrum modulation) require a different description. == Significant condition == In telecommunication, concerning the modulation of a carrier, a significant condition is one of the signal's parameters chosen to represent information. A significant condition could be an electric current (voltage, or power level), an optical power level, a phase value, or a particular frequency or wavelength. The duration of a significant condition is the time interval between successive significant instants. A change from one significant condition to another is called a signal transition. Information can be transmitted either during the given time interval, or encoded as the presence or absence of a change in the received signal. Significant conditions are recognized by an appropriate device called a receiver, demodulator, or decoder. The decoder translates the actual signal received into its intended logical value such as a binary digit (0 or 1), an alphabetic character, a mark, or a space. Each significant instant is determined when the appropriate device assumes a condition or state usable for performing a specific function, such as recording, processing, or gating. == See also == Data signaling rate List of interface bit rates Pulse-code modulation == External links == What is the Symbol rate? "On the origins of serial communications and data encoding". Archived from the original on December 5, 2012. Retrieved January 4, 2007. "What's The Difference Between Bit Rate And Baud Rate?". Electronic Design Magazine.
Wikipedia/Symbol_rate
Peer-to-peer (P2P) computing or networking is a distributed application architecture that partitions tasks or workloads between peers. Peers are equally privileged, equipotent participants in the network, forming a peer-to-peer network of nodes. A personal area network (PAN) is likewise by nature a decentralized peer-to-peer network, typically between two devices. Peers make a portion of their resources, such as processing power, disk storage, or network bandwidth, directly available to other network participants, without the need for central coordination by servers or stable hosts. Peers are both suppliers and consumers of resources, in contrast to the traditional client–server model in which the consumption and supply of resources are divided. While P2P systems had previously been used in many application domains, the architecture was popularized by the Internet file sharing system Napster, originally released in 1999. P2P is used in many protocols such as BitTorrent file sharing over the Internet and in personal networks like Miracast screen mirroring and Bluetooth radio. The concept has inspired new structures and philosophies in many areas of human interaction. In such social contexts, peer-to-peer as a meme refers to the egalitarian social networking that has emerged throughout society, enabled by Internet technologies in general. == Development == While P2P systems had previously been used in many application domains, the concept was popularized by file sharing systems such as the music-sharing application Napster. The peer-to-peer movement allowed millions of Internet users to connect "directly, forming groups and collaborating to become user-created search engines, virtual supercomputers, and filesystems". The basic concept of peer-to-peer computing was envisioned in earlier software systems and networking discussions, reaching back to principles stated in the first Request for Comments, RFC 1. Tim Berners-Lee's vision for the World Wide Web was close to a P2P network in that it assumed each user of the web would be an active editor and contributor, creating and linking content to form an interlinked "web" of links. The early Internet was more open than the present day: two machines connected to the Internet could send packets to each other without firewalls and other security measures. This contrasts with the broadcasting-like structure of the web as it has developed over the years. As a precursor to the Internet, ARPANET was a successful peer-to-peer network where "every participating node could request and serve content". However, ARPANET was not self-organized, and it could not "provide any means for context or content-based routing beyond 'simple' address-based routing." Therefore, Usenet, a distributed messaging system that is often described as an early peer-to-peer architecture, was established. It was developed in 1979 as a system that enforces a decentralized model of control. The basic model is a client–server model from the user or client perspective that offers a self-organizing approach to newsgroup servers. However, news servers communicate with one another as peers to propagate Usenet news articles over the entire group of network servers. The same consideration applies to SMTP email in the sense that the core email-relaying network of mail transfer agents has a peer-to-peer character, while the periphery of email clients and their direct connections is strictly a client–server relationship.
In May 1999, with millions more people on the Internet, Shawn Fanning introduced the music and file-sharing application called Napster. Napster was the beginning of peer-to-peer networks, as we know them today, where "participating users establish a virtual network, entirely independent from the physical network, without having to obey any administrative authorities or restrictions". == Architecture == A peer-to-peer network is designed around the notion of equal peer nodes simultaneously functioning as both "clients" and "servers" to the other nodes on the network. This model of network arrangement differs from the client–server model where communication is usually to and from a central server. A typical example of a file transfer that uses the client-server model is the File Transfer Protocol (FTP) service in which the client and server programs are distinct: the clients initiate the transfer, and the servers satisfy these requests. === Routing and resource discovery === Peer-to-peer networks generally implement some form of virtual overlay network on top of the physical network topology, where the nodes in the overlay form a subset of the nodes in the physical network. Data is still exchanged directly over the underlying TCP/IP network, but at the application layer peers can communicate with each other directly, via the logical overlay links (each of which corresponds to a path through the underlying physical network). Overlays are used for indexing and peer discovery, and make the P2P system independent from the physical network topology. Based on how the nodes are linked to each other within the overlay network, and how resources are indexed and located, we can classify networks as unstructured or structured (or as a hybrid between the two). ==== Unstructured networks ==== Unstructured peer-to-peer networks do not impose a particular structure on the overlay network by design, but rather are formed by nodes that randomly form connections to each other. (Gnutella, Gossip, and Kazaa are examples of unstructured P2P protocols). Because there is no structure globally imposed upon them, unstructured networks are easy to build and allow for localized optimizations to different regions of the overlay. Also, because the role of all peers in the network is the same, unstructured networks are highly robust in the face of high rates of "churn"—that is, when large numbers of peers are frequently joining and leaving the network. However, the primary limitations of unstructured networks also arise from this lack of structure. In particular, when a peer wants to find a desired piece of data in the network, the search query must be flooded through the network to find as many peers as possible that share the data. Flooding causes a very high amount of signaling traffic in the network, uses more CPU/memory (by requiring every peer to process all search queries), and does not ensure that search queries will always be resolved. Furthermore, since there is no correlation between a peer and the content managed by it, there is no guarantee that flooding will find a peer that has the desired data. Popular content is likely to be available at several peers and any peer searching for it is likely to find the same thing. But if a peer is looking for rare data shared by only a few other peers, then it is highly unlikely that the search will be successful. 
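The flooding search just described can be sketched in a few lines of Python. The graph, names, and TTL default below are invented for illustration; real unstructured protocols such as Gnutella add details like query IDs for duplicate suppression and smarter peer selection.

# Minimal sketch of TTL-limited query flooding in an unstructured overlay.
# 'network' maps each peer to its neighbours; 'shared' maps peers to content.
def flood_search(network, shared, start, wanted, ttl=3):
    hits, visited = [], {start}
    frontier = [start]
    while frontier and ttl > 0:
        next_frontier = []
        for peer in frontier:
            for neighbour in network[peer]:
                if neighbour in visited:
                    continue            # suppress duplicate deliveries
                visited.add(neighbour)
                if wanted in shared[neighbour]:
                    hits.append(neighbour)
                next_frontier.append(neighbour)   # forward the query onward
        frontier = next_frontier
        ttl -= 1                        # each hop decrements the TTL
    return hits

network = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a"], "d": ["b"]}
shared = {"a": set(), "b": {"song"}, "c": set(), "d": {"song"}}
print(flood_search(network, shared, "a", "song"))   # ['b', 'd']

Note how the query reaches every peer within the TTL horizon whether or not they hold the data, which is exactly the signaling-traffic cost, and why rare content outside that horizon is simply never found.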
==== Structured networks ==== In structured peer-to-peer networks the overlay is organized into a specific topology, and the protocol ensures that any node can efficiently search the network for a file/resource, even if the resource is extremely rare. The most common type of structured P2P network implements a distributed hash table (DHT), in which a variant of consistent hashing is used to assign ownership of each file to a particular peer. This enables peers to search for resources on the network using a hash table: that is, (key, value) pairs are stored in the DHT, and any participating node can efficiently retrieve the value associated with a given key. However, in order to route traffic efficiently through the network, nodes in a structured overlay must maintain lists of neighbors that satisfy specific criteria. This makes them less robust in networks with a high rate of churn (i.e. with large numbers of nodes frequently joining and leaving the network). More recent evaluations of P2P resource discovery solutions under real workloads have pointed out several issues in DHT-based solutions, such as the high cost of advertising/discovering resources and static and dynamic load imbalance. Notable distributed networks that use DHTs include Tixati, an alternative to BitTorrent's distributed tracker, the Kad network, the Storm botnet, and YaCy. Some prominent research projects include the Chord project, Kademlia, the PAST storage utility, P-Grid (a self-organized and emerging overlay network), and the CoopNet content distribution system. DHT-based networks have also been widely utilized for accomplishing efficient resource discovery in grid computing systems, as they aid in resource management and the scheduling of applications. ==== Hybrid models ==== Hybrid models are a combination of peer-to-peer and client–server models. A common hybrid model is to have a central server that helps peers find each other. Spotify was an example of a hybrid model until 2014. There are a variety of hybrid models, all of which make trade-offs between the centralized functionality provided by a structured server/client network and the node equality afforded by the pure peer-to-peer unstructured networks. Currently, hybrid models have better performance than either pure unstructured networks or pure structured networks because certain functions, such as searching, do require a centralized functionality but benefit from the decentralized aggregation of nodes provided by unstructured networks. ==== CoopNet content distribution system ==== CoopNet (Cooperative Networking) was a proposed system for off-loading serving to peers who have recently downloaded content, proposed by computer scientists Venkata N. Padmanabhan and Kunwadee Sripanidkulchai, working at Microsoft Research and Carnegie Mellon University. When a server experiences an increase in load, it redirects incoming peers to other peers who have agreed to mirror the content, thus off-loading load from the server. All of the information is retained at the server. This system makes use of the fact that the bottleneck is more likely in the outgoing bandwidth than in the CPU, hence its server-centric design. It assigns peers to other peers who are "close in IP" (in the same prefix range) in an attempt to use locality. If multiple peers are found with the same file, the node is directed to choose the fastest of its neighbors. Streaming media is transmitted by having clients cache the previous stream, and then transmit it piece-wise to new nodes.
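Returning to the structured-network idea above: the core of a DHT is that node identifiers and keys are hashed onto the same space, and each key is owned by the nearest node clockwise on a ring. The following Python sketch of consistent hashing is illustrative only; real DHTs such as Chord or Kademlia add routing tables (fingers or buckets), replication, and churn handling.

import hashlib
from bisect import bisect

def h(name: str) -> int:
    # Hash names onto a 2**16-position ring (tiny, for illustration).
    return int.from_bytes(hashlib.sha1(name.encode()).digest()[:2], "big")

class TinyDht:
    def __init__(self, nodes):
        # Each node sits at its hash position on the ring.
        self.ring = sorted((h(n), n) for n in nodes)

    def owner(self, key: str) -> str:
        # A key is assigned to the first node clockwise from its hash,
        # wrapping around at the top of the ring.
        points = [p for p, _ in self.ring]
        return self.ring[bisect(points, h(key)) % len(self.ring)][1]

dht = TinyDht(["node-a", "node-b", "node-c"])
print(dht.owner("some-file.mp3"))   # deterministic: same key -> same node

Because adding or removing one node only reassigns the keys on the affected arc of the ring, consistent hashing tolerates churn far better than a naive hash-mod-N assignment, which would remap almost every key.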
=== Security and trust === Peer-to-peer systems pose unique challenges from a computer security perspective. Like any other form of software, P2P applications can contain vulnerabilities. What makes this particularly dangerous for P2P software, however, is that peer-to-peer applications act as servers as well as clients, meaning that they can be more vulnerable to remote exploits. ==== Routing attacks ==== Since each node plays a role in routing traffic through the network, malicious users can perform a variety of "routing attacks", or denial of service attacks. Examples of common routing attacks include "incorrect lookup routing" whereby malicious nodes deliberately forward requests incorrectly or return false results, "incorrect routing updates" where malicious nodes corrupt the routing tables of neighboring nodes by sending them false information, and "incorrect routing network partition" where when new nodes are joining they bootstrap via a malicious node, which places the new node in a partition of the network that is populated by other malicious nodes. ==== Corrupted data and malware ==== The prevalence of malware varies between different peer-to-peer protocols. Studies analyzing the spread of malware on P2P networks found, for example, that 63% of the answered download requests on the gnutella network contained some form of malware, whereas only 3% of the content on OpenFT contained malware. In both cases, the top three most common types of malware accounted for the large majority of cases (99% in gnutella, and 65% in OpenFT). Another study analyzing traffic on the Kazaa network found that 15% of the 500,000 file sample taken were infected by one or more of the 365 different computer viruses that were tested for. Corrupted data can also be distributed on P2P networks by modifying files that are already being shared on the network. For example, on the FastTrack network, the RIAA managed to introduce faked chunks into downloads and downloaded files (mostly MP3 files). Files infected with the RIAA virus were unusable afterwards and contained malicious code. The RIAA is also known to have uploaded fake music and movies to P2P networks in order to deter illegal file sharing. Consequently, the P2P networks of today have seen an enormous increase of their security and file verification mechanisms. Modern hashing, chunk verification and different encryption methods have made most networks resistant to almost any type of attack, even when major parts of the respective network have been replaced by faked or nonfunctional hosts. === Resilient and scalable computer networks === The decentralized nature of P2P networks increases robustness because it removes the single point of failure that can be inherent in a client–server based system. As nodes arrive and demand on the system increases, the total capacity of the system also increases, and the likelihood of failure decreases. If one peer on the network fails to function properly, the whole network is not compromised or damaged. In contrast, in a typical client–server architecture, clients share only their demands with the system, but not their resources. In this case, as more clients join the system, fewer resources are available to serve each client, and if the central server fails, the entire network is taken down. === Distributed storage and search === There are both advantages and disadvantages in P2P networks related to the topic of data backup, recovery, and availability. 
In a centralized network, the system administrators are the only forces controlling the availability of files being shared. If the administrators decide to no longer distribute a file, they simply have to remove it from their servers, and it will no longer be available to users. Along with leaving the users powerless in deciding what is distributed throughout the community, this makes the entire system vulnerable to threats and requests from the government and other large forces. For example, YouTube has been pressured by the RIAA, MPAA, and entertainment industry to filter out copyrighted content. Although client–server networks are able to monitor and manage content availability, they can have more stability in the availability of the content they choose to host. A client should not have trouble accessing obscure content that is being shared on a stable centralized network. P2P networks, however, are more unreliable in sharing unpopular files, because sharing files in a P2P network requires that at least one node in the network has the requested data, and that node must be able to connect to the node requesting the data. This requirement is occasionally hard to meet because users may delete or stop sharing data at any point. In a P2P network, the community of users is entirely responsible for deciding which content is available. Unpopular files eventually disappear and become unavailable as fewer people share them. Popular files, however, are highly and easily distributed. Popular files on a P2P network are more stable and available than files on central networks. In a centralized network, a simple loss of connection between the server and clients can cause a failure, but in P2P networks, the connections between every node must be lost to cause a data-sharing failure. In a centralized system, the administrators are responsible for all data recovery and backups, while in P2P systems, each node requires its own backup system. Because of the lack of central authority in P2P networks, forces such as the recording industry, RIAA, MPAA, and the government are unable to delete or stop the sharing of content on P2P systems. == Applications == === Content delivery === In P2P networks, clients both provide and use resources. This means that unlike client–server systems, the content-serving capacity of peer-to-peer networks can actually increase as more users begin to access the content (especially with protocols such as BitTorrent that require users to share; refer to a performance measurement study). This property is one of the major advantages of using P2P networks because it makes the setup and running costs very small for the original content distributor. === File-sharing networks === Peer-to-peer file sharing networks such as Gnutella, G2, and the eDonkey network have been useful in popularizing peer-to-peer technologies. These advancements have paved the way for peer-to-peer content delivery networks and services, including distributed caching systems like Correli Caches to enhance performance. Furthermore, peer-to-peer networks have made software publication and distribution possible, enabling efficient sharing of Linux distributions and various games through file sharing networks. ==== Copyright infringements ==== Peer-to-peer networking involves data transfer from one user to another without using an intermediate server. Companies developing P2P applications have been involved in numerous legal cases, primarily in the United States, over conflicts with copyright law.
Two major cases are Grokster vs RIAA and MGM Studios, Inc. v. Grokster, Ltd. In the latter case, the Court unanimously held that defendant peer-to-peer file sharing companies Grokster and Streamcast could be sued for inducing copyright infringement. === Multimedia === The P2PTV and PDTP protocols are used in various peer-to-peer applications. Some proprietary multimedia applications leverage a peer-to-peer network in conjunction with streaming servers to stream audio and video to their clients. Peercasting is employed for multicasting streams. Additionally, a project called LionShare, undertaken by Pennsylvania State University, MIT, and Simon Fraser University, aims to facilitate file sharing among educational institutions globally. Another notable program, Osiris, enables users to create anonymous and autonomous web portals that are distributed via a peer-to-peer network. === Other P2P applications === Dat is a distributed version-controlled publishing platform. I2P is an overlay network used to browse the Internet anonymously. Unlike the related I2P, the Tor network is not itself peer-to-peer; however, it can enable peer-to-peer applications to be built on top of it via onion services. The InterPlanetary File System (IPFS) is a protocol and network designed to create a content-addressable, peer-to-peer method of storing and sharing hypermedia, with nodes in the IPFS network forming a distributed file system. Jami is a peer-to-peer chat and SIP app. JXTA is a peer-to-peer protocol designed for the Java platform. Netsukuku is a wireless community network designed to be independent from the Internet. Open Garden is a connection-sharing application that shares Internet access with other devices using Wi-Fi or Bluetooth. Resilio Sync is a directory-syncing app. Research includes projects such as the Chord project, the PAST storage utility, P-Grid, and the CoopNet content distribution system. Secure Scuttlebutt is a peer-to-peer gossip protocol capable of supporting many different types of applications, primarily social networking. Syncthing is also a directory-syncing app. Tradepal and M-commerce applications are designed to power real-time marketplaces. The U.S. Department of Defense is conducting research on P2P networks as part of its modern network warfare strategy. In May 2003, Anthony Tether, then director of DARPA, testified that the United States military uses P2P networks. WebTorrent is a P2P streaming torrent client in JavaScript for use in web browsers, as well as in the WebTorrent Desktop standalone version that bridges the WebTorrent and BitTorrent serverless networks. Microsoft, in Windows 10, uses a proprietary peer-to-peer technology called "Delivery Optimization" to deploy operating system updates using end-users' PCs, either on the local network or other PCs. According to Microsoft's Channel 9, this led to a 30% to 50% reduction in Internet bandwidth usage. Artisoft's LANtastic was built as a peer-to-peer operating system where machines can function as both servers and workstations simultaneously. Hotline Communications' Hotline Client was built with decentralized servers and tracker software dedicated to any type of file, and continues to operate today.
Cryptocurrencies are peer-to-peer-based digital currencies that use blockchains. == Social implications == === Incentivizing resource sharing and cooperation === Cooperation among a community of participants is key to the continued success of P2P systems aimed at casual human users; these reach their full potential only when large numbers of nodes contribute resources. In current practice, however, P2P networks often contain large numbers of users who utilize resources shared by other nodes but who do not share anything themselves (often referred to as the "freeloader problem"). Freeloading can have a profound impact on the network and in some cases can cause the community to collapse. In these types of networks "users have natural disincentives to cooperate because cooperation consumes their own resources and may degrade their own performance". Studying the social attributes of P2P networks is challenging due to high population turnover, asymmetry of interest, and zero-cost identities. A variety of incentive mechanisms have been implemented to encourage or even force nodes to contribute resources. Some researchers have explored the benefits of enabling virtual communities to self-organize and introduce incentives for resource sharing and cooperation, arguing that the social aspect missing from today's P2P systems should be seen both as a goal and as a means for self-organized virtual communities to be built and fostered. Ongoing research efforts for designing effective incentive mechanisms in P2P systems, based on principles from game theory, are beginning to take on a more psychological and information-processing direction. ==== Privacy and anonymity ==== Some peer-to-peer networks (e.g. Freenet) place a heavy emphasis on privacy and anonymity, that is, ensuring that the contents of communications are hidden from eavesdroppers and that the identities and locations of the participants are concealed. Public key cryptography can be used to provide encryption, data validation, authorization, and authentication for data and messages. Onion routing and other mix network protocols (e.g. Tarzan) can be used to provide anonymity. Perpetrators of live streaming sexual abuse and other cybercrimes have used peer-to-peer platforms to carry out activities with anonymity. == Political implications == === Intellectual property law and illegal sharing === Although peer-to-peer networks can be used for legitimate purposes, rights holders have targeted peer-to-peer platforms over their involvement in the sharing of copyrighted material. Peer-to-peer networking involves data transfer from one user to another without using an intermediate server. Companies developing P2P applications have been involved in numerous legal cases, primarily in the United States, over issues surrounding copyright law. Two major cases are Grokster vs RIAA and MGM Studios, Inc. v. Grokster, Ltd. In both cases the file sharing technology was ruled to be legal as long as the developers had no ability to prevent the sharing of the copyrighted material. To establish criminal liability for copyright infringement on peer-to-peer systems, the government must prove that the defendant infringed a copyright willfully for the purpose of personal financial gain or commercial advantage. Fair use exceptions allow limited amounts of copyrighted material to be downloaded without acquiring permission from the rights holders; such uses are usually news reporting, research, and scholarly work.
Controversies have developed over the concern of illegitimate use of peer-to-peer networks regarding public safety and national security. When a file is downloaded through a peer-to-peer network, it is impossible to know who created the file or which users are connected to the network at a given time. Trustworthiness of sources is therefore a potential security threat in peer-to-peer systems. A study ordered by the European Union found that illegal downloading may lead to an increase in overall video game sales because newer games charge for extra features or levels. The paper concluded that piracy had a negative financial impact on movies, music, and literature. The study relied on self-reported data about game purchases and use of illegal download sites, and care was taken to remove the effects of false and misremembered responses. === Network neutrality === Peer-to-peer applications present one of the core issues in the network neutrality controversy. Internet service providers (ISPs) have been known to throttle P2P file-sharing traffic due to its high bandwidth usage. Compared to Web browsing, e-mail, or many other uses of the Internet, where data is only transferred in short intervals and in relatively small quantities, P2P file sharing often consists of relatively heavy bandwidth usage due to ongoing file transfers and swarm/network coordination packets. In October 2007, Comcast, one of the largest broadband Internet providers in the United States, started blocking P2P applications such as BitTorrent. Their rationale was that P2P is mostly used to share illegal content, and their infrastructure is not designed for continuous, high-bandwidth traffic. Critics point out that P2P networking has legitimate legal uses, and that this is another way that large providers are trying to control use and content on the Internet and direct people towards a client–server-based application architecture. The client–server model imposes financial barriers to entry on small publishers and individuals, and can be less efficient for sharing large files. As a reaction to this bandwidth throttling, several P2P applications started implementing protocol obfuscation, such as the BitTorrent protocol encryption. Techniques for achieving "protocol obfuscation" involve removing otherwise easily identifiable properties of protocols, such as deterministic byte sequences and packet sizes, by making the data look as if it were random. The ISPs' answer to the high bandwidth is P2P caching, where an ISP stores the parts of files most accessed by P2P clients in order to reduce traffic to and from the wider Internet. == Current research == Researchers have used computer simulations to aid in understanding and evaluating the complex behaviors of individuals within the network. "Networking research often relies on simulation in order to test and evaluate new ideas. An important requirement of this process is that results must be reproducible so that other researchers can replicate, validate, and extend existing work." If the research cannot be reproduced, then the opportunity for further research is hindered. "Even though new simulators continue to be released, the research community tends towards only a handful of open-source simulators. The demand for features in simulators, as shown by our criteria and survey, is high. Therefore, the community should work together to get these features in open-source software. This would reduce the need for custom simulators, and hence increase repeatability and reputability of experiments."
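To give a flavor of the kind of experiment such simulators support, the following is a minimal, hedged sketch in Python of a churn/free-riding model of file availability, along the lines of the free-rider detection work mentioned below; all parameters and names are illustrative, not taken from any published study.

import random

# Toy churn model: a file stays available only while at least one of its
# holders is online and willing to share (i.e., is not a free rider).
def availability(rounds=1000, holders=20, p_online=0.3, p_freeride=0.7):
    available = 0
    for _ in range(rounds):
        sharers = sum(1 for _ in range(holders)
                      if random.random() < p_online and random.random() >= p_freeride)
        available += sharers > 0
    return available / rounds

random.seed(0)
for n in (2, 5, 20):   # availability rises steeply with the number of holders
    print(n, round(availability(holders=n), 3))

Even this toy model reproduces the qualitative behavior described earlier: unpopular files (few holders) are frequently unavailable, while popular files are almost always reachable.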
Popular simulators that were widely used in the past are NS2, OMNeT++, SimPy, NetLogo, PlanetLab, ProtoPeer, QTM, PeerSim, ONE, P2PStrmSim, PlanetSim, GNUSim, and Bharambe. There has also been work done with the open-source ns-2 network simulator; for example, one research issue related to free-rider detection and punishment has been explored using it. == See also == == References == == External links ==
Wikipedia/Peer-to-peer_networking
In the OSI networking model, Data Link Control (DLC) is the service provided by the data link layer. Network interface cards have a DLC address that identifies each card; for instance, Ethernet and other types of cards have a 48-bit MAC address built into the cards' firmware when they are manufactured. There is also a network transport protocol with the name Data Link Control, comparable to better-known protocols like TCP/IP and AppleTalk. DLC is a transport protocol used by IBM SNA mainframe computers and peripherals and compatible equipment. In computer networking, it is typically used for communications between network-attached printers, workstations and servers, for example by HP in their JetDirect print servers. While it was widely used up until the time of Windows 2000, versions from Windows XP onward do not include support for DLC. == External links == Generic DLC Environment Overview at the Wayback Machine (archived 2021-06-15) Microsoft DLC protocol in Windows 2000 at the Wayback Machine (archived 2008-05-06) Microsoft TechNet: The Data Link Control Interface at the Wayback Machine (archived 2017-08-26), 30.3.2013 == References ==
Wikipedia/Data_Link_Control
The Recursive InterNetwork Architecture (RINA) is a new computer network architecture proposed as an alternative to the architecture of the currently mainstream Internet protocol suite. The principles behind RINA were first presented by John Day in his 2008 book Patterns in Network Architecture: A Return to Fundamentals. This work is a fresh start, taking into account lessons learned in the 35 years of TCP/IP’s existence, as well as the lessons of OSI’s failure and the lessons of other network technologies of the past few decades, such as CYCLADES, DECnet, and Xerox Network Systems. RINA's fundamental principles are that computer networking is just Inter-Process Communication or IPC, and that layering should be done based on scope/scale, with a single recurring set of protocols, rather than based on function, with specialized protocols. The protocol instances in one layer interface with the protocol instances on higher and lower layers via new concepts and entities that effectively reify networking functions currently specific to protocols like BGP, OSPF and ARP. In this way, RINA claims to support features like mobility, multihoming and quality of service without the need for additional specialized protocols like RTP and UDP, as well as to allow simplified network administration without the need for concepts like autonomous systems and NAT. == Overview == RINA is the result of an effort to work out general principles in computer networking that apply in all situations. RINA is the specific architecture, implementation, testing platform and ultimately deployment of the model informally known as the IPC model, although it also deals with concepts and results that apply to any distributed application, not just to networking. Coming from distributed applications, most of the terminology derives from application development rather than networking, which is understandable given that RINA's fundamental principle is to reduce networking to IPC. The basic entity of RINA is the Distributed Application Process or DAP, which frequently corresponds to a process on a host. Two or more DAPs constitute a Distributed Application Facility or DAF, as illustrated in Figure 1. These DAPs communicate using the Common Distributed Application Protocol or CDAP, exchanging structured data in the form of objects. These objects are structured in a Resource Information Base or RIB, which provides a naming schema and a logical organization to them. CDAP provides six basic operations on a remote DAP's objects: create, delete, read, write, start and stop. In order to exchange information, DAPs need an underlying facility whose task is to provide and manage IPC services over a certain scope. This facility is another DAF, called a Distributed IPC Facility or DIF. A DIF enables a DAP to allocate flows to one or more DAPs by just providing the names of the targeted DAPs and the desired QoS parameters, such as bounds on data loss and latency, ordered or out-of-order delivery, reliability, and so forth. DAPs may not trust the DIF they are using and may therefore protect their data before writing it to the flow via an SDU protection module, for example by encrypting it. The DAPs of a DIF are called IPC Processes or IPCPs. They have the same generic DAP structure shown in Figure 3, plus some specific tasks to provide and manage IPC.
These tasks, as shown in Figure 4, can be divided into three categories, in order of increasing complexity and decreasing frequency: data transfer, data transfer control and layer management. In most contemporary network models, a DAF thus corresponds to the application layer and a DIF to the layer immediately below; the three task categories above cover the vast majority of tasks not just of network operations, but of network management and even authentication (with some adjustments in responsibility, as will be seen below). DIFs, being DAFs, in turn use other underlying DIFs themselves, going all the way down to the physical layer DIF controlling the wires and jacks. This is where the recursion of RINA comes from. All RINA layers have the same structure and components and provide the same functions; they differ only in their scopes, configurations or policies (mirroring the separation of mechanism and policy in operating systems). As shown in Figure 2, RINA networks are usually structured in DIFs of increasing scope. Figure 3 shows an example of how the Web could be structured with RINA: the highest layer is the one closest to applications, corresponding to email or websites; the lowest layers aggregate and multiplex the traffic of the higher layers, corresponding to ISP backbones. Multi-provider DIFs (such as the public Internet or others) float on top of the ISP layers. In this model, three types of systems are distinguished: hosts, which contain DAPs; interior routers, internal to a layer; and border routers, at the edges of a layer, where packets go up or down one layer. In short, RINA keeps the concepts of PDU and SDU, but instead of layering by function, it layers by scope. Layers correspond not to different responsibilities but to different scales, and the model is specifically designed to be applicable from a single point-to-point Ethernet connection all the way up to the Web. RINA is therefore an attempt to reuse as much theory as possible and eliminate the need for ad-hoc protocol design, and thus to reduce the complexity of network construction, management and operation in the process. === Naming, addressing, routing, mobility and multihoming === As discussed under Background below, the IP address is too low-level an identifier on which to base multihoming and mobility efficiently, and it also requires routing tables to be bigger than necessary. RINA literature follows the general theory of Jerry Saltzer on addressing and naming. According to Saltzer, four elements need to be identified: applications, nodes, attachment points and paths. An application can run in one or more nodes and should be able to move from one node to another without losing its identity in the network. A node can be connected to a pair of attachment points and should be able to move between them without losing its identity in the network. A directory maps an application name to a node address, and routes are sequences of node addresses and attachment points. These points are illustrated in Figure 4. Saltzer took his model from operating systems, but the RINA authors concluded it could not be applied cleanly to internetworks, which can have more than one path between the same pair of nodes (let alone whole networks). Their solution is to model routes as sequences of nodes: at each hop, the respective node chooses the most appropriate attachment point to forward the packet to the next node.
Therefore, RINA routes in a two-step process: first, the route as a sequence of node addresses is calculated; then, for each hop, an appropriate attachment point is selected. These two steps generate the forwarding table; forwarding itself is still performed with a single lookup. Moreover, the second step can be performed more frequently in order to exploit multihoming for load balancing. With this naming structure, mobility and multihoming are inherently supported if the names have carefully chosen properties: application names are location-independent to allow an application to move around; node addresses are location-dependent but route-independent; and attachment points are by nature route-dependent. Applying this naming scheme to RINA with its recursive layers allows the conclusion that mapping application names to node addresses is analogous to mapping node addresses to attachment points. Put simply, at any layer, nodes in the layer above can be seen as applications, while nodes in the layer below can be seen as attachment points. === Protocol design === The Internet protocol suite also generally dictates that protocols be designed in isolation, without regard to whether aspects have been duplicated in other protocols and, therefore, whether these can be made into a policy. RINA tries to avoid this by applying the separation of mechanism and policy in operating systems to protocol design. Each DIF uses different policies to provide different classes of quality of service and to adapt to the characteristics of either the physical media, if the DIF is low-level, or the applications, if the DIF is high-level. RINA uses the theory of the Delta-T protocol developed by Richard Watson in 1981. Watson's research suggests that bounding three timers is a sufficient condition for reliable transfer. Delta-T is an example of how this should work: it has no connection setup or tear-down. The same research also notes that TCP already uses these timers in its operation in addition to its explicit handshaking, making Delta-T comparatively simpler. Watson's research also suggests that synchronization and port allocation should be distinct functions, port allocation being part of layer management and synchronization being part of data transfer. === Security === To accommodate security, RINA requires each DIF/DAF to specify a security policy, whose functions are shown in Figure 5. This allows securing not just applications, but backbones and switching fabrics themselves. A public network is simply a special case where the security policy does nothing. This may introduce overhead for smaller networks, but it scales better with larger networks because layers do not need to coordinate their security mechanisms: the current Internet is estimated to require around 5 times more distinct security entities than RINA. Among other things, the security policy can also specify an authentication mechanism; this obsoletes firewalls and blacklists, because a DAP or IPCP that cannot join a DAF or DIF cannot transmit or receive. DIFs also do not expose their IPCP addresses to higher layers, preventing a wide class of man-in-the-middle attacks. The design of the Delta-T protocol itself, with its emphasis on simplicity, is also a factor. For example, since the protocol has no handshake, it has no corresponding control messages that can be forged, nor state that can be misused as in a SYN flood. The synchronization mechanism also makes aberrant behavior more correlated with intrusion attempts, making attacks far easier to detect.
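Returning to the two-step route computation described under Naming, addressing, routing, mobility and multihoming above, the idea can be sketched abstractly. The following is an illustrative Python sketch only, not code from any RINA implementation; the names, data structures and the least-loaded policy are ours.

# Illustrative sketch: routes are computed as sequences of node addresses;
# attachment points (interfaces) are then chosen per hop when building the
# forwarding table, and can be re-chosen more often for load balancing.

def build_forwarding_table(routes, attachment_points, choose):
    """routes: dest -> [node, node, ...] (node-level path);
    attachment_points: (from_node, to_node) -> [ap, ...];
    choose: policy picking one attachment point (e.g. least loaded)."""
    table = {}
    for dest, path in routes.items():
        next_node = path[0]                       # step 1 gave the node-level route
        aps = attachment_points[("self", next_node)]
        table[dest] = choose(aps)                 # step 2: pick an attachment point
    return table

routes = {"N9": ["N4", "N7", "N9"]}               # hypothetical node-level route to N9
aps = {("self", "N4"): ["eth0", "eth1"]}          # two attachment points to N4 (multihoming)
print(build_forwarding_table(routes, aps, choose=lambda xs: min(xs)))

Forwarding itself remains a single table lookup per packet; only the table construction is split into the two steps.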
== Background == The starting point for a radically new network architecture like RINA is an attempt to solve, or respond to, the following problems, which do not appear to have practical or compromise-free solutions within current network architectures, especially the Internet protocol suite and its functional layering as depicted in Figure 6:
Transmission complexity: the separation of IP and TCP results in inefficiency, with the MTU discovery performed to prevent IP fragmentation being the clearest symptom.
Performance: TCP itself carries rather high overhead with its handshake, which also causes vulnerabilities such as SYN floods. Also, TCP relies on packet dropping to throttle itself and avoid congestion, meaning its congestion control is purely reactive, not proactive or preventive. This interacts badly with large buffers, leading to bufferbloat.
Multihoming: the IP address and port number are too low-level to identify an application in two different networks. DNS doesn't solve this because hostnames must resolve to a single IP address and port number combination, making them aliases instead of identities. Neither does LISP, because i) it still uses the locator, which is an IP address, for routing, and ii) it is based on a false distinction, in that all entities in a scope are located by their identifiers to begin with; in addition, it also introduces scalability problems of its own.
Mobility: the IP address and port number are also too low-level to identify an application as it moves between networks, resulting in complications for mobile devices such as smartphones. Though a solution, Mobile IP in reality shifts the problem entirely to the care-of address and introduces an IP tunnel, with attendant complexity.
Management: the same low-level nature of the IP address encourages multiple addresses or even address ranges to be allocated to single hosts, putting pressure on allocation and accelerating exhaustion. NAT only delays address exhaustion and potentially introduces even more problems. At the same time, the functional layering of the Internet protocol suite's architecture leaves room for only two scopes, complicating subdivision of administration of the Internet and requiring the artificial notion of autonomous systems. OSPF and IS-IS have relatively few problems, but do not scale well, forcing usage of BGP for larger networks and inter-domain routing.
Security: the nature of the IP address space itself results in frail security, since there is no true configurable policy for adding or removing IP addresses other than physically preventing attachment. TLS and IPSec provide solutions, but with accompanying complexity. Firewalls and blacklists are vulnerable to being overwhelmed, and therefore do not scale. "[...] experience has shown that it is difficult to add security to a protocol suite unless it is built into the architecture from the beginning."
Though these problems are far more acutely visible today, there have been precedents and cases almost right from the beginning of the ARPANET, the environment in which the Internet protocol suite was designed: === 1972: Multihoming not supported by the ARPANET === In 1972, Tinker Air Force Base wanted connections to two different IMPs for redundancy. ARPANET designers realized that they couldn't support this feature because host addresses were the addresses of the IMP port the host was connected to (borrowing from telephony).
To the ARPANET, two interfaces of the same host had different addresses; in other words, the address was too low-level to identify a host. === 1978: TCP split from IP === Initial TCP versions performed the error and flow control (current TCP) and relaying and multiplexing (IP) functions in the same protocol. In 1978 TCP was split from IP, even though the two layers had the same scope. By 1987, the networking community was well aware of IP fragmentation's problems, to the point of considering fragmentation harmful. However, this was not understood as a symptom of TCP and IP being interdependent. === 1981: Watson's fundamental results ignored === Richard Watson in 1981 provided a fundamental theory of reliable transport whereby connection management requires only timers bounded by a small factor of the Maximum Packet Lifetime (MPL). Based on this theory, Watson et al. developed the Delta-T protocol, which allows a connection's state to be determined simply by bounding three timers, with no handshaking. TCP, on the other hand, uses both explicit handshaking and more limited timer-based management of the connection's state. === 1983: Internetwork layer lost === Early in 1972 the International Network Working Group (INWG) was created to bring together the nascent network research community. One of the early tasks it accomplished was voting on an international network transport protocol, which was approved in 1976. Remarkably, the selected option, as well as all the other candidates, had an architecture composed of three layers of increasing scope: data link (to handle different types of physical media), network (to handle different types of networks) and internetwork (to handle a network of networks), each layer with its own address space. When TCP/IP was introduced, it ran at the internetwork layer on top of the Host-IMP Protocol when running over the ARPANET. But when NCP was shut down, TCP/IP took over the network role, and the internetwork layer was lost. This explains the need for autonomous systems and NAT today, to partition and reuse ranges of the IP address space in order to facilitate administration. === 1983: First opportunity to fix addressing missed === The need for an address at a higher level than the IP address had been well understood since the mid-1970s. However, application names were not introduced, and DNS was designed and deployed, continuing to use well-known ports to identify applications. The advent of the web and HTTP created a need for application names, leading to URLs. URLs, however, tie each application instance to a physical interface of a computer and a specific transport connection, since the URL contains the DNS name of an IP interface and a TCP port number, spilling the multihoming and mobility problems over to applications. === 1986: Congestion collapse takes the Internet by surprise === Though the problem of congestion control in datagram networks had been known since the 1970s and early 80s, the congestion collapse of 1986 caught the Internet by surprise. Worse, the congestion control that was adopted, an Ethernet congestion avoidance scheme with a few modifications, was put into TCP. === 1988: Network management takes a step backward === In 1988 the IAB recommended using SNMP as the initial network management protocol for the Internet, with a later transition to the object-oriented approach of CMIP. SNMP was a step backwards in network management, justified as a temporary measure while the required, more sophisticated approaches were implemented, but the transition never happened.
=== 1992: Second opportunity to fix addressing missed === In 1992 the IAB produced a series of recommendations to resolve the scaling problems of the IPv4-based Internet: address space consumption and routing information explosion. Three options were proposed: introduce CIDR to mitigate the problem; design the next version of IP (IPv7) based on CLNP; or continue the research into naming, addressing and routing. CLNP was an OSI-based protocol that addressed nodes instead of interfaces, solving the old multihoming problem dating back to the ARPANET and allowing for better aggregation of routing information. CIDR was introduced, but the IETF didn't accept an IPv7 based on CLNP. The IAB reconsidered its decision and the IPng process started, culminating in IPv6. One of the rules for IPng was not to change the semantics of the IP address, which continues to name the interface, perpetuating the multihoming problem. == Research projects == From the publishing of the PNA book in 2008 to 2014, a substantial amount of RINA research and development work was done. An informal group known as the Pouzin Society, named after Louis Pouzin, coordinates several international efforts. === BU Research Team === The RINA research team at Boston University is led by Professors Abraham Matta, John Day and Lou Chitkushev, and has been awarded a number of grants from the National Science Foundation and the EC in order to continue investigating the fundamentals of RINA, develop an open-source prototype implementation over UDP/IP for Java, and experiment with it on top of the GENI infrastructure. BU is also a member of the Pouzin Society and an active contributor to the FP7 IRATI and PRISTINE projects. In addition, BU has incorporated RINA concepts and theory into its computer networking courses. === FP7 IRATI === IRATI is an FP7-funded project with 5 partners: i2CAT, Nextworks, iMinds, Interoute and Boston University. It has produced an open-source RINA implementation for the Linux OS on top of Ethernet. === FP7 PRISTINE === PRISTINE is an FP7-funded project with 15 partners: WIT-TSSG, i2CAT, Nextworks, Telefónica I+D, Thales, Nexedi, B-ISDN, Atos, University of Oslo, Juniper Networks, Brno University, IMT-TSP, CREATE-NET, iMinds and UPC. Its main goal is to explore the programmability aspects of RINA to implement innovative policies for congestion control, resource allocation, routing, security and network management. === GÉANT3+ Open Call winner IRINA === IRINA was funded by the GÉANT3+ open call, and is a project with four partners: iMinds, WIT-TSSG, i2CAT and Nextworks. The main goal of IRINA is to study the use of the Recursive InterNetwork Architecture (RINA) as the foundation of the next-generation NREN and GÉANT network architectures. IRINA builds on the IRATI prototype and will compare RINA against the current networking state of the art and relevant clean-slate architectures under research; perform a use-case study of how RINA could be better used in NREN scenarios; and showcase a laboratory trial of the study. == See also == Protocol Wars == References == == External links == The Pouzin Society website: http://pouzinsociety.org RINA Education page at the IRATI website, available online at http://irati.eu/education/ RINA document repository run by the TSSG, available online at http://rina.tssg.org RINA tutorial at the IEEE Globecom 2014 conference, available online at http://www.slideshare.net/irati-project/rina-tutorial-ieee-globecom-2014
Wikipedia/Recursive_Internetwork_Architecture
The Hierarchical internetworking model is a three-layer model for network design first proposed by Cisco in 1998. The hierarchical design model divides enterprise networks into three layers: core, distribution, and access. == Access layer == End-stations and servers connect to the enterprise at the access layer. Access layer devices are usually commodity switching platforms, and may or may not provide layer 3 switching services. The traditional focus at the access layer is minimizing "cost-per-port": the amount of investment the enterprise must make for each provisioned Ethernet port. This layer is also called the desktop layer because it focuses on connecting client nodes, such as workstations, to the network. == Distribution layer == The distribution layer is the smart layer in the three-layer model. Routing, filtering, and QoS policies are managed at the distribution layer. Distribution layer devices also often manage individual branch-office WAN connections. This layer is also called the workgroup layer. == Core layer == The core layer is the backbone of the network, where the internet (internetwork) gateways are located. The core network provides high-speed, highly redundant forwarding services to move packets between distribution-layer devices in different regions of the network. Core switches and routers are usually the most powerful in the enterprise, in terms of raw forwarding power; core network devices manage the highest-speed connections, such as 10 Gigabit Ethernet or 100 Gigabit Ethernet. == See also == Service layer "Hierarchical Network Design", Connecting Networks Companion Guide, Cisco Press, 2014, retrieved 2023-12-16 PDF Khalid Raza, Mark Turner (1998), "Chapter 4. Network Topology and Design", Large-Scale IP Network Solutions, Cisco Press, ISBN 978-1-57870-084-4 High Availability Campus Network Design, Cisco, 2008, retrieved 2022-04-05 PDF == References ==
Wikipedia/Hierarchical_internetworking_model
The Network Driver Interface Specification (NDIS) is an application programming interface (API) for network interface controllers (NICs). == Specification == It was jointly developed by Microsoft and 3Com Corporation and is mostly used in Microsoft Windows. However, the open-source NDISwrapper and Project Evil driver wrapper projects allow many NDIS-compliant NICs to be used with Linux, FreeBSD and NetBSD. magnussoft ZETA, a derivative of BeOS, supports a number of NDIS drivers. The NDIS forms the logical link control (LLC) sublayer, which is the upper sublayer of the OSI data link layer (layer 2). Therefore, the NDIS acts as the interface between the media access control (MAC) sublayer, which is the lower sublayer of the data link layer, and the network layer (layer 3). The NDIS is a library of functions often referred to as a "wrapper" that hides the underlying complexity of the NIC hardware and serves as a standard interface for level 3 network protocol drivers and hardware-level MAC drivers. The NDIS versions supported by various Windows versions are as follows:
NDIS 2.0: MS-DOS, Windows for Workgroups 3.1, OS/2
NDIS 3.0: Windows for Workgroups 3.11
NDIS 3.1: Windows 95
NDIS 4.0: Windows 95 OSR2, NT 4.0, Windows CE 3.0
NDIS 4.1: Windows 98
NDIS 5.0: Windows 98 SE, Me, 2000
NDIS 5.1: Windows XP, Server 2003, Windows CE 4.x, 5.0, 6.0
NDIS 5.2: Windows Server 2003 SP2
NDIS 6.0: Windows Vista
NDIS 6.1: Windows Vista SP1, Server 2008, Windows Embedded Compact 7, Windows Embedded Compact 2013
NDIS 6.20: Windows 7, Server 2008 R2
NDIS 6.30: Windows 8, Windows Server 2012
NDIS 6.40: Windows 8.1, Windows Server 2012 R2
NDIS 6.50: Windows 10, version 1507
NDIS 6.51: Windows 10, version 1511
NDIS 6.60: Windows 10, version 1607 and Windows Server 2016
NDIS 6.70: Windows 10, version 1703
NDIS 6.80: Windows 10, version 1709
NDIS 6.81: Windows 10, version 1803
NDIS 6.82: Windows 10, version 1809 and Windows Server 2019
NDIS 6.83: Windows 10, version 1903 and Windows Server 2022
NDIS 6.84: Windows 10, version 2004
NDIS 6.85: Windows 10, version 21H2
NDIS 6.86: Windows 11, version 21H2
NDIS 6.87: Windows 11, version 22H2
NDIS 6.88: Windows Server 2022, version 23H2
NDIS 6.89: Windows 11, version 24H2
The traffic accepted by the NIC is controlled by an NDIS Miniport Driver, while various protocols, such as TCP/IP, are implemented by NDIS Protocol Drivers. A single miniport may be associated with one or more protocols. This means that traffic coming into the miniport may be received in parallel by several protocol drivers. For example, Winpcap adds a second protocol driver on the selected miniport in order to capture incoming packets. Furthermore, it is possible to simulate several virtual NICs by implementing virtual miniport drivers that send and receive traffic from a single physical NIC. One example of virtual miniport driver usage is to add virtual NICs, each with a different VLAN. Because a driver cannot assume that it is the only recipient of a buffer, it must treat incoming buffers as read-only, and a driver that changes the packet content must allocate its own buffers. NDIS Miniport drivers can also use Windows Driver Model interfaces to control network hardware. Another driver type is the NDIS Intermediate Driver. Intermediate drivers sit between the MAC and IP layers and can control all traffic being accepted by the NIC. In practice, intermediate drivers implement both miniport and protocol interfaces.
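The chaining just described can be pictured abstractly. The following is a hedged Python sketch of the pass-through idea only, not actual NDIS code; real intermediate drivers are written in C against the NDIS miniport and protocol interfaces, and all class and method names here are ours.

# Abstract sketch of an NDIS-style driver chain: each intermediate driver
# looks like a miniport to the layer above and like a protocol to the layer
# below, so several intermediates can be stacked between them.
class Miniport:
    def send(self, packet):
        print("NIC transmits", packet)   # lowest layer: hands the frame to the hardware

class PassThru:
    """Intermediate driver: forwards traffic to the next driver below."""
    def __init__(self, below):
        self.below = below
    def send(self, packet):
        # A driver must not modify shared buffers in place; one that changes
        # packet content allocates its own copy, as noted above.
        self.below.send(bytes(packet))

stack = PassThru(PassThru(Miniport()))   # two chained intermediate drivers
stack.send(b"\x00\x01frame")

Because intermediates can be stacked this way, a driver cannot know whether its downstream neighbor is the real miniport or another intermediate, which is exactly the point made in the next paragraph.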
The miniport driver and protocol driver actually communicate with the corresponding miniport and protocol interfaces that reside in the intermediate driver. This design enables adding several chained intermediate drivers between the miniport and protocol drivers. Therefore, driver vendors cannot assume that the interface they send traffic to is implemented by the last driver in the chain. In order to write applications using NDIS, one can use the samples that accompany Microsoft's Windows Driver Kit (WDK). The "PassThru" sample is a good starting point for intermediate drivers, as it implements all the necessary details required in this driver type but simply passes the traffic through to the next driver in the chain. NDIS 4.1 implemented WDM features; NDIS 5.0 implemented TCP/IP offload features. With Windows 10 version 2004, a new driver framework for network adapters, the Network Adapter WDF Class Extension (NetAdapterCx), was introduced to simplify the driver development process. == See also == Open Data-Link Interface (ODI) Uniform Driver Interface (UDI) Universal Network Device Interface (UNDI) New API PC/TCP Packet Driver == References == == External links == Windows Core Networking NDIS Drivers Microsoft MSDN Design Guide
Wikipedia/Network_Driver_Interface_Specification
The Address Resolution Protocol (ARP) is a communication protocol for discovering the link layer address, such as a MAC address, associated with an internet layer address, typically an IPv4 address. The protocol, part of the Internet protocol suite, was defined in 1982 by RFC 826, which is Internet Standard STD 37. ARP enables a host to send an IPv4 packet to another node in the local network by providing a protocol to get the MAC address associated with an IP address. The host broadcasts a request containing the node's IP address, and the node with that IP address replies with its MAC address. ARP has been implemented with many combinations of network and data link layer technologies, such as IPv4, Chaosnet, DECnet and Xerox PARC Universal Packet (PUP) using IEEE 802 standards, FDDI, X.25, Frame Relay and Asynchronous Transfer Mode (ATM). In Internet Protocol Version 6 (IPv6) networks, the functionality of ARP is provided by the Neighbor Discovery Protocol (NDP). == Operating scope == The Address Resolution Protocol is a request-response protocol. Its messages are directly encapsulated by a link layer protocol. It is communicated within the boundaries of a single subnetwork and is never routed. == Packet structure == The Address Resolution Protocol uses a simple message format containing one address resolution request or response. The packets are carried at the data link layer of the underlying network as raw payload. In the case of Ethernet, a 0x0806 EtherType value is used to identify ARP frames. The size of the ARP message depends on the link layer and network layer address sizes. The message header specifies the types of network in use at each layer as well as the size of addresses of each. The message header is completed with the operation code for request (1) and reply (2). The payload of the packet consists of four addresses, the hardware and protocol address of the sender and receiver hosts. The principal packet structure of ARP packets is shown in the following table, which illustrates the case of IPv4 networks running on Ethernet. In this scenario, the packet has 48-bit fields for the sender hardware address (SHA) and target hardware address (THA), and 32-bit fields for the corresponding sender and target protocol addresses (SPA and TPA). The ARP packet size in this case is 28 bytes.
Hardware Type (HTYPE): 16 bits. This field specifies the network link protocol type. In this example, a value of 1 indicates Ethernet.
Protocol Type (PTYPE): 16 bits. This field specifies the internetwork protocol for which the ARP request is intended. For IPv4, this has the value 0x0800. The permitted PTYPE values share a numbering space with those for EtherType.
Hardware Length (HLEN): 8 bits. Length (in octets) of a hardware address. For Ethernet, the address length is 6.
Protocol Length (PLEN): 8 bits. Length (in octets) of internetwork addresses. The internetwork protocol is specified in PTYPE. In this example, the IPv4 address length is 4.
Operation (OPER): 16 bits. Specifies the operation that the sender is performing: 1 for request, 2 for reply.
Sender Hardware Address (SHA): 48 bits. Media address of the sender. In an ARP request this field is used to indicate the address of the host sending the request. In an ARP reply this field is used to indicate the address of the host that the request was looking for.
Sender Protocol Address (SPA): 32 bits. Internetwork address of the sender.
Target Hardware Address (THA): 48 bits. Media address of the intended receiver. In an ARP request this field is ignored.
In an ARP reply this field is used to indicate the address of the host that originated the ARP request.
Target Protocol Address (TPA): 32 bits. Internetwork address of the intended receiver.
ARP parameter values have been standardized and are maintained by the Internet Assigned Numbers Authority (IANA). The EtherType for ARP is 0x0806. This appears in the Ethernet frame header when the payload is an ARP packet and is not to be confused with PTYPE, which appears within this encapsulated ARP packet. == Layering == ARP's placement within the Internet protocol suite and the OSI model may be a matter of confusion or even of dispute. RFC 826 places it into the link layer and characterizes it as a tool to inquire about the "higher level layer", such as the Internet layer. RFC 1122 also discusses ARP in its link layer section. Richard Stevens places ARP in OSI's data link layer, while newer editions associate it with the network layer or introduce an intermediate OSI layer 2.5. == Example == Two computers, A and B, are connected to the same local area network with no intervening gateway or router. A has a packet to send to IP address 192.168.0.55, which happens to be the address of B. Before sending the packet to B, A broadcasts an ARP request message – addressed with the broadcast MAC address FF:FF:FF:FF:FF:FF and requesting a response from the node with IP address 192.168.0.55. All nodes of the network receive the message, but only B replies, since it has the requested IP address. B responds with an ARP response message containing its MAC address, which A receives. A then sends the data packet on the link, addressed with B's MAC address. Typically, network nodes maintain a lookup cache that associates IP and MAC addresses. In this example, if A had the lookup cached, it would not need to broadcast the ARP request. Also, when B received the request, it could cache the lookup to A, so that if B needs to send a packet to A later, it does not need to use ARP to look up A's MAC address. Finally, when A receives the ARP response, it can cache the lookup for future messages addressed to the same IP address. == ARP probe == An ARP probe in IPv4 is an ARP request constructed with the SHA of the probing host, an SPA of all 0s, a THA of all 0s, and a TPA set to the IPv4 address being probed for. If some host on the network regards the IPv4 address (in the TPA) as its own, it will reply to the probe (via the SHA of the probing host), thus informing the probing host of the address conflict. If instead there is no host which regards the IPv4 address as its own, then there will be no reply. When several such probes have been sent, with slight delays, and none receive replies, it can reasonably be expected that no conflict exists. As the original probe packet contains neither a valid SHA/SPA nor a valid THA/TPA pair, there is no risk of any host using the packet to update its cache with problematic data. Before beginning to use an IPv4 address (whether received from manual configuration, DHCP, or some other means), a host implementing this specification must test to see if the address is already in use, by broadcasting ARP probe packets. == ARP announcements == ARP may also be used as a simple announcement protocol. This is useful for updating other hosts' mappings of a hardware address when the sender's IP address or MAC address changes. Such an announcement, also called a gratuitous ARP (GARP) message, is usually broadcast as an ARP request containing the SPA in the target field (TPA=SPA), with THA set to zero.
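Using the field layout given in the packet structure section above, the announcement-as-request form can be illustrated concretely. This is a hedged Python sketch; the MAC and IP values are placeholders, and actually transmitting the frame would require a raw socket and appropriate privileges.

import struct

def build_garp_request(mac: bytes, ip: bytes) -> bytes:
    """Build an Ethernet frame carrying a gratuitous ARP request (TPA=SPA, THA=0)."""
    eth = b"\xff" * 6 + mac + b"\x08\x06"            # broadcast dst, our src, EtherType 0x0806
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)  # HTYPE=1, PTYPE=0x0800, HLEN=6, PLEN=4, OPER=1
    arp += mac + ip                                  # SHA, SPA
    arp += b"\x00" * 6 + ip                          # THA all zeros, TPA = SPA
    return eth + arp

frame = build_garp_request(bytes.fromhex("001122334455"),   # placeholder MAC address
                           bytes([192, 168, 0, 55]))        # placeholder IPv4 address
assert len(frame) == 14 + 28     # 14-byte Ethernet header plus the 28-byte ARP packet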
An alternative way is to broadcast an ARP reply with the sender's SHA and SPA duplicated in the target fields (TPA=SPA, THA=SHA). The ARP request and ARP reply announcements are both standards-based methods,: §4.6  but the ARP request method is preferred.: §3  Some devices may be configured to use either of these two types of announcement. An ARP announcement is not intended to solicit a reply; instead, it updates any cached entries in the ARP tables of other hosts that receive the packet. The operation code in the announcement may be either request or reply; the ARP standard specifies that the opcode is only processed after the ARP table has been updated from the address fields.: §4.6 : §4.4.1  Many operating systems issue an ARP announcement during startup. This helps to resolve problems that would otherwise occur if, for example, a network card was recently changed (changing the IP-address-to-MAC-address mapping) and other hosts still have the old mapping in their ARP caches. ARP announcements are also used by some network interfaces to provide load balancing for incoming traffic. In a team of network cards, an announcement is used to advertise a different MAC address within the team that should receive incoming packets. ARP announcements can be used in the Zeroconf protocol to allow automatic assignment of a link-local address to an interface where no other IP address configuration is available. The announcements are used to ensure that an address chosen by a host is not in use by other hosts on the network link. This function can be dangerous from a cybersecurity viewpoint, since an attacker can use it to poison the ARP caches of the other hosts on the subnet (ARP spoofing), for instance by associating the attacker's MAC address with the IP address of the default gateway, thus allowing the attacker to intercept all traffic to external networks. == ARP mediation == ARP mediation refers to the process of resolving Layer-2 addresses through a virtual private wire service (VPWS) when different resolution protocols are used on the connected circuits, e.g., Ethernet on one end and Frame Relay on the other. In IPv4, each provider edge (PE) device discovers the IP address of the locally attached customer edge (CE) device and distributes that IP address to the corresponding remote PE device. Then each PE device responds to local ARP requests using the IP address of the remote CE device and the hardware address of the local PE device. In IPv6, each PE device discovers the IP address of both local and remote CE devices and then intercepts local Neighbor Discovery (ND) and Inverse Neighbor Discovery (IND) packets and forwards them to the remote PE device. == Inverse ARP and Reverse ARP == Inverse Address Resolution Protocol (Inverse ARP or InARP) is used to obtain network layer addresses (for example, IP addresses) of other nodes from data link layer (Layer 2) addresses. Since ARP translates layer-3 addresses to layer-2 addresses, InARP may be described as its inverse. In addition, InARP is implemented as a protocol extension to ARP: it uses the same packet format as ARP, but different operation codes. InARP is primarily used in Frame Relay (DLCI) and ATM networks, in which layer-2 addresses of virtual circuits are sometimes obtained from layer-2 signaling, and the corresponding layer-3 addresses must be available before those virtual circuits can be used. The Reverse Address Resolution Protocol (Reverse ARP or RARP), like InARP, translates layer-2 addresses to layer-3 addresses.
However, in InARP the requesting station queries the layer-3 address of another node, whereas RARP is used to obtain the layer-3 address of the requesting station itself, for address configuration purposes. RARP is obsolete; it was replaced by BOOTP, which was later superseded by the Dynamic Host Configuration Protocol (DHCP). == ARP spoofing and proxy ARP == Because ARP does not provide methods for authenticating ARP replies on a network, ARP replies can come from systems other than the one with the required Layer 2 address. An ARP proxy is a system that answers the ARP request on behalf of another system for which it will forward traffic, normally as a part of the network's design, such as for a dial-up internet service. By contrast, in ARP spoofing the answering system, or spoofer, replies to a request for another system's address with the aim of intercepting data bound for that system. A malicious user may use ARP spoofing to perform a man-in-the-middle or denial-of-service attack on other users on the network. Various software exists to both detect and perform ARP spoofing attacks, though ARP itself does not provide any methods of protection from such attacks. == Alternatives == IPv6 uses the Neighbor Discovery Protocol and its extensions such as Secure Neighbor Discovery, rather than ARP. Computers can maintain lists of known addresses, rather than using an active protocol. In this model, each computer maintains a database of the mapping of Layer 3 addresses (e.g., IP addresses) to Layer 2 addresses (e.g., Ethernet MAC addresses). This data is maintained primarily by interpreting ARP packets from the local network link. Thus, it is often called the ARP cache. Since at least the 1980s, networked computers have had a utility called arp for interrogating or manipulating this database. Historically, other methods were used to maintain the mapping between addresses, such as static configuration files or centrally maintained lists. == ARP stuffing == Embedded systems such as networked cameras and networked power distribution devices, which lack a user interface, can use so-called ARP stuffing to make an initial network connection, although this is a misnomer, as ARP is not involved. ARP stuffing is accomplished as follows:
The user's computer has an IP address stuffed manually into its address table (normally with the arp command, using the MAC address taken from a label on the device).
The computer sends special packets to the device, typically a ping packet with a non-default size.
The device then adopts this IP address.
The user then communicates with it by telnet or web protocols to complete the configuration.
Such devices typically have a method to disable this process once the device is operating normally, as the capability can make it vulnerable to attack. == Standards documents == RFC 826 – An Ethernet Address Resolution Protocol, Internet Standard 37. RFC 903 – A Reverse Address Resolution Protocol, Internet Standard 38. RFC 2390 – Inverse Address Resolution Protocol, Draft Standard. RFC 5227 – IPv4 Address Conflict Detection, Proposed Standard. == See also == Arping – Software utility for discovering and probing hosts on a computer network Arptables – Network administrator's tool Arpwatch – Computer networking software tool Bonjour Sleep Proxy – Open source component of zero configuration networking Cisco HDLC – Extension to the High-Level Data Link Control (HDLC) network protocol == References == == External links == "ARP Sequence Diagram (pdf)" (PDF).
Archived from the original (PDF) on 2021-03-01. Gratuitous ARP Information and sample capture from Wireshark. ARP-SK ARP traffic generation tools.
Wikipedia/Address_resolution_protocol
In circuit design, the Y-Δ transform, also written wye-delta and also known by many other names, is a mathematical technique to simplify the analysis of an electrical network. The name derives from the shapes of the circuit diagrams, which look respectively like the letter Y and the Greek capital letter Δ. This circuit transformation theory was published by Arthur Edwin Kennelly in 1899. It is widely used in analysis of three-phase electric power circuits. The Y-Δ transform can be considered a special case of the star-mesh transform for three resistors. In mathematics, the Y-Δ transform plays an important role in the theory of circular planar graphs. == Names == The Y-Δ transform is known by a variety of other names, mostly based upon the two shapes involved, listed in either order. The Y, spelled out as wye, can also be called T or star; the Δ, spelled out as delta, can also be called triangle, Π (spelled out as pi), or mesh. Thus, common names for the transformation include wye-delta or delta-wye, star-delta, star-mesh, or T-Π. == Basic Y-Δ transformation == The transformation is used to establish equivalence for networks with three terminals. Where three elements terminate at a common node and none are sources, the node is eliminated by transforming the impedances. For equivalence, the impedance between any pair of terminals must be the same for both networks. The equations given here are valid for complex as well as real impedances. Complex impedance is a quantity measured in ohms which represents resistance as positive real numbers in the usual manner, and also represents reactance as positive and negative imaginary values. === Equations for the transformation from Δ to Y === The general idea is to compute the impedance $R_\text{Y}$ at a terminal node of the Y circuit with impedances $R'$, $R''$ to adjacent nodes in the Δ circuit by

$$R_\text{Y} = \frac{R' R''}{\sum R_\Delta}$$

where the sum runs over all impedances in the Δ circuit. This yields the specific formulae

$$R_1 = \frac{R_\text{b} R_\text{c}}{R_\text{a} + R_\text{b} + R_\text{c}}, \quad R_2 = \frac{R_\text{a} R_\text{c}}{R_\text{a} + R_\text{b} + R_\text{c}}, \quad R_3 = \frac{R_\text{a} R_\text{b}}{R_\text{a} + R_\text{b} + R_\text{c}}.$$

=== Equations for the transformation from Y to Δ === The general idea is to compute an impedance $R_\Delta$ in the Δ circuit by

$$R_\Delta = \frac{R_P}{R_\text{opposite}}$$

where $R_P = R_1 R_2 + R_2 R_3 + R_3 R_1$ is the sum of the products of all pairs of impedances in the Y circuit and $R_\text{opposite}$ is the impedance of the node in the Y circuit which is opposite the edge with $R_\Delta$.
The formulae for the individual edges are thus

$$R_\text{a} = \frac{R_1 R_2 + R_2 R_3 + R_3 R_1}{R_1} = R_2 + R_3 + \frac{R_2 R_3}{R_1},$$
$$R_\text{b} = \frac{R_1 R_2 + R_2 R_3 + R_3 R_1}{R_2} = R_1 + R_3 + \frac{R_1 R_3}{R_2},$$
$$R_\text{c} = \frac{R_1 R_2 + R_2 R_3 + R_3 R_1}{R_3} = R_1 + R_2 + \frac{R_1 R_2}{R_3}.$$

Or, if using admittance instead of resistance:

$$Y_\text{a} = \frac{Y_3 Y_2}{\sum Y_\text{Y}}, \quad Y_\text{b} = \frac{Y_3 Y_1}{\sum Y_\text{Y}}, \quad Y_\text{c} = \frac{Y_1 Y_2}{\sum Y_\text{Y}}.$$

Note that the general formula in Y to Δ using admittance is similar to Δ to Y using resistance. == A proof of the existence and uniqueness of the transformation == The feasibility of the transformation can be shown as a consequence of the superposition theorem for electric circuits. A short proof, rather than one derived as a corollary of the more general star-mesh transform, can be given as follows. The equivalence lies in the statement that for any external voltages ($V_1$, $V_2$ and $V_3$) applied at the three nodes ($N_1$, $N_2$ and $N_3$), the corresponding currents ($I_1$, $I_2$ and $I_3$) are exactly the same for both the Y and Δ circuit, and vice versa. In this proof, we start with given external currents at the nodes. According to the superposition theorem, the voltages can be obtained by studying the superposition of the resulting voltages at the nodes of the following three problems applied at the three nodes with currents:

$$\tfrac{1}{3}(I_1 - I_2),\ -\tfrac{1}{3}(I_1 - I_2),\ 0;$$
$$0,\ \tfrac{1}{3}(I_2 - I_3),\ -\tfrac{1}{3}(I_2 - I_3);$$
$$-\tfrac{1}{3}(I_3 - I_1),\ 0,\ \tfrac{1}{3}(I_3 - I_1).$$

The equivalence can be readily shown by using Kirchhoff's circuit laws and the fact that $I_1 + I_2 + I_3 = 0$. Now each problem is relatively simple, since it involves only one single ideal current source. To obtain exactly the same outcome voltages at the nodes for each problem, the equivalent resistances in the two circuits must be the same; this can be easily found by using the basic rules of series and parallel circuits:

$$R_3 + R_1 = \frac{(R_\text{c} + R_\text{a}) R_\text{b}}{R_\text{a} + R_\text{b} + R_\text{c}}, \quad \frac{R_3}{R_1} = \frac{R_\text{a}}{R_\text{c}}.$$

Though usually six equations are more than enough to express three variables ($R_1$, $R_2$, $R_3$) in terms of the other three variables ($R_\text{a}$, $R_\text{b}$, $R_\text{c}$), here it is straightforward to show that these equations indeed lead to the above designed expressions.
In this argument, the superposition theorem establishes the relation between the values of the resistances, and the uniqueness theorem guarantees that the solution is unique. == Simplification of networks == Resistive networks between two terminals can theoretically be simplified to a single equivalent resistor (more generally, the same is true of impedance). Series and parallel transforms are basic tools for doing so, but for complex networks such as the bridge illustrated here, they do not suffice. The Y-Δ transform can be used to eliminate one node at a time and produce a network that can be further simplified, as shown. The reverse transformation, Δ-Y, which adds a node, is often handy to pave the way for further simplification as well. Every two-terminal network represented by a planar graph can be reduced to a single equivalent resistor by a sequence of series, parallel, Y-Δ, and Δ-Y transformations. However, there are non-planar networks that cannot be simplified using these transformations, such as a regular square grid wrapped around a torus, or any member of the Petersen family. == Graph theory == In graph theory, the Y-Δ transform means replacing a Y subgraph of a graph with the equivalent Δ subgraph. The transform preserves the number of edges in a graph, but not the number of vertices or the number of cycles. Two graphs are said to be Y-Δ equivalent if one can be obtained from the other by a series of Y-Δ transforms in either direction. For example, the Petersen family is a Y-Δ equivalence class. == Demonstration == === Δ-load to Y-load transformation equations === To relate { R a , R b , R c } {\displaystyle \left\{R_{\text{a}},R_{\text{b}},R_{\text{c}}\right\}} from Δ to { R 1 , R 2 , R 3 } {\displaystyle \left\{R_{1},R_{2},R_{3}\right\}} from Y, the impedance between two corresponding nodes is compared. The impedance in either configuration is determined as if one of the nodes is disconnected from the circuit. The impedance between N1 and N2 with N3 disconnected in Δ: R Δ ( N 1 , N 2 ) = R c ∥ ( R a + R b ) = 1 1 R c + 1 R a + R b = R c ( R a + R b ) R a + R b + R c {\displaystyle {\begin{aligned}R_{\Delta }\left(N_{1},N_{2}\right)&=R_{\text{c}}\parallel (R_{\text{a}}+R_{\text{b}})\\[3pt]&={\frac {1}{{\frac {1}{R_{\text{c}}}}+{\frac {1}{R_{\text{a}}+R_{\text{b}}}}}}\\[3pt]&={\frac {R_{\text{c}}\left(R_{\text{a}}+R_{\text{b}}\right)}{R_{\text{a}}+R_{\text{b}}+R_{\text{c}}}}\end{aligned}}} To simplify, let R T {\displaystyle R_{\text{T}}} be the sum of { R a , R b , R c } {\displaystyle \left\{R_{\text{a}},R_{\text{b}},R_{\text{c}}\right\}} . R T = R a + R b + R c {\displaystyle R_{\text{T}}=R_{\text{a}}+R_{\text{b}}+R_{\text{c}}} Thus, R Δ ( N 1 , N 2 ) = R c ( R a + R b ) R T {\displaystyle R_{\Delta }\left(N_{1},N_{2}\right)={\frac {R_{\text{c}}(R_{\text{a}}+R_{\text{b}})}{R_{\text{T}}}}} The corresponding impedance between N1 and N2 in Y is simple: R Y ( N 1 , N 2 ) = R 1 + R 2 {\displaystyle R_{\text{Y}}\left(N_{1},N_{2}\right)=R_{1}+R_{2}} hence: R 1 + R 2 = R c ( R a + R b ) R T {\displaystyle R_{1}+R_{2}={\frac {R_{\text{c}}(R_{\text{a}}+R_{\text{b}})}{R_{\text{T}}}}}   (1) Repeating for R ( N 2 , N 3 ) {\displaystyle R(N_{2},N_{3})} : R 2 + R 3 = R a ( R b + R c ) R T {\displaystyle R_{2}+R_{3}={\frac {R_{\text{a}}(R_{\text{b}}+R_{\text{c}})}{R_{\text{T}}}}}   (2) and for R ( N 1 , N 3 ) {\displaystyle R\left(N_{1},N_{3}\right)} : R 1 + R 3 = R b ( R a + R c ) R T .
{\displaystyle R_{1}+R_{3}={\frac {R_{\text{b}}\left(R_{\text{a}}+R_{\text{c}}\right)}{R_{\text{T}}}}.}   (3) From here, the values of { R 1 , R 2 , R 3 } {\displaystyle \left\{R_{1},R_{2},R_{3}\right\}} can be determined by linear combination (addition and/or subtraction). For example, adding (1) and (3), then subtracting (2) yields R 1 + R 2 + R 1 + R 3 − R 2 − R 3 = R c ( R a + R b ) R T + R b ( R a + R c ) R T − R a ( R b + R c ) R T ⇒ 2 R 1 = 2 R b R c R T ⇒ R 1 = R b R c R T . {\displaystyle {\begin{aligned}R_{1}+R_{2}+R_{1}+R_{3}-R_{2}-R_{3}&={\frac {R_{\text{c}}(R_{\text{a}}+R_{\text{b}})}{R_{\text{T}}}}+{\frac {R_{\text{b}}(R_{\text{a}}+R_{\text{c}})}{R_{\text{T}}}}-{\frac {R_{\text{a}}(R_{\text{b}}+R_{\text{c}})}{R_{\text{T}}}}\\[3pt]{}\Rightarrow 2R_{1}&={\frac {2R_{\text{b}}R_{\text{c}}}{R_{\text{T}}}}\\[3pt]{}\Rightarrow R_{1}&={\frac {R_{\text{b}}R_{\text{c}}}{R_{\text{T}}}}.\end{aligned}}} For completeness: R 1 = R b R c R T {\displaystyle R_{1}={\frac {R_{\text{b}}R_{\text{c}}}{R_{\text{T}}}}} (4) R 2 = R a R c R T {\displaystyle R_{2}={\frac {R_{\text{a}}R_{\text{c}}}{R_{\text{T}}}}} (5) R 3 = R a R b R T {\displaystyle R_{3}={\frac {R_{\text{a}}R_{\text{b}}}{R_{\text{T}}}}} (6) === Y-load to Δ-load transformation equations === Let R T = R a + R b + R c {\displaystyle R_{\text{T}}=R_{\text{a}}+R_{\text{b}}+R_{\text{c}}} . We can write the Δ to Y equations as R 1 = R b R c R T {\displaystyle R_{1}={\frac {R_{\text{b}}R_{\text{c}}}{R_{\text{T}}}}}   (1) R 2 = R a R c R T {\displaystyle R_{2}={\frac {R_{\text{a}}R_{\text{c}}}{R_{\text{T}}}}}   (2) R 3 = R a R b R T . {\displaystyle R_{3}={\frac {R_{\text{a}}R_{\text{b}}}{R_{\text{T}}}}.}   (3) Multiplying the pairs of equations yields R 1 R 2 = R a R b R c 2 R T 2 {\displaystyle R_{1}R_{2}={\frac {R_{\text{a}}R_{\text{b}}R_{\text{c}}^{2}}{R_{\text{T}}^{2}}}}   (4) R 1 R 3 = R a R b 2 R c R T 2 {\displaystyle R_{1}R_{3}={\frac {R_{\text{a}}R_{\text{b}}^{2}R_{\text{c}}}{R_{\text{T}}^{2}}}}   (5) R 2 R 3 = R a 2 R b R c R T 2 {\displaystyle R_{2}R_{3}={\frac {R_{\text{a}}^{2}R_{\text{b}}R_{\text{c}}}{R_{\text{T}}^{2}}}}   (6) and the sum of these equations is R 1 R 2 + R 1 R 3 + R 2 R 3 = R a R b R c 2 + R a R b 2 R c + R a 2 R b R c R T 2 {\displaystyle R_{1}R_{2}+R_{1}R_{3}+R_{2}R_{3}={\frac {R_{\text{a}}R_{\text{b}}R_{\text{c}}^{2}+R_{\text{a}}R_{\text{b}}^{2}R_{\text{c}}+R_{\text{a}}^{2}R_{\text{b}}R_{\text{c}}}{R_{\text{T}}^{2}}}}   (7) Factoring R a R b R c {\displaystyle R_{\text{a}}R_{\text{b}}R_{\text{c}}} from the right side leaves an R T {\displaystyle R_{\text{T}}} in the numerator, which cancels with one R T {\displaystyle R_{\text{T}}} in the denominator. R 1 R 2 + R 1 R 3 + R 2 R 3 = ( R a R b R c ) ( R a + R b + R c ) R T 2 = R a R b R c R T {\displaystyle {\begin{aligned}R_{1}R_{2}+R_{1}R_{3}+R_{2}R_{3}&={}{\frac {\left(R_{\text{a}}R_{\text{b}}R_{\text{c}}\right)\left(R_{\text{a}}+R_{\text{b}}+R_{\text{c}}\right)}{R_{\text{T}}^{2}}}\\&={}{\frac {R_{\text{a}}R_{\text{b}}R_{\text{c}}}{R_{\text{T}}}}\end{aligned}}} (8) Note the similarity between (8) and {(1), (2), (3)}. Dividing (8) by (1) gives R 1 R 2 + R 1 R 3 + R 2 R 3 R 1 = R a R b R c R T R T R b R c = R a , {\displaystyle {\begin{aligned}{\frac {R_{1}R_{2}+R_{1}R_{3}+R_{2}R_{3}}{R_{1}}}&={}{\frac {R_{\text{a}}R_{\text{b}}R_{\text{c}}}{R_{\text{T}}}}{\frac {R_{\text{T}}}{R_{\text{b}}R_{\text{c}}}}\\&={}R_{\text{a}},\end{aligned}}} which is the equation for R a {\displaystyle R_{\text{a}}} .
Dividing (8) by (2) or (3) (expressions for R 2 {\displaystyle R_{2}} or R 3 {\displaystyle R_{3}} ) gives the remaining equations. == Δ to Y transformation of a practical generator == When analyzing balanced three-phase power systems, an equivalent per-phase (or single-phase) circuit is usually analyzed instead because of its simplicity. For that, equivalent wye connections are used for generators, transformers, loads and motors. The stator windings of a practical delta-connected three-phase generator, shown in the following figure, can be converted to an equivalent wye-connected generator, using the following six formulas: Z s1Y = Z s1 Z s3 Z s1 + Z s2 + Z s3 Z s2Y = Z s1 Z s2 Z s1 + Z s2 + Z s3 Z s3Y = Z s2 Z s3 Z s1 + Z s2 + Z s3 V s1Y = ( V s1 Z s1 − V s3 Z s3 ) Z s1Y V s2Y = ( V s2 Z s2 − V s1 Z s1 ) Z s2Y V s3Y = ( V s3 Z s3 − V s2 Z s2 ) Z s3Y {\displaystyle {\begin{aligned}&Z_{\text{s1Y}}={\dfrac {Z_{\text{s1}}\,Z_{\text{s3}}}{Z_{\text{s1}}+Z_{\text{s2}}+Z_{\text{s3}}}}\\[2ex]&Z_{\text{s2Y}}={\dfrac {Z_{\text{s1}}\,Z_{\text{s2}}}{Z_{\text{s1}}+Z_{\text{s2}}+Z_{\text{s3}}}}\\[2ex]&Z_{\text{s3Y}}={\dfrac {Z_{\text{s2}}\,Z_{\text{s3}}}{Z_{\text{s1}}+Z_{\text{s2}}+Z_{\text{s3}}}}\\[2ex]&V_{\text{s1Y}}=\left({\dfrac {V_{\text{s1}}}{Z_{\text{s1}}}}-{\dfrac {V_{\text{s3}}}{Z_{\text{s3}}}}\right)Z_{\text{s1Y}}\\[2ex]&V_{\text{s2Y}}=\left({\dfrac {V_{\text{s2}}}{Z_{\text{s2}}}}-{\dfrac {V_{\text{s1}}}{Z_{\text{s1}}}}\right)Z_{\text{s2Y}}\\[2ex]&V_{\text{s3Y}}=\left({\dfrac {V_{\text{s3}}}{Z_{\text{s3}}}}-{\dfrac {V_{\text{s2}}}{Z_{\text{s2}}}}\right)Z_{\text{s3Y}}\end{aligned}}} The resulting network is the following. The neutral node of the equivalent network is fictitious, and so are the line-to-neutral phasor voltages. During the transformation, the line phasor currents and the line (or line-to-line or phase-to-phase) phasor voltages are not altered. If the actual delta generator is balanced, meaning that the internal phasor voltages have the same magnitude and are phase-shifted by 120° from each other, and that the three complex impedances are the same, then the previous formulas reduce to the following four: Z sY = Z s 3 V s1Y = V s1 3 ∠ ± 30 ∘ V s2Y = V s2 3 ∠ ± 30 ∘ V s3Y = V s3 3 ∠ ± 30 ∘ {\displaystyle {\begin{aligned}&Z_{\text{sY}}={\dfrac {Z_{\text{s}}}{3}}\\&V_{\text{s1Y}}={\dfrac {V_{\text{s1}}}{{\sqrt {3}}\,\angle \pm 30^{\circ }}}\\[2ex]&V_{\text{s2Y}}={\dfrac {V_{\text{s2}}}{{\sqrt {3}}\,\angle \pm 30^{\circ }}}\\[2ex]&V_{\text{s3Y}}={\dfrac {V_{\text{s3}}}{{\sqrt {3}}\,\angle \pm 30^{\circ }}}\end{aligned}}} where for the last three equations, the first sign (+) is used if the phase sequence is positive (abc), and the second sign (−) is used if the phase sequence is negative (acb). == See also == Star-mesh transform Network analysis (electrical circuits) Electrical network, three-phase power, polyphase systems for examples of Y and Δ connections AC motor for a discussion of the Y-Δ starting technique == References == == Notes == == Bibliography == William Stevenson, Elements of Power System Analysis 3rd ed., McGraw Hill, New York, 1975, ISBN 0-07-061285-4 == External links == Star-Triangle Conversion: Knowledge on resistive networks and resistors Calculator of Star-Triangle transform
Wikipedia/Star-triangle_transform
In graph theory and statistics, a graphon (also known as a graph limit) is a symmetric measurable function W : [ 0 , 1 ] 2 → [ 0 , 1 ] {\displaystyle W:[0,1]^{2}\to [0,1]} that is important in the study of dense graphs. Graphons arise both as a natural notion for the limit of a sequence of dense graphs, and as the fundamental defining objects of exchangeable random graph models. Graphons are tied to dense graphs by the following pair of observations: the random graph models defined by graphons give rise to dense graphs almost surely, and, by the regularity lemma, graphons capture the structure of arbitrarily large dense graphs. == Statistical formulation == A graphon is a symmetric measurable function W : [ 0 , 1 ] 2 → [ 0 , 1 ] {\displaystyle W:[0,1]^{2}\to [0,1]} . Usually a graphon is understood as defining an exchangeable random graph model according to the following scheme: Each vertex j {\displaystyle j} of the graph is assigned an independent random value u j ∼ U [ 0 , 1 ] {\displaystyle u_{j}\sim U[0,1]} Edge ( i , j ) {\displaystyle (i,j)} is independently included in the graph with probability W ( u i , u j ) {\displaystyle W(u_{i},u_{j})} . A random graph model is an exchangeable random graph model if and only if it can be defined in terms of a (possibly random) graphon in this way. The model based on a fixed graphon W {\displaystyle W} is sometimes denoted G ( n , W ) {\displaystyle \mathbb {G} (n,W)} , by analogy with the Erdős–Rényi model of random graphs. A graph generated from a graphon W {\displaystyle W} in this way is called a W {\displaystyle W} -random graph. It follows from this definition and the law of large numbers that, if W ≠ 0 {\displaystyle W\neq 0} , exchangeable random graph models are dense almost surely. === Examples === The simplest example of a graphon is W ( x , y ) ≡ p {\displaystyle W(x,y)\equiv p} for some constant p ∈ [ 0 , 1 ] {\displaystyle p\in [0,1]} . In this case the associated exchangeable random graph model is the Erdős–Rényi model G ( n , p ) {\displaystyle G(n,p)} that includes each edge independently with probability p {\displaystyle p} . If we instead start with a graphon that is piecewise constant by: dividing the unit square into k × k {\displaystyle k\times k} blocks, and setting W {\displaystyle W} equal to p l m {\displaystyle p_{lm}} on the ( ℓ , m ) th {\displaystyle (\ell ,m)^{\text{th}}} block, the resulting exchangeable random graph model is the k {\displaystyle k} -community stochastic block model, a generalization of the Erdős–Rényi model. We can interpret this as a random graph model consisting of k {\displaystyle k} distinct Erdős–Rényi graphs with parameters p ℓ ℓ {\displaystyle p_{\ell \ell }} respectively, with bigraphs between them where each possible edge between blocks ( ℓ , ℓ ) {\displaystyle (\ell ,\ell )} and ( m , m ) {\displaystyle (m,m)} is included independently with probability p ℓ m {\displaystyle p_{\ell m}} . Many other popular random graph models can be understood as exchangeable random graph models defined by some graphon; a detailed survey is included in Orbanz and Roy. === Jointly exchangeable adjacency matrices === A random graph of size n {\displaystyle n} can be represented as a random n × n {\displaystyle n\times n} adjacency matrix.
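Concretely, the sampling scheme from the statistical formulation above produces exactly such a matrix. A minimal Python sketch using numpy (the block-constant graphon is an arbitrary illustration, and the function name is not from any library):

import numpy as np

def sample_w_random_graph(n, W, rng):
    # G(n, W): latent values u_j ~ U[0,1]; edge (i, j) included w.p. W(u_i, u_j)
    u = rng.uniform(size=n)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            A[i, j] = A[j, i] = rng.uniform() < W(u[i], u[j])
    return A

# A 2x2 block graphon: two communities, dense within and sparse across
W = lambda x, y: 0.8 if (x < 0.5) == (y < 0.5) else 0.1
A = sample_w_random_graph(200, W, np.random.default_rng(0))
print(A.sum() // 2, "edges")  # dense: the edge count grows quadratically in n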
In order to impose consistency (in the sense of projectivity) between random graphs of different sizes, it is natural to study the sequence of adjacency matrices arising as the upper-left n × n {\displaystyle n\times n} sub-matrices of some infinite array of random variables; this allows us to generate G n {\displaystyle G_{n}} by adding a node to G n − 1 {\displaystyle G_{n-1}} and sampling the edges ( j , n ) {\displaystyle (j,n)} for j < n {\displaystyle j<n} . With this perspective, random graphs are defined as random infinite symmetric arrays ( X i j ) {\displaystyle (X_{ij})} . Following the fundamental importance of exchangeable sequences in classical probability, it is natural to look for an analogous notion in the random graph setting. One such notion is given by jointly exchangeable matrices; i.e. random matrices satisfying ( X i j ) = d ( X σ ( i ) σ ( j ) ) {\displaystyle (X_{ij})\ {\overset {d}{=}}\,(X_{\sigma (i)\sigma (j)})} for all permutations σ {\displaystyle \sigma } of the natural numbers, where = d {\displaystyle {\overset {d}{=}}} means equal in distribution. Intuitively, this condition means that the distribution of the random graph is unchanged by a relabeling of its vertices: that is, the labels of the vertices carry no information. There is a representation theorem for jointly exchangeable random adjacency matrices, analogous to de Finetti’s representation theorem for exchangeable sequences. This is a special case of the Aldous–Hoover theorem for jointly exchangeable arrays and, in this setting, asserts that the random matrix ( X i j ) {\displaystyle (X_{ij})} is generated by: Sample u j ∼ U [ 0 , 1 ] {\displaystyle u_{j}\sim U[0,1]} independently X i j = X j i = 1 {\displaystyle X_{ij}=X_{ji}=1} independently at random with probability W ( u i , u j ) , {\displaystyle W(u_{i},u_{j}),} where W : [ 0 , 1 ] 2 → [ 0 , 1 ] {\displaystyle W:[0,1]^{2}\to [0,1]} is a (possibly random) graphon. That is, a random graph model has a jointly exchangeable adjacency matrix if and only if it is a jointly exchangeable random graph model defined in terms of some graphon. === Graphon estimation === Due to identifiability issues, it is impossible to estimate either the graphon function W {\displaystyle W} or the node latent positions u i , {\displaystyle u_{i},} and there are two main directions of graphon estimation. One direction aims at estimating W {\displaystyle W} up to an equivalence class; the other aims at estimating the probability matrix induced by W {\displaystyle W} . == Analytic formulation == Any graph on n {\displaystyle n} vertices { 1 , 2 , … , n } {\displaystyle \{1,2,\dots ,n\}} can be identified with its adjacency matrix A G {\displaystyle A_{G}} . This matrix corresponds to a step function W G : [ 0 , 1 ] 2 → [ 0 , 1 ] {\displaystyle W_{G}:[0,1]^{2}\to [0,1]} , defined by partitioning [ 0 , 1 ] {\displaystyle [0,1]} into intervals I 1 , I 2 , … , I n {\displaystyle I_{1},I_{2},\dots ,I_{n}} such that I j {\displaystyle I_{j}} has interior ( j − 1 n , j n ) {\displaystyle \left({\frac {j-1}{n}},{\frac {j}{n}}\right)} and for each ( x , y ) ∈ I i × I j {\displaystyle (x,y)\in I_{i}\times I_{j}} , setting W G ( x , y ) {\displaystyle W_{G}(x,y)} equal to the ( i , j ) th {\displaystyle (i,j)^{\text{th}}} entry of A G {\displaystyle A_{G}} . This function W G {\displaystyle W_{G}} is the associated graphon of the graph G {\displaystyle G} .
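This construction is immediate to code, and integrating the resulting step function recovers the edge density of the graph. A Python sketch (K_3 is an arbitrary test case):

import numpy as np

def associated_graphon(A):
    # W_G: partition [0,1] into n equal intervals; on I_i x I_j take entry (i, j) of A
    n = len(A)
    def W(x, y):
        i = min(int(x * n), n - 1)  # index of the interval containing x
        j = min(int(y * n), n - 1)
        return A[i][j]
    return W

A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])  # adjacency matrix of K_3
W = associated_graphon(A)
n = 3
# W_G is constant on cells, so a midpoint sum integrates it exactly
integral = sum(W((i + 0.5) / n, (j + 0.5) / n) for i in range(n) for j in range(n)) / n**2
assert integral == A.sum() / n**2  # 2/3: twice the number of edges over n^2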
In general, if we have a sequence of graphs ( G n ) {\displaystyle (G_{n})} where the number of vertices of G n {\displaystyle G_{n}} goes to infinity, we can analyze the limiting behavior of the sequence by considering the limiting behavior of the functions ( W G n ) {\displaystyle (W_{G_{n}})} . If these graphs converge (according to some suitable definition of convergence), then we expect the limit of these graphs to correspond to the limit of these associated functions. This motivates the definition of a graphon (short for "graph function") as a symmetric measurable function W : [ 0 , 1 ] 2 → [ 0 , 1 ] {\displaystyle W:[0,1]^{2}\to [0,1]} which captures the notion of a limit of a sequence of graphs. It turns out that for sequences of dense graphs, several apparently distinct notions of convergence are equivalent, and under all of them the natural limit object is a graphon. === Examples === ==== Constant graphon ==== Take a sequence ( G n ) {\displaystyle (G_{n})} of Erdős–Rényi random graphs G n = G ( n , p ) {\displaystyle G_{n}=G(n,p)} with some fixed parameter p {\displaystyle p} . Intuitively, as n {\displaystyle n} tends to infinity, the limit of this sequence of graphs is determined solely by the edge density of these graphs. In the space of graphons, it turns out that such a sequence converges almost surely to the constant W ( x , y ) ≡ p {\displaystyle W(x,y)\equiv p} , which captures the above intuition. ==== Half graphon ==== Take the sequence ( H n ) {\displaystyle (H_{n})} of half-graphs, defined by taking H n {\displaystyle H_{n}} to be the bipartite graph on 2 n {\displaystyle 2n} vertices u 1 , u 2 , … , u n {\displaystyle u_{1},u_{2},\dots ,u_{n}} and v 1 , v 2 , … , v n {\displaystyle v_{1},v_{2},\dots ,v_{n}} such that u i {\displaystyle u_{i}} is adjacent to v j {\displaystyle v_{j}} precisely when i ≤ j {\displaystyle i\leq j} . If the vertices are listed in the presented order, then the adjacency matrix A H n {\displaystyle A_{H_{n}}} has two corners of "half square" block matrices filled with ones, with the rest of the entries equal to zero. For example, the adjacency matrix of H 3 {\displaystyle H_{3}} is given by [ 0 0 0 1 1 1 0 0 0 0 1 1 0 0 0 0 0 1 1 0 0 0 0 0 1 1 0 0 0 0 1 1 1 0 0 0 ] . {\displaystyle {\begin{bmatrix}0&0&0&1&1&1\\0&0&0&0&1&1\\0&0&0&0&0&1\\1&0&0&0&0&0\\1&1&0&0&0&0\\1&1&1&0&0&0\end{bmatrix}}.} As n {\displaystyle n} gets large, these corners of ones "smooth" out. Matching this intuition, the sequence ( H n ) {\displaystyle (H_{n})} converges to the half-graphon W {\displaystyle W} defined by W ( x , y ) = 1 {\displaystyle W(x,y)=1} when | x − y | ≥ 1 / 2 {\displaystyle |x-y|\geq 1/2} and W ( x , y ) = 0 {\displaystyle W(x,y)=0} otherwise. ==== Complete bipartite graphon ==== Take the sequence ( K n , n ) {\displaystyle (K_{n,n})} of complete bipartite graphs with equal-sized parts. If we order the vertices by placing all vertices in one part at the beginning and placing the vertices of the other part at the end, the adjacency matrix of ( K n , n ) {\displaystyle (K_{n,n})} looks like a block off-diagonal matrix, with two blocks of ones and two blocks of zeros. For example, the adjacency matrix of K 2 , 2 {\displaystyle K_{2,2}} is given by [ 0 0 1 1 0 0 1 1 1 1 0 0 1 1 0 0 ] .
{\displaystyle {\begin{bmatrix}0&0&1&1\\0&0&1&1\\1&1&0&0\\1&1&0&0\end{bmatrix}}.} As n {\displaystyle n} gets larger, this block structure of the adjacency matrix remains constant, so that this sequence of graphs converges to a "complete bipartite" graphon W {\displaystyle W} defined by W ( x , y ) = 1 {\displaystyle W(x,y)=1} whenever min ( x , y ) ≤ 1 / 2 {\displaystyle \min(x,y)\leq 1/2} and max ( x , y ) > 1 / 2 {\displaystyle \max(x,y)>1/2} , and setting W ( x , y ) = 0 {\displaystyle W(x,y)=0} otherwise. If we instead order the vertices of K n , n {\displaystyle K_{n,n}} by alternating between parts, the adjacency matrix has a chessboard structure of zeros and ones. For example, under this ordering, the adjacency matrix of K 2 , 2 {\displaystyle K_{2,2}} is given by [ 0 1 0 1 1 0 1 0 0 1 0 1 1 0 1 0 ] . {\displaystyle {\begin{bmatrix}0&1&0&1\\1&0&1&0\\0&1&0&1\\1&0&1&0\end{bmatrix}}.} As n {\displaystyle n} gets larger, the adjacency matrices become a finer and finer chessboard. Despite this behavior, we still want the limit of ( K n , n ) {\displaystyle (K_{n,n})} to be unique and result in the graphon from example 3. This means that when we formally define convergence for a sequence of graphs, the definition of a limit should be agnostic to relabelings of the vertices. ==== Limit of W-random graphs ==== Take a random sequence ( G n ) {\displaystyle (G_{n})} of W {\displaystyle W} -random graphs by drawing G n ∼ G ( n , W ) {\displaystyle G_{n}\sim \mathbb {G} (n,W)} for some fixed graphon W {\displaystyle W} . Then, just like in the first example from this section, it turns out that ( G n ) {\displaystyle (G_{n})} converges to W {\displaystyle W} almost surely. === Recovering graph parameters from graphons === Given a graph G {\displaystyle G} with associated graphon W = W G {\displaystyle W=W_{G}} , we can recover graph-theoretic properties and parameters of G {\displaystyle G} by integrating transformations of W {\displaystyle W} . For example, the edge density (i.e., average degree divided by the number of vertices) of G {\displaystyle G} is given by the integral ∫ 0 1 ∫ 0 1 W ( x , y ) d x d y . {\displaystyle \int _{0}^{1}\int _{0}^{1}W(x,y)\;\mathrm {d} x\,\mathrm {d} y.} This is because W {\displaystyle W} is { 0 , 1 } {\displaystyle \{0,1\}} -valued, and each edge ( i , j ) {\displaystyle (i,j)} in G {\displaystyle G} corresponds to a region I i × I j {\displaystyle I_{i}\times I_{j}} of area 1 / n 2 {\displaystyle 1/n^{2}} where W {\displaystyle W} equals 1 {\displaystyle 1} . Similar reasoning shows that the triangle density in G {\displaystyle G} is equal to 1 6 ∫ 0 1 ∫ 0 1 ∫ 0 1 W ( x , y ) W ( y , z ) W ( z , x ) d x d y d z . {\displaystyle {\frac {1}{6}}\int _{0}^{1}\int _{0}^{1}\int _{0}^{1}W(x,y)W(y,z)W(z,x)\;\mathrm {d} x\,\mathrm {d} y\,\mathrm {d} z.} === Notions of convergence === There are many different ways to measure the distance between two graphs. If we are interested in metrics that "preserve" extremal properties of graphs, then we should restrict our attention to metrics that identify random graphs as similar. For example, if we randomly draw two graphs independently from an Erdős–Rényi model G ( n , p ) {\displaystyle G(n,p)} for some fixed p {\displaystyle p} , the distance between these two graphs under a "reasonable" metric should be close to zero with high probability for large n {\displaystyle n} .
Naively, given two graphs on the same vertex set, one might define their distance as the number of edges that must be added or removed to get from one graph to the other, i.e. their edit distance. However, the edit distance does not identify random graphs as similar; in fact, two graphs drawn independently from G ( n , 1 2 ) {\displaystyle G(n,{\tfrac {1}{2}})} have an expected (normalized) edit distance of 1 2 {\displaystyle {\tfrac {1}{2}}} . There are two natural metrics that behave well on dense random graphs in the sense we want. The first is a sampling metric, which says that two graphs are close if their distributions of subgraphs are close. The second is an edge discrepancy metric, which says two graphs are close when their edge densities are close on all their corresponding subsets of vertices. Miraculously, a sequence of graphs converges with respect to one metric precisely when it converges with respect to the other. Moreover, the limit objects under both metrics turn out to be graphons. The equivalence of these two notions of convergence mirrors how various notions of quasirandom graphs are equivalent. ==== Homomorphism densities ==== One way to measure the distance between two graphs G {\displaystyle G} and H {\displaystyle H} is to compare their relative subgraph counts. That is, for each graph F {\displaystyle F} we can compare the number of copies of F {\displaystyle F} in G {\displaystyle G} with the number of copies of F {\displaystyle F} in H {\displaystyle H} . If these numbers are close for every graph F {\displaystyle F} , then intuitively G {\displaystyle G} and H {\displaystyle H} are similar-looking graphs. Rather than dealing directly with subgraphs, however, it turns out to be easier to work with graph homomorphisms. This is fine when dealing with large, dense graphs, since in this scenario the number of subgraphs and the number of graph homomorphisms from a fixed graph are asymptotically equal. Given two graphs F {\displaystyle F} and G {\displaystyle G} , the homomorphism density t ( F , G ) {\displaystyle t(F,G)} of F {\displaystyle F} in G {\displaystyle G} is defined to be the number of graph homomorphisms from F {\displaystyle F} to G {\displaystyle G} , divided by the total number of maps from the vertex set of F {\displaystyle F} to the vertex set of G {\displaystyle G} . In other words, t ( F , G ) {\displaystyle t(F,G)} is the probability that a randomly chosen map from the vertices of F {\displaystyle F} to the vertices of G {\displaystyle G} sends adjacent vertices in F {\displaystyle F} to adjacent vertices in G {\displaystyle G} . Graphons offer a simple way to compute homomorphism densities. Indeed, given a graph G {\displaystyle G} with associated graphon W G {\displaystyle W_{G}} and another graph F {\displaystyle F} , we have t ( F , G ) = ∫ ∏ ( i , j ) ∈ E ( F ) W G ( x i , x j ) { d x i } i ∈ V ( F ) {\displaystyle t(F,G)=\int \prod _{(i,j)\in E(F)}W_{G}(x_{i},x_{j})\;\left\{\mathrm {d} x_{i}\right\}_{i\in V(F)}} where the integral is multidimensional, taken over the unit hypercube [ 0 , 1 ] V ( F ) {\displaystyle [0,1]^{V(F)}} . This follows from the definition of an associated graphon, by considering when the above integrand is equal to 1 {\displaystyle 1} . We can then extend the definition of homomorphism density to arbitrary graphons W {\displaystyle W} , by using the same integral and defining t ( F , W ) = ∫ ∏ ( i , j ) ∈ E ( F ) W ( x i , x j ) { d x i } i ∈ V ( F ) {\displaystyle t(F,W)=\int \prod _{(i,j)\in E(F)}W(x_{i},x_{j})\;\left\{\mathrm {d} x_{i}\right\}_{i\in V(F)}} for any graph F {\displaystyle F} .
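The integral defining t(F, W) is easy to estimate by Monte Carlo. A Python sketch (the half graphon and the triangle are illustrative choices, and the function name is not from any library):

import numpy as np

def hom_density(edges, num_vertices, W, samples=200_000, seed=0):
    # Monte Carlo estimate of t(F, W): average the product of W over the edges of F
    rng = np.random.default_rng(seed)
    x = rng.uniform(size=(samples, num_vertices))
    p = np.ones(samples)
    for i, j in edges:
        p *= W(x[:, i], x[:, j])
    return p.mean()

half = lambda x, y: (np.abs(x - y) >= 0.5).astype(float)  # the half graphon
triangle = [(0, 1), (1, 2), (2, 0)]
print(hom_density(triangle, 3, half))  # exactly 0: the half graphon is triangle-free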
Given this setup, we say a sequence of graphs ( G n ) {\displaystyle (G_{n})} is left-convergent if for every fixed graph F {\displaystyle F} , the sequence of homomorphism densities ( t ( F , G n ) ) {\displaystyle \left(t(F,G_{n})\right)} converges. Although not evident from the definition alone, if ( G n ) {\displaystyle (G_{n})} converges in this sense, then there always exists a graphon W {\displaystyle W} such that for every graph F {\displaystyle F} , we have lim n → ∞ t ( F , G n ) = t ( F , W ) {\displaystyle \lim _{n\to \infty }t(F,G_{n})=t(F,W)} simultaneously. ==== Cut distance ==== Take two graphs G {\displaystyle G} and H {\displaystyle H} on the same vertex set. Because these graphs share the same vertices, one way to measure their distance is to restrict to subsets X , Y {\displaystyle X,Y} of the vertex set, and for each such pair of subsets compare the number of edges e G ( X , Y ) {\displaystyle e_{G}(X,Y)} from X {\displaystyle X} to Y {\displaystyle Y} in G {\displaystyle G} to the number of edges e H ( X , Y ) {\displaystyle e_{H}(X,Y)} between X {\displaystyle X} and Y {\displaystyle Y} in H {\displaystyle H} . If these numbers are similar for every pair of subsets (relative to the total number of vertices), then that suggests G {\displaystyle G} and H {\displaystyle H} are similar graphs. As a preliminary formalization of this notion of distance, for any pair of graphs G {\displaystyle G} and H {\displaystyle H} on the same vertex set V {\displaystyle V} of size | V | = n {\displaystyle |V|=n} , define the labeled cut distance between G {\displaystyle G} and H {\displaystyle H} to be d ◻ ( G , H ) = 1 n 2 max X , Y ⊆ V | e G ( X , Y ) − e H ( X , Y ) | . {\displaystyle d_{\square }(G,H)={\frac {1}{n^{2}}}\max _{X,Y\subseteq V}\left|e_{G}(X,Y)-e_{H}(X,Y)\right|.} In other words, the labeled cut distance encodes the maximum discrepancy of the edge densities between G {\displaystyle G} and H {\displaystyle H} . We can generalize this concept to graphons by expressing the edge density 1 n 2 e G ( X , Y ) {\displaystyle {\tfrac {1}{n^{2}}}e_{G}(X,Y)} in terms of the associated graphon W G {\displaystyle W_{G}} , giving the equality d ◻ ( G , H ) = max X , Y ⊆ V | ∫ I X ∫ I Y W G ( x , y ) − W H ( x , y ) d x d y | {\displaystyle d_{\square }(G,H)=\max _{X,Y\subseteq V}\left|\int _{I_{X}}\int _{I_{Y}}W_{G}(x,y)-W_{H}(x,y)\;\mathrm {d} x\,\mathrm {d} y\right|} where I X , I Y ⊆ [ 0 , 1 ] {\displaystyle I_{X},I_{Y}\subseteq [0,1]} are unions of intervals corresponding to the vertices in X {\displaystyle X} and Y {\displaystyle Y} . Note that this definition can still be used even when the graphs being compared do not share a vertex set. This motivates the following more general definition. Definition 1. For any symmetric, measurable function f : [ 0 , 1 ] 2 → R {\displaystyle f:[0,1]^{2}\to \mathbb {R} } , define the cut norm of f {\displaystyle f} to be the quantity ‖ f ‖ ◻ = sup S , T ⊆ [ 0 , 1 ] | ∫ S ∫ T f ( x , y ) d x d y | {\displaystyle \lVert f\rVert _{\square }=\sup _{S,T\subseteq [0,1]}\left|\int _{S}\int _{T}f(x,y)\;\mathrm {d} x\,\mathrm {d} y\right|} taken over all measurable subsets S , T {\displaystyle S,T} of the unit interval. This captures our earlier notion of labeled cut distance, as we have the equality ‖ W G − W H ‖ ◻ = d ◻ ( G , H ) {\displaystyle \lVert W_{G}-W_{H}\rVert _{\square }=d_{\square }(G,H)} . This distance measure still has one major limitation: it can assign nonzero distance to two isomorphic graphs. 
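For graphs on a small common vertex set, the labeled cut distance can be computed by brute force over all pairs of subsets. A Python sketch (exponential in the number of vertices and purely illustrative):

from itertools import combinations

def labeled_cut_distance(A, B):
    # max over subsets X, Y of |e_A(X, Y) - e_B(X, Y)|, normalized by n^2
    n = len(A)
    subsets = [s for k in range(n + 1) for s in combinations(range(n), k)]
    best = 0
    for X in subsets:
        for Y in subsets:
            d = sum(A[i][j] - B[i][j] for i in X for j in Y)
            best = max(best, abs(d))
    return best / n**2

P4 = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]  # path on 4 vertices
C4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]  # cycle on 4 vertices
print(labeled_cut_distance(P4, C4))  # 0.125, realized by X = Y = {0, 3}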
To make sure isomorphic graphs have distance zero, we should compute the minimum cut norm over all possible "relabellings" of the vertices. This motivates the following definition of the cut distance. Definition 2. For any pair of graphons U {\displaystyle U} and W {\displaystyle W} , define their cut distance to be δ ◻ ( U , W ) = inf φ ‖ U − W φ ‖ ◻ {\displaystyle \delta _{\square }(U,W)=\inf _{\varphi }\lVert U-W^{\varphi }\rVert _{\square }} where W φ ( x , y ) = W ( φ ( x ) , φ ( y ) ) {\displaystyle W^{\varphi }(x,y)=W(\varphi (x),\varphi (y))} is the composition of W {\displaystyle W} with the map φ {\displaystyle \varphi } , and the infimum is taken over all measure-preserving bijections from the unit interval to itself. The cut distance between two graphs is defined to be the cut distance between their associated graphons. We now say that a sequence of graphs ( G n ) {\displaystyle (G_{n})} is convergent under the cut distance if it is a Cauchy sequence under the cut distance δ ◻ {\displaystyle \delta _{\square }} . Although not a direct consequence of the definition, if such a sequence of graphs is Cauchy, then it always converges to some graphon W {\displaystyle W} . ==== Equivalence of convergence ==== As it turns out, for any sequence of graphs ( G n ) {\displaystyle (G_{n})} , left-convergence is equivalent to convergence under the cut distance, and furthermore, the limit graphon W {\displaystyle W} is the same. We can also consider convergence of graphons themselves using the same definitions, and the same equivalence is true. In fact, both notions of convergence are related more strongly through what are called counting lemmas. Counting Lemma. For any pair of graphons U {\displaystyle U} and W {\displaystyle W} , we have | t ( F , U ) − t ( F , W ) | ≤ e ( F ) δ ◻ ( U , W ) {\displaystyle |t(F,U)-t(F,W)|\leq e(F)\delta _{\square }(U,W)} for all graphs F {\displaystyle F} . The name "counting lemma" comes from the bounds that this lemma gives on homomorphism densities t ( F , W ) {\displaystyle t(F,W)} , which are analogous to subgraph counts of graphs. This lemma is a generalization of the graph counting lemma that appears in the field of regularity partitions, and it immediately shows that convergence under the cut distance implies left-convergence. Inverse Counting Lemma. For every real number ε > 0 {\displaystyle \varepsilon >0} , there exist a real number η > 0 {\displaystyle \eta >0} and a positive integer k {\displaystyle k} such that for any pair of graphons U {\displaystyle U} and W {\displaystyle W} with | t ( F , U ) − t ( F , W ) | ≤ η {\displaystyle |t(F,U)-t(F,W)|\leq \eta } for all graphs F {\displaystyle F} satisfying v ( F ) ≤ k {\displaystyle v(F)\leq k} , we must have δ ◻ ( U , W ) < ε {\displaystyle \delta _{\square }(U,W)<\varepsilon } . This lemma shows that left-convergence implies convergence under the cut distance. === The space of graphons === We can make the cut-distance into a metric by taking the set of all graphons and identifying two graphons U ∼ W {\displaystyle U\sim W} whenever δ ◻ ( U , W ) = 0 {\displaystyle \delta _{\square }(U,W)=0} . The resulting space of graphons is denoted W ~ 0 {\displaystyle {\widetilde {\mathcal {W}}}_{0}} , and together with δ ◻ {\displaystyle \delta _{\square }} forms a metric space. This space turns out to be compact. Moreover, it contains the set of all finite graphs, represented by their associated graphons, as a dense subset. 
These observations show that the space of graphons is a completion of the space of graphs with respect to the cut distance. One immediate consequence of this is the following. Corollary 1. For every real number ε > 0 {\displaystyle \varepsilon >0} , there is an integer N {\displaystyle N} such that for every graphon W {\displaystyle W} , there is a graph G {\displaystyle G} with at most N {\displaystyle N} vertices such that δ ◻ ( W , W G ) < ε {\displaystyle \delta _{\square }(W,W_{G})<\varepsilon } . To see why, let G {\displaystyle {\mathcal {G}}} be the set of graphs. Consider for each graph G ∈ G {\displaystyle G\in {\mathcal {G}}} the open ball B ◻ ( G , ε ) {\displaystyle B_{\square }(G,\varepsilon )} containing all graphons W {\displaystyle W} such that δ ◻ ( W , W G ) < ε {\displaystyle \delta _{\square }(W,W_{G})<\varepsilon } . The set of open balls for all graphs covers W ~ 0 {\displaystyle {\widetilde {\mathcal {W}}}_{0}} , so compactness implies that there is a finite subcover { B ◻ ( G , ε ) ∣ G ∈ G 0 } {\displaystyle \{B_{\square }(G,\varepsilon )\mid G\in {\mathcal {G}}_{0}\}} for some finite subset G 0 ⊂ G {\displaystyle {\mathcal {G}}_{0}\subset {\mathcal {G}}} . We can now take N {\displaystyle N} to be the largest number of vertices among the graphs in G 0 {\displaystyle {\mathcal {G}}_{0}} . == Applications == === Regularity lemma === Compactness of the space of graphons ( W ~ 0 , δ ◻ ) {\displaystyle ({\widetilde {\mathcal {W}}}_{0},\delta _{\square })} can be thought of as an analytic formulation of Szemerédi's regularity lemma; in fact, a stronger result than the original lemma. Szemerédi's regularity lemma can be translated into the language of graphons as follows. Define a step function to be a graphon W {\displaystyle W} that is piecewise constant, i.e. for some partition P {\displaystyle {\mathcal {P}}} of [ 0 , 1 ] {\displaystyle [0,1]} , W {\displaystyle W} is constant on S × T {\displaystyle S\times T} for all S , T ∈ P {\displaystyle S,T\in {\mathcal {P}}} . The statement that a graph G {\displaystyle G} has a regularity partition is equivalent to saying that its associated graphon W G {\displaystyle W_{G}} is close to a step function. The proof of compactness requires only the weak regularity lemma: Weak Regularity Lemma for Graphons. For every graphon W {\displaystyle W} and ε > 0 {\displaystyle \varepsilon >0} , there is a step function W ′ {\displaystyle W'} with at most ⌈ 4 1 / ε 2 ⌉ {\displaystyle \lceil 4^{1/\varepsilon ^{2}}\rceil } steps such that ‖ W − W ′ ‖ ◻ ≤ ε {\displaystyle \lVert W-W'\rVert _{\square }\leq \varepsilon } . However, compactness can in turn be used to prove stronger regularity results, such as the strong regularity lemma: Strong Regularity Lemma for Graphons. For every sequence ε = ( ε 0 , ε 1 , … ) {\displaystyle \mathbf {\varepsilon } =(\varepsilon _{0},\varepsilon _{1},\dots )} of positive real numbers, there is a positive integer S {\displaystyle S} such that for every graphon W {\displaystyle W} , there is a graphon W ′ {\displaystyle W'} and a step function U {\displaystyle U} with k < S {\displaystyle k<S} steps such that ‖ W − W ′ ‖ 1 ≤ ε 0 {\displaystyle \lVert W-W'\rVert _{1}\leq \varepsilon _{0}} and ‖ W ′ − U ‖ ◻ ≤ ε k . {\displaystyle \lVert W'-U\rVert _{\square }\leq \varepsilon _{k}.} The proof of the strong regularity lemma is similar in concept to Corollary 1 above.
It turns out that every graphon W {\displaystyle W} can be approximated with a step function U {\displaystyle U} in the L 1 {\displaystyle L_{1}} norm, showing that the balls B 1 ( U , ε 0 ) {\displaystyle B_{1}(U,\varepsilon _{0})} cover W ~ 0 {\displaystyle {\widetilde {\mathcal {W}}}_{0}} . These sets are not open in the δ ◻ {\displaystyle \delta _{\square }} metric, but they can be enlarged slightly to be open. Taking a finite subcover, one can then show that the desired condition follows. === Sidorenko's conjecture === The analytic nature of graphons allows greater flexibility in attacking inequalities related to homomorphisms. For example, Sidorenko's conjecture is a major open problem in extremal graph theory, which asserts that for any graph G {\displaystyle G} on n {\displaystyle n} vertices with average degree p n {\displaystyle pn} (for some p ∈ [ 0 , 1 ] {\displaystyle p\in [0,1]} ) and bipartite graph H {\displaystyle H} on v {\displaystyle v} vertices and e {\displaystyle e} edges, the number of homomorphisms from H {\displaystyle H} to G {\displaystyle G} is at least p e n v {\displaystyle p^{e}n^{v}} . Since this quantity is the expected number of labeled subgraphs of H {\displaystyle H} in a random graph G ( n , p ) {\displaystyle G(n,p)} , the conjecture can be interpreted as the claim that for any bipartite graph H {\displaystyle H} , the random graph achieves (in expectation) the minimum number of copies of H {\displaystyle H} over all graphs with some fixed edge density. Many approaches to Sidorenko's conjecture formulate the problem as an integral inequality on graphons, which then allows the problem to be attacked with other analytic tools. == Generalizations == Graphons are naturally associated with dense simple graphs. There are extensions of this model to dense directed weighted graphs, often referred to as decorated graphons. There are also recent extensions to the sparse graph regime, from both the perspective of random graph models and graph limit theory. == References ==
Wikipedia/Graphon
A digitally controlled oscillator or DCO is used in synthesizers, microcontrollers, and software-defined radios. The name is analogous to "voltage-controlled oscillator". DCOs were designed to overcome the tuning stability limitations of early VCO designs. == Confusion over terminology == The term "digitally controlled oscillator" has been used to describe the combination of a voltage-controlled oscillator driven by a control signal from a digital-to-analog converter, and is also sometimes used to describe numerically controlled oscillators. This article refers specifically to the DCOs used in many synthesizers of the 1980s. These include the Roland Juno-6, Juno-60, Juno-106, JX-3P, JX-8P, and JX-10, the Elka Synthex, the Yamaha DX7, the Oberheim Matrix-6, some instruments by Akai and Kawai, and the recent Prophet '08 and its successor Rev2 by Dave Smith Instruments. == Relation to earlier VCO designs == Many voltage-controlled oscillators for electronic music are based on a capacitor charging linearly in an op-amp integrator configuration. When the capacitor charge reaches a certain level, a comparator generates a reset pulse, which discharges the capacitor, and the cycle begins again. This produces a rising ramp (or sawtooth) waveform, and this type of oscillator core is known as a ramp core. A common DCO design uses a programmable counter IC such as the 8253 instead of a comparator. This provides stable digital pitch generation by using the leading edge of a square wave to derive a reset pulse to discharge the capacitor in the oscillator's ramp core. == Historical context == In the early 1980s, many manufacturers were beginning to produce polyphonic synthesizers. The VCO designs of the time still left something to be desired in terms of tuning stability. Whilst this was an issue for monophonic synthesizers, the limited number of oscillators (typically 3 or fewer) meant that keeping instruments tuned was a manageable task, often performed using dedicated front panel controls. With the advent of polyphony, tuning problems became worse and costs went up, due to the much larger number of oscillators involved (often 16 in an 8-voice instrument like the Yamaha CS-80 from 1977 or Roland Jupiter-8 from 1981). This created a need for a cheap, reliable, and stable oscillator design. Engineers working on the problem looked to the frequency division technology used in electronic organs of the time and the microprocessors and associated chips that were starting to appear, and developed the DCO. The DCO was seen at the time as an improvement over the unstable tuning of VCOs. However, it shared the same ramp core, and the same limited range of waveforms. Although sophisticated analogue waveshaping is possible, the greater simplicity and arbitrary waveforms of digital systems like direct digital synthesis led to most later instruments adopting entirely digital oscillator designs. == Operation == A DCO can be considered as a VCO that is synchronised to an external frequency reference. The reference in this case is the reset pulses. These are produced by a digital counter such as the 8253 chip. The counter acts as a frequency divider, counting pulses from a high-frequency master clock (typically several MHz) and toggling the state of its output when the count reaches some predetermined value. The frequency of the counter's output can thus be defined by the number of pulses counted, and this generates a square wave at the required frequency.
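A toy discrete-time simulation of this counter scheme (a Python sketch; the clock rate and divisor are arbitrary examples, and no particular counter chip is modelled):

def dco_core(master_clock_hz, divisor, n_samples):
    # Counter divides the master clock; each terminal count resets the ramp
    dt = 1.0 / master_clock_hz
    count, ramp, out = 0, 0.0, []
    for _ in range(n_samples):
        count += 1
        ramp += dt  # capacitor charging linearly (unit slope for simplicity)
        if count == divisor:      # counter reaches its programmed value
            count, ramp = 0, 0.0  # reset pulse discharges the capacitor
        out.append(ramp)
    return out  # sawtooth at master_clock_hz / divisor

# 2 MHz / 4545 is approximately 440 Hz: the pitch is set digitally,
# so it does not drift the way a free-running analogue VCO can
saw = dco_core(master_clock_hz=2_000_000, divisor=4545, n_samples=20_000)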
The leading edge of this square wave is used to derive a reset pulse to discharge the capacitor in the oscillator's ramp core. This ensures that the ramp waveform produced has the same frequency as the counter output. == Problems with the design == For a given capacitor charging current, the amplitude of the output waveform is inversely proportional to frequency. In musical terms, this means a waveform an octave higher in pitch has half the amplitude. In order to produce a constant amplitude over the full range of the oscillator, some compensation scheme must be employed. This is often done by controlling the charging current from the same microprocessor that controls the counter reset value. == See also == Direct digital synthesizer Numerically controlled oscillator Voltage-controlled oscillator == References ==
Wikipedia/Digitally_controlled_oscillator
Leeson's equation is an empirical expression that describes an oscillator's phase noise spectrum. Leeson's expression for single-sideband (SSB) phase noise, in dBc/Hz (decibels relative to output level per hertz) and augmented for flicker noise, is: L ( f m ) = 10 log 10 ⁡ [ 2 F k T P s ( ( f 0 2 Q l f m ) 2 + 1 ) ( f c f m + 1 ) ] {\displaystyle L(f_{\text{m}})=10\log _{10}{\bigg [}{\frac {2FkT}{P_{s}}}{\bigg (}{\bigg (}{\frac {f_{0}}{2Q_{\text{l}}f_{\text{m}}}}{\bigg )}^{2}+1{\bigg )}{\bigg (}{\frac {f_{\text{c}}}{f_{\text{m}}}}+1{\bigg )}{\bigg ]}} where f0 is the output frequency, Ql is the loaded quality factor, fm is the offset from the output frequency (Hz), fc is the 1/f corner frequency, F is the noise factor of the amplifier, k is the Boltzmann constant, T is absolute temperature, and Ps is the available power at the sustaining amplifier input. There is often misunderstanding around Leeson's equation, even in textbooks. In the 1966 paper, Leeson stated correctly that "Ps is the signal level at the oscillator active element input" (often now referred to as the power through the resonator; strictly speaking, it is the available power at the amplifier input). F is the device noise factor; however, this needs to be measured at the operating power level. The common misunderstanding, that Ps is the oscillator output level, may result from derivations that are not completely general. In 1982, W. P. Robins (IEE Publication "Phase noise in signal sources") correctly showed that the Leeson equation (in the −20 dB/decade region) is not just an empirical rule, but a result that follows from a linear analysis of an oscillator circuit. However, a constraint used in his circuit was that the oscillator output power was approximately equal to the active device input power. The Leeson equation is presented in various forms. If fc is set to zero in the above equation, the equation represents a linear analysis of a feedback oscillator in the general case (with flicker noise not included); it is for this result, showing a −20 dB/decade slope versus offset frequency, that Leeson is most recognised. If used correctly, the Leeson equation gives a useful prediction of oscillator performance in this range. If a value for fc is included, the equation also shows a curve fit for the flicker noise. The fc for an amplifier depends on the actual configuration used, because radio-frequency and low-frequency negative feedback can have an effect on fc. For accurate results, therefore, fc must be determined from added-noise measurements on the amplifier at radio frequency, with the actual circuit configuration to be used in the oscillator. Evidence that Ps is the amplifier input power (often contradicted or left very unclear in textbooks) can be found in the derivation in further reading, which also shows experimental results; Enrico Rubiola's The Leeson Effect also shows this in a different form. == References == == Further reading == Rubiola, Enrico (2008), Phase noise and frequency stability in oscillators, Cambridge, ISBN 978-0-521-15328-7 Rohde, Ulrich L. (20 October 2011), Noise in Oscillators with Active Inductors (PDF), p. 9 Brooking, P, Derivation of Leeson's equation https://www.youtube.com/channel/UCzJBRg4C5dbjP_4PWWRX4Dg == External links == Ali M. Niknejad, Oscillator Phase Noise, University of California, Berkeley, 2009 http://rfic.eecs.berkeley.edu/~niknejad/ee242/pdf/eecs242_lect22_phasenoise.pdf, stating "Leeson modified the above noise model to account for several experimentally observed phenomena".
Also, "In Leeson’s model, the factor F is a fitting parameter rather than arising from any physical concepts. It’s tempting to call this the oscillator "noise figure", but this is misleading." John van der Merwe, An Experimental Investigation into the Validity of Leeson's Equation for Low Phase Noise Oscillator Design, December 2010, https://scholar.sun.ac.za/bitstream/handle/10019.1/5424/vandermerwe_experimental_2010.pdf and http://www.researchgate.net/publication/48339964_An_experimental_investigation_into_the_validity_of_Leeson's_equation_for_low_phase_noise_oscillator_design Enrico Rubiola, The Leeson effect, arXiv:physics/0502143 . Superseded by Rubiola 2008. MIT OpenCourseWare Lecture Notes - High Speed Communication Circuits, Noise in Voltage Controlled Oscillators, https://ocw.mit.edu/courses/6-776-high-speed-communication-circuits-spring-2005/resources/lec17/
Wikipedia/Leeson's_equation
A numerically controlled oscillator (NCO) is a digital signal generator which creates a synchronous (i.e., clocked), discrete-time, discrete-valued representation of a waveform, usually sinusoidal. NCOs are often used in conjunction with a digital-to-analog converter (DAC) at the output to create a direct digital synthesizer (DDS). Numerically controlled oscillators offer several advantages over other types of oscillators in terms of agility, accuracy, stability and reliability. NCOs are used in many communications systems including digital up/down converters used in 3G wireless and software radio systems, digital phase-locked loops, radar systems, drivers for optical or acoustic transmissions, and multilevel FSK/PSK modulators/demodulators. == Operation == An NCO generally consists of two parts: A phase accumulator (PA), which adds to the value held at its output a frequency control value at each clock sample. A phase-to-amplitude converter (PAC), which uses the phase accumulator output word (phase word) usually as an index into a waveform look-up table (LUT) to provide a corresponding amplitude sample. Sometimes interpolation is used with the look-up table to provide better accuracy and reduce phase error noise. Other methods of converting phase to amplitude, including mathematical algorithms such as power series can be used, particularly in a software NCO. When clocked, the phase accumulator (PA) creates a modulo-2N sawtooth waveform which is then converted by the phase-to-amplitude converter (PAC) to a sampled sinusoid, where N is the number of bits carried in the phase accumulator. N sets the NCO frequency resolution and is normally much larger than the number of bits defining the memory space of the PAC look-up table. If the PAC capacity is 2M, the PA output word must be truncated to M bits as shown in Figure 1. However, the truncated bits can be used for interpolation. The truncation of the phase output word does not affect the frequency accuracy but produces a time-varying periodic phase error which is a primary source of spurious products. Another spurious product generation mechanism is finite word length effects of the PAC output (amplitude) word. The frequency accuracy relative to the clock frequency is limited only by the precision of the arithmetic used to compute the phase. NCOs are phase- and frequency-agile, and can be trivially modified to produce a phase-modulated or frequency-modulated output by summation at the appropriate node, or provide quadrature outputs as shown in the figure. == Phase accumulator == A binary phase accumulator consists of an N-bit binary adder and a register configured as shown in Figure 1. Each clock cycle produces a new N-bit output consisting of the previous output obtained from the register summed with the frequency control word (FCW) which is constant for a given output frequency. The resulting output waveform is a staircase with step size Δ F {\displaystyle \Delta F} , the integer value of the FCW. In some configurations, the phase output is taken from the output of the register, which introduces one clock cycle of latency but allows the adder to operate at a higher clock rate. The adder is designed to overflow when the sum of the absolute value of its operands exceeds its capacity (2N−1). The overflow bit is discarded so the output word width is always equal to its input word width.
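A minimal software model of the phase accumulator and a sine look-up PAC (a Python sketch; the word widths and clock rate are arbitrary examples):

import math

N = 32        # phase accumulator width in bits
M = 10        # phase word retained for the PAC, in bits
f_clock = 100e6
f_out = 1e6
fcw = round(f_out * 2**N / f_clock)  # frequency control word

lut = [math.sin(2 * math.pi * k / 2**M) for k in range(2**M)]  # PAC look-up table

phase, samples = 0, []
for _ in range(1000):
    phase = (phase + fcw) % 2**N           # N-bit adder: the overflow bit is discarded
    samples.append(lut[phase >> (N - M)])  # truncate to M bits and index the LUT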
The remainder ϕ n {\displaystyle \phi _{n}} , called the residual, is stored in the register and the cycle repeats, starting this time from ϕ n {\displaystyle \phi _{n}} (see figure 2). Since a phase accumulator is a finite-state machine, eventually the residual at some sample K must return to the initial value ϕ 0 {\displaystyle \phi _{0}} . The interval K is referred to as the grand repetition rate (GRR) given by GRR = 2 N GCD ( Δ F , 2 N ) {\displaystyle {\mbox{GRR}}={\frac {2^{N}}{{\mbox{GCD}}(\Delta F,2^{N})}}} where GCD is the greatest common divisor function. The GRR represents the true periodicity for a given Δ F {\displaystyle \Delta F} , which for a high-resolution NCO can be very long. Usually we are more interested in the operating frequency determined by the average overflow rate, given by F o u t = Δ F 2 N F c l o c k {\displaystyle F_{out}={\frac {\Delta F}{2^{N}}}F_{clock}} (1) The frequency resolution, defined as the smallest possible incremental change in frequency, is given by F r e s = F c l o c k 2 N {\displaystyle F_{res}={\frac {F_{clock}}{2^{N}}}} (2) Equation (1) shows that the phase accumulator can be thought of as a programmable non-integer frequency divider of divide ratio Δ F / 2 N {\displaystyle \Delta F/2^{N}} . == Phase-to-amplitude converter == The phase-amplitude converter creates the sample-domain waveform from the truncated phase output word received from the PA. The PAC can be a simple read-only memory containing 2M contiguous samples of the desired output waveform, which typically is a sinusoid. Often though, various tricks are employed to reduce the amount of memory required. These include various trigonometric expansions, trigonometric approximations and methods which take advantage of the quadrature symmetry exhibited by sinusoids. Alternatively, the PAC may consist of random-access memory which can be filled as desired to create an arbitrary waveform generator. == Spurious products == Spurious products are the result of harmonic or non-harmonic distortion in the creation of the output waveform due to non-linear numerical effects in the signal processing chain. Only numerical errors are covered here. For other distortion mechanisms created in the digital-to-analog converter, see the corresponding section in the direct-digital synthesizer article. === Phase truncation spurs === The number of phase accumulator bits of an NCO (N) is usually between 16 and 64. If the PA output word were used directly to index the PAC look-up table, an untenably high storage capacity in the ROM would be required. As such, the PA output word must be truncated to span a reasonable memory space. Truncation of the phase word causes phase modulation of the output sinusoid, which introduces non-harmonic distortion in proportion to the number of bits truncated. The number of spurious products created by this distortion is given by: n W = 2 W GCD ( Δ F , 2 W ) − 1 {\displaystyle n_{W}={\frac {2^{W}}{{\mbox{GCD}}(\Delta F,2^{W})}}-1} (3) where W is the number of bits truncated. In calculating the spurious-free dynamic range, we are interested in the spurious product with the largest amplitude relative to the carrier output level given by: ζ m a x = 2 − M π GCD ( Δ F , 2 W ) sin ⁡ ( π ⋅ 2 − P GCD ( Δ F , 2 W ) ) {\displaystyle \zeta _{max}=2^{-M}{\frac {\pi {\mbox{GCD}}(\Delta F,2^{W})}{\sin \left(\pi \cdot 2^{-P}{\mbox{GCD}}(\Delta F,2^{W})\right)}}} where P is the size of the phase-to-amplitude converter's lookup table in bits, i.e., M in Figure 1. For W > 4, ζ m a x ≈ − 6.02 ⋅ P dBc .
{\displaystyle \zeta _{max}\approx -6.02\cdot P\;{\mbox{dBc}}.} Another related spurious-generation mechanism is the slight modulation due to the GRR outlined above. The amplitude of these spurs is low for large N, and their frequency is generally too low to be detectable, but they may cause issues for some applications. One way to reduce the truncation error in the address lookup is to use several smaller lookup tables in parallel, using the upper bits to index into the tables and the lower bits to weight them for linear or quadratic interpolation. For example, a 24-bit phase accumulator can address two 16-bit LUTs: one indexed by the truncated 16 MSBs and one by that value plus 1, with the 8 LSBs used as linear-interpolation weights. (One could instead use three LUTs and interpolate quadratically.) This can result in decreased distortion for the same amount of memory at the cost of some multipliers. === Amplitude truncation spurs === Another source of spurious products is the amplitude quantization of the sampled waveform contained in the PAC look-up table(s). If the number of DAC bits is P, the AM spur level is approximately equal to −6.02 P − 1.76 dBc. === Mitigation techniques === Phase truncation spurs can be reduced substantially by the introduction of white Gaussian noise prior to truncation. The so-called dither noise is summed into the lower W+1 bits of the PA output word to linearize the truncation operation. Often the improvement can be achieved without penalty because the DAC noise floor tends to dominate system performance. Amplitude truncation spurs cannot be mitigated in this fashion. Introduction of noise into the static values held in the PAC ROMs would not eliminate the cyclicality of the truncation error terms and thus would not achieve the desired effect. == See also == Direct digital synthesis (DDS) Digital-to-analog converter (DAC) Digitally controlled oscillator (DCO) == References ==
Wikipedia/Numerically-controlled_oscillator
In statistics, the Fisher transformation (or Fisher z-transformation) of a Pearson correlation coefficient is its inverse hyperbolic tangent (artanh). When the sample correlation coefficient r is near 1 or −1, its distribution is highly skewed, which makes it difficult to estimate confidence intervals and apply tests of significance for the population correlation coefficient ρ. The Fisher transformation solves this problem by yielding a variable that is approximately normally distributed, with a variance that is stable over different values of r. == Definition == Given a set of N bivariate sample pairs (Xi, Yi), i = 1, ..., N, the sample correlation coefficient r is given by r = cov ⁡ ( X , Y ) σ X σ Y = ∑ i = 1 N ( X i − X ¯ ) ( Y i − Y ¯ ) ∑ i = 1 N ( X i − X ¯ ) 2 ∑ i = 1 N ( Y i − Y ¯ ) 2 . {\displaystyle r={\frac {\operatorname {cov} (X,Y)}{\sigma _{X}\sigma _{Y}}}={\frac {\sum _{i=1}^{N}(X_{i}-{\bar {X}})(Y_{i}-{\bar {Y}})}{{\sqrt {\sum _{i=1}^{N}(X_{i}-{\bar {X}})^{2}}}{\sqrt {\sum _{i=1}^{N}(Y_{i}-{\bar {Y}})^{2}}}}}.} Here cov ⁡ ( X , Y ) {\displaystyle \operatorname {cov} (X,Y)} stands for the covariance between the variables X {\displaystyle X} and Y {\displaystyle Y} and σ {\displaystyle \sigma } stands for the standard deviation of the respective variable. Fisher's z-transformation of r is defined as z = 1 2 ln ⁡ ( 1 + r 1 − r ) = artanh ⁡ ( r ) , {\displaystyle z={1 \over 2}\ln \left({1+r \over 1-r}\right)=\operatorname {artanh} (r),} where "ln" is the natural logarithm function and "artanh" is the inverse hyperbolic tangent function. If (X, Y) has a bivariate normal distribution with correlation ρ and the pairs (Xi, Yi) are independent and identically distributed, then z is approximately normally distributed with mean 1 2 ln ⁡ ( 1 + ρ 1 − ρ ) , {\displaystyle {1 \over 2}\ln \left({{1+\rho } \over {1-\rho }}\right),} and a standard deviation which does not depend on the value of the correlation ρ (that is, the transformation is variance-stabilizing), 1 N − 3 , {\displaystyle {1 \over {\sqrt {N-3}}},} where N is the sample size and ρ is the true correlation coefficient. This transformation, and its inverse r = exp ⁡ ( 2 z ) − 1 exp ⁡ ( 2 z ) + 1 = tanh ⁡ ( z ) , {\displaystyle r={\frac {\exp(2z)-1}{\exp(2z)+1}}=\operatorname {tanh} (z),} can be used to construct a large-sample confidence interval for ρ using standard normal theory and derivations. See also application to partial correlation. == Derivation == Hotelling gives a concise derivation of the Fisher transformation. To derive the Fisher transformation, one starts by considering an arbitrary increasing, twice-differentiable function of r {\displaystyle r} , say G ( r ) {\displaystyle G(r)} . Finding the first term in the large- N {\displaystyle N} expansion of the corresponding skewness κ 3 {\displaystyle \kappa _{3}} results in κ 3 = 6 ρ − 3 ( 1 − ρ 2 ) G ′ ′ ( ρ ) / G ′ ( ρ ) N + O ( N − 3 / 2 ) . {\displaystyle \kappa _{3}={\frac {6\rho -3(1-\rho ^{2})G^{\prime \prime }(\rho )/G^{\prime }(\rho )}{\sqrt {N}}}+O(N^{-3/2}).} Setting κ 3 = 0 {\displaystyle \kappa _{3}=0} and solving the corresponding differential equation for G {\displaystyle G} yields the inverse hyperbolic tangent function G ( ρ ) = artanh ⁡ ( ρ ) {\displaystyle G(\rho )=\operatorname {artanh} (\rho )} .
Similarly expanding the mean m and variance v of artanh ⁡ ( r ) {\displaystyle \operatorname {artanh} (r)} , one gets m = artanh ⁡ ( ρ ) + ρ 2 N + O ( N − 2 ) {\displaystyle \operatorname {artanh} (\rho )+{\frac {\rho }{2N}}+O(N^{-2})} and v = 1 N + 6 − ρ 2 2 N 2 + O ( N − 3 ) {\displaystyle {\frac {1}{N}}+{\frac {6-\rho ^{2}}{2N^{2}}}+O(N^{-3})} respectively. The extra terms are not part of the usual Fisher transformation. For large values of ρ {\displaystyle \rho } and small values of N {\displaystyle N} they represent a large improvement of accuracy at minimal cost, although they greatly complicate the computation of the inverse – a closed-form expression is not available. The near-constant variance of the transformation is the result of removing its skewness – the actual improvement is achieved by the latter, not by the extra terms. Including the extra terms, i.e., computing (z − m)/√v, yields: z − artanh ⁡ ( ρ ) − ρ 2 N 1 N + 6 − ρ 2 2 N 2 {\displaystyle {\frac {z-\operatorname {artanh} (\rho )-{\frac {\rho }{2N}}}{\sqrt {{\frac {1}{N}}+{\frac {6-\rho ^{2}}{2N^{2}}}}}}} which has, to an excellent approximation, a standard normal distribution. == Application == The application of Fisher's transformation can be enhanced using a software calculator as shown in the figure. Assuming that the r-squared value found is 0.80, that there are 30 data pairs, and accepting a 90% confidence interval, the r-squared value in another random sample from the same population may range from 0.656 to 0.888. When r-squared is outside this range, the population is considered to be different. == Discussion == The Fisher transformation is an approximate variance-stabilizing transformation for r when X and Y follow a bivariate normal distribution. This means that the variance of z is approximately constant for all values of the population correlation coefficient ρ. Without the Fisher transformation, the variance of r grows smaller as |ρ| gets closer to 1. Since the Fisher transformation is approximately the identity function when |r| < 1/2, it is sometimes useful to remember that the variance of r is well approximated by 1/N as long as |ρ| is not too large and N is not too small. This is related to the fact that the asymptotic variance of r is 1 for bivariate normal data. The behavior of this transform has been extensively studied since Fisher introduced it in 1915. Fisher himself found the exact distribution of z for data from a bivariate normal distribution in 1921; Gayen in 1951 determined the exact distribution of z for data from a bivariate Type A Edgeworth distribution. Hotelling in 1953 calculated the Taylor series expressions for the moments of z and several related statistics and Hawkins in 1989 discovered the asymptotic distribution of z for data from a distribution with bounded fourth moments. An alternative to the Fisher transformation is to use the exact confidence distribution density for ρ given by π ( ρ | r ) = Γ ( ν + 1 ) 2 π Γ ( ν + 1 2 ) ( 1 − r 2 ) ν − 1 2 ⋅ ( 1 − ρ 2 ) ν − 2 2 ⋅ ( 1 − r ρ ) 1 − 2 ν 2 F ( 3 2 , − 1 2 ; ν + 1 2 ; 1 + r ρ 2 ) {\displaystyle \pi (\rho |r)={\frac {\Gamma (\nu +1)}{{\sqrt {2\pi }}\Gamma (\nu +{\frac {1}{2}})}}(1-r^{2})^{\frac {\nu -1}{2}}\cdot (1-\rho ^{2})^{\frac {\nu -2}{2}}\cdot (1-r\rho )^{\frac {1-2\nu }{2}}F\!\left({\frac {3}{2}},-{\frac {1}{2}};\nu +{\frac {1}{2}};{\frac {1+r\rho }{2}}\right)} where F {\displaystyle F} is the Gaussian hypergeometric function and ν = N − 1 > 1 {\displaystyle \nu =N-1>1} .
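The confidence-interval recipe described above is short enough to state in code. The sketch below is an added illustration, not part of the article: transform r with artanh, form a normal interval with standard deviation 1/√(N − 3), and map the endpoints back with tanh. It uses a plain correlation r = 0.80 (note that the worked example above transforms an r-squared value instead, so its interval differs slightly).

import numpy as np
from scipy import stats

def fisher_ci(r, n, confidence=0.90):
    """Approximate confidence interval for the population correlation rho,
    via Fisher's z-transformation (assumes bivariate normal data)."""
    z = np.arctanh(r)          # Fisher z-transform of the sample correlation
    se = 1.0 / np.sqrt(n - 3)  # approximate standard deviation of z
    half = stats.norm.ppf(0.5 + confidence / 2) * se
    return np.tanh(z - half), np.tanh(z + half)

lo, hi = fisher_ci(0.80, 30)
print(f"90% CI for rho: ({lo:.3f}, {hi:.3f})")  # about (0.654, 0.889)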
== Other uses == While the Fisher transformation is mainly associated with the Pearson product-moment correlation coefficient for bivariate normal observations, it can also be applied to Spearman's rank correlation coefficient in more general cases. A similar result for the asymptotic distribution applies, but with a minor adjustment factor: see the cited article for details. == See also == Data transformation (statistics) Meta-analysis (this transformation is used in meta-analysis for stabilizing the variance) Partial correlation Pearson correlation coefficient § Inference == References == == External links == R implementation
Wikipedia/Fisher_z-transformation
In mathematics, a generating function is a representation of an infinite sequence of numbers as the coefficients of a formal power series. Generating functions are often expressed in closed form (rather than as a series), by some expression involving operations on the formal series. There are various types of generating functions, including ordinary generating functions, exponential generating functions, Lambert series, Bell series, and Dirichlet series. Every sequence in principle has a generating function of each type (except that Lambert and Dirichlet series require indices to start at 1 rather than 0), but the ease with which they can be handled may differ considerably. The particular generating function, if any, that is most useful in a given context will depend upon the nature of the sequence and the details of the problem being addressed. Generating functions are sometimes called generating series, in that a series of terms can be said to be the generator of its sequence of term coefficients. == History == Generating functions were first introduced by Abraham de Moivre in 1730, in order to solve the general linear recurrence problem. George Pólya writes in Mathematics and plausible reasoning: The name "generating function" is due to Laplace. Yet, without giving it a name, Euler used the device of generating functions long before Laplace [..]. He applied this mathematical tool to several problems in Combinatory Analysis and the Theory of Numbers. == Definition == Pólya illustrates the concept with a simile: "A generating function is a device somewhat similar to a bag. Instead of carrying many little objects detachedly, which could be embarrassing, we put them all in a bag, and then we have only one object to carry, the bag." Herbert Wilf offers another: "A generating function is a clothesline on which we hang up a sequence of numbers for display." === Convergence === Unlike an ordinary series, the formal power series is not required to converge: in fact, the generating function is not actually regarded as a function, and the "variable" remains an indeterminate. One can generalize to formal power series in more than one indeterminate, to encode information about infinite multi-dimensional arrays of numbers. Thus generating functions are not functions in the formal sense of a mapping from a domain to a codomain. These expressions in terms of the indeterminate x may involve arithmetic operations, differentiation with respect to x and composition with (i.e., substitution into) other generating functions; since these operations are also defined for functions, the result looks like a function of x. Indeed, the closed form expression can often be interpreted as a function that can be evaluated at (sufficiently small) concrete values of x, and which has the formal series as its series expansion; this explains the designation "generating functions". However, such an interpretation is not required to be possible, because formal series are not required to give a convergent series when a nonzero numeric value is substituted for x. === Limitations === Not all expressions that are meaningful as functions of x are meaningful as expressions designating formal series; for example, negative and fractional powers of x are examples of functions that do not have a corresponding formal power series. == Types == === Ordinary generating function (OGF) === When the term generating function is used without qualification, it is usually taken to mean an ordinary generating function. The ordinary generating function of a sequence an is: G ( a n ; x ) = ∑ n = 0 ∞ a n x n .
{\displaystyle G(a_{n};x)=\sum _{n=0}^{\infty }a_{n}x^{n}.} If an is the probability mass function of a discrete random variable, then its ordinary generating function is called a probability-generating function. === Exponential generating function (EGF) === The exponential generating function of a sequence an is EG ⁡ ( a n ; x ) = ∑ n = 0 ∞ a n x n n ! . {\displaystyle \operatorname {EG} (a_{n};x)=\sum _{n=0}^{\infty }a_{n}{\frac {x^{n}}{n!}}.} Exponential generating functions are generally more convenient than ordinary generating functions for combinatorial enumeration problems that involve labelled objects. Another benefit of exponential generating functions is that they are useful in transferring linear recurrence relations to the realm of differential equations. For example, take the Fibonacci sequence {fn} that satisfies the linear recurrence relation fn+2 = fn+1 + fn. The corresponding exponential generating function has the form EF ⁡ ( x ) = ∑ n = 0 ∞ f n n ! x n {\displaystyle \operatorname {EF} (x)=\sum _{n=0}^{\infty }{\frac {f_{n}}{n!}}x^{n}} and its derivatives can readily be shown to satisfy the differential equation EF″(x) = EF′(x) + EF(x) as a direct analogue with the recurrence relation above. In this view, the factorial term n! is merely a counter-term to normalise the derivative operator acting on xn. === Poisson generating function === The Poisson generating function of a sequence an is PG ⁡ ( a n ; x ) = ∑ n = 0 ∞ a n e − x x n n ! = e − x EG ⁡ ( a n ; x ) . {\displaystyle \operatorname {PG} (a_{n};x)=\sum _{n=0}^{\infty }a_{n}e^{-x}{\frac {x^{n}}{n!}}=e^{-x}\,\operatorname {EG} (a_{n};x).} === Lambert series === The Lambert series of a sequence an is LG ⁡ ( a n ; x ) = ∑ n = 1 ∞ a n x n 1 − x n . {\displaystyle \operatorname {LG} (a_{n};x)=\sum _{n=1}^{\infty }a_{n}{\frac {x^{n}}{1-x^{n}}}.} Note that in a Lambert series the index n starts at 1, not at 0, as the first term would otherwise be undefined. The Lambert series coefficients in the power series expansions b n := [ x n ] LG ⁡ ( a n ; x ) {\displaystyle b_{n}:=[x^{n}]\operatorname {LG} (a_{n};x)} for integers n ≥ 1 are related by the divisor sum b n = ∑ d | n a d . {\displaystyle b_{n}=\sum _{d|n}a_{d}.} The main article provides several more classical, or at least well-known examples related to special arithmetic functions in number theory. As an example of a Lambert series identity not given in the main article, we can show that for |x|, |xq| < 1 we have that ∑ n = 1 ∞ q n x n 1 − x n = ∑ n = 1 ∞ q n x n 2 1 − q x n + ∑ n = 1 ∞ q n x n ( n + 1 ) 1 − x n , {\displaystyle \sum _{n=1}^{\infty }{\frac {q^{n}x^{n}}{1-x^{n}}}=\sum _{n=1}^{\infty }{\frac {q^{n}x^{n^{2}}}{1-qx^{n}}}+\sum _{n=1}^{\infty }{\frac {q^{n}x^{n(n+1)}}{1-x^{n}}},} where we have the special case identity for the generating function of the divisor function, d(n) ≡ σ0(n), given by ∑ n = 1 ∞ x n 1 − x n = ∑ n = 1 ∞ x n 2 ( 1 + x n ) 1 − x n . {\displaystyle \sum _{n=1}^{\infty }{\frac {x^{n}}{1-x^{n}}}=\sum _{n=1}^{\infty }{\frac {x^{n^{2}}\left(1+x^{n}\right)}{1-x^{n}}}.} === Bell series === The Bell series of a sequence an is an expression in terms of both an indeterminate x and a prime p and is given by: BG p ⁡ ( a n ; x ) = ∑ n = 0 ∞ a p n x n . {\displaystyle \operatorname {BG} _{p}(a_{n};x)=\sum _{n=0}^{\infty }a_{p^{n}}x^{n}.} === Dirichlet series generating functions (DGFs) === Formal Dirichlet series are often classified as generating functions, although they are not strictly formal power series. 
The Dirichlet series generating function of a sequence an is: DG ⁡ ( a n ; s ) = ∑ n = 1 ∞ a n n s . {\displaystyle \operatorname {DG} (a_{n};s)=\sum _{n=1}^{\infty }{\frac {a_{n}}{n^{s}}}.} The Dirichlet series generating function is especially useful when an is a multiplicative function, in which case it has an Euler product expression in terms of the function's Bell series: DG ⁡ ( a n ; s ) = ∏ p BG p ⁡ ( a n ; p − s ) . {\displaystyle \operatorname {DG} (a_{n};s)=\prod _{p}\operatorname {BG} _{p}(a_{n};p^{-s})\,.} If an is a Dirichlet character then its Dirichlet series generating function is called a Dirichlet L-series. We also have a relation between the pair of coefficients in the Lambert series expansions above and their DGFs. Namely, we can prove that: [ x n ] LG ⁡ ( a n ; x ) = b n {\displaystyle [x^{n}]\operatorname {LG} (a_{n};x)=b_{n}} if and only if DG ⁡ ( a n ; s ) ζ ( s ) = DG ⁡ ( b n ; s ) , {\displaystyle \operatorname {DG} (a_{n};s)\zeta (s)=\operatorname {DG} (b_{n};s),} where ζ(s) is the Riemann zeta function. The sequence ak generated by a Dirichlet series generating function (DGF) corresponding to: DG ⁡ ( a k ; s ) = ζ ( s ) m {\displaystyle \operatorname {DG} (a_{k};s)=\zeta (s)^{m}} has the ordinary generating function: ∑ k = 1 k = n a k x k = x + ( m 1 ) ∑ 2 ≤ a ≤ n x a + ( m 2 ) ∑ a = 2 ∞ ∑ b = 2 ∞ a b ≤ n x a b + ( m 3 ) ∑ a = 2 ∞ ∑ c = 2 ∞ ∑ b = 2 ∞ a b c ≤ n x a b c + ( m 4 ) ∑ a = 2 ∞ ∑ b = 2 ∞ ∑ c = 2 ∞ ∑ d = 2 ∞ a b c d ≤ n x a b c d + ⋯ {\displaystyle \sum _{k=1}^{k=n}a_{k}x^{k}=x+{\binom {m}{1}}\sum _{2\leq a\leq n}x^{a}+{\binom {m}{2}}{\underset {ab\leq n}{\sum _{a=2}^{\infty }\sum _{b=2}^{\infty }}}x^{ab}+{\binom {m}{3}}{\underset {abc\leq n}{\sum _{a=2}^{\infty }\sum _{c=2}^{\infty }\sum _{b=2}^{\infty }}}x^{abc}+{\binom {m}{4}}{\underset {abcd\leq n}{\sum _{a=2}^{\infty }\sum _{b=2}^{\infty }\sum _{c=2}^{\infty }\sum _{d=2}^{\infty }}}x^{abcd}+\cdots } === Polynomial sequence generating functions === The idea of generating functions can be extended to sequences of other objects. Thus, for example, polynomial sequences of binomial type are generated by: e x f ( t ) = ∑ n = 0 ∞ p n ( x ) n ! t n {\displaystyle e^{xf(t)}=\sum _{n=0}^{\infty }{\frac {p_{n}(x)}{n!}}t^{n}} where pn(x) is a sequence of polynomials and f(t) is a function of a certain form. Sheffer sequences are generated in a similar way. See the main article generalized Appell polynomials for more information. Examples of polynomial sequences generated by more complex generating functions include: Appell polynomials Chebyshev polynomials Difference polynomials Generalized Appell polynomials q-difference polynomials === Other generating functions === Other sequences generated by more complex generating functions include: Double exponential generating functions e.g. the Bell numbers Hadamard products of generating functions and diagonal generating functions, and their corresponding integral transformations ==== Convolution polynomials ==== Knuth's article titled "Convolution Polynomials" defines a generalized class of convolution polynomial sequences by their special generating functions of the form F ( z ) x = exp ⁡ ( x log ⁡ F ( z ) ) = ∑ n = 0 ∞ f n ( x ) z n , {\displaystyle F(z)^{x}=\exp {\bigl (}x\log F(z){\bigr )}=\sum _{n=0}^{\infty }f_{n}(x)z^{n},} for some analytic function F with a power series expansion such that F(0) = 1. 
We say that a family of polynomials, f0, f1, f2, ..., forms a convolution family if deg fn ≤ n and if the following convolution condition holds for all x, y and for all n ≥ 0: f n ( x + y ) = f n ( x ) f 0 ( y ) + f n − 1 ( x ) f 1 ( y ) + ⋯ + f 1 ( x ) f n − 1 ( y ) + f 0 ( x ) f n ( y ) . {\displaystyle f_{n}(x+y)=f_{n}(x)f_{0}(y)+f_{n-1}(x)f_{1}(y)+\cdots +f_{1}(x)f_{n-1}(y)+f_{0}(x)f_{n}(y).} We see that for non-identically zero convolution families, this definition is equivalent to requiring that the sequence have an ordinary generating function of the first form given above. A sequence of convolution polynomials defined in the notation above has the following properties: The sequence n! · fn(x) is of binomial type Special values of the sequence include fn(1) = [zn] F(z) and fn(0) = δn,0, and For arbitrary (fixed) x , y , t ∈ C {\displaystyle x,y,t\in \mathbb {C} } , these polynomials satisfy convolution formulas of the form f n ( x + y ) = ∑ k = 0 n f k ( x ) f n − k ( y ) f n ( 2 x ) = ∑ k = 0 n f k ( x ) f n − k ( x ) x n f n ( x + y ) = ( x + y ) ∑ k = 0 n k f k ( x ) f n − k ( y ) ( x + y ) f n ( x + y + t n ) x + y + t n = ∑ k = 0 n x f k ( x + t k ) x + t k y f n − k ( y + t ( n − k ) ) y + t ( n − k ) . {\displaystyle {\begin{aligned}f_{n}(x+y)&=\sum _{k=0}^{n}f_{k}(x)f_{n-k}(y)\\f_{n}(2x)&=\sum _{k=0}^{n}f_{k}(x)f_{n-k}(x)\\xnf_{n}(x+y)&=(x+y)\sum _{k=0}^{n}kf_{k}(x)f_{n-k}(y)\\{\frac {(x+y)f_{n}(x+y+tn)}{x+y+tn}}&=\sum _{k=0}^{n}{\frac {xf_{k}(x+tk)}{x+tk}}{\frac {yf_{n-k}(y+t(n-k))}{y+t(n-k)}}.\end{aligned}}} For a fixed non-zero parameter t ∈ C {\displaystyle t\in \mathbb {C} } , we have modified generating functions for these convolution polynomial sequences given by z F n ( x + t n ) ( x + t n ) = [ z n ] F t ( z ) x , {\displaystyle {\frac {zF_{n}(x+tn)}{(x+tn)}}=\left[z^{n}\right]{\mathcal {F}}_{t}(z)^{x},} where 𝓕t(z) is implicitly defined by a functional equation of the form 𝓕t(z) = F(x𝓕t(z)t). Moreover, we can use matrix methods (as in the reference) to prove that given two convolution polynomial sequences, ⟨ fn(x) ⟩ and ⟨ gn(x) ⟩, with respective corresponding generating functions, F(z)x and G(z)x, then for arbitrary t we have the identity [ z n ] ( G ( z ) F ( z G ( z ) t ) ) x = ∑ k = 0 n F k ( x ) G n − k ( x + t k ) . {\displaystyle \left[z^{n}\right]\left(G(z)F\left(zG(z)^{t}\right)\right)^{x}=\sum _{k=0}^{n}F_{k}(x)G_{n-k}(x+tk).} Examples of convolution polynomial sequences include the binomial power series, 𝓑t(z) = 1 + z𝓑t(z)t, so-termed tree polynomials, the Bell numbers, B(n), the Laguerre polynomials, and the Stirling convolution polynomials. == Ordinary generating functions == === Examples for simple sequences === Polynomials are a special case of ordinary generating functions, corresponding to finite sequences, or equivalently sequences that vanish after a certain point. These are important in that many finite sequences can usefully be interpreted as generating functions, such as the Poincaré polynomial and others. A fundamental generating function is that of the constant sequence 1, 1, 1, 1, 1, 1, 1, 1, 1, ..., whose ordinary generating function is the geometric series ∑ n = 0 ∞ x n = 1 1 − x . {\displaystyle \sum _{n=0}^{\infty }x^{n}={\frac {1}{1-x}}.} The left-hand side is the Maclaurin series expansion of the right-hand side. 
Alternatively, the equality can be justified by multiplying the power series on the left by 1 − x, and checking that the result is the constant power series 1 (in other words, that all coefficients except the one of x0 are equal to 0). Moreover, there can be no other power series with this property. The left-hand side therefore designates the multiplicative inverse of 1 − x in the ring of power series. Expressions for the ordinary generating function of other sequences are easily derived from this one. For instance, the substitution x → ax gives the generating function for the geometric sequence 1, a, a2, a3, ... for any constant a: ∑ n = 0 ∞ ( a x ) n = 1 1 − a x . {\displaystyle \sum _{n=0}^{\infty }(ax)^{n}={\frac {1}{1-ax}}.} (The equality also follows directly from the fact that the left-hand side is the Maclaurin series expansion of the right-hand side.) In particular, ∑ n = 0 ∞ ( − 1 ) n x n = 1 1 + x . {\displaystyle \sum _{n=0}^{\infty }(-1)^{n}x^{n}={\frac {1}{1+x}}.} One can also introduce regular gaps in the sequence by replacing x by some power of x, so for instance for the sequence 1, 0, 1, 0, 1, 0, 1, 0, ... (which skips over x, x3, x5, ...) one gets the generating function ∑ n = 0 ∞ x 2 n = 1 1 − x 2 . {\displaystyle \sum _{n=0}^{\infty }x^{2n}={\frac {1}{1-x^{2}}}.} By squaring the initial generating function, or by finding the derivative of both sides with respect to x and making a change of running variable n → n + 1, one sees that the coefficients form the sequence 1, 2, 3, 4, 5, ..., so one has ∑ n = 0 ∞ ( n + 1 ) x n = 1 ( 1 − x ) 2 , {\displaystyle \sum _{n=0}^{\infty }(n+1)x^{n}={\frac {1}{(1-x)^{2}}},} and the third power has as coefficients the triangular numbers 1, 3, 6, 10, 15, 21, ... whose term n is the binomial coefficient (n + 22), so that ∑ n = 0 ∞ ( n + 2 2 ) x n = 1 ( 1 − x ) 3 . {\displaystyle \sum _{n=0}^{\infty }{\binom {n+2}{2}}x^{n}={\frac {1}{(1-x)^{3}}}.} More generally, for any non-negative integer k and non-zero real value a, it is true that ∑ n = 0 ∞ a n ( n + k k ) x n = 1 ( 1 − a x ) k + 1 . {\displaystyle \sum _{n=0}^{\infty }a^{n}{\binom {n+k}{k}}x^{n}={\frac {1}{(1-ax)^{k+1}}}\,.} Since 2 ( n + 2 2 ) − 3 ( n + 1 1 ) + ( n 0 ) = 2 ( n + 1 ) ( n + 2 ) 2 − 3 ( n + 1 ) + 1 = n 2 , {\displaystyle 2{\binom {n+2}{2}}-3{\binom {n+1}{1}}+{\binom {n}{0}}=2{\frac {(n+1)(n+2)}{2}}-3(n+1)+1=n^{2},} one can find the ordinary generating function for the sequence 0, 1, 4, 9, 16, ... of square numbers by linear combination of binomial-coefficient generating sequences: G ( n 2 ; x ) = ∑ n = 0 ∞ n 2 x n = 2 ( 1 − x ) 3 − 3 ( 1 − x ) 2 + 1 1 − x = x ( x + 1 ) ( 1 − x ) 3 . {\displaystyle G(n^{2};x)=\sum _{n=0}^{\infty }n^{2}x^{n}={\frac {2}{(1-x)^{3}}}-{\frac {3}{(1-x)^{2}}}+{\frac {1}{1-x}}={\frac {x(x+1)}{(1-x)^{3}}}.} We may also expand alternately to generate this same sequence of squares as a sum of derivatives of the geometric series in the following form: G ( n 2 ; x ) = ∑ n = 0 ∞ n 2 x n = ∑ n = 0 ∞ n ( n − 1 ) x n + ∑ n = 0 ∞ n x n = x 2 D 2 [ 1 1 − x ] + x D [ 1 1 − x ] = 2 x 2 ( 1 − x ) 3 + x ( 1 − x ) 2 = x ( x + 1 ) ( 1 − x ) 3 . {\displaystyle {\begin{aligned}G(n^{2};x)&=\sum _{n=0}^{\infty }n^{2}x^{n}\\[4px]&=\sum _{n=0}^{\infty }n(n-1)x^{n}+\sum _{n=0}^{\infty }nx^{n}\\[4px]&=x^{2}D^{2}\left[{\frac {1}{1-x}}\right]+xD\left[{\frac {1}{1-x}}\right]\\[4px]&={\frac {2x^{2}}{(1-x)^{3}}}+{\frac {x}{(1-x)^{2}}}={\frac {x(x+1)}{(1-x)^{3}}}.\end{aligned}}} By induction, we can similarly show for positive integers m ≥ 1 that n m = ∑ j = 0 m { m j } n ! 
( n − j ) ! , {\displaystyle n^{m}=\sum _{j=0}^{m}{\begin{Bmatrix}m\\j\end{Bmatrix}}{\frac {n!}{(n-j)!}},} where {nk} denote the Stirling numbers of the second kind and where the generating function ∑ n = 0 ∞ n ! ( n − j ) ! z n = j ! ⋅ z j ( 1 − z ) j + 1 , {\displaystyle \sum _{n=0}^{\infty }{\frac {n!}{(n-j)!}}\,z^{n}={\frac {j!\cdot z^{j}}{(1-z)^{j+1}}},} so that we can form the analogous generating functions over the integral mth powers generalizing the result in the square case above. In particular, since we can write z k ( 1 − z ) k + 1 = ∑ i = 0 k ( k i ) ( − 1 ) k − i ( 1 − z ) i + 1 , {\displaystyle {\frac {z^{k}}{(1-z)^{k+1}}}=\sum _{i=0}^{k}{\binom {k}{i}}{\frac {(-1)^{k-i}}{(1-z)^{i+1}}},} we can apply a well-known finite sum identity involving the Stirling numbers to obtain that ∑ n = 0 ∞ n m z n = ∑ j = 0 m { m + 1 j + 1 } ( − 1 ) m − j j ! ( 1 − z ) j + 1 . {\displaystyle \sum _{n=0}^{\infty }n^{m}z^{n}=\sum _{j=0}^{m}{\begin{Bmatrix}m+1\\j+1\end{Bmatrix}}{\frac {(-1)^{m-j}j!}{(1-z)^{j+1}}}.} === Rational functions === The ordinary generating function of a sequence can be expressed as a rational function (the ratio of two finite-degree polynomials) if and only if the sequence is a linear recursive sequence with constant coefficients; this generalizes the examples above. Conversely, every sequence generated by a fraction of polynomials satisfies a linear recurrence with constant coefficients; these coefficients are identical to the coefficients of the fraction denominator polynomial (so they can be directly read off). This observation shows it is easy to solve for generating functions of sequences defined by a linear finite difference equation with constant coefficients, and hence for explicit closed-form formulas for the coefficients of these generating functions. The prototypical example here is to derive Binet's formula for the Fibonacci numbers via generating function techniques. We also notice that the class of rational generating functions precisely corresponds to the generating functions that enumerate quasi-polynomial sequences of the form f n = p 1 ( n ) ρ 1 n + ⋯ + p ℓ ( n ) ρ ℓ n , {\displaystyle f_{n}=p_{1}(n)\rho _{1}^{n}+\cdots +p_{\ell }(n)\rho _{\ell }^{n},} where the reciprocal roots, ρ i ∈ C {\displaystyle \rho _{i}\in \mathbb {C} } , are fixed scalars and where pi(n) is a polynomial in n for all 1 ≤ i ≤ ℓ. In general, Hadamard products of rational functions produce rational generating functions. Similarly, if F ( w , z ) := ∑ m , n ≥ 0 f ( m , n ) w m z n {\displaystyle F(w,z):=\sum _{m,n\geq 0}f(m,n)w^{m}z^{n}} is a bivariate rational generating function, then its corresponding diagonal generating function, diag ⁡ ( F ) := ∑ n = 0 ∞ f ( n , n ) z n , {\displaystyle \operatorname {diag} (F):=\sum _{n=0}^{\infty }f(n,n)z^{n},} is algebraic. For example, if we let F ( s , t ) := ∑ i , j ≥ 0 ( i + j i ) s i t j = 1 1 − s − t , {\displaystyle F(s,t):=\sum _{i,j\geq 0}{\binom {i+j}{i}}s^{i}t^{j}={\frac {1}{1-s-t}},} then this generating function's diagonal coefficient generating function is given by the well-known OGF formula diag ⁡ ( F ) = ∑ n = 0 ∞ ( 2 n n ) z n = 1 1 − 4 z . {\displaystyle \operatorname {diag} (F)=\sum _{n=0}^{\infty }{\binom {2n}{n}}z^{n}={\frac {1}{\sqrt {1-4z}}}.} This result is computed in many ways, including Cauchy's integral formula or contour integration, taking complex residues, or by direct manipulations of formal power series in two variables.
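The rational-function criterion can be checked mechanically on a concrete case. The following sketch is an added illustration, not from the article: it expands z/(1 − z − z^2), the ordinary generating function of the Fibonacci numbers under the convention f0 = 0, f1 = 1, and confirms that the denominator 1 − z − z^2 encodes the recurrence fn = fn−1 + fn−2.

import sympy as sp

z = sp.symbols('z')
G = z / (1 - z - z**2)  # rational OGF; the denominator encodes the recurrence

# First ten coefficients of the formal power series expansion
P = sp.series(G, z, 0, 10).removeO()
coeffs = [P.coeff(z, n) for n in range(10)]
print(coeffs)  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]

# Linear recurrence with constant coefficients, read off the denominator
assert all(coeffs[n] == coeffs[n - 1] + coeffs[n - 2] for n in range(2, 10))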
=== Operations on generating functions === ==== Multiplication yields convolution ==== Multiplication of ordinary generating functions yields a discrete convolution (the Cauchy product) of the sequences. For example, the sequence of cumulative sums (compare to the slightly more general Euler–Maclaurin formula) ( a 0 , a 0 + a 1 , a 0 + a 1 + a 2 , … ) {\displaystyle (a_{0},a_{0}+a_{1},a_{0}+a_{1}+a_{2},\ldots )} of a sequence with ordinary generating function G(an; x) has the generating function G ( a n ; x ) ⋅ 1 1 − x {\displaystyle G(a_{n};x)\cdot {\frac {1}{1-x}}} because 1/(1 − x) is the ordinary generating function for the sequence (1, 1, ...). See also the section on convolutions in the applications section of this article below for further examples of problem solving with convolutions of generating functions and interpretations. ==== Shifting sequence indices ==== For integers m ≥ 1, we have the following two analogous identities for the modified generating functions enumerating the shifted sequence variants of ⟨ gn − m ⟩ and ⟨ gn + m ⟩, respectively: z m G ( z ) = ∑ n = m ∞ g n − m z n G ( z ) − g 0 − g 1 z − ⋯ − g m − 1 z m − 1 z m = ∑ n = 0 ∞ g n + m z n . {\displaystyle {\begin{aligned}&z^{m}G(z)=\sum _{n=m}^{\infty }g_{n-m}z^{n}\\[4px]&{\frac {G(z)-g_{0}-g_{1}z-\cdots -g_{m-1}z^{m-1}}{z^{m}}}=\sum _{n=0}^{\infty }g_{n+m}z^{n}.\end{aligned}}} ==== Differentiation and integration of generating functions ==== We have the following respective power series expansions for the first derivative of a generating function and its integral: G ′ ( z ) = ∑ n = 0 ∞ ( n + 1 ) g n + 1 z n z ⋅ G ′ ( z ) = ∑ n = 0 ∞ n g n z n ∫ 0 z G ( t ) d t = ∑ n = 1 ∞ g n − 1 n z n . {\displaystyle {\begin{aligned}G'(z)&=\sum _{n=0}^{\infty }(n+1)g_{n+1}z^{n}\\[4px]z\cdot G'(z)&=\sum _{n=0}^{\infty }ng_{n}z^{n}\\[4px]\int _{0}^{z}G(t)\,dt&=\sum _{n=1}^{\infty }{\frac {g_{n-1}}{n}}z^{n}.\end{aligned}}} The differentiation–multiplication operation of the second identity can be repeated k times to multiply the sequence by nk, but that requires alternating between differentiation and multiplication. If one instead does k differentiations in sequence, the effect is to multiply by the kth falling factorial: z k G ( k ) ( z ) = ∑ n = 0 ∞ n k _ g n z n = ∑ n = 0 ∞ n ( n − 1 ) ⋯ ( n − k + 1 ) g n z n for all k ∈ N . {\displaystyle z^{k}G^{(k)}(z)=\sum _{n=0}^{\infty }n^{\underline {k}}g_{n}z^{n}=\sum _{n=0}^{\infty }n(n-1)\dotsb (n-k+1)g_{n}z^{n}\quad {\text{for all }}k\in \mathbb {N} .} Using the Stirling numbers of the second kind, that can be turned into another formula for multiplying by n k {\displaystyle n^{k}} as follows (see the main article on generating function transformations): ∑ j = 0 k { k j } z j F ( j ) ( z ) = ∑ n = 0 ∞ n k f n z n for all k ∈ N . {\displaystyle \sum _{j=0}^{k}{\begin{Bmatrix}k\\j\end{Bmatrix}}z^{j}F^{(j)}(z)=\sum _{n=0}^{\infty }n^{k}f_{n}z^{n}\quad {\text{for all }}k\in \mathbb {N} .} A negative-order reversal of this sequence powers formula, corresponding to the operation of repeated integration, is defined by the zeta series transformation and its generalizations, defined as a derivative-based transformation of generating functions, or alternately termwise by performing an integral transformation on the sequence generating function. Related operations of performing fractional integration on a sequence generating function are discussed in the main article on generating function transformations.
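These operations act coefficientwise and are easy to confirm on a truncated series. A small added sketch, using an arbitrary example sequence and truncation order chosen here:

import sympy as sp

z = sp.symbols('z')
a = [1, 4, 9, 16, 25, 36]                   # arbitrary example sequence a_0..a_5
G = sum(c * z**k for k, c in enumerate(a))  # truncated ordinary generating function

# Multiplication by 1/(1-z) produces the cumulative sums (a Cauchy product):
S = sp.expand(sp.series(G / (1 - z), z, 0, 6).removeO())
print([S.coeff(z, n) for n in range(6)])    # [1, 5, 14, 30, 55, 91]

# z * d/dz multiplies the nth coefficient by n:
D = sp.expand(z * sp.diff(G, z))
print([D.coeff(z, n) for n in range(6)])    # [0, 4, 18, 48, 100, 180]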
==== Enumerating arithmetic progressions of sequences ==== In this section we give formulas for generating functions enumerating the sequence {fan + b} given an ordinary generating function F(z), where a ≥ 2, 0 ≤ b < a, and a and b are integers (see the main article on transformations). For a = 2, this is simply the familiar decomposition of a function into even and odd parts (i.e., even and odd powers): ∑ n = 0 ∞ f 2 n z 2 n = F ( z ) + F ( − z ) 2 ∑ n = 0 ∞ f 2 n + 1 z 2 n + 1 = F ( z ) − F ( − z ) 2 . {\displaystyle {\begin{aligned}\sum _{n=0}^{\infty }f_{2n}z^{2n}&={\frac {F(z)+F(-z)}{2}}\\[4px]\sum _{n=0}^{\infty }f_{2n+1}z^{2n+1}&={\frac {F(z)-F(-z)}{2}}.\end{aligned}}} More generally, suppose that a ≥ 3 and that ωa = exp(2πi/a) denotes a primitive ath root of unity. Then, as an application of the discrete Fourier transform, we have the formula ∑ n = 0 ∞ f a n + b z a n + b = 1 a ∑ m = 0 a − 1 ω a − m b F ( ω a m z ) . {\displaystyle \sum _{n=0}^{\infty }f_{an+b}z^{an+b}={\frac {1}{a}}\sum _{m=0}^{a-1}\omega _{a}^{-mb}F\left(\omega _{a}^{m}z\right).} For integers m ≥ 1, another useful formula, providing somewhat reversed floored arithmetic progressions (effectively repeating each coefficient m times), is given by the identity ∑ n = 0 ∞ f ⌊ n m ⌋ z n = 1 − z m 1 − z F ( z m ) = ( 1 + z + ⋯ + z m − 2 + z m − 1 ) F ( z m ) . {\displaystyle \sum _{n=0}^{\infty }f_{\left\lfloor {\frac {n}{m}}\right\rfloor }z^{n}={\frac {1-z^{m}}{1-z}}F(z^{m})=\left(1+z+\cdots +z^{m-2}+z^{m-1}\right)F(z^{m}).} === P-recursive sequences and holonomic generating functions === ==== Definitions ==== A formal power series (or function) F(z) is said to be holonomic if it satisfies a linear differential equation of the form c 0 ( z ) F ( r ) ( z ) + c 1 ( z ) F ( r − 1 ) ( z ) + ⋯ + c r ( z ) F ( z ) = 0 , {\displaystyle c_{0}(z)F^{(r)}(z)+c_{1}(z)F^{(r-1)}(z)+\cdots +c_{r}(z)F(z)=0,} where the coefficients ci(z) are in the field of rational functions, C ( z ) {\displaystyle \mathbb {C} (z)} . Equivalently, F ( z ) {\displaystyle F(z)} is holonomic if the vector space over C ( z ) {\displaystyle \mathbb {C} (z)} spanned by the set of all of its derivatives is finite-dimensional. Since we can clear denominators if need be in the previous equation, we may assume that the functions ci(z) are polynomials in z. Thus an equivalent condition is that a generating function is holonomic if its coefficients satisfy a P-recurrence of the form c ^ s ( n ) f n + s + c ^ s − 1 ( n ) f n + s − 1 + ⋯ + c ^ 0 ( n ) f n = 0 , {\displaystyle {\widehat {c}}_{s}(n)f_{n+s}+{\widehat {c}}_{s-1}(n)f_{n+s-1}+\cdots +{\widehat {c}}_{0}(n)f_{n}=0,} for all large enough n ≥ n0 and where the ĉi(n) are fixed finite-degree polynomials in n. In other words, the properties that a sequence be P-recursive and have a holonomic generating function are equivalent. Holonomic functions are closed under the Hadamard product operation ⊙ on generating functions. ==== Examples ==== The functions ez, log z, cos z, arcsin z, 1 + z {\displaystyle {\sqrt {1+z}}} , the dilogarithm function Li2(z), the generalized hypergeometric functions pFq(...; ...; z) and the functions defined by the power series ∑ n = 0 ∞ z n ( n ! ) 2 {\displaystyle \sum _{n=0}^{\infty }{\frac {z^{n}}{(n!)^{2}}}} and the non-convergent ∑ n = 0 ∞ n ! ⋅ z n {\displaystyle \sum _{n=0}^{\infty }n!\cdot z^{n}} are all holonomic.
Examples of P-recursive sequences with holonomic generating functions include fn ≔ (1/(n + 1))(2n choose n) and fn ≔ 2^n/(n^2 + 1), whereas sequences such as n {\displaystyle {\sqrt {n}}} and log n are not P-recursive due to the nature of singularities in their corresponding generating functions. Similarly, functions with infinitely many singularities such as tan z, sec z, and Γ(z) are not holonomic functions. ==== Software for working with P-recursive sequences and holonomic generating functions ==== Tools for processing and working with P-recursive sequences in Mathematica include the software packages provided for non-commercial use on the RISC Combinatorics Group algorithmic combinatorics software site. Although mostly closed-source, particularly powerful tools in this software suite are provided by the Guess package, for guessing P-recurrences for arbitrary input sequences (useful for experimental mathematics and exploration), and the Sigma package, which is able to find P-recurrences for many sums and solve for closed-form solutions to P-recurrences involving generalized harmonic numbers. Other packages listed on this particular RISC site are targeted at working with holonomic generating functions specifically. === Relation to discrete-time Fourier transform === When the series converges absolutely, G ( a n ; e − i ω ) = ∑ n = 0 ∞ a n e − i ω n {\displaystyle G\left(a_{n};e^{-i\omega }\right)=\sum _{n=0}^{\infty }a_{n}e^{-i\omega n}} is the discrete-time Fourier transform of the sequence a0, a1, .... === Asymptotic growth of a sequence === In calculus, the growth rate of the coefficients of a power series can often be used to deduce a radius of convergence for the power series. The reverse can also hold; often the radius of convergence for a generating function can be used to deduce the asymptotic growth of the underlying sequence. For instance, if an ordinary generating function G(an; x) that has a finite radius of convergence of r can be written as G ( a n ; x ) = A ( x ) + B ( x ) ( 1 − x r ) − β x α {\displaystyle G(a_{n};x)={\frac {A(x)+B(x)\left(1-{\frac {x}{r}}\right)^{-\beta }}{x^{\alpha }}}} where each of A(x) and B(x) is a function that is analytic to a radius of convergence greater than r (or is entire), and where B(r) ≠ 0, then a n ∼ B ( r ) r α Γ ( β ) n β − 1 ( 1 r ) n ∼ B ( r ) r α ( n + β − 1 n ) ( 1 r ) n = B ( r ) r α ( ( β n ) ) ( 1 r ) n , {\displaystyle a_{n}\sim {\frac {B(r)}{r^{\alpha }\Gamma (\beta )}}\,n^{\beta -1}\left({\frac {1}{r}}\right)^{n}\sim {\frac {B(r)}{r^{\alpha }}}{\binom {n+\beta -1}{n}}\left({\frac {1}{r}}\right)^{n}={\frac {B(r)}{r^{\alpha }}}\left(\!\!{\binom {\beta }{n}}\!\!\right)\left({\frac {1}{r}}\right)^{n}\,,} using the gamma function, a binomial coefficient, or a multiset coefficient. Note that the limit as n goes to infinity of the ratio of an to any of these expressions is guaranteed to be 1, not merely that an is proportional to them. Often this approach can be iterated to generate several terms in an asymptotic series for an. In particular, G ( a n − B ( r ) r α ( n + β − 1 n ) ( 1 r ) n ; x ) = G ( a n ; x ) − B ( r ) r α ( 1 − x r ) − β . {\displaystyle G\left(a_{n}-{\frac {B(r)}{r^{\alpha }}}{\binom {n+\beta -1}{n}}\left({\frac {1}{r}}\right)^{n};x\right)=G(a_{n};x)-{\frac {B(r)}{r^{\alpha }}}\left(1-{\frac {x}{r}}\right)^{-\beta }\,.} The asymptotic growth of the coefficients of this generating function can then be sought via the finding of A, B, α, β, and r to describe the generating function, as above.
Similar asymptotic analysis is possible for exponential generating functions; with an exponential generating function, it is an/n! that grows according to these asymptotic formulae. Generally, if the generating function of one sequence minus the generating function of a second sequence has a radius of convergence that is larger than the radius of convergence of the individual generating functions, then the two sequences have the same asymptotic growth. ==== Asymptotic growth of the sequence of squares ==== As derived above, the ordinary generating function for the sequence of squares is: G ( n 2 ; x ) = x ( x + 1 ) ( 1 − x ) 3 . {\displaystyle G(n^{2};x)={\frac {x(x+1)}{(1-x)^{3}}}.} With r = 1, α = −1, β = 3, A(x) = 0, and B(x) = x + 1, we can verify that the coefficients grow as expected, like the squares: a n ∼ B ( r ) r α Γ ( β ) n β − 1 ( 1 r ) n = 1 + 1 1 − 1 Γ ( 3 ) n 3 − 1 ( 1 1 ) n = n 2 . {\displaystyle a_{n}\sim {\frac {B(r)}{r^{\alpha }\Gamma (\beta )}}\,n^{\beta -1}\left({\frac {1}{r}}\right)^{n}={\frac {1+1}{1^{-1}\,\Gamma (3)}}\,n^{3-1}\left({\frac {1}{1}}\right)^{n}=n^{2}.} ==== Asymptotic growth of the Catalan numbers ==== The ordinary generating function for the Catalan numbers is G ( C n ; x ) = 1 − 1 − 4 x 2 x . {\displaystyle G(C_{n};x)={\frac {1-{\sqrt {1-4x}}}{2x}}.} With r = ⁠1/4⁠, α = 1, β = −⁠1/2⁠, A(x) = ⁠1/2⁠, and B(x) = −⁠1/2⁠, we can conclude that, for the Catalan numbers: C n ∼ B ( r ) r α Γ ( β ) n β − 1 ( 1 r ) n = − 1 2 ( 1 4 ) 1 Γ ( − 1 2 ) n − 1 2 − 1 ( 1 1 4 ) n = 4 n n 3 2 π . {\displaystyle C_{n}\sim {\frac {B(r)}{r^{\alpha }\Gamma (\beta )}}\,n^{\beta -1}\left({\frac {1}{r}}\right)^{n}={\frac {-{\frac {1}{2}}}{\left({\frac {1}{4}}\right)^{1}\Gamma \left(-{\frac {1}{2}}\right)}}\,n^{-{\frac {1}{2}}-1}\left({\frac {1}{\,{\frac {1}{4}}\,}}\right)^{n}={\frac {4^{n}}{n^{\frac {3}{2}}{\sqrt {\pi }}}}.} === Bivariate and multivariate generating functions === The generating function in several variables can be generalized to arrays with multiple indices. These non-polynomial double sum examples are called multivariate generating functions, or super generating functions. For two variables, these are often called bivariate generating functions. ==== Bivariate case ==== The ordinary generating function of a two-dimensional array am,n (where n and m are natural numbers) is: G ( a m , n ; x , y ) = ∑ m , n = 0 ∞ a m , n x m y n . {\displaystyle G(a_{m,n};x,y)=\sum _{m,n=0}^{\infty }a_{m,n}x^{m}y^{n}.} For instance, since (1 + x)n is the ordinary generating function for binomial coefficients for a fixed n, one may ask for a bivariate generating function that generates the binomial coefficients (nk) for all k and n. To do this, consider (1 + x)n itself as a sequence in n, and find the generating function in y that has these sequence values as coefficients. Since the generating function for an is: 1 1 − a y , {\displaystyle {\frac {1}{1-ay}},} the generating function for the binomial coefficients is: ∑ n , k ( n k ) x k y n = 1 1 − ( 1 + x ) y = 1 1 − y − x y . {\displaystyle \sum _{n,k}{\binom {n}{k}}x^{k}y^{n}={\frac {1}{1-(1+x)y}}={\frac {1}{1-y-xy}}.} Other examples include the following two-variable generating functions for the binomial coefficients, the Stirling numbers, and the Eulerian numbers, where w and z denote the two variables: e z + w z = ∑ m , n ≥ 0 ( n m ) w m z n n ! e w ( e z − 1 ) = ∑ m , n ≥ 0 { n m } w m z n n ! 1 ( 1 − z ) w = ∑ m , n ≥ 0 [ n m ] w m z n n ! 1 − w e ( w − 1 ) z − w = ∑ m , n ≥ 0 ⟨ n m ⟩ w m z n n !
e w − e z w e z − z e w = ∑ m , n ≥ 0 ⟨ m + n + 1 m ⟩ w m z n ( m + n + 1 ) ! . {\displaystyle {\begin{aligned}e^{z+wz}&=\sum _{m,n\geq 0}{\binom {n}{m}}w^{m}{\frac {z^{n}}{n!}}\\[4px]e^{w(e^{z}-1)}&=\sum _{m,n\geq 0}{\begin{Bmatrix}n\\m\end{Bmatrix}}w^{m}{\frac {z^{n}}{n!}}\\[4px]{\frac {1}{(1-z)^{w}}}&=\sum _{m,n\geq 0}{\begin{bmatrix}n\\m\end{bmatrix}}w^{m}{\frac {z^{n}}{n!}}\\[4px]{\frac {1-w}{e^{(w-1)z}-w}}&=\sum _{m,n\geq 0}\left\langle {\begin{matrix}n\\m\end{matrix}}\right\rangle w^{m}{\frac {z^{n}}{n!}}\\[4px]{\frac {e^{w}-e^{z}}{we^{z}-ze^{w}}}&=\sum _{m,n\geq 0}\left\langle {\begin{matrix}m+n+1\\m\end{matrix}}\right\rangle {\frac {w^{m}z^{n}}{(m+n+1)!}}.\end{aligned}}} ==== Multivariate case ==== Multivariate generating functions arise in practice when calculating the number of contingency tables of non-negative integers with specified row and column totals. Suppose the table has r rows and c columns; the row sums are t1, t2 ... tr and the column sums are s1, s2 ... sc. Then, according to I. J. Good, the number of such tables is the coefficient of: x 1 t 1 ⋯ x r t r y 1 s 1 ⋯ y c s c {\displaystyle x_{1}^{t_{1}}\cdots x_{r}^{t_{r}}y_{1}^{s_{1}}\cdots y_{c}^{s_{c}}} in: ∏ i = 1 r ∏ j = 1 c 1 1 − x i y j . {\displaystyle \prod _{i=1}^{r}\prod _{j=1}^{c}{\frac {1}{1-x_{i}y_{j}}}.} === Representation by continued fractions (Jacobi-type J-fractions) === ==== Definitions ==== Expansions of (formal) Jacobi-type and Stieltjes-type continued fractions (J-fractions and S-fractions, respectively) whose hth rational convergents represent 2h-order accurate power series are another way to express the typically divergent ordinary generating functions for many special one- and two-variate sequences. The particular form of the Jacobi-type continued fractions (J-fractions) is expanded as in the following equation, with the corresponding power series expansions with respect to z for some specific, application-dependent component sequences, {abi} and {ci}, where z ≠ 0 denotes the formal variable in the second power series expansion given below: J [ ∞ ] ( z ) = 1 1 − c 1 z − ab 2 z 2 1 − c 2 z − ab 3 z 2 ⋱ = 1 + c 1 z + ( ab 2 + c 1 2 ) z 2 + ( 2 ab 2 c 1 + c 1 3 + ab 2 c 2 ) z 3 + ⋯ {\displaystyle {\begin{aligned}J^{[\infty ]}(z)&={\cfrac {1}{1-c_{1}z-{\cfrac {{\text{ab}}_{2}z^{2}}{1-c_{2}z-{\cfrac {{\text{ab}}_{3}z^{2}}{\ddots }}}}}}\\[4px]&=1+c_{1}z+\left({\text{ab}}_{2}+c_{1}^{2}\right)z^{2}+\left(2{\text{ab}}_{2}c_{1}+c_{1}^{3}+{\text{ab}}_{2}c_{2}\right)z^{3}+\cdots \end{aligned}}} The coefficients of z n {\displaystyle z^{n}} , denoted in shorthand by jn ≔ [zn] J[∞](z), in the previous equations correspond to matrix solutions of the equations: [ k 0 , 1 k 1 , 1 0 0 ⋯ k 0 , 2 k 1 , 2 k 2 , 2 0 ⋯ k 0 , 3 k 1 , 3 k 2 , 3 k 3 , 3 ⋯ ⋮ ⋮ ⋮ ⋮ ] = [ k 0 , 0 0 0 0 ⋯ k 0 , 1 k 1 , 1 0 0 ⋯ k 0 , 2 k 1 , 2 k 2 , 2 0 ⋯ ⋮ ⋮ ⋮ ⋮ ] ⋅ [ c 1 1 0 0 ⋯ ab 2 c 2 1 0 ⋯ 0 ab 3 c 3 1 ⋯ ⋮ ⋮ ⋮ ⋮ ] , {\displaystyle {\begin{bmatrix}k_{0,1}&k_{1,1}&0&0&\cdots \\k_{0,2}&k_{1,2}&k_{2,2}&0&\cdots \\k_{0,3}&k_{1,3}&k_{2,3}&k_{3,3}&\cdots \\\vdots &\vdots &\vdots &\vdots \end{bmatrix}}={\begin{bmatrix}k_{0,0}&0&0&0&\cdots \\k_{0,1}&k_{1,1}&0&0&\cdots \\k_{0,2}&k_{1,2}&k_{2,2}&0&\cdots \\\vdots &\vdots &\vdots &\vdots \end{bmatrix}}\cdot {\begin{bmatrix}c_{1}&1&0&0&\cdots \\{\text{ab}}_{2}&c_{2}&1&0&\cdots \\0&{\text{ab}}_{3}&c_{3}&1&\cdots \\\vdots &\vdots &\vdots &\vdots \end{bmatrix}},} where j0 ≡ k0,0 = 1, jn = k0,n for n ≥ 1, kr,s = 0 if r > s, and where for all integers p, q ≥ 0, we have an addition formula
relation given by: j p + q = k 0 , p ⋅ k 0 , q + ∑ i = 1 min ( p , q ) ab 2 ⋯ ab i + 1 × k i , p ⋅ k i , q . {\displaystyle j_{p+q}=k_{0,p}\cdot k_{0,q}+\sum _{i=1}^{\min(p,q)}{\text{ab}}_{2}\cdots {\text{ab}}_{i+1}\times k_{i,p}\cdot k_{i,q}.} ==== Properties of the hth convergent functions ==== For h ≥ 0 (though in practice when h ≥ 2), we can define the rational hth convergents to the infinite J-fraction, J[∞](z), expanded by: Conv h ⁡ ( z ) := P h ( z ) Q h ( z ) = j 0 + j 1 z + ⋯ + j 2 h − 1 z 2 h − 1 + ∑ n = 2 h ∞ j ~ h , n z n {\displaystyle \operatorname {Conv} _{h}(z):={\frac {P_{h}(z)}{Q_{h}(z)}}=j_{0}+j_{1}z+\cdots +j_{2h-1}z^{2h-1}+\sum _{n=2h}^{\infty }{\widetilde {j}}_{h,n}z^{n}} component-wise through the sequences, Ph(z) and Qh(z), defined recursively by: P h ( z ) = ( 1 − c h z ) P h − 1 ( z ) − ab h z 2 P h − 2 ( z ) + δ h , 1 Q h ( z ) = ( 1 − c h z ) Q h − 1 ( z ) − ab h z 2 Q h − 2 ( z ) + ( 1 − c 1 z ) δ h , 1 + δ 0 , 1 . {\displaystyle {\begin{aligned}P_{h}(z)&=(1-c_{h}z)P_{h-1}(z)-{\text{ab}}_{h}z^{2}P_{h-2}(z)+\delta _{h,1}\\Q_{h}(z)&=(1-c_{h}z)Q_{h-1}(z)-{\text{ab}}_{h}z^{2}Q_{h-2}(z)+(1-c_{1}z)\delta _{h,1}+\delta _{0,1}.\end{aligned}}} Moreover, the rationality of the convergent function Convh(z) for all h ≥ 2 implies additional finite difference equations and congruence properties satisfied by the sequence of jn, and for Mh ≔ ab2 ⋯ abh + 1 if h ‖ Mh then we have the congruence j n ≡ [ z n ] Conv h ⁡ ( z ) ( mod h ) , {\displaystyle j_{n}\equiv [z^{n}]\operatorname {Conv} _{h}(z){\pmod {h}},} for non-symbolic, determinate choices of the parameter sequences {abi} and {ci} when h ≥ 2, that is, when these sequences do not implicitly depend on an auxiliary parameter such as q, x, or R as in the examples contained in the table below. ==== Examples ==== The next table provides examples of closed-form formulas for the component sequences found computationally (and subsequently proved correct in the cited references) in several special cases of the prescribed sequences, jn, generated by the general expansions of the J-fractions defined in the first subsection. Here we define 0 < |a|, |b|, |q| < 1 and the parameters R , α ∈ Z + {\displaystyle R,\alpha \in \mathbb {Z} ^{+}} and x to be indeterminates with respect to these expansions, where the prescribed sequences enumerated by the expansions of these J-fractions are defined in terms of the q-Pochhammer symbol, Pochhammer symbol, and the binomial coefficients. The radii of convergence of these series corresponding to the definition of the Jacobi-type J-fractions given above are in general different from that of the corresponding power series expansions defining the ordinary generating functions of these sequences. == Examples == === Square numbers === Generating functions for the sequence of square numbers an = n2 include the ordinary generating function G(n2; x) = x(x + 1)/(1 − x)^3 derived earlier, the exponential generating function EG(n2; x) = x(x + 1)e^x, and the Dirichlet series generating function DG(n2; s) = ζ(s − 2), where ζ(s) is the Riemann zeta function. == Applications == Generating functions are used to: Find a closed formula for a sequence given in a recurrence relation, for example, Fibonacci numbers. Find recurrence relations for sequences—the form of a generating function may suggest a recurrence formula. Find relationships between sequences—if the generating functions of two sequences have a similar form, then the sequences themselves may be related. Explore the asymptotic behaviour of sequences. Prove identities involving sequences. Solve enumeration problems in combinatorics and encode their solutions. Rook polynomials are an example of an application in combinatorics. Evaluate infinite sums.
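The closed forms just listed for the square numbers can be spot-checked by series expansion; the snippet below is an added illustration (truncation orders chosen arbitrarily), not part of the article.

import sympy as sp

z = sp.symbols('z')

# Ordinary generating function of 0, 1, 4, 9, 16, ...
ogf = z * (z + 1) / (1 - z)**3
print(sp.series(ogf, z, 0, 6))  # z + 4*z**2 + 9*z**3 + 16*z**4 + 25*z**5 + O(z**6)

# Exponential generating function: the coefficient of z**n/n! is n**2
egf = z * (z + 1) * sp.exp(z)
print(sp.series(egf, z, 0, 5))  # z + 2*z**2 + 3*z**3/2 + 2*z**4/3 + O(z**5)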
=== Various techniques: Evaluating sums and tackling other problems with generating functions === ==== Example 1: Formula for sums of harmonic numbers ==== Generating functions give us several methods to manipulate sums and to establish identities between sums. The simplest case occurs when sn = a0 + a1 + ⋯ + an. We then know that S(z) = A(z)/(1 − z) for the corresponding ordinary generating functions. For example, we can manipulate s n = ∑ k = 1 n H k , {\displaystyle s_{n}=\sum _{k=1}^{n}H_{k}\,,} where Hk = 1 + 1/2 + ⋯ + 1/k are the harmonic numbers. Let H ( z ) = ∑ n = 1 ∞ H n z n {\displaystyle H(z)=\sum _{n=1}^{\infty }{H_{n}z^{n}}} be the ordinary generating function of the harmonic numbers. Then H ( z ) = 1 1 − z ∑ n = 1 ∞ z n n , {\displaystyle H(z)={\frac {1}{1-z}}\sum _{n=1}^{\infty }{\frac {z^{n}}{n}}\,,} and thus S ( z ) = ∑ n = 1 ∞ s n z n = 1 ( 1 − z ) 2 ∑ n = 1 ∞ z n n . {\displaystyle S(z)=\sum _{n=1}^{\infty }{s_{n}z^{n}}={\frac {1}{(1-z)^{2}}}\sum _{n=1}^{\infty }{\frac {z^{n}}{n}}\,.} Using 1 ( 1 − z ) 2 = ∑ n = 0 ∞ ( n + 1 ) z n , {\displaystyle {\frac {1}{(1-z)^{2}}}=\sum _{n=0}^{\infty }(n+1)z^{n}\,,} convolution with the numerator yields s n = ∑ k = 1 n n + 1 − k k = ( n + 1 ) H n − n , {\displaystyle s_{n}=\sum _{k=1}^{n}{\frac {n+1-k}{k}}=(n+1)H_{n}-n\,,} which can also be written as ∑ k = 1 n H k = ( n + 1 ) ( H n + 1 − 1 ) . {\displaystyle \sum _{k=1}^{n}{H_{k}}=(n+1)(H_{n+1}-1)\,.} ==== Example 2: Modified binomial coefficient sums and the binomial transform ==== As another example of using generating functions to relate sequences and manipulate sums, for an arbitrary sequence ⟨ fn ⟩ we define the two sequences of sums s n := ∑ m = 0 n ( n m ) f m 3 n − m s ~ n := ∑ m = 0 n ( n m ) ( m + 1 ) ( m + 2 ) ( m + 3 ) f m 3 n − m , {\displaystyle {\begin{aligned}s_{n}&:=\sum _{m=0}^{n}{\binom {n}{m}}f_{m}3^{n-m}\\[4px]{\tilde {s}}_{n}&:=\sum _{m=0}^{n}{\binom {n}{m}}(m+1)(m+2)(m+3)f_{m}3^{n-m}\,,\end{aligned}}} for all n ≥ 0, and seek to express the second sums in terms of the first. We suggest an approach by generating functions. First, we use the binomial transform to write the generating function for the first sum as S ( z ) = 1 1 − 3 z F ( z 1 − 3 z ) . {\displaystyle S(z)={\frac {1}{1-3z}}F\left({\frac {z}{1-3z}}\right).} Since the generating function for the sequence ⟨ (n + 1)(n + 2)(n + 3) fn ⟩ is given by 6 F ( z ) + 18 z F ′ ( z ) + 9 z 2 F ″ ( z ) + z 3 F ‴ ( z ) {\displaystyle 6F(z)+18zF'(z)+9z^{2}F''(z)+z^{3}F'''(z)} we may write the generating function for the second sum defined above in the form S ~ ( z ) = 6 ( 1 − 3 z ) F ( z 1 − 3 z ) + 18 z ( 1 − 3 z ) 2 F ′ ( z 1 − 3 z ) + 9 z 2 ( 1 − 3 z ) 3 F ″ ( z 1 − 3 z ) + z 3 ( 1 − 3 z ) 4 F ‴ ( z 1 − 3 z ) . {\displaystyle {\tilde {S}}(z)={\frac {6}{(1-3z)}}F\left({\frac {z}{1-3z}}\right)+{\frac {18z}{(1-3z)^{2}}}F'\left({\frac {z}{1-3z}}\right)+{\frac {9z^{2}}{(1-3z)^{3}}}F''\left({\frac {z}{1-3z}}\right)+{\frac {z^{3}}{(1-3z)^{4}}}F'''\left({\frac {z}{1-3z}}\right).} In particular, we may write this modified sum generating function in the form of a ( z ) ⋅ S ( z ) + b ( z ) ⋅ z S ′ ( z ) + c ( z ) ⋅ z 2 S ″ ( z ) + d ( z ) ⋅ z 3 S ‴ ( z ) , {\displaystyle a(z)\cdot S(z)+b(z)\cdot zS'(z)+c(z)\cdot z^{2}S''(z)+d(z)\cdot z^{3}S'''(z),} for a(z) = 6(1 − 3z)3, b(z) = 18(1 − 3z)3, c(z) = 9(1 − 3z)3, and d(z) = (1 − 3z)3, where (1 − 3z)3 = 1 − 9z + 27z2 − 27z3.
Finally, it follows that we may express the second sums through the first sums in the following form: s ~ n = [ z n ] ( 6 ( 1 − 3 z ) 3 ∑ n = 0 ∞ s n z n + 18 ( 1 − 3 z ) 3 ∑ n = 0 ∞ n s n z n + 9 ( 1 − 3 z ) 3 ∑ n = 0 ∞ n ( n − 1 ) s n z n + ( 1 − 3 z ) 3 ∑ n = 0 ∞ n ( n − 1 ) ( n − 2 ) s n z n ) = ( n + 1 ) ( n + 2 ) ( n + 3 ) s n − 9 n ( n + 1 ) ( n + 2 ) s n − 1 + 27 ( n − 1 ) n ( n + 1 ) s n − 2 − ( n − 2 ) ( n − 1 ) n s n − 3 . {\displaystyle {\begin{aligned}{\tilde {s}}_{n}&=[z^{n}]\left(6(1-3z)^{3}\sum _{n=0}^{\infty }s_{n}z^{n}+18(1-3z)^{3}\sum _{n=0}^{\infty }ns_{n}z^{n}+9(1-3z)^{3}\sum _{n=0}^{\infty }n(n-1)s_{n}z^{n}+(1-3z)^{3}\sum _{n=0}^{\infty }n(n-1)(n-2)s_{n}z^{n}\right)\\[4px]&=(n+1)(n+2)(n+3)s_{n}-9n(n+1)(n+2)s_{n-1}+27(n-1)n(n+1)s_{n-2}-(n-2)(n-1)ns_{n-3}.\end{aligned}}} ==== Example 3: Generating functions for mutually recursive sequences ==== In this example, we reformulate a generating function example given in Section 7.3 of Concrete Mathematics (see also Section 7.1 of the same reference for pretty pictures of generating function series). In particular, suppose that we seek the total number of ways (denoted Un) to tile a 3-by-n rectangle with unmarked 2-by-1 domino pieces. Let the auxiliary sequence, Vn, be defined as the number of ways to cover a 3-by-n rectangle-minus-corner section of the full rectangle. We seek to use these definitions to give a closed form formula for Un without breaking down this definition further to handle the cases of vertical versus horizontal dominoes. Notice that the ordinary generating functions for our two sequences correspond to the series: U ( z ) = 1 + 3 z 2 + 11 z 4 + 41 z 6 + ⋯ , V ( z ) = z + 4 z 3 + 15 z 5 + 56 z 7 + ⋯ . {\displaystyle {\begin{aligned}U(z)=1+3z^{2}+11z^{4}+41z^{6}+\cdots ,\\V(z)=z+4z^{3}+15z^{5}+56z^{7}+\cdots .\end{aligned}}} If we consider the possible configurations that can be given starting from the left edge of the 3-by-n rectangle, we are able to express the following mutually dependent, or mutually recursive, recurrence relations for our two sequences when n ≥ 2 defined as above where U0 = 1, U1 = 0, V0 = 0, and V1 = 1: U n = 2 V n − 1 + U n − 2 V n = U n − 1 + V n − 2 . {\displaystyle {\begin{aligned}U_{n}&=2V_{n-1}+U_{n-2}\\V_{n}&=U_{n-1}+V_{n-2}.\end{aligned}}} Since we have that for all integers m ≥ 0, the index-shifted generating functions satisfy z m G ( z ) = ∑ n = m ∞ g n − m z n , {\displaystyle z^{m}G(z)=\sum _{n=m}^{\infty }g_{n-m}z^{n}\,,} we can use the initial conditions specified above and the previous two recurrence relations to see that we have the next two equations relating the generating functions for these sequences given by U ( z ) = 2 z V ( z ) + z 2 U ( z ) + 1 V ( z ) = z U ( z ) + z 2 V ( z ) = z 1 − z 2 U ( z ) , {\displaystyle {\begin{aligned}U(z)&=2zV(z)+z^{2}U(z)+1\\V(z)&=zU(z)+z^{2}V(z)={\frac {z}{1-z^{2}}}U(z),\end{aligned}}} which then implies by solving the system of equations (and this is the particular trick to our method here) that U ( z ) = 1 − z 2 1 − 4 z 2 + z 4 = 1 3 − 3 ⋅ 1 1 − ( 2 + 3 ) z 2 + 1 3 + 3 ⋅ 1 1 − ( 2 − 3 ) z 2 . 
{\displaystyle U(z)={\frac {1-z^{2}}{1-4z^{2}+z^{4}}}={\frac {1}{3-{\sqrt {3}}}}\cdot {\frac {1}{1-\left(2+{\sqrt {3}}\right)z^{2}}}+{\frac {1}{3+{\sqrt {3}}}}\cdot {\frac {1}{1-\left(2-{\sqrt {3}}\right)z^{2}}}.} Thus, by performing algebraic simplifications on the partial fraction expansion of the generating function in the previous equation, we find that U2n + 1 ≡ 0 and that U 2 n = ⌈ ( 2 + 3 ) n 3 − 3 ⌉ , {\displaystyle U_{2n}=\left\lceil {\frac {\left(2+{\sqrt {3}}\right)^{n}}{3-{\sqrt {3}}}}\right\rceil \,,} for all integers n ≥ 0. We also note that the same shifted generating function technique applied to the second-order recurrence for the Fibonacci numbers is the prototypical example of using generating functions to solve recurrence relations in one variable; this was already covered, or at least hinted at, in the subsection on rational functions given above. === Convolution (Cauchy products) === A discrete convolution of the terms in two formal power series turns a product of generating functions into a generating function enumerating a convolved sum of the original sequence terms (see Cauchy product). Suppose that A(z) and B(z) are ordinary generating functions. C ( z ) = A ( z ) B ( z ) ⇔ [ z n ] C ( z ) = ∑ k = 0 n a k b n − k {\displaystyle C(z)=A(z)B(z)\Leftrightarrow [z^{n}]C(z)=\sum _{k=0}^{n}{a_{k}b_{n-k}}} Suppose that A(z) and B(z) are exponential generating functions. C ( z ) = A ( z ) B ( z ) ⇔ [ z n n ! ] C ( z ) = ∑ k = 0 n ( n k ) a k b n − k {\displaystyle C(z)=A(z)B(z)\Leftrightarrow \left[{\frac {z^{n}}{n!}}\right]C(z)=\sum _{k=0}^{n}{\binom {n}{k}}a_{k}b_{n-k}} Consider the triply convolved sequence resulting from the product of three ordinary generating functions C ( z ) = F ( z ) G ( z ) H ( z ) ⇔ [ z n ] C ( z ) = ∑ j + k + l = n f j g k h l {\displaystyle C(z)=F(z)G(z)H(z)\Leftrightarrow [z^{n}]C(z)=\sum _{j+k+l=n}f_{j}g_{k}h_{l}} Consider the m-fold convolution of a sequence with itself for some positive integer m ≥ 1 (see the example below for an application) C ( z ) = G ( z ) m ⇔ [ z n ] C ( z ) = ∑ k 1 + k 2 + ⋯ + k m = n g k 1 g k 2 ⋯ g k m {\displaystyle C(z)=G(z)^{m}\Leftrightarrow [z^{n}]C(z)=\sum _{k_{1}+k_{2}+\cdots +k_{m}=n}g_{k_{1}}g_{k_{2}}\cdots g_{k_{m}}} Multiplication of generating functions, or convolution of their underlying sequences, can correspond to a notion of independent events in certain counting and probability scenarios. For example, if we adopt the notational convention that the probability generating function, or pgf, of a random variable Z is denoted by GZ(z), then we can show that for any two random variables G X + Y ( z ) = G X ( z ) G Y ( z ) , {\displaystyle G_{X+Y}(z)=G_{X}(z)G_{Y}(z)\,,} if X and Y are independent. ==== Example: The money-changing problem ==== The number of ways to pay n ≥ 0 cents in coin denominations of values in the set {1, 5, 10, 25, 50} (i.e., in pennies, nickels, dimes, quarters, and half dollars, respectively), where we distinguish instances based upon the total number of each coin but not upon the order in which the coins are presented, is given by the ordinary generating function 1 1 − z 1 1 − z 5 1 1 − z 10 1 1 − z 25 1 1 − z 50 . {\displaystyle {\frac {1}{1-z}}{\frac {1}{1-z^{5}}}{\frac {1}{1-z^{10}}}{\frac {1}{1-z^{25}}}{\frac {1}{1-z^{50}}}\,.} When we also distinguish based upon the order in which the coins are presented (e.g., one penny then one nickel is distinct from one nickel then one penny), the ordinary generating function is 1 1 − z − z 5 − z 10 − z 25 − z 50 .
{\displaystyle {\frac {1}{1-z-z^{5}-z^{10}-z^{25}-z^{50}}}\,.} If we allow the n cents to be paid in coins of any positive integer denomination, we arrive at the partition function ordinary generating function expanded by an infinite q-Pochhammer symbol product, ∏ n = 1 ∞ ( 1 − z n ) − 1 . {\displaystyle \prod _{n=1}^{\infty }\left(1-z^{n}\right)^{-1}\,.} When the order of the coins matters, the ordinary generating function is 1 1 − ∑ n = 1 ∞ z n = 1 − z 1 − 2 z . {\displaystyle {\frac {1}{1-\sum _{n=1}^{\infty }z^{n}}}={\frac {1-z}{1-2z}}\,.} ==== Example: Generating function for the Catalan numbers ==== An example where convolutions of generating functions are useful allows us to solve for a specific closed-form function representing the ordinary generating function for the Catalan numbers, Cn. In particular, this sequence has the combinatorial interpretation as being the number of ways to insert parentheses into the product x0 · x1 ·⋯· xn so that the order of multiplication is completely specified. For example, C2 = 2, which corresponds to the two expressions x0 · (x1 · x2) and (x0 · x1) · x2. It follows that the sequence satisfies a recurrence relation given by C n = ∑ k = 0 n − 1 C k C n − 1 − k + δ n , 0 = C 0 C n − 1 + C 1 C n − 2 + ⋯ + C n − 1 C 0 + δ n , 0 , n ≥ 0 , {\displaystyle C_{n}=\sum _{k=0}^{n-1}C_{k}C_{n-1-k}+\delta _{n,0}=C_{0}C_{n-1}+C_{1}C_{n-2}+\cdots +C_{n-1}C_{0}+\delta _{n,0}\,,\quad n\geq 0\,,} and so has a corresponding convolved generating function, C(z), satisfying C ( z ) = z ⋅ C ( z ) 2 + 1 . {\displaystyle C(z)=z\cdot C(z)^{2}+1\,.} Since C(0) = 1 must be finite, we take the root of this quadratic equation with the minus sign in front of the square root, and we then arrive at a formula for this generating function given by C ( z ) = 1 − 1 − 4 z 2 z = ∑ n = 0 ∞ 1 n + 1 ( 2 n n ) z n . {\displaystyle C(z)={\frac {1-{\sqrt {1-4z}}}{2z}}=\sum _{n=0}^{\infty }{\frac {1}{n+1}}{\binom {2n}{n}}z^{n}\,.} Note that the first equation implicitly defining C(z) above implies that C ( z ) = 1 1 − z ⋅ C ( z ) , {\displaystyle C(z)={\frac {1}{1-z\cdot C(z)}}\,,} which then leads to another "simple" (of form) continued fraction expansion of this generating function. ==== Example: Spanning trees of fans and convolutions of convolutions ==== A fan of order n is defined to be a graph on the vertices {0, 1, ..., n} with 2n − 1 edges connected according to the following rules: Vertex 0 is connected by a single edge to each of the other n vertices, and vertex k {\displaystyle k} is connected by a single edge to the next vertex k + 1 for all 1 ≤ k < n. A spanning tree is a subgraph of a graph which contains all of the original vertices and which contains enough edges to make this subgraph connected, but not so many edges that there is a cycle in the subgraph. We ask how many spanning trees fn of a fan of order n are possible for each n ≥ 1: there is one spanning tree of the fan of order one, three of the fan of order two, eight of the fan of order three, and so on. We may approach the question by counting the number of ways to break the path vertices 1, 2, ..., n into runs of adjacent vertices, since a run of k consecutive vertices can be joined to vertex 0 through any one of its k spokes. For example, when n = 4, we have that f4 = 4 + 3 · 1 + 2 · 2 + 1 · 3 + 2 · 1 · 1 + 1 · 2 · 1 + 1 · 1 · 2 + 1 · 1 · 1 · 1 = 21, which is a sum over the m-fold convolutions of the sequence gn = n = [zn] ⁠z/(1 − z)2⁠ for m ≔ 1, 2, 3, 4. 
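Before generalizing, this computation is easy to check mechanically. The following minimal sketch (in Python; an illustration added here, not part of the original text) sums the m-fold convolutions of the sequence gk = k and reproduces the spanning-tree counts quoted above:

```python
def convolve(a, b, n):
    """Coefficients 0..n of the product of two truncated power series."""
    return [sum(a[j] * b[k - j] for j in range(k + 1)) for k in range(n + 1)]

def fan_spanning_trees(n):
    g = list(range(n + 1))      # g_k = k = [z^k] z/(1-z)^2
    total, power = 0, g[:]      # `power` holds the coefficients of G(z)^m
    for m in range(1, n + 1):   # folds with m > n cannot contribute to z^n
        total += power[n]       # add the m-fold convolution at z^n
        power = convolve(power, g, n)
    return total

# One, three, eight, and twenty-one spanning trees for orders 1..4.
assert [fan_spanning_trees(n) for n in (1, 2, 3, 4)] == [1, 3, 8, 21]
```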
More generally, we may write a formula for this sequence as f n = ∑ m > 0 ∑ k 1 + k 2 + ⋯ + k m = n k 1 , k 2 , … , k m > 0 g k 1 g k 2 ⋯ g k m , {\displaystyle f_{n}=\sum _{m>0}\sum _{\scriptstyle k_{1}+k_{2}+\cdots +k_{m}=n \atop \scriptstyle k_{1},k_{2},\ldots ,k_{m}>0}g_{k_{1}}g_{k_{2}}\cdots g_{k_{m}}\,,} from which we see that the ordinary generating function for this sequence is given by the next sum of convolutions as F ( z ) = G ( z ) + G ( z ) 2 + G ( z ) 3 + ⋯ = G ( z ) 1 − G ( z ) = z ( 1 − z ) 2 − z = z 1 − 3 z + z 2 , {\displaystyle F(z)=G(z)+G(z)^{2}+G(z)^{3}+\cdots ={\frac {G(z)}{1-G(z)}}={\frac {z}{(1-z)^{2}-z}}={\frac {z}{1-3z+z^{2}}}\,,} from which we are able to extract an exact formula for the sequence by taking the partial fraction expansion of the last generating function. === Implicit generating functions and the Lagrange inversion formula === One often encounters generating functions specified by a functional equation, instead of an explicit specification. For example, the generating function T(z) for the number of binary trees on n nodes (leaves included) satisfies T ( z ) = z ( 1 + T ( z ) 2 ) {\displaystyle T(z)=z\left(1+T(z)^{2}\right)} The Lagrange inversion theorem is a tool used to explicitly evaluate solutions to such equations. Applying the above theorem to our functional equation yields (with ϕ ( z ) = 1 + z 2 {\textstyle \phi (z)=1+z^{2}} ): [ z n ] T ( z ) = [ z n − 1 ] 1 n ( 1 + z 2 ) n {\displaystyle [z^{n}]T(z)=[z^{n-1}]{\frac {1}{n}}(1+z^{2})^{n}} Via the binomial theorem expansion, for even n {\displaystyle n} , the formula returns 0 {\displaystyle 0} . This is expected, as one can prove that the number of leaves of a binary tree is one more than the number of its internal nodes, so the total number of nodes is always odd. For odd n {\displaystyle n} , however, we get [ z n − 1 ] 1 n ( 1 + z 2 ) n = 1 n ( n n + 1 2 ) {\displaystyle [z^{n-1}]{\frac {1}{n}}(1+z^{2})^{n}={\frac {1}{n}}{\dbinom {n}{\frac {n+1}{2}}}} The expression becomes much neater if we write n = 2m + 1, where m {\displaystyle m} is the number of internal nodes: the expression then becomes 1 2 m + 1 ( 2 m + 1 m + 1 ) = 1 m + 1 ( 2 m m ) , {\displaystyle {\frac {1}{2m+1}}{\binom {2m+1}{m+1}}={\frac {1}{m+1}}{\binom {2m}{m}}\,,} the m {\displaystyle m} th Catalan number. === Introducing a free parameter (snake oil method) === Sometimes the sum sn is complicated, and it is not always easy to evaluate. The "Free Parameter" method is another method (called "snake oil" by H. Wilf) to evaluate these sums. Both methods discussed so far have n as limit in the summation. When n does not appear explicitly in the summation, we may consider n as a "free" parameter and treat sn as a coefficient of F(z) = Σ sn zn, change the order of the summations on n and k, and try to compute the inner sum. For example, if we want to compute s n = ∑ k = 0 ∞ ( n + k m + 2 k ) ( 2 k k ) ( − 1 ) k k + 1 , m , n ∈ N 0 , {\displaystyle s_{n}=\sum _{k=0}^{\infty }{{\binom {n+k}{m+2k}}{\binom {2k}{k}}{\frac {(-1)^{k}}{k+1}}}\,,\quad m,n\in \mathbb {N} _{0}\,,} we can treat n as a "free" parameter, and set F ( z ) = ∑ n = 0 ∞ ( ∑ k = 0 ∞ ( n + k m + 2 k ) ( 2 k k ) ( − 1 ) k k + 1 ) z n . {\displaystyle F(z)=\sum _{n=0}^{\infty }{\left(\sum _{k=0}^{\infty }{{\binom {n+k}{m+2k}}{\binom {2k}{k}}{\frac {(-1)^{k}}{k+1}}}\right)}z^{n}\,.} Interchanging summation ("snake oil") gives F ( z ) = ∑ k = 0 ∞ ( 2 k k ) ( − 1 ) k k + 1 z − k ∑ n = 0 ∞ ( n + k m + 2 k ) z n + k . {\displaystyle F(z)=\sum _{k=0}^{\infty }{{\binom {2k}{k}}{\frac {(-1)^{k}}{k+1}}z^{-k}}\sum _{n=0}^{\infty }{{\binom {n+k}{m+2k}}z^{n+k}}\,.} Now the inner sum is ⁠zm + 2k/(1 − z)m + 2k + 1⁠. 
Thus F ( z ) = z m ( 1 − z ) m + 1 ∑ k = 0 ∞ 1 k + 1 ( 2 k k ) ( − z ( 1 − z ) 2 ) k = z m ( 1 − z ) m + 1 ∑ k = 0 ∞ C k ( − z ( 1 − z ) 2 ) k where C k = k th Catalan number = z m ( 1 − z ) m + 1 1 − 1 + 4 z ( 1 − z ) 2 − 2 z ( 1 − z ) 2 = − z m − 1 2 ( 1 − z ) m − 1 ( 1 − 1 + z 1 − z ) = z m ( 1 − z ) m = z z m − 1 ( 1 − z ) m . {\displaystyle {\begin{aligned}F(z)&={\frac {z^{m}}{(1-z)^{m+1}}}\sum _{k=0}^{\infty }{{\frac {1}{k+1}}{\binom {2k}{k}}\left({\frac {-z}{(1-z)^{2}}}\right)^{k}}\\[4px]&={\frac {z^{m}}{(1-z)^{m+1}}}\sum _{k=0}^{\infty }{C_{k}\left({\frac {-z}{(1-z)^{2}}}\right)^{k}}&{\text{where }}C_{k}=k{\text{th Catalan number}}\\[4px]&={\frac {z^{m}}{(1-z)^{m+1}}}{\frac {1-{\sqrt {1+{\frac {4z}{(1-z)^{2}}}}}}{\frac {-2z}{(1-z)^{2}}}}\\[4px]&={\frac {-z^{m-1}}{2(1-z)^{m-1}}}\left(1-{\frac {1+z}{1-z}}\right)\\[4px]&={\frac {z^{m}}{(1-z)^{m}}}=z{\frac {z^{m-1}}{(1-z)^{m}}}\,.\end{aligned}}} Then we obtain s n = { ( n − 1 m − 1 ) for m ≥ 1 , [ n = 0 ] for m = 0 . {\displaystyle s_{n}={\begin{cases}\displaystyle {\binom {n-1}{m-1}}&{\text{for }}m\geq 1\,,\\{}[n=0]&{\text{for }}m=0\,.\end{cases}}} It is instructive to use the same method again for the sum, but this time take m as the free parameter instead of n. We thus set G ( z ) = ∑ m = 0 ∞ ( ∑ k = 0 ∞ ( n + k m + 2 k ) ( 2 k k ) ( − 1 ) k k + 1 ) z m . {\displaystyle G(z)=\sum _{m=0}^{\infty }\left(\sum _{k=0}^{\infty }{\binom {n+k}{m+2k}}{\binom {2k}{k}}{\frac {(-1)^{k}}{k+1}}\right)z^{m}\,.} Interchanging summation ("snake oil") gives G ( z ) = ∑ k = 0 ∞ ( 2 k k ) ( − 1 ) k k + 1 z − 2 k ∑ m = 0 ∞ ( n + k m + 2 k ) z m + 2 k . {\displaystyle G(z)=\sum _{k=0}^{\infty }{\binom {2k}{k}}{\frac {(-1)^{k}}{k+1}}z^{-2k}\sum _{m=0}^{\infty }{\binom {n+k}{m+2k}}z^{m+2k}\,.} Now the inner sum is (1 + z)n + k. Thus G ( z ) = ( 1 + z ) n ∑ k = 0 ∞ 1 k + 1 ( 2 k k ) ( − ( 1 + z ) z 2 ) k = ( 1 + z ) n ∑ k = 0 ∞ C k ( − ( 1 + z ) z 2 ) k where C k = k th Catalan number = ( 1 + z ) n 1 − 1 + 4 ( 1 + z ) z 2 − 2 ( 1 + z ) z 2 = ( 1 + z ) n z 2 − z z 2 + 4 + 4 z − 2 ( 1 + z ) = ( 1 + z ) n z 2 − z ( z + 2 ) − 2 ( 1 + z ) = ( 1 + z ) n − 2 z − 2 ( 1 + z ) = z ( 1 + z ) n − 1 . {\displaystyle {\begin{aligned}G(z)&=(1+z)^{n}\sum _{k=0}^{\infty }{\frac {1}{k+1}}{\binom {2k}{k}}\left({\frac {-(1+z)}{z^{2}}}\right)^{k}\\[4px]&=(1+z)^{n}\sum _{k=0}^{\infty }C_{k}\,\left({\frac {-(1+z)}{z^{2}}}\right)^{k}&{\text{where }}C_{k}=k{\text{th Catalan number}}\\[4px]&=(1+z)^{n}\,{\frac {1-{\sqrt {1+{\frac {4(1+z)}{z^{2}}}}}}{\frac {-2(1+z)}{z^{2}}}}\\[4px]&=(1+z)^{n}\,{\frac {z^{2}-z{\sqrt {z^{2}+4+4z}}}{-2(1+z)}}\\[4px]&=(1+z)^{n}\,{\frac {z^{2}-z(z+2)}{-2(1+z)}}\\[4px]&=(1+z)^{n}\,{\frac {-2z}{-2(1+z)}}=z(1+z)^{n-1}\,.\end{aligned}}} Thus we obtain s n = [ z m ] z ( 1 + z ) n − 1 = [ z m − 1 ] ( 1 + z ) n − 1 = ( n − 1 m − 1 ) , {\displaystyle s_{n}=\left[z^{m}\right]z(1+z)^{n-1}=\left[z^{m-1}\right](1+z)^{n-1}={\binom {n-1}{m-1}}\,,} for m ≥ 1 as before. === Generating functions prove congruences === We say that two generating functions (power series) are congruent modulo m, written A(z) ≡ B(z) (mod m), if their coefficients are congruent modulo m, i.e., an ≡ bn (mod m) for all n ≥ 0 (note that we need not assume that m is an integer here; it may very well be polynomial-valued in some indeterminate x, for example). 
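Because this definition is purely coefficient-wise, it is easy to test by machine. Here is a minimal sketch (in Python; an illustration added here, not part of the original text) that checks the congruence (1 + z)^5 ≡ 1 + z^5 (mod 5), an instance of the classical fact that the binomial coefficient C(p, k) is divisible by a prime p for 0 < k < p:

```python
from math import comb

def series_congruent(a, b, m):
    """a, b: truncated coefficient lists of equal length; m: the modulus."""
    return all((x - y) % m == 0 for x, y in zip(a, b))

a = [comb(5, k) for k in range(6)]   # coefficients of (1 + z)^5
b = [1, 0, 0, 0, 0, 1]               # coefficients of 1 + z^5
assert series_congruent(a, b, 5)
```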
If the "simpler" right-hand-side generating function, B(z), is a rational function of z, then the form of this congruence suggests that the sequence is eventually periodic modulo any fixed integer m ≥ 2. For example, we can prove that the Euler numbers, ⟨ E n ⟩ = ⟨ 1 , 1 , 5 , 61 , 1385 , … ⟩ ⟼ ⟨ 1 , 1 , 2 , 1 , 2 , 1 , 2 , … ⟩ ( mod 3 ) , {\displaystyle \langle E_{n}\rangle =\langle 1,1,5,61,1385,\ldots \rangle \longmapsto \langle 1,1,2,1,2,1,2,\ldots \rangle {\pmod {3}}\,,} satisfy the following congruence modulo 3: ∑ n = 0 ∞ E n z n ≡ 1 − z 1 + z ( mod 3 ) . {\displaystyle \sum _{n=0}^{\infty }E_{n}z^{n}\equiv {\frac {1-z}{1+z}}{\pmod {3}}\,.} (Indeed, (1 − z)/(1 + z) = 1 − 2z + 2z2 − 2z3 + ⋯, whose coefficients reduce to 1, 1, 2, 1, 2, … modulo 3.) One useful method of obtaining congruences for sequences enumerated by special generating functions modulo any integers (i.e., not only prime powers pk) is given in the section on continued fraction representations of (even non-convergent) ordinary generating functions by J-fractions above. One particular result of this kind, for generating series expanded through a representation by continued fraction, can be found in Lando's Lectures on Generating Functions. Generating functions also have other uses in proving congruences for their coefficients. We cite the next two specific examples deriving special case congruences for the Stirling numbers of the first kind and for the partition function p(n) which show the versatility of generating functions in tackling problems involving integer sequences. ==== The Stirling numbers modulo small integers ==== The main article on the Stirling numbers generated by the finite products S n ( x ) := ∑ k = 0 n [ n k ] x k = x ( x + 1 ) ( x + 2 ) ⋯ ( x + n − 1 ) , n ≥ 1 , {\displaystyle S_{n}(x):=\sum _{k=0}^{n}{\begin{bmatrix}n\\k\end{bmatrix}}x^{k}=x(x+1)(x+2)\cdots (x+n-1)\,,\quad n\geq 1\,,} provides an overview of the congruences for these numbers derived strictly from properties of their generating function, as in Section 4.6 of Wilf's stock reference Generatingfunctionology. We repeat the basic argument and notice that when reduced modulo 2, these finite product generating functions each satisfy S n ( x ) = [ x ( x + 1 ) ] ⋅ [ x ( x + 1 ) ] ⋯ = x ⌈ n 2 ⌉ ( x + 1 ) ⌊ n 2 ⌋ , {\displaystyle S_{n}(x)=[x(x+1)]\cdot [x(x+1)]\cdots =x^{\left\lceil {\frac {n}{2}}\right\rceil }(x+1)^{\left\lfloor {\frac {n}{2}}\right\rfloor }\,,} which implies that the parity of these Stirling numbers matches that of the binomial coefficient [ n k ] ≡ ( ⌊ n 2 ⌋ k − ⌈ n 2 ⌉ ) ( mod 2 ) , {\displaystyle {\begin{bmatrix}n\\k\end{bmatrix}}\equiv {\binom {\left\lfloor {\frac {n}{2}}\right\rfloor }{k-\left\lceil {\frac {n}{2}}\right\rceil }}{\pmod {2}}\,,} and consequently shows that [nk] is even whenever k < ⌈ ⁠n/2⁠ ⌉. Similarly, we can reduce the right-hand-side products defining the Stirling number generating functions modulo 3 to obtain slightly more complicated expressions showing that [ n m ] ≡ [ x m ] ( x ⌈ n 3 ⌉ ( x + 1 ) ⌈ n − 1 3 ⌉ ( x + 2 ) ⌊ n 3 ⌋ ) ( mod 3 ) ≡ ∑ k = 0 m ( ⌈ n − 1 3 ⌉ k ) ( ⌊ n 3 ⌋ m − k − ⌈ n 3 ⌉ ) × 2 ⌈ n 3 ⌉ + ⌊ n 3 ⌋ − ( m − k ) ( mod 3 ) . 
{\displaystyle {\begin{aligned}{\begin{bmatrix}n\\m\end{bmatrix}}&\equiv [x^{m}]\left(x^{\left\lceil {\frac {n}{3}}\right\rceil }(x+1)^{\left\lceil {\frac {n-1}{3}}\right\rceil }(x+2)^{\left\lfloor {\frac {n}{3}}\right\rfloor }\right)&&{\pmod {3}}\\&\equiv \sum _{k=0}^{m}{\begin{pmatrix}\left\lceil {\frac {n-1}{3}}\right\rceil \\k\end{pmatrix}}{\begin{pmatrix}\left\lfloor {\frac {n}{3}}\right\rfloor \\m-k-\left\lceil {\frac {n}{3}}\right\rceil \end{pmatrix}}\times 2^{\left\lceil {\frac {n}{3}}\right\rceil +\left\lfloor {\frac {n}{3}}\right\rfloor -(m-k)}&&{\pmod {3}}\,.\end{aligned}}} ==== Congruences for the partition function ==== In this example, we pull in some of the machinery of infinite products whose power series expansions generate the expansions of many special functions and enumerate partition functions. In particular, we recall that the partition function p(n) is generated by the reciprocal infinite q-Pochhammer symbol product (or z-Pochhammer product as the case may be) given by ∑ n = 0 ∞ p ( n ) z n = 1 ( 1 − z ) ( 1 − z 2 ) ( 1 − z 3 ) ⋯ = 1 + z + 2 z 2 + 3 z 3 + 5 z 4 + 7 z 5 + 11 z 6 + ⋯ . {\displaystyle {\begin{aligned}\sum _{n=0}^{\infty }p(n)z^{n}&={\frac {1}{\left(1-z\right)\left(1-z^{2}\right)\left(1-z^{3}\right)\cdots }}\\[4pt]&=1+z+2z^{2}+3z^{3}+5z^{4}+7z^{5}+11z^{6}+\cdots .\end{aligned}}} This partition function satisfies many known congruence properties, which notably include the following results though there are still many open questions about the forms of related integer congruences for the function: p ( 5 m + 4 ) ≡ 0 ( mod 5 ) p ( 7 m + 5 ) ≡ 0 ( mod 7 ) p ( 11 m + 6 ) ≡ 0 ( mod 11 ) p ( 25 m + 24 ) ≡ 0 ( mod 5 2 ) . {\displaystyle {\begin{aligned}p(5m+4)&\equiv 0{\pmod {5}}\\p(7m+5)&\equiv 0{\pmod {7}}\\p(11m+6)&\equiv 0{\pmod {11}}\\p(25m+24)&\equiv 0{\pmod {5^{2}}}\,.\end{aligned}}} We show how to use generating functions and manipulations of congruences for formal power series to give a highly elementary proof of the first of these congruences listed above. First, we observe that in the binomial coefficient generating function 1 ( 1 − z ) 5 = ∑ i = 0 ∞ ( 4 + i 4 ) z i , {\displaystyle {\frac {1}{(1-z)^{5}}}=\sum _{i=0}^{\infty }{\binom {4+i}{4}}z^{i}\,,} all of the coefficients are divisible by 5 except for those which correspond to the powers 1, z5, z10, ... and moreover in those cases the remainder of the coefficient is 1 modulo 5. Thus, 1 ( 1 − z ) 5 ≡ 1 1 − z 5 ( mod 5 ) , {\displaystyle {\frac {1}{(1-z)^{5}}}\equiv {\frac {1}{1-z^{5}}}{\pmod {5}}\,,} or equivalently 1 − z 5 ( 1 − z ) 5 ≡ 1 ( mod 5 ) . {\displaystyle {\frac {1-z^{5}}{(1-z)^{5}}}\equiv 1{\pmod {5}}\,.} It follows that ( 1 − z 5 ) ( 1 − z 10 ) ( 1 − z 15 ) ⋯ ( ( 1 − z ) ( 1 − z 2 ) ( 1 − z 3 ) ⋯ ) 5 ≡ 1 ( mod 5 ) . 
{\displaystyle {\frac {\left(1-z^{5}\right)\left(1-z^{10}\right)\left(1-z^{15}\right)\cdots }{\left((1-z)\left(1-z^{2}\right)\left(1-z^{3}\right)\cdots \right)^{5}}}\equiv 1{\pmod {5}}\,.} Using the infinite product expansions of z ⋅ ( 1 − z 5 ) ( 1 − z 10 ) ⋯ ( 1 − z ) ( 1 − z 2 ) ⋯ = z ⋅ ( ( 1 − z ) ( 1 − z 2 ) ⋯ ) 4 × ( 1 − z 5 ) ( 1 − z 10 ) ⋯ ( ( 1 − z ) ( 1 − z 2 ) ⋯ ) 5 , {\displaystyle z\cdot {\frac {\left(1-z^{5}\right)\left(1-z^{10}\right)\cdots }{\left(1-z\right)\left(1-z^{2}\right)\cdots }}=z\cdot \left((1-z)\left(1-z^{2}\right)\cdots \right)^{4}\times {\frac {\left(1-z^{5}\right)\left(1-z^{10}\right)\cdots }{\left(\left(1-z\right)\left(1-z^{2}\right)\cdots \right)^{5}}}\,,} it can be shown that the coefficient of z5m + 5 in z · ((1 − z)(1 − z2)⋯)4 is divisible by 5 for all m. Finally, since ∑ n = 1 ∞ p ( n − 1 ) z n = z ( 1 − z ) ( 1 − z 2 ) ⋯ = z ⋅ ( 1 − z 5 ) ( 1 − z 10 ) ⋯ ( 1 − z ) ( 1 − z 2 ) ⋯ × ( 1 + z 5 + z 10 + ⋯ ) ( 1 + z 10 + z 20 + ⋯ ) ⋯ {\displaystyle {\begin{aligned}\sum _{n=1}^{\infty }p(n-1)z^{n}&={\frac {z}{(1-z)\left(1-z^{2}\right)\cdots }}\\[6px]&=z\cdot {\frac {\left(1-z^{5}\right)\left(1-z^{10}\right)\cdots }{(1-z)\left(1-z^{2}\right)\cdots }}\times \left(1+z^{5}+z^{10}+\cdots \right)\left(1+z^{10}+z^{20}+\cdots \right)\cdots \end{aligned}}} we may equate the coefficients of z5m + 5 in the previous equations to prove our desired congruence result, namely that p(5m + 4) ≡ 0 (mod 5) for all m ≥ 0. === Transformations of generating functions === There are a number of transformations of generating functions that provide other applications (see the main article). A transformation of a sequence's ordinary generating function (OGF) provides a method of converting the generating function for one sequence into a generating function enumerating another. These transformations typically involve integral formulas involving a sequence OGF (see integral transformations) or weighted sums over the higher-order derivatives of these functions (see derivative transformations). Generating function transformations can come into play when we seek to express a generating function for the sums s n := ∑ m = 0 n ( n m ) C n , m a m , {\displaystyle s_{n}:=\sum _{m=0}^{n}{\binom {n}{m}}C_{n,m}a_{m},} in the form of S(z) = g(z) A(f(z)) involving the original sequence generating function. For example, if the sums are s n := ∑ k = 0 ∞ ( n + k m + 2 k ) a k {\displaystyle s_{n}:=\sum _{k=0}^{\infty }{\binom {n+k}{m+2k}}a_{k}\,} then the generating function for the modified sum expressions is given by S ( z ) = z m ( 1 − z ) m + 1 A ( z ( 1 − z ) 2 ) {\displaystyle S(z)={\frac {z^{m}}{(1-z)^{m+1}}}A\left({\frac {z}{(1-z)^{2}}}\right)} (see also the binomial transform and the Stirling transform). There are also integral formulas for converting between a sequence's OGF, F(z), and its exponential generating function, or EGF, F̂(z), and vice versa given by F ( z ) = ∫ 0 ∞ F ^ ( t z ) e − t d t , F ^ ( z ) = 1 2 π ∫ − π π F ( z e − i ϑ ) e e i ϑ d ϑ , {\displaystyle {\begin{aligned}F(z)&=\int _{0}^{\infty }{\hat {F}}(tz)e^{-t}\,dt\,,\\[4px]{\hat {F}}(z)&={\frac {1}{2\pi }}\int _{-\pi }^{\pi }F\left(ze^{-i\vartheta }\right)e^{e^{i\vartheta }}\,d\vartheta \,,\end{aligned}}} provided that these integrals converge for appropriate values of z. == Tables of special generating functions == An initial listing of special mathematical series can be found in the article List of mathematical series. 
A number of useful and special sequence generating functions are found in Sections 5.4 and 7.4 of Concrete Mathematics and in Section 2.5 of Wilf's Generatingfunctionology. Other special generating functions of note include the entries in the next table, which is by no means complete. == See also == Moment-generating function Probability-generating function Generating function transformation Stanley's reciprocity theorem Integer partition Combinatorial principles Cyclic sieving Z-transform Umbral calculus Coins in a fountain == Notes == == References == === Citations === Aigner, Martin (2007). A Course in Enumeration. Graduate Texts in Mathematics. Vol. 238. Springer. ISBN 978-3-540-39035-0. Doubilet, Peter; Rota, Gian-Carlo; Stanley, Richard (1972). "On the foundations of combinatorial theory. VI. The idea of generating function". Proceedings of the Sixth Berkeley Symposium on Mathematical Statistics and Probability. 2: 267–318. Zbl 0267.05002. Reprinted in Rota, Gian-Carlo (1975). "3. The idea of generating function". Finite Operator Calculus. With the collaboration of P. Doubilet, C. Greene, D. Kahaner, A. Odlyzko and R. Stanley. Academic Press. pp. 83–134. ISBN 0-12-596650-4. Zbl 0328.05007. Flajolet, Philippe; Sedgewick, Robert (2009). Analytic Combinatorics. Cambridge University Press. ISBN 978-0-521-89806-5. Zbl 1165.05001. Goulden, Ian P.; Jackson, David M. (2004). Combinatorial Enumeration. Dover Publications. ISBN 978-0486435978. Graham, Ronald L.; Knuth, Donald E.; Patashnik, Oren (1994). "Chapter 7: Generating Functions". Concrete Mathematics. A foundation for computer science (2nd ed.). Addison-Wesley. pp. 320–380. ISBN 0-201-55802-5. Zbl 0836.00001. Lando, Sergei K. (2003). Lectures on Generating Functions. American Mathematical Society. ISBN 978-0-8218-3481-7. Wilf, Herbert S. (1994). Generatingfunctionology (2nd ed.). Academic Press. ISBN 0-12-751956-4. Zbl 0831.05001. == External links == "Introduction To Ordinary Generating Functions" by Mike Zabrocki, York University, Mathematics and Statistics "Generating function", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Generating Functions, Power Indices and Coin Change at cut-the-knot "Generating Functions" by Ed Pegg Jr., Wolfram Demonstrations Project, 2007.
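As a coda to the congruence examples in this article, the Ramanujan congruence p(5m + 4) ≡ 0 (mod 5) proven above is easy to spot-check by expanding the partition generating function directly. A minimal sketch (in Python; an illustration added here, not part of the original article):

```python
# Expand 1 / prod_{n>=1} (1 - z^n) up to z^N by multiplying in one
# geometric factor 1/(1 - z^n) at a time (an unbounded coin-change DP).
N = 60
p = [1] + [0] * N
for n in range(1, N + 1):
    for k in range(n, N + 1):
        p[k] += p[k - n]

assert p[:7] == [1, 1, 2, 3, 5, 7, 11]                 # matches the series above
assert all(p[5 * m + 4] % 5 == 0 for m in range(12))   # p(5m + 4) ≡ 0 (mod 5)
```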
Wikipedia/Generating_functions
In mathematics and signal processing, the advanced z-transform is an extension of the z-transform that incorporates ideal delays which are not multiples of the sampling time. The advanced z-transform is widely applied, for example, to accurately model processing delays in digital control. It is also known as the modified z-transform. It takes the form F ( z , m ) = ∑ k = 0 ∞ f ( k T + m ) z − k {\displaystyle F(z,m)=\sum _{k=0}^{\infty }f(kT+m)z^{-k}} where T is the sampling period, and m (the "delay parameter") is a fixed delay lying in the interval [ 0 , T ] . {\displaystyle [0,T].} == Properties == If the delay parameter, m, is considered fixed then all the properties of the z-transform hold for the advanced z-transform. === Linearity === Z { ∑ k = 1 n c k f k ( t ) } = ∑ k = 1 n c k F k ( z , m ) . {\displaystyle {\mathcal {Z}}\left\{\sum _{k=1}^{n}c_{k}f_{k}(t)\right\}=\sum _{k=1}^{n}c_{k}F_{k}(z,m).} === Time shift === Z { u ( t − n T ) f ( t − n T ) } = z − n F ( z , m ) . {\displaystyle {\mathcal {Z}}\left\{u(t-nT)f(t-nT)\right\}=z^{-n}F(z,m).} === Damping === Z { f ( t ) e − a t } = e − a m F ( e a T z , m ) . {\displaystyle {\mathcal {Z}}\left\{f(t)e^{-a\,t}\right\}=e^{-a\,m}F(e^{a\,T}z,m).} === Time multiplication === Z { t y f ( t ) } = ( − T z d d z + m ) y F ( z , m ) . {\displaystyle {\mathcal {Z}}\left\{t^{y}f(t)\right\}=\left(-Tz{\frac {d}{dz}}+m\right)^{y}F(z,m).} === Final value theorem === lim k → ∞ f ( k T + m ) = lim z → 1 ( 1 − z − 1 ) F ( z , m ) . {\displaystyle \lim _{k\to \infty }f(kT+m)=\lim _{z\to 1}(1-z^{-1})F(z,m).} == Example == Consider the following example where f ( t ) = cos ⁡ ( ω t ) {\displaystyle f(t)=\cos(\omega t)} : F ( z , m ) = Z { cos ⁡ ( ω ( k T + m ) ) } = Z { cos ⁡ ( ω k T ) cos ⁡ ( ω m ) − sin ⁡ ( ω k T ) sin ⁡ ( ω m ) } = cos ⁡ ( ω m ) Z { cos ⁡ ( ω k T ) } − sin ⁡ ( ω m ) Z { sin ⁡ ( ω k T ) } = cos ⁡ ( ω m ) z ( z − cos ⁡ ( ω T ) ) z 2 − 2 z cos ⁡ ( ω T ) + 1 − sin ⁡ ( ω m ) z sin ⁡ ( ω T ) z 2 − 2 z cos ⁡ ( ω T ) + 1 = z 2 cos ⁡ ( ω m ) − z cos ⁡ ( ω ( T − m ) ) z 2 − 2 z cos ⁡ ( ω T ) + 1 . {\displaystyle {\begin{aligned}F(z,m)&={\mathcal {Z}}\left\{\cos \left(\omega \left(kT+m\right)\right)\right\}\\&={\mathcal {Z}}\left\{\cos(\omega kT)\cos(\omega m)-\sin(\omega kT)\sin(\omega m)\right\}\\&=\cos(\omega m){\mathcal {Z}}\left\{\cos(\omega kT)\right\}-\sin(\omega m){\mathcal {Z}}\left\{\sin(\omega kT)\right\}\\&=\cos(\omega m){\frac {z\left(z-\cos(\omega T)\right)}{z^{2}-2z\cos(\omega T)+1}}-\sin(\omega m){\frac {z\sin(\omega T)}{z^{2}-2z\cos(\omega T)+1}}\\&={\frac {z^{2}\cos(\omega m)-z\cos(\omega (T-m))}{z^{2}-2z\cos(\omega T)+1}}.\end{aligned}}} If m = 0 {\displaystyle m=0} then F ( z , m ) {\displaystyle F(z,m)} reduces to the transform F ( z , 0 ) = z 2 − z cos ⁡ ( ω T ) z 2 − 2 z cos ⁡ ( ω T ) + 1 , {\displaystyle F(z,0)={\frac {z^{2}-z\cos(\omega T)}{z^{2}-2z\cos(\omega T)+1}},} which is clearly just the z-transform of f ( t ) {\displaystyle f(t)} . == References == Jury, Eliahu Ibraham (1973). Theory and Application of the z-Transform Method. Krieger. ISBN 0-88275-122-0. OCLC 836240.
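The worked example above is easy to spot-check numerically by truncating the defining series for a point with |z| > 1, where the sum converges. A minimal sketch (in Python; an illustration added here, not part of the original article):

```python
import math

def advanced_z_numeric(f, z, m, T, terms=2000):
    """Truncated defining series F(z, m) = sum_k f(kT + m) z^{-k}."""
    return sum(f(k * T + m) * z ** (-k) for k in range(terms))

T, m, omega, z = 0.1, 0.03, 2.0, 1.5   # |z| > 1 so the series converges

closed = (z**2 * math.cos(omega * m) - z * math.cos(omega * (T - m))) / \
         (z**2 - 2 * z * math.cos(omega * T) + 1)

numeric = advanced_z_numeric(lambda t: math.cos(omega * t), z, m, T)
assert abs(numeric - closed) < 1e-9
```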
Wikipedia/Advanced_Z-transform
The chirp Z-transform (CZT) is a generalization of the discrete Fourier transform (DFT). While the DFT samples the Z plane at uniformly-spaced points along the unit circle, the chirp Z-transform samples along spiral arcs in the Z-plane, corresponding to straight lines in the S plane. The DFT, real DFT, and zoom DFT can be calculated as special cases of the CZT. Specifically, the chirp Z transform calculates the Z transform at a finite number of points zk along a logarithmic spiral contour, defined as: X k = ∑ n = 0 N − 1 x ( n ) z k − n {\displaystyle X_{k}=\sum _{n=0}^{N-1}x(n)z_{k}^{-n}} z k = A ⋅ W − k , k = 0 , 1 , … , M − 1 {\displaystyle z_{k}=A\cdot W^{-k},k=0,1,\dots ,M-1} where A is the complex starting point, W is the complex ratio between points, and M is the number of points to calculate. Like the DFT, the chirp Z-transform can be computed in O(n log n) operations where n = max ( M , N ) {\displaystyle n=\max(M,N)} . An O(N log N) algorithm for the inverse chirp Z-transform (ICZT) was described in 2003, and again in 2019. == Bluestein's algorithm == Bluestein's algorithm expresses the CZT as a convolution and implements it efficiently using FFT/IFFT. As the DFT is a special case of the CZT, this allows the efficient calculation of discrete Fourier transform (DFT) of arbitrary sizes, including prime sizes. (The other algorithm for FFTs of prime sizes, Rader's algorithm, also works by rewriting the DFT as a convolution.) It was conceived in 1968 by Leo Bluestein. Bluestein's algorithm can be used to compute more general transforms than the DFT, based on the (unilateral) z-transform (Rabiner et al., 1969). Recall that the DFT is defined by the formula X k = ∑ n = 0 N − 1 x n e − 2 π i N n k k = 0 , … , N − 1. {\displaystyle X_{k}=\sum _{n=0}^{N-1}x_{n}e^{-{\frac {2\pi i}{N}}nk}\qquad k=0,\dots ,N-1.} If we replace the product nk in the exponent by the identity n k = − ( k − n ) 2 2 + n 2 2 + k 2 2 {\displaystyle nk={\frac {-(k-n)^{2}}{2}}+{\frac {n^{2}}{2}}+{\frac {k^{2}}{2}}} we thus obtain: X k = e − π i N k 2 ∑ n = 0 N − 1 ( x n e − π i N n 2 ) e π i N ( k − n ) 2 k = 0 , … , N − 1. {\displaystyle X_{k}=e^{-{\frac {\pi i}{N}}k^{2}}\sum _{n=0}^{N-1}\left(x_{n}e^{-{\frac {\pi i}{N}}n^{2}}\right)e^{{\frac {\pi i}{N}}(k-n)^{2}}\qquad k=0,\dots ,N-1.} This summation is precisely a convolution of the two sequences an and bn defined by: a n = x n e − π i N n 2 {\displaystyle a_{n}=x_{n}e^{-{\frac {\pi i}{N}}n^{2}}} b n = e π i N n 2 , {\displaystyle b_{n}=e^{{\frac {\pi i}{N}}n^{2}},} with the output of the convolution multiplied by N phase factors bk*. That is: X k = b k ∗ ( ∑ n = 0 N − 1 a n b k − n ) k = 0 , … , N − 1. {\displaystyle X_{k}=b_{k}^{*}\left(\sum _{n=0}^{N-1}a_{n}b_{k-n}\right)\qquad k=0,\dots ,N-1.} This convolution, in turn, can be performed with a pair of FFTs (plus the pre-computed FFT of complex chirp bn) via the convolution theorem. The key point is that these FFTs are not of the same length N: such a convolution can be computed exactly from FFTs only by zero-padding it to a length greater than or equal to 2N–1. In particular, one can pad to a power of two or some other highly composite size, for which the FFT can be efficiently performed by e.g. the Cooley–Tukey algorithm in O(N log N) time. Thus, Bluestein's algorithm provides an O(N log N) way to compute prime-size DFTs, albeit several times slower than the Cooley–Tukey algorithm for composite sizes. The use of zero-padding for the convolution in Bluestein's algorithm deserves some additional comment. 
Suppose we zero-pad to a length M ≥ 2N–1. This means that an is extended to an array An of length M, where An = an for 0 ≤ n < N and An = 0 otherwise—the usual meaning of "zero-padding". However, because of the bk–n term in the convolution, both positive and negative values of n are required for bn (noting that b–n = bn). The periodic boundaries implied by the DFT of the zero-padded array mean that –n is equivalent to M–n. Thus, bn is extended to an array Bn of length M, where B0 = b0, Bn = BM–n = bn for 0 < n < N, and Bn = 0 otherwise. A and B are then FFTed, multiplied pointwise, and inverse FFTed to obtain the convolution of a and b, according to the usual convolution theorem. Let us also be more precise about what type of convolution is required in Bluestein's algorithm for the DFT. If the sequence bn were periodic in n with period N, then it would be a cyclic convolution of length N, and the zero-padding would be for computational convenience only. However, this is not generally the case: b n + N = e π i N ( n + N ) 2 = b n [ e π i N ( 2 N n + N 2 ) ] = ( − 1 ) N b n . {\displaystyle b_{n+N}=e^{{\frac {\pi i}{N}}(n+N)^{2}}=b_{n}\left[e^{{\frac {\pi i}{N}}(2Nn+N^{2})}\right]=(-1)^{N}b_{n}.} Therefore, for N even the convolution is cyclic, but in this case N is composite and one would normally use a more efficient FFT algorithm such as Cooley–Tukey. For N odd, however, then bn is antiperiodic and we technically have a negacyclic convolution of length N. Such distinctions disappear when one zero-pads an to a length of at least 2N−1 as described above, however. It is perhaps easiest, therefore, to think of it as a subset of the outputs of a simple linear convolution (i.e. no conceptual "extensions" of the data, periodic or otherwise). == z-transforms == Bluestein's algorithm can also be used to compute a more general transform based on the (unilateral) z-transform (Rabiner et al., 1969). In particular, it can compute any transform of the form: X k = ∑ n = 0 N − 1 x n z n k k = 0 , … , M − 1 , {\displaystyle X_{k}=\sum _{n=0}^{N-1}x_{n}z^{nk}\qquad k=0,\dots ,M-1,} for an arbitrary complex number z and for differing numbers N and M of inputs and outputs. Given Bluestein's algorithm, such a transform can be used, for example, to obtain a more finely spaced interpolation of some portion of the spectrum (although the frequency resolution is still limited by the total sampling time, similar to a Zoom FFT), enhance arbitrary poles in transfer-function analyses, etc. The algorithm was dubbed the chirp z-transform algorithm because, for the Fourier-transform case (|z| = 1), the sequence bn from above is a complex sinusoid of linearly increasing frequency, which is called a (linear) chirp in radar systems. == See also == Fractional Fourier transform == References == === General === Leo I. Bluestein, "A linear filtering approach to the computation of the discrete Fourier transform," Northeast Electronics Research and Engineering Meeting Record 10, 218-219 (1968). Lawrence R. Rabiner, Ronald W. Schafer, and Charles M. Rader, "The chirp z-transform algorithm and its application," Bell Syst. Tech. J. 48, 1249-1292 (1969). Also published in: Rabiner, Shafer, and Rader, "The chirp z-transform algorithm," IEEE Trans. Audio Electroacoustics 17 (2), 86–92 (1969). D. H. Bailey and P. N. Swarztrauber, "The fractional Fourier transform and applications," SIAM Review 33, 389-404 (1991). 
(Note that this terminology for the z-transform is nonstandard: a fractional Fourier transform conventionally refers to an entirely different, continuous transform.) Lawrence Rabiner, "The chirp z-transform algorithm—a lesson in serendipity," IEEE Signal Processing Magazine 21, 118-119 (March 2004). (Historical commentary.) Vladimir Sukhoy and Alexander Stoytchev: "Generalizing the inverse FFT off the unit circle", (Oct 2019). Open access. Vladimir Sukhoy and Alexander Stoytchev: "Numerical error analysis of the ICZT algorithm for chirp contours on the unit circle", Sci Rep 10, 4852 (2020). == External links == A DSP algorithm for frequency analysis - the Chirp-Z Transform (CZT) Solving a 50-year-old puzzle in signal processing, part two
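The zero-padding recipe described in this article translates directly into a few lines of code. The following minimal sketch (in Python with NumPy; an illustration added here, not a reference implementation from the sources above) computes a prime-length DFT via Bluestein's convolution and checks it against a library FFT:

```python
import numpy as np

def bluestein_dft(x):
    N = len(x)
    n = np.arange(N)
    b = np.exp(1j * np.pi * n**2 / N)       # b_n = exp(i*pi*n^2/N)
    a = x * b.conj()                        # a_n = x_n * exp(-i*pi*n^2/N)
    M = 1 << (2 * N - 1).bit_length()       # power-of-two length >= 2N - 1
    A = np.zeros(M, dtype=complex)
    A[:N] = a                               # zero-padded a_n
    B = np.zeros(M, dtype=complex)
    B[:N] = b                               # B_n = b_n for 0 <= n < N
    B[M - N + 1:] = b[1:][::-1]             # B_{M-n} = b_n, wrapping b_{-n} = b_n
    conv = np.fft.ifft(np.fft.fft(A) * np.fft.fft(B))
    return b.conj() * conv[:N]              # multiply by the phases b_k^*

x = np.random.rand(7) + 1j * np.random.rand(7)   # prime length N = 7
assert np.allclose(bluestein_dft(x), np.fft.fft(x))
```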
Wikipedia/Bluestein's_FFT_algorithm
In applied mathematics, the starred transform, or star transform, is a discrete-time variation of the Laplace transform, so-named because of the asterisk or "star" in the customary notation of the sampled signals. The transform is an operator acting on a continuous-time function x ( t ) {\displaystyle x(t)} , which is transformed to a function X ∗ ( s ) {\displaystyle X^{*}(s)} in the following manner: X ∗ ( s ) = L [ x ( t ) ⋅ δ T ( t ) ] = L [ x ∗ ( t ) ] , {\displaystyle {\begin{aligned}X^{*}(s)={\mathcal {L}}[x(t)\cdot \delta _{T}(t)]={\mathcal {L}}[x^{*}(t)],\end{aligned}}} where δ T ( t ) {\displaystyle \delta _{T}(t)} is a Dirac comb function, with period T. The starred transform is a convenient mathematical abstraction that represents the Laplace transform of an impulse-sampled function x ∗ ( t ) {\displaystyle x^{*}(t)} , which is the output of an ideal sampler, whose input is a continuous function, x ( t ) {\displaystyle x(t)} . The starred transform is similar to the Z transform, with a simple change of variables, where the starred transform is explicitly declared in terms of the sampling period (T), while the Z transform is performed on a discrete signal and is independent of the sampling period. This makes the starred transform a de-normalized version of the one-sided Z-transform, as it restores the dependence on sampling parameter T. == Relation to Laplace transform == Here X ∗ ( s ) = L [ x ∗ ( t ) ] {\displaystyle X^{*}(s)={\mathcal {L}}[x^{*}(t)]} , where: x ∗ ( t ) = d e f x ( t ) ⋅ δ T ( t ) = x ( t ) ⋅ ∑ n = 0 ∞ δ ( t − n T ) . {\displaystyle {\begin{aligned}x^{*}(t)\ {\stackrel {\mathrm {def} }{=}}\ x(t)\cdot \delta _{T}(t)&=x(t)\cdot \sum _{n=0}^{\infty }\delta (t-nT).\end{aligned}}} Then, per the convolution theorem, the starred transform is equivalent to the complex convolution of L [ x ( t ) ] = X ( s ) {\displaystyle {\mathcal {L}}[x(t)]=X(s)} and L [ δ T ( t ) ] = 1 1 − e − T s {\displaystyle {\mathcal {L}}[\delta _{T}(t)]={\frac {1}{1-e^{-Ts}}}} , hence: X ∗ ( s ) = 1 2 π j ∫ c − j ∞ c + j ∞ X ( p ) ⋅ 1 1 − e − T ( s − p ) ⋅ d p . {\displaystyle X^{*}(s)={\frac {1}{2\pi j}}\int _{c-j\infty }^{c+j\infty }{X(p)\cdot {\frac {1}{1-e^{-T(s-p)}}}\cdot dp}.} This line integration is equivalent to integration in the positive sense along a closed contour formed by such a line and an infinite semicircle that encloses the poles of X(s) in the left half-plane of p. The result of such an integration (per the residue theorem) would be: X ∗ ( s ) = ∑ λ = poles of X ( s ) Res p = λ ⁡ [ X ( p ) 1 1 − e − T ( s − p ) ] . {\displaystyle X^{*}(s)=\sum _{\lambda ={\text{poles of }}X(s)}\operatorname {Res} \limits _{p=\lambda }{\bigg [}X(p){\frac {1}{1-e^{-T(s-p)}}}{\bigg ]}.} Alternatively, the aforementioned line integration is equivalent to integration in the negative sense along a closed contour formed by such a line and an infinite semicircle that encloses the infinite poles of 1 1 − e − T ( s − p ) {\displaystyle {\frac {1}{1-e^{-T(s-p)}}}} in the right half-plane of p. The result of such an integration would be: X ∗ ( s ) = 1 T ∑ k = − ∞ ∞ X ( s − j 2 π T k ) + x ( 0 ) 2 . {\displaystyle X^{*}(s)={\frac {1}{T}}\sum _{k=-\infty }^{\infty }X\left(s-j{\tfrac {2\pi }{T}}k\right)+{\frac {x(0)}{2}}.} == Relation to Z transform == Given a Z-transform, X(z), the corresponding starred transform is a simple substitution: X ∗ ( s ) = X ( z ) | z = e s T {\displaystyle {\bigg .}X^{*}(s)=X(z){\bigg |}_{\displaystyle z=e^{sT}}} This substitution restores the dependence on T. 
The substitution is invertible, so the z-transform can be recovered from the starred transform: X ( z ) = X ∗ ( s ) | e s T = z {\displaystyle {\bigg .}X(z)=X^{*}(s){\bigg |}_{\displaystyle e^{sT}=z}} X ( z ) = X ∗ ( s ) | s = ln ⁡ ( z ) T {\displaystyle {\bigg .}X(z)=X^{*}(s){\bigg |}_{\displaystyle s={\frac {\ln(z)}{T}}}} == Properties of the starred transform == Property 1: X ∗ ( s ) {\displaystyle X^{*}(s)} is periodic in s {\displaystyle s} with period j 2 π T . {\displaystyle j{\tfrac {2\pi }{T}}.} X ∗ ( s + j 2 π T k ) = X ∗ ( s ) {\displaystyle X^{*}(s+j{\tfrac {2\pi }{T}}k)=X^{*}(s)} Property 2: If X ( s ) {\displaystyle X(s)} has a pole at s = s 1 {\displaystyle s=s_{1}} , then X ∗ ( s ) {\displaystyle X^{*}(s)} must have poles at s = s 1 + j 2 π T k {\displaystyle s=s_{1}+j{\tfrac {2\pi }{T}}k} , where k = 0 , ± 1 , ± 2 , … {\displaystyle \scriptstyle k=0,\pm 1,\pm 2,\ldots } == Citations == == References == Bech, Michael M. "Digital Control Theory" (PDF). AALBORG University. Retrieved 5 February 2014. Gopal, M. (March 1989). Digital Control Engineering. John Wiley & Sons. ISBN 0852263082. Phillips and Nagle, "Digital Control System Analysis and Design", 3rd Edition, Prentice Hall, 1995. ISBN 0-13-309832-X
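As a quick end-to-end check of the substitution and of Property 1, the following minimal sketch (in Python; an illustration added here, not part of the original article) evaluates the starred transform of x(t) = e^(-at) from its defining series and compares it against the z-transform X(z) = z/(z - e^(-aT)) under z = e^(sT):

```python
import cmath

a, T, s = 0.7, 0.5, 1.2 + 0.9j   # decay rate, sampling period, test point

def star_transform(s, terms=500):
    # X*(s) = sum_k x(kT) e^{-skT}, truncated; converges since Re(s) + a > 0.
    return sum(cmath.exp(-a * k * T) * cmath.exp(-s * k * T) for k in range(terms))

z = cmath.exp(s * T)
X_z = z / (z - cmath.exp(-a * T))            # z-transform of the samples e^{-akT}
assert abs(star_transform(s) - X_z) < 1e-9   # substitution z = e^{sT} agrees

# Property 1: X*(s) is periodic in s with period j*2*pi/T.
assert abs(star_transform(s + 2j * cmath.pi / T) - star_transform(s)) < 1e-9
```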
Wikipedia/Star_transform
However, there is no easy characterization of the image, and thus no easy characterization of the inverse transform. In particular, Eq.2 is no longer valid, as it was stated only under the hypothesis that f ( x ) {\displaystyle f(x)} decayed with all derivatives. While Eq.1 defines the Fourier transform for (complex-valued) functions in L 1 ( R ) {\displaystyle L^{1}(\mathbb {R} )} , it is not well-defined for other integrability classes, most importantly the space of square-integrable functions L 2 ( R ) {\displaystyle L^{2}(\mathbb {R} )} . For example, the function f ( x ) = ( 1 + x 2 ) − 1 / 2 {\displaystyle f(x)=(1+x^{2})^{-1/2}} is in L 2 {\displaystyle L^{2}} but not L 1 {\displaystyle L^{1}} and therefore the Lebesgue integral Eq.1 does not exist. However, the Fourier transform on the dense subspace L 1 ∩ L 2 ( R ) ⊂ L 2 ( R ) {\displaystyle L^{1}\cap L^{2}(\mathbb {R} )\subset L^{2}(\mathbb {R} )} admits a unique continuous extension to a unitary operator on L 2 ( R ) {\displaystyle L^{2}(\mathbb {R} )} . This extension is important in part because, unlike the case of L 1 {\displaystyle L^{1}} , the Fourier transform is an automorphism of the space L 2 ( R ) {\displaystyle L^{2}(\mathbb {R} )} . In such cases, the Fourier transform can be obtained explicitly by regularizing the integral, and then passing to a limit. In practice, the integral is often regarded as an improper integral instead of a proper Lebesgue integral, but sometimes for convergence one needs to use weak limit or principal value instead of the (pointwise) limits implicit in an improper integral. Titchmarsh (1986) and Dym & McKean (1985) each gives three rigorous ways of extending the Fourier transform to square-integrable functions using this procedure. A general principle in working with the L 2 {\displaystyle L^{2}} Fourier transform is that Gaussians are dense in L 1 ∩ L 2 {\displaystyle L^{1}\cap L^{2}} , and the various features of the Fourier transform, such as its unitarity, are easily inferred for Gaussians. Many of the properties of the Fourier transform can then be proven from two facts about Gaussians: that e − π x 2 {\displaystyle e^{-\pi x^{2}}} is its own Fourier transform; and that the Gaussian integral ∫ − ∞ ∞ e − π x 2 d x = 1. {\displaystyle \int _{-\infty }^{\infty }e^{-\pi x^{2}}\,dx=1.} A feature of the L 1 {\displaystyle L^{1}} Fourier transform is that it is a homomorphism of Banach algebras from L 1 {\displaystyle L^{1}} equipped with the convolution operation to the Banach algebra of continuous functions under the L ∞ {\displaystyle L^{\infty }} (supremum) norm. The conventions chosen in this article are those of harmonic analysis, and are characterized as the unique conventions such that the Fourier transform is both unitary on L2 and an algebra homomorphism from L1 to L∞, without renormalizing the Lebesgue measure. === Angular frequency (ω) === When the independent variable ( x {\displaystyle x} ) represents time (often denoted by t {\displaystyle t} ), the transform variable ( ξ {\displaystyle \xi } ) represents frequency (often denoted by f {\displaystyle f} ). For example, if time is measured in seconds, then frequency is in hertz. The Fourier transform can also be written in terms of angular frequency, ω = 2 π ξ , {\displaystyle \omega =2\pi \xi ,} whose units are radians per second. 
The substitution ξ = ω 2 π {\displaystyle \xi ={\tfrac {\omega }{2\pi }}} into Eq.1 produces this convention, where function f ^ {\displaystyle {\widehat {f}}} is relabeled f 1 ^ : {\displaystyle {\widehat {f_{1}}}:} f 3 ^ ( ω ) ≜ ∫ − ∞ ∞ f ( x ) ⋅ e − i ω x d x = f 1 ^ ( ω 2 π ) , f ( x ) = 1 2 π ∫ − ∞ ∞ f 3 ^ ( ω ) ⋅ e i ω x d ω . {\displaystyle {\begin{aligned}{\widehat {f_{3}}}(\omega )&\triangleq \int _{-\infty }^{\infty }f(x)\cdot e^{-i\omega x}\,dx={\widehat {f_{1}}}\left({\tfrac {\omega }{2\pi }}\right),\\f(x)&={\frac {1}{2\pi }}\int _{-\infty }^{\infty }{\widehat {f_{3}}}(\omega )\cdot e^{i\omega x}\,d\omega .\end{aligned}}} Unlike the Eq.1 definition, the Fourier transform is no longer a unitary transformation, and there is less symmetry between the formulas for the transform and its inverse. Those properties are restored by splitting the 2 π {\displaystyle 2\pi } factor evenly between the transform and its inverse, which leads to another convention: f 2 ^ ( ω ) ≜ 1 2 π ∫ − ∞ ∞ f ( x ) ⋅ e − i ω x d x = 1 2 π f 1 ^ ( ω 2 π ) , f ( x ) = 1 2 π ∫ − ∞ ∞ f 2 ^ ( ω ) ⋅ e i ω x d ω . {\displaystyle {\begin{aligned}{\widehat {f_{2}}}(\omega )&\triangleq {\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }f(x)\cdot e^{-i\omega x}\,dx={\frac {1}{\sqrt {2\pi }}}\ \ {\widehat {f_{1}}}\left({\tfrac {\omega }{2\pi }}\right),\\f(x)&={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }{\widehat {f_{2}}}(\omega )\cdot e^{i\omega x}\,d\omega .\end{aligned}}} Variations of all three conventions can be created by conjugating the complex-exponential kernel of both the forward and the reverse transform. The signs must be opposites. == Background == === History === In 1822, Fourier claimed (see Joseph Fourier § The Analytic Theory of Heat) that any function, whether continuous or discontinuous, can be expanded into a series of sines. That important work was corrected and expanded upon by others to provide the foundation for the various forms of the Fourier transform used since. === Complex sinusoids === In general, the coefficients f ^ ( ξ ) {\displaystyle {\widehat {f}}(\xi )} are complex numbers, which have two equivalent forms (see Euler's formula): f ^ ( ξ ) = A e i θ ⏟ polar coordinate form = A cos ⁡ ( θ ) + i A sin ⁡ ( θ ) ⏟ rectangular coordinate form . {\displaystyle {\widehat {f}}(\xi )=\underbrace {Ae^{i\theta }} _{\text{polar coordinate form}}=\underbrace {A\cos(\theta )+iA\sin(\theta )} _{\text{rectangular coordinate form}}.} The product with e i 2 π ξ x {\displaystyle e^{i2\pi \xi x}} (Eq.2) has these forms: f ^ ( ξ ) ⋅ e i 2 π ξ x = A e i θ ⋅ e i 2 π ξ x = A e i ( 2 π ξ x + θ ) ⏟ polar coordinate form = A cos ⁡ ( 2 π ξ x + θ ) + i A sin ⁡ ( 2 π ξ x + θ ) ⏟ rectangular coordinate form . {\displaystyle {\begin{aligned}{\widehat {f}}(\xi )\cdot e^{i2\pi \xi x}&=Ae^{i\theta }\cdot e^{i2\pi \xi x}\\&=\underbrace {Ae^{i(2\pi \xi x+\theta )}} _{\text{polar coordinate form}}\\&=\underbrace {A\cos(2\pi \xi x+\theta )+iA\sin(2\pi \xi x+\theta )} _{\text{rectangular coordinate form}}.\end{aligned}}} which conveys both amplitude and phase of frequency ξ . {\displaystyle \xi .} Likewise, the intuitive interpretation of Eq.1 is that multiplying f ( x ) {\displaystyle f(x)} by e − i 2 π ξ x {\displaystyle e^{-i2\pi \xi x}} has the effect of subtracting ξ {\displaystyle \xi } from every frequency component of function f ( x ) . 
{\displaystyle f(x).} Only the component that was at frequency ξ {\displaystyle \xi } can produce a non-zero value of the infinite integral, because (at least formally) all the other shifted components are oscillatory and integrate to zero. (see § Example) It is noteworthy how easily the product was simplified using the polar form, and how easily the rectangular form was deduced by an application of Euler's formula. === Negative frequency === Euler's formula introduces the possibility of negative ξ . {\displaystyle \xi .} And Eq.1 is defined ∀ ξ ∈ R . {\displaystyle \forall \xi \in \mathbb {R} .} Only certain complex-valued f ( x ) {\displaystyle f(x)} have transforms f ^ = 0 , ∀ ξ < 0 {\displaystyle {\widehat {f}}=0,\ \forall \ \xi <0} (See Analytic signal. A simple example is e i 2 π ξ 0 x ( ξ 0 > 0 ) . {\displaystyle e^{i2\pi \xi _{0}x}\ (\xi _{0}>0).} ) But negative frequency is necessary to characterize all other complex-valued f ( x ) , {\displaystyle f(x),} found in signal processing, partial differential equations, radar, nonlinear optics, quantum mechanics, and others. For a real-valued f ( x ) , {\displaystyle f(x),} Eq.1 has the symmetry property f ^ ( − ξ ) = f ^ ∗ ( ξ ) {\displaystyle {\widehat {f}}(-\xi )={\widehat {f}}^{*}(\xi )} (see § Conjugation below). This redundancy enables Eq.2 to distinguish f ( x ) = cos ⁡ ( 2 π ξ 0 x ) {\displaystyle f(x)=\cos(2\pi \xi _{0}x)} from e i 2 π ξ 0 x . {\displaystyle e^{i2\pi \xi _{0}x}.} But of course it cannot tell us the actual sign of ξ 0 , {\displaystyle \xi _{0},} because cos ⁡ ( 2 π ξ 0 x ) {\displaystyle \cos(2\pi \xi _{0}x)} and cos ⁡ ( 2 π ( − ξ 0 ) x ) {\displaystyle \cos(2\pi (-\xi _{0})x)} are indistinguishable on the real number line. === Fourier transform for periodic functions === The Fourier transform of a periodic function cannot be defined using the integral formula directly. In order for the integral in Eq.1 to be defined, the function must be absolutely integrable. Instead it is common to use Fourier series. It is possible to extend the definition to include periodic functions by viewing them as tempered distributions. This makes it possible to see a connection between the Fourier series and the Fourier transform for periodic functions that have a convergent Fourier series. If f ( x ) {\displaystyle f(x)} is a periodic function, with period P {\displaystyle P} , that has a convergent Fourier series, then: f ^ ( ξ ) = ∑ n = − ∞ ∞ c n ⋅ δ ( ξ − n P ) , {\displaystyle {\widehat {f}}(\xi )=\sum _{n=-\infty }^{\infty }c_{n}\cdot \delta \left(\xi -{\tfrac {n}{P}}\right),} where c n {\displaystyle c_{n}} are the Fourier series coefficients of f {\displaystyle f} , and δ {\displaystyle \delta } is the Dirac delta function. In other words, the Fourier transform is a Dirac comb function whose teeth are multiplied by the Fourier series coefficients. === Sampling the Fourier transform === The Fourier transform of an integrable function f {\displaystyle f} can be sampled at regular intervals of arbitrary length 1 P . 
{\displaystyle {\tfrac {1}{P}}.} These samples can be deduced from one cycle of a periodic function f P {\displaystyle f_{P}} which has Fourier series coefficients proportional to those samples by the Poisson summation formula: f P ( x ) ≜ ∑ n = − ∞ ∞ f ( x + n P ) = 1 P ∑ k = − ∞ ∞ f ^ ( k P ) e i 2 π k P x {\displaystyle f_{P}(x)\triangleq \sum _{n=-\infty }^{\infty }f(x+nP)={\frac {1}{P}}\sum _{k=-\infty }^{\infty }{\widehat {f}}\left({\tfrac {k}{P}}\right)e^{i2\pi {\frac {k}{P}}x}} The integrability of f {\displaystyle f} ensures the periodic summation converges. Therefore, the samples f ^ ( k P ) {\displaystyle {\widehat {f}}\left({\tfrac {k}{P}}\right)} can be determined by Fourier series analysis: f ^ ( k P ) = ∫ P f P ( x ) ⋅ e − i 2 π k P x d x . {\displaystyle {\widehat {f}}\left({\tfrac {k}{P}}\right)=\int _{P}f_{P}(x)\cdot e^{-i2\pi {\frac {k}{P}}x}\,dx.} When f ( x ) {\displaystyle f(x)} has compact support, f P ( x ) {\displaystyle f_{P}(x)} has a finite number of terms within the interval of integration. When f ( x ) {\displaystyle f(x)} does not have compact support, numerical evaluation of f P ( x ) {\displaystyle f_{P}(x)} requires an approximation, such as tapering f ( x ) {\displaystyle f(x)} or truncating the number of terms. == Units == The frequency variable must have inverse units to the units of the original function's domain (typically named t {\displaystyle t} or x {\displaystyle x} ). For example, if t {\displaystyle t} is measured in seconds, ξ {\displaystyle \xi } should be in cycles per second or hertz. If the scale of time is in units of 2 π {\displaystyle 2\pi } seconds, then another Greek letter ω {\displaystyle \omega } is typically used instead to represent angular frequency (where ω = 2 π ξ {\displaystyle \omega =2\pi \xi } ) in units of radians per second. If using x {\displaystyle x} for units of length, then ξ {\displaystyle \xi } must be in inverse length, e.g., wavenumbers. That is to say, there are two versions of the real line: one which is the range of t {\displaystyle t} and measured in units of t , {\displaystyle t,} and the other which is the range of ξ {\displaystyle \xi } and measured in inverse units to the units of t . {\displaystyle t.} These two distinct versions of the real line cannot be equated with each other. Therefore, the Fourier transform goes from one space of functions to a different space of functions: functions which have a different domain of definition. In general, ξ {\displaystyle \xi } must always be taken to be a linear form on the space of its domain, which is to say that the second real line is the dual space of the first real line. See the article on linear algebra for a more formal explanation and for more details. This point of view becomes essential in generalizations of the Fourier transform to general symmetry groups, including the case of Fourier series. That there is no one preferred way (often, one says "no canonical way") to compare the two versions of the real line which are involved in the Fourier transform—fixing the units on one line does not force the scale of the units on the other line—is the reason for the plethora of rival conventions on the definition of the Fourier transform. The various definitions resulting from different choices of units differ by various constants. In other conventions, the Fourier transform has i in the exponent instead of −i, and vice versa for the inversion formula. 
== Units == The frequency variable must have inverse units to the units of the original function's domain (typically named t {\displaystyle t} or x {\displaystyle x} ). For example, if t {\displaystyle t} is measured in seconds, ξ {\displaystyle \xi } should be in cycles per second or hertz. If the scale of time is in units of 2 π {\displaystyle 2\pi } seconds, then another Greek letter ω {\displaystyle \omega } is typically used instead to represent angular frequency (where ω = 2 π ξ {\displaystyle \omega =2\pi \xi } ) in units of radians per second. If x {\displaystyle x} is in units of length, then ξ {\displaystyle \xi } must be in units of inverse length, e.g., wavenumbers. That is to say, there are two versions of the real line: one which is the range of t {\displaystyle t} and measured in units of t , {\displaystyle t,} and the other which is the range of ξ {\displaystyle \xi } and measured in inverse units to the units of t . {\displaystyle t.} These two distinct versions of the real line cannot be equated with each other. Therefore, the Fourier transform goes from one space of functions to a different space of functions: functions which have a different domain of definition. In general, ξ {\displaystyle \xi } must always be taken to be a linear form on the space of its domain, which is to say that the second real line is the dual space of the first real line. See the article on linear algebra for a more formal explanation and for more details. This point of view becomes essential in generalizations of the Fourier transform to general symmetry groups, including the case of Fourier series. That there is no one preferred way (often, one says "no canonical way") to compare the two versions of the real line which are involved in the Fourier transform (fixing the units on one line does not force the scale of the units on the other line) is the reason for the plethora of rival conventions on the definition of the Fourier transform. The various definitions resulting from different choices of units differ by various constants. In other conventions, the Fourier transform has i in the exponent instead of −i, and vice versa for the inversion formula. This convention is common in modern physics and is the default for Wolfram Alpha, and does not mean that the frequency has become negative, since there is no canonical definition of positivity for the frequency of a complex wave. It simply means that f ^ ( ξ ) {\displaystyle {\hat {f}}(\xi )} is the amplitude of the wave e − i 2 π ξ x {\displaystyle e^{-i2\pi \xi x}} instead of the wave e i 2 π ξ x {\displaystyle e^{i2\pi \xi x}} (the former, with its minus sign, is often seen in the time dependence for sinusoidal plane-wave solutions of the electromagnetic wave equation, or in the time dependence for quantum wave functions). Many of the identities involving the Fourier transform remain valid in those conventions, provided all terms that explicitly involve i have it replaced by −i. In electrical engineering the letter j is typically used for the imaginary unit instead of i because i is used for current. When using dimensionless units, the constant factors might not be written in the transform definition. For instance, in probability theory, the characteristic function ϕ of the probability density function f of a random variable X of continuous type is defined without a negative sign in the exponential, and since the units of x are ignored, there is no 2π either: ϕ ( λ ) = ∫ − ∞ ∞ f ( x ) e i λ x d x . {\displaystyle \phi (\lambda )=\int _{-\infty }^{\infty }f(x)e^{i\lambda x}\,dx.} In probability theory and mathematical statistics, the use of the Fourier–Stieltjes transform is preferred, because many random variables are not of continuous type, and do not possess a density function, and one must treat not functions but distributions, i.e., measures which possess "atoms". From the higher point of view of group characters, which is much more abstract, all these arbitrary choices disappear, as will be explained in a later section of this article, which treats the notion of the Fourier transform of a function on a locally compact Abelian group. == Properties == Let f ( x ) {\displaystyle f(x)} and h ( x ) {\displaystyle h(x)} represent Lebesgue-measurable, integrable functions on the real line, satisfying: ∫ − ∞ ∞ | f ( x ) | d x < ∞ . {\displaystyle \int _{-\infty }^{\infty }|f(x)|\,dx<\infty .} We denote the Fourier transforms of these functions as f ^ ( ξ ) {\displaystyle {\hat {f}}(\xi )} and h ^ ( ξ ) {\displaystyle {\hat {h}}(\xi )} respectively.
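Before listing the properties, it may help to see the definition in action. The sketch below is a minimal illustration, not a prescribed algorithm: it assumes Python with NumPy, approximates Eq.1 by a plain Riemann sum over an arbitrary truncated window, and checks the result against the fact that e−πx² is its own Fourier transform.

```python
import numpy as np

def ft(f, xi, L=20.0, n=4001):
    """Riemann-sum approximation of Eq.1: integral of f(x) exp(-i 2 pi xi x) dx.

    L (window length) and n (grid size) are arbitrary illustrative choices,
    adequate for rapidly decaying f.
    """
    x, dx = np.linspace(-L / 2, L / 2, n, retstep=True)
    return np.array([np.sum(f(x) * np.exp(-1j * 2 * np.pi * v * x)) * dx
                     for v in np.atleast_1d(xi)])

gauss = lambda x: np.exp(-np.pi * x ** 2)
xi = np.linspace(-3, 3, 13)
print(np.max(np.abs(ft(gauss, xi) - gauss(xi))))  # ~1e-15: the Gaussian is its own transform
```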
=== Basic properties === The Fourier transform has the following basic properties: ==== Linearity ==== a f ( x ) + b h ( x ) ⟺ F a f ^ ( ξ ) + b h ^ ( ξ ) ; a , b ∈ C {\displaystyle a\ f(x)+b\ h(x)\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ a\ {\widehat {f}}(\xi )+b\ {\widehat {h}}(\xi );\quad \ a,b\in \mathbb {C} } ==== Time shifting ==== f ( x − x 0 ) ⟺ F e − i 2 π x 0 ξ f ^ ( ξ ) ; x 0 ∈ R {\displaystyle f(x-x_{0})\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ e^{-i2\pi x_{0}\xi }\ {\widehat {f}}(\xi );\quad \ x_{0}\in \mathbb {R} } ==== Frequency shifting ==== e i 2 π ξ 0 x f ( x ) ⟺ F f ^ ( ξ − ξ 0 ) ; ξ 0 ∈ R {\displaystyle e^{i2\pi \xi _{0}x}f(x)\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\widehat {f}}(\xi -\xi _{0});\quad \ \xi _{0}\in \mathbb {R} } ==== Time scaling ==== f ( a x ) ⟺ F 1 | a | f ^ ( ξ a ) ; a ≠ 0 {\displaystyle f(ax)\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\frac {1}{|a|}}{\widehat {f}}\left({\frac {\xi }{a}}\right);\quad \ a\neq 0} The case a = − 1 {\displaystyle a=-1} leads to the time-reversal property: f ( − x ) ⟺ F f ^ ( − ξ ) {\displaystyle f(-x)\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\widehat {f}}(-\xi )} ==== Symmetry ==== When the real and imaginary parts of a complex function are decomposed into their even and odd parts, there are four components, denoted below by the subscripts RE, RO, IE, and IO. And there is a one-to-one mapping between the four components of a complex time function and the four components of its complex frequency transform: T i m e d o m a i n f = f RE + f RO + i f IE + i f IO ⏟ ⇕ F ⇕ F ⇕ F ⇕ F ⇕ F F r e q u e n c y d o m a i n f ^ = f ^ RE + i f ^ IO ⏞ + i f ^ IE + f ^ RO {\displaystyle {\begin{array}{rlcccccccc}{\mathsf {Time\ domain}}&f&=&f_{_{\text{RE}}}&+&f_{_{\text{RO}}}&+&i\ f_{_{\text{IE}}}&+&\underbrace {i\ f_{_{\text{IO}}}} \\&{\Bigg \Updownarrow }{\mathcal {F}}&&{\Bigg \Updownarrow }{\mathcal {F}}&&\ \ {\Bigg \Updownarrow }{\mathcal {F}}&&\ \ {\Bigg \Updownarrow }{\mathcal {F}}&&\ \ {\Bigg \Updownarrow }{\mathcal {F}}\\{\mathsf {Frequency\ domain}}&{\widehat {f}}&=&{\widehat {f}}_{_{\text{RE}}}&+&\overbrace {i\ {\widehat {f}}_{_{\text{IO}}}\,} &+&i\ {\widehat {f}}_{_{\text{IE}}}&+&{\widehat {f}}_{_{\text{RO}}}\end{array}}} From this, various relationships are apparent, for example: The transform of a real-valued function ( f R E + f R O ) {\displaystyle (f_{_{RE}}+f_{_{RO}})} is the conjugate symmetric function f ^ R E + i f ^ I O . {\displaystyle {\hat {f}}_{RE}+i\ {\hat {f}}_{IO}.} Conversely, a conjugate symmetric transform implies a real-valued time-domain function. The transform of an imaginary-valued function ( i f I E + i f I O ) {\displaystyle (i\ f_{_{IE}}+i\ f_{_{IO}})} is the conjugate antisymmetric function f ^ R O + i f ^ I E , {\displaystyle {\hat {f}}_{RO}+i\ {\hat {f}}_{IE},} and the converse is true. The transform of a conjugate symmetric function ( f R E + i f I O ) {\displaystyle (f_{_{RE}}+i\ f_{_{IO}})} is the real-valued function f ^ R E + f ^ R O , {\displaystyle {\hat {f}}_{RE}+{\hat {f}}_{RO},} and the converse is true. The transform of a conjugate antisymmetric function ( f R O + i f I E ) {\displaystyle (f_{_{RO}}+i\ f_{_{IE}})} is the imaginary-valued function i f ^ I E + i f ^ I O , {\displaystyle i\ {\hat {f}}_{IE}+i{\hat {f}}_{IO},} and the converse is true.
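Two of these rules (time shifting and time scaling) can be verified numerically with the same Riemann-sum approximation of Eq.1 as above. This is a hedged sketch: NumPy, the Gaussian test function and the values x0 = 0.7 and a = 2 are all illustrative assumptions, not part of the article.

```python
import numpy as np

def ft(f, xi, L=20.0, n=4001):
    # Riemann-sum approximation of Eq.1 (adequate for rapidly decaying f).
    x, dx = np.linspace(-L / 2, L / 2, n, retstep=True)
    return np.array([np.sum(f(x) * np.exp(-1j * 2 * np.pi * v * x)) * dx
                     for v in np.atleast_1d(xi)])

f = lambda x: np.exp(-np.pi * x ** 2)
xi = np.linspace(-2, 2, 9)
x0, a = 0.7, 2.0                               # arbitrary shift and scale

# Time shifting: f(x - x0) <-> exp(-i 2 pi x0 xi) f^(xi)
lhs = ft(lambda x: f(x - x0), xi)
rhs = np.exp(-1j * 2 * np.pi * x0 * xi) * ft(f, xi)
print(np.max(np.abs(lhs - rhs)))               # ~1e-15

# Time scaling: f(a x) <-> (1/|a|) f^(xi / a)
lhs = ft(lambda x: f(a * x), xi)
rhs = ft(f, xi / a) / abs(a)
print(np.max(np.abs(lhs - rhs)))               # ~1e-15
```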
==== Conjugation ==== ( f ( x ) ) ∗ ⟺ F ( f ^ ( − ξ ) ) ∗ {\displaystyle {\bigl (}f(x){\bigr )}^{*}\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ \left({\widehat {f}}(-\xi )\right)^{*}} (Note: the ∗ denotes complex conjugation.) In particular, if f {\displaystyle f} is real, then f ^ {\displaystyle {\widehat {f}}} is conjugate symmetric (a Hermitian function): f ^ ( − ξ ) = ( f ^ ( ξ ) ) ∗ . {\displaystyle {\widehat {f}}(-\xi )={\bigl (}{\widehat {f}}(\xi ){\bigr )}^{*}.} And if f {\displaystyle f} is purely imaginary, then f ^ {\displaystyle {\widehat {f}}} is conjugate antisymmetric: f ^ ( − ξ ) = − ( f ^ ( ξ ) ) ∗ . {\displaystyle {\widehat {f}}(-\xi )=-({\widehat {f}}(\xi ))^{*}.} ==== Real and imaginary parts ==== Re ⁡ { f ( x ) } ⟺ F 1 2 ( f ^ ( ξ ) + ( f ^ ( − ξ ) ) ∗ ) {\displaystyle \operatorname {Re} \{f(x)\}\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\tfrac {1}{2}}\left({\widehat {f}}(\xi )+{\bigl (}{\widehat {f}}(-\xi ){\bigr )}^{*}\right)} Im ⁡ { f ( x ) } ⟺ F 1 2 i ( f ^ ( ξ ) − ( f ^ ( − ξ ) ) ∗ ) {\displaystyle \operatorname {Im} \{f(x)\}\ \ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ \ {\tfrac {1}{2i}}\left({\widehat {f}}(\xi )-{\bigl (}{\widehat {f}}(-\xi ){\bigr )}^{*}\right)} ==== Zero frequency component ==== Substituting ξ = 0 {\displaystyle \xi =0} in the definition, we obtain: f ^ ( 0 ) = ∫ − ∞ ∞ f ( x ) d x . {\displaystyle {\widehat {f}}(0)=\int _{-\infty }^{\infty }f(x)\,dx.} The integral of f {\displaystyle f} over its domain is known as the average value or DC bias of the function. === Uniform continuity and the Riemann–Lebesgue lemma === The Fourier transform may be defined in some cases for non-integrable functions, but the Fourier transforms of integrable functions have several strong properties. The Fourier transform f ^ {\displaystyle {\hat {f}}} of any integrable function f {\displaystyle f} is uniformly continuous and ‖ f ^ ‖ ∞ ≤ ‖ f ‖ 1 {\displaystyle \left\|{\hat {f}}\right\|_{\infty }\leq \left\|f\right\|_{1}} By the Riemann–Lebesgue lemma, f ^ ( ξ ) → 0 as | ξ | → ∞ . {\displaystyle {\hat {f}}(\xi )\to 0{\text{ as }}|\xi |\to \infty .} However, f ^ {\displaystyle {\hat {f}}} need not be integrable. For example, the Fourier transform of the rectangular function, which is integrable, is the sinc function, which is not Lebesgue integrable, because its improper integrals behave analogously to the alternating harmonic series, in converging to a sum without being absolutely convergent. It is not generally possible to write the inverse transform as a Lebesgue integral. However, when both f {\displaystyle f} and f ^ {\displaystyle {\hat {f}}} are integrable, the inverse equality f ( x ) = ∫ − ∞ ∞ f ^ ( ξ ) e i 2 π x ξ d ξ {\displaystyle f(x)=\int _{-\infty }^{\infty }{\hat {f}}(\xi )e^{i2\pi x\xi }\,d\xi } holds for almost every x. As a result, the Fourier transform is injective on L1(R). === Plancherel theorem and Parseval's theorem === Let f(x) and g(x) be integrable, and let f̂(ξ) and ĝ(ξ) be their Fourier transforms. If f(x) and g(x) are also square-integrable, then the Parseval formula follows: ⟨ f , g ⟩ L 2 = ∫ − ∞ ∞ f ( x ) g ( x ) ¯ d x = ∫ − ∞ ∞ f ^ ( ξ ) g ^ ( ξ ) ¯ d ξ , {\displaystyle \langle f,g\rangle _{L^{2}}=\int _{-\infty }^{\infty }f(x){\overline {g(x)}}\,dx=\int _{-\infty }^{\infty }{\hat {f}}(\xi ){\overline {{\hat {g}}(\xi )}}\,d\xi ,} where the bar denotes complex conjugation. The Plancherel theorem, which follows from the above, states that ‖ f ‖ L 2 2 = ∫ − ∞ ∞ | f ( x ) | 2 d x = ∫ − ∞ ∞ | f ^ ( ξ ) | 2 d ξ . {\displaystyle \|f\|_{L^{2}}^{2}=\int _{-\infty }^{\infty }\left|f(x)\right|^{2}\,dx=\int _{-\infty }^{\infty }\left|{\hat {f}}(\xi )\right|^{2}\,d\xi .}
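As an illustrative check (NumPy assumed; the window length L and grid size n are arbitrary), the discrete analogue of the Plancherel identity can be verified with the FFT, which approximates Eq.1 on a uniform grid up to the factor dx:

```python
import numpy as np

# Discrete check of the Plancherel identity for f(x) = exp(-pi x^2).
L, n = 40.0, 2 ** 12
x, dx = np.linspace(-L / 2, L / 2, n, endpoint=False, retstep=True)
f = np.exp(-np.pi * x ** 2)

fhat = np.fft.fft(f) * dx          # approximates Eq.1 on frequencies spaced 1/L apart

print(np.sum(np.abs(f) ** 2) * dx)     # energy in the time domain, ~0.7071
print(np.sum(np.abs(fhat) ** 2) / L)   # equal energy in the frequency domain
```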
Plancherel's theorem makes it possible to extend the Fourier transform, by a continuity argument, to a unitary operator on L2(R). On L1(R) ∩ L2(R), this extension agrees with the original Fourier transform defined on L1(R), thus enlarging the domain of the Fourier transform to L1(R) + L2(R) (and consequently to Lp(R) for 1 ≤ p ≤ 2). Plancherel's theorem has the interpretation in the sciences that the Fourier transform preserves the energy of the original quantity. The terminology of these formulas is not quite standardised. Parseval's theorem was stated only for Fourier series, and was first proved by Lyapunov. But Parseval's formula makes sense for the Fourier transform as well, and so even though in the context of the Fourier transform it was proved by Plancherel, it is still often referred to as Parseval's formula, or Parseval's relation, or even Parseval's theorem. See Pontryagin duality for a general formulation of this concept in the context of locally compact abelian groups. === Convolution theorem === The Fourier transform translates between convolution and multiplication of functions. If f(x) and g(x) are integrable functions with Fourier transforms f̂(ξ) and ĝ(ξ) respectively, then the Fourier transform of the convolution is given by the product of the Fourier transforms f̂(ξ) and ĝ(ξ) (under other conventions for the definition of the Fourier transform a constant factor may appear). This means that if: h ( x ) = ( f ∗ g ) ( x ) = ∫ − ∞ ∞ f ( y ) g ( x − y ) d y , {\displaystyle h(x)=(f*g)(x)=\int _{-\infty }^{\infty }f(y)g(x-y)\,dy,} where ∗ denotes the convolution operation, then: h ^ ( ξ ) = f ^ ( ξ ) g ^ ( ξ ) . {\displaystyle {\hat {h}}(\xi )={\hat {f}}(\xi )\,{\hat {g}}(\xi ).} In linear time-invariant (LTI) system theory, it is common to interpret g(x) as the impulse response of an LTI system with input f(x) and output h(x), since substituting the unit impulse for f(x) yields h(x) = g(x). In this case, ĝ(ξ) represents the frequency response of the system. Conversely, if f(x) can be decomposed as the product of two square integrable functions p(x) and q(x), then the Fourier transform of f(x) is given by the convolution of the respective Fourier transforms p̂(ξ) and q̂(ξ). === Cross-correlation theorem === In an analogous manner, it can be shown that if h(x) is the cross-correlation of f(x) and g(x): h ( x ) = ( f ⋆ g ) ( x ) = ∫ − ∞ ∞ f ( y ) ¯ g ( x + y ) d y {\displaystyle h(x)=(f\star g)(x)=\int _{-\infty }^{\infty }{\overline {f(y)}}g(x+y)\,dy} then the Fourier transform of h(x) is: h ^ ( ξ ) = f ^ ( ξ ) ¯ g ^ ( ξ ) . {\displaystyle {\hat {h}}(\xi )={\overline {{\hat {f}}(\xi )}}\,{\hat {g}}(\xi ).} As a special case, the autocorrelation of function f(x) is: h ( x ) = ( f ⋆ f ) ( x ) = ∫ − ∞ ∞ f ( y ) ¯ f ( x + y ) d y {\displaystyle h(x)=(f\star f)(x)=\int _{-\infty }^{\infty }{\overline {f(y)}}f(x+y)\,dy} for which h ^ ( ξ ) = f ^ ( ξ ) ¯ f ^ ( ξ ) = | f ^ ( ξ ) | 2 . {\displaystyle {\hat {h}}(\xi )={\overline {{\hat {f}}(\xi )}}{\hat {f}}(\xi )=\left|{\hat {f}}(\xi )\right|^{2}.}
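A minimal sketch of the convolution theorem in action (NumPy assumed; the grid constants are arbitrary illustrative choices). For f = g = e−πx² one has f̂ = ĝ = e−πξ², so the theorem predicts ĥ(ξ) = e−2πξ², i.e. h(x) = e−πx²/2/√2, which the FFT-based convolution reproduces:

```python
import numpy as np

L, n = 40.0, 2 ** 12
x, dx = np.linspace(-L / 2, L / 2, n, endpoint=False, retstep=True)
f = np.exp(-np.pi * x ** 2)

# Circular FFT convolution; ifftshift re-centres the kernel at index 0 so
# the result is aligned with the grid (wrap-around is negligible here
# because the Gaussian decays far faster than the window).
h = np.fft.ifft(np.fft.fft(f) * np.fft.fft(np.fft.ifftshift(f))).real * dx

h_exact = np.exp(-np.pi * x ** 2 / 2) / np.sqrt(2)
print(np.max(np.abs(h - h_exact)))   # ~1e-15: matches the predicted transform pair
```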
=== Differentiation === Suppose f(x) is an absolutely continuous differentiable function, and both f and its derivative f′ are integrable. Then the Fourier transform of the derivative is given by f ′ ^ ( ξ ) = F { d d x f ( x ) } = i 2 π ξ f ^ ( ξ ) . {\displaystyle {\widehat {f'\,}}(\xi )={\mathcal {F}}\left\{{\frac {d}{dx}}f(x)\right\}=i2\pi \xi {\hat {f}}(\xi ).} More generally, the Fourier transformation of the nth derivative f(n) is given by f ( n ) ^ ( ξ ) = F { d n d x n f ( x ) } = ( i 2 π ξ ) n f ^ ( ξ ) . {\displaystyle {\widehat {f^{(n)}}}(\xi )={\mathcal {F}}\left\{{\frac {d^{n}}{dx^{n}}}f(x)\right\}=(i2\pi \xi )^{n}{\hat {f}}(\xi ).} Analogously, F − 1 { d n d ξ n f ^ ( ξ ) } = ( − i 2 π x ) n f ( x ) {\displaystyle {\mathcal {F}}^{-1}\left\{{\frac {d^{n}}{d\xi ^{n}}}{\hat {f}}(\xi )\right\}=(-i2\pi x)^{n}f(x)} , so F { x n f ( x ) } = ( i 2 π ) n d n d ξ n f ^ ( ξ ) . {\displaystyle {\mathcal {F}}\left\{x^{n}f(x)\right\}=\left({\frac {i}{2\pi }}\right)^{n}{\frac {d^{n}}{d\xi ^{n}}}{\hat {f}}(\xi ).} By applying the Fourier transform and using these formulas, some ordinary differential equations can be transformed into algebraic equations, which are much easier to solve. These formulas also give rise to the rule of thumb "f(x) is smooth if and only if f̂(ξ) quickly falls to 0 for |ξ| → ∞." By using the analogous rules for the inverse Fourier transform, one can also say "f(x) quickly falls to 0 for |x| → ∞ if and only if f̂(ξ) is smooth."
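A hedged numerical check of the differentiation rule (NumPy assumed; the Gaussian test function and the grid are arbitrary illustrative choices), comparing the transform of f′ against (i2πξ) f̂(ξ):

```python
import numpy as np

def ft(f, xi, L=20.0, n=4001):
    # Riemann-sum approximation of Eq.1 (adequate for rapidly decaying f).
    x, dx = np.linspace(-L / 2, L / 2, n, retstep=True)
    return np.array([np.sum(f(x) * np.exp(-1j * 2 * np.pi * v * x)) * dx
                     for v in np.atleast_1d(xi)])

f = lambda x: np.exp(-np.pi * x ** 2)
df = lambda x: -2 * np.pi * x * np.exp(-np.pi * x ** 2)   # f'(x), known in closed form

xi = np.linspace(-2, 2, 9)
print(np.max(np.abs(ft(df, xi) - 1j * 2 * np.pi * xi * ft(f, xi))))   # ~1e-15
```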
=== Eigenfunctions === The Fourier transform is a linear transform which has eigenfunctions obeying F [ ψ ] = λ ψ , {\displaystyle {\mathcal {F}}[\psi ]=\lambda \psi ,} with λ ∈ C . {\displaystyle \lambda \in \mathbb {C} .} A set of eigenfunctions is found by noting that the homogeneous differential equation [ U ( 1 2 π d d x ) + U ( x ) ] ψ ( x ) = 0 {\displaystyle \left[U\left({\frac {1}{2\pi }}{\frac {d}{dx}}\right)+U(x)\right]\psi (x)=0} leads to eigenfunctions ψ ( x ) {\displaystyle \psi (x)} of the Fourier transform F {\displaystyle {\mathcal {F}}} as long as the form of the equation remains invariant under Fourier transform. In other words, every solution ψ ( x ) {\displaystyle \psi (x)} and its Fourier transform ψ ^ ( ξ ) {\displaystyle {\hat {\psi }}(\xi )} obey the same equation. Assuming uniqueness of the solutions, every solution ψ ( x ) {\displaystyle \psi (x)} must therefore be an eigenfunction of the Fourier transform. The form of the equation remains unchanged under Fourier transform if U ( x ) {\displaystyle U(x)} can be expanded in a power series in which for all terms the same factor of either one of ± 1 , ± i {\displaystyle \pm 1,\pm i} arises from the factors i n {\displaystyle i^{n}} introduced by the differentiation rules upon Fourier transforming the homogeneous differential equation, because this factor may then be cancelled. The simplest allowable U ( x ) = x {\displaystyle U(x)=x} leads to the standard normal distribution. More generally, a set of eigenfunctions is also found by noting that the differentiation rules imply that the ordinary differential equation [ W ( i 2 π d d x ) + W ( x ) ] ψ ( x ) = C ψ ( x ) {\displaystyle \left[W\left({\frac {i}{2\pi }}{\frac {d}{dx}}\right)+W(x)\right]\psi (x)=C\psi (x)} with C {\displaystyle C} constant and W ( x ) {\displaystyle W(x)} being a non-constant even function remains invariant in form when applying the Fourier transform F {\displaystyle {\mathcal {F}}} to both sides of the equation. The simplest example is provided by W ( x ) = x 2 {\displaystyle W(x)=x^{2}} which is equivalent to considering the Schrödinger equation for the quantum harmonic oscillator. The corresponding solutions provide an important choice of an orthonormal basis for L2(R) and are given by the "physicist's" Hermite functions. Equivalently one may use ψ n ( x ) = 2 4 n ! e − π x 2 H e n ( 2 x π ) , {\displaystyle \psi _{n}(x)={\frac {\sqrt[{4}]{2}}{\sqrt {n!}}}e^{-\pi x^{2}}\mathrm {He} _{n}\left(2x{\sqrt {\pi }}\right),} where Hen(x) are the "probabilist's" Hermite polynomials, defined as H e n ( x ) = ( − 1 ) n e 1 2 x 2 ( d d x ) n e − 1 2 x 2 . {\displaystyle \mathrm {He} _{n}(x)=(-1)^{n}e^{{\frac {1}{2}}x^{2}}\left({\frac {d}{dx}}\right)^{n}e^{-{\frac {1}{2}}x^{2}}.} Under this convention for the Fourier transform, we have that ψ ^ n ( ξ ) = ( − i ) n ψ n ( ξ ) . {\displaystyle {\hat {\psi }}_{n}(\xi )=(-i)^{n}\psi _{n}(\xi ).} In other words, the Hermite functions form a complete orthonormal system of eigenfunctions for the Fourier transform on L2(R). However, this choice of eigenfunctions is not unique. Because of F 4 = i d {\displaystyle {\mathcal {F}}^{4}=\mathrm {id} } there are only four different eigenvalues of the Fourier transform (the fourth roots of unity ±1 and ±i) and any linear combination of eigenfunctions with the same eigenvalue gives another eigenfunction. As a consequence of this, it is possible to decompose L2(R) as a direct sum of four spaces H0, H1, H2, and H3, where the Fourier transform acts on Hk simply by multiplication by (−i)k. Since the complete set of Hermite functions ψn provides a resolution of the identity, they diagonalize the Fourier operator, i.e. the Fourier transform can be represented by such a sum of terms weighted by the above eigenvalues, and these sums can be explicitly summed: F [ f ] ( ξ ) = ∫ d x f ( x ) ∑ n ≥ 0 ( − i ) n ψ n ( x ) ψ n ( ξ ) . {\displaystyle {\mathcal {F}}[f](\xi )=\int dxf(x)\sum _{n\geq 0}(-i)^{n}\psi _{n}(x)\psi _{n}(\xi )~.} This approach to defining the Fourier transform was first proposed by Norbert Wiener. Among other properties, Hermite functions decrease exponentially fast in both frequency and time domains, and they are thus used to define a generalization of the Fourier transform, namely the fractional Fourier transform used in time–frequency analysis. In physics, this transform was introduced by Edward Condon. This change of basis functions becomes possible because the Fourier transform is a unitary transform when using the right conventions. Consequently, under the proper conditions it may be expected to result from a self-adjoint generator N {\displaystyle N} via F [ ψ ] = e − i t N ψ . {\displaystyle {\mathcal {F}}[\psi ]=e^{-itN}\psi .} The operator N {\displaystyle N} is the number operator of the quantum harmonic oscillator written as N ≡ 1 2 ( x − ∂ ∂ x ) ( x + ∂ ∂ x ) = 1 2 ( − ∂ 2 ∂ x 2 + x 2 − 1 ) . {\displaystyle N\equiv {\frac {1}{2}}\left(x-{\frac {\partial }{\partial x}}\right)\left(x+{\frac {\partial }{\partial x}}\right)={\frac {1}{2}}\left(-{\frac {\partial ^{2}}{\partial x^{2}}}+x^{2}-1\right).} It can be interpreted as the generator of fractional Fourier transforms for arbitrary values of t, and of the conventional continuous Fourier transform F {\displaystyle {\mathcal {F}}} for the particular value t = π / 2 , {\displaystyle t=\pi /2,} with the Mehler kernel implementing the corresponding active transform. The eigenfunctions of N {\displaystyle N} are the Hermite functions ψ n ( x ) {\displaystyle \psi _{n}(x)} which are therefore also eigenfunctions of F . {\displaystyle {\mathcal {F}}.} Upon extending the Fourier transform to distributions, the Dirac comb is also an eigenfunction of the Fourier transform.
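The eigenfunction relation ψ̂n = (−i)nψn can be checked numerically from the formulas above. The sketch below is illustrative only (NumPy assumed; the grid parameters are arbitrary): it builds the probabilist's Hermite polynomials by their three-term recurrence and transforms the first few ψn by a Riemann sum.

```python
import numpy as np
from math import factorial, pi

def He(n, x):
    # Probabilist's Hermite polynomials via He_{k+1}(x) = x He_k(x) - k He_{k-1}(x).
    a, b = np.ones_like(x), x
    if n == 0:
        return a
    for k in range(1, n):
        a, b = b, x * b - k * a
    return b

def psi(n, x):
    # The Hermite functions as defined in the text above.
    return 2 ** 0.25 / np.sqrt(factorial(n)) * np.exp(-pi * x ** 2) * He(n, 2 * x * np.sqrt(pi))

x, dx = np.linspace(-10, 10, 4001, retstep=True)
xi = np.linspace(-2, 2, 9)
for n in range(4):
    # Riemann-sum transform of psi_n, compared against (-i)^n psi_n.
    lhs = np.array([np.sum(psi(n, x) * np.exp(-2j * pi * v * x)) * dx for v in xi])
    print(n, np.max(np.abs(lhs - (-1j) ** n * psi(n, xi))))   # all ~1e-14
```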
=== Inversion and periodicity === Under suitable conditions on the function f {\displaystyle f} , it can be recovered from its Fourier transform f ^ {\displaystyle {\hat {f}}} . Indeed, denoting the Fourier transform operator by F {\displaystyle {\mathcal {F}}} , so F f := f ^ {\displaystyle {\mathcal {F}}f:={\hat {f}}} , then for suitable functions, applying the Fourier transform twice simply flips the function: ( F 2 f ) ( x ) = f ( − x ) {\displaystyle \left({\mathcal {F}}^{2}f\right)(x)=f(-x)} , which can be interpreted as "reversing time". Since reversing time is two-periodic, applying this twice yields F 4 ( f ) = f {\displaystyle {\mathcal {F}}^{4}(f)=f} , so the Fourier transform operator is four-periodic, and similarly the inverse Fourier transform can be obtained by applying the Fourier transform three times: F 3 ( f ^ ) = f {\displaystyle {\mathcal {F}}^{3}\left({\hat {f}}\right)=f} . In particular the Fourier transform is invertible (under suitable conditions). More precisely, defining the parity operator P {\displaystyle {\mathcal {P}}} such that ( P f ) ( x ) = f ( − x ) {\displaystyle ({\mathcal {P}}f)(x)=f(-x)} , we have: F 0 = i d , F 1 = F , F 2 = P , F 3 = F − 1 = P ∘ F = F ∘ P , F 4 = i d {\displaystyle {\begin{aligned}{\mathcal {F}}^{0}&=\mathrm {id} ,\\{\mathcal {F}}^{1}&={\mathcal {F}},\\{\mathcal {F}}^{2}&={\mathcal {P}},\\{\mathcal {F}}^{3}&={\mathcal {F}}^{-1}={\mathcal {P}}\circ {\mathcal {F}}={\mathcal {F}}\circ {\mathcal {P}},\\{\mathcal {F}}^{4}&=\mathrm {id} \end{aligned}}} These equalities of operators require careful definition of the space of functions in question, defining equality of functions (equality at every point? equality almost everywhere?) and defining equality of operators – that is, defining the topology on the function space and operator space in question. These are not true for all functions, but are true under various conditions, which are the content of the various forms of the Fourier inversion theorem. This fourfold periodicity of the Fourier transform is similar to a rotation of the plane by 90°, particularly as the two-fold iteration yields a reversal, and in fact this analogy can be made precise. While the Fourier transform can simply be interpreted as switching the time domain and the frequency domain, with the inverse Fourier transform switching them back, more geometrically it can be interpreted as a rotation by 90° in the time–frequency domain (considering time as the x-axis and frequency as the y-axis), and the Fourier transform can be generalized to the fractional Fourier transform, which involves rotations by other angles. This can be further generalized to linear canonical transformations, which can be visualized as the action of the special linear group SL2(R) on the time–frequency plane, with the preserved symplectic form corresponding to the uncertainty principle, below. This approach is particularly studied in signal processing, under time–frequency analysis. === Connection with the Heisenberg group === The Heisenberg group is a certain group of unitary operators on the Hilbert space L2(R) of square integrable complex valued functions f on the real line, generated by the translations (Ty f)(x) = f (x + y) and multiplication by ei2πξx, (Mξ f)(x) = ei2πξx f (x). 
These operators do not commute, as their (group) commutator is ( M ξ − 1 T y − 1 M ξ T y f ) ( x ) = e i 2 π ξ y f ( x ) {\displaystyle \left(M_{\xi }^{-1}T_{y}^{-1}M_{\xi }T_{y}f\right)(x)=e^{i2\pi \xi y}f(x)} which is multiplication by the constant (independent of x) ei2πξy ∈ U(1) (the circle group of unit modulus complex numbers). As an abstract group, the Heisenberg group is the three-dimensional Lie group of triples (x, ξ, t) ∈ R2 × U(1), with the group law ( x 1 , ξ 1 , t 1 ) ⋅ ( x 2 , ξ 2 , t 2 ) = ( x 1 + x 2 , ξ 1 + ξ 2 , t 1 t 2 e i 2 π ( x 1 ξ 1 + x 2 ξ 2 + x 1 ξ 2 ) ) . {\displaystyle \left(x_{1},\xi _{1},t_{1}\right)\cdot \left(x_{2},\xi _{2},t_{2}\right)=\left(x_{1}+x_{2},\xi _{1}+\xi _{2},t_{1}t_{2}e^{i2\pi \left(x_{1}\xi _{1}+x_{2}\xi _{2}+x_{1}\xi _{2}\right)}\right).} Denote the Heisenberg group by H1. The above procedure describes not only the group structure, but also a standard unitary representation of H1 on a Hilbert space, which we denote by ρ : H1 → B(L2(R)). Define the linear automorphism of R2 by J ( x ξ ) = ( − ξ x ) {\displaystyle J{\begin{pmatrix}x\\\xi \end{pmatrix}}={\begin{pmatrix}-\xi \\x\end{pmatrix}}} so that J2 = −I. This J can be extended to a unique automorphism of H1: j ( x , ξ , t ) = ( − ξ , x , t e − i 2 π ξ x ) . {\displaystyle j\left(x,\xi ,t\right)=\left(-\xi ,x,te^{-i2\pi \xi x}\right).} According to the Stone–von Neumann theorem, the unitary representations ρ and ρ ∘ j are unitarily equivalent, so there is a unique intertwiner W ∈ U(L2(R)) such that ρ ∘ j = W ρ W ∗ . {\displaystyle \rho \circ j=W\rho W^{*}.} This operator W is the Fourier transform. Many of the standard properties of the Fourier transform are immediate consequences of this more general framework. For example, the square of the Fourier transform, W2, is an intertwiner associated with J2 = −I, and so we have (W2f)(x) = f (−x) is the reflection of the original function f. == Complex domain == The integral for the Fourier transform f ^ ( ξ ) = ∫ − ∞ ∞ e − i 2 π ξ t f ( t ) d t {\displaystyle {\hat {f}}(\xi )=\int _{-\infty }^{\infty }e^{-i2\pi \xi t}f(t)\,dt} can be studied for complex values of its argument ξ. Depending on the properties of f, this might not converge off the real axis at all, or it might converge to a complex analytic function for all values of ξ = σ + iτ, or something in between. The Paley–Wiener theorem says that f is smooth (i.e., n-times differentiable for all positive integers n) and compactly supported if and only if f̂ (σ + iτ) is a holomorphic function for which there exists a constant a > 0 such that for any integer n ≥ 0, | ξ n f ^ ( ξ ) | ≤ C e a | τ | {\displaystyle \left\vert \xi ^{n}{\hat {f}}(\xi )\right\vert \leq Ce^{a\vert \tau \vert }} for some constant C. (In this case, f is supported on [−a, a].) This can be expressed by saying that f̂ is an entire function which is rapidly decreasing in σ (for fixed τ) and of exponential growth in τ (uniformly in σ). (If f is not smooth, but only L2, the statement still holds provided n = 0.) The space of such functions of a complex variable is called the Paley–Wiener space. This theorem has been generalised to semisimple Lie groups. If f is supported on the half-line t ≥ 0, then f is said to be "causal" because the impulse response function of a physically realisable filter must have this property, as no effect can precede its cause. Paley and Wiener showed that then f̂ extends to a holomorphic function on the complex lower half-plane τ < 0 which tends to zero as τ goes to infinity.
The converse is false and it is not known how to characterise the Fourier transform of a causal function. === Laplace transform === The Fourier transform f̂(ξ) is related to the Laplace transform F(s), which is also used for the solution of differential equations and the analysis of filters. It may happen that a function f, for which the Fourier integral does not converge on the real axis at all, nevertheless has a complex Fourier transform defined in some region of the complex plane. For example, if f(t) is of exponential growth, i.e., | f ( t ) | < C e a | t | {\displaystyle \vert f(t)\vert <Ce^{a\vert t\vert }} for some constants C, a ≥ 0, then f ^ ( i τ ) = ∫ − ∞ ∞ e 2 π τ t f ( t ) d t , {\displaystyle {\hat {f}}(i\tau )=\int _{-\infty }^{\infty }e^{2\pi \tau t}f(t)\,dt,} convergent for all 2πτ < −a, is the two-sided Laplace transform of f. The more usual version ("one-sided") of the Laplace transform is F ( s ) = ∫ 0 ∞ f ( t ) e − s t d t . {\displaystyle F(s)=\int _{0}^{\infty }f(t)e^{-st}\,dt.} If f is also causal and analytic, then: f ^ ( i τ ) = F ( − 2 π τ ) . {\displaystyle {\hat {f}}(i\tau )=F(-2\pi \tau ).} Thus, extending the Fourier transform to the complex domain means it includes the Laplace transform as a special case for causal functions, with the change of variable s = i2πξ. From another, perhaps more classical viewpoint, the Laplace transform by its form involves an additional exponential regulating term which lets it converge outside of the imaginary line where the Fourier transform is defined. As such it can converge for at most exponentially divergent series and integrals, whereas the original Fourier decomposition cannot, enabling analysis of systems with divergent or critical elements. Two particular examples from linear signal processing are the construction of allpass filter networks from critical comb and mitigating filters via exact pole-zero cancellation on the unit circle. Such designs are common in audio processing, where a highly nonlinear phase response is sought, as in reverb. Furthermore, when extended pulselike impulse responses are sought for signal processing work, the easiest way to produce them is to have one circuit which produces a divergent time response, and then to cancel its divergence through a delayed opposite and compensatory response. There, only the delay circuit in-between admits a classical Fourier description, which is critical. Both the circuits to the side are unstable, and do not admit a convergent Fourier decomposition. However, they do admit a Laplace domain description, with identical half-planes of convergence in the complex plane (or in the discrete case, the Z-plane), wherein their effects cancel. In modern mathematics the Laplace transform is conventionally subsumed under the aegis of Fourier methods. Both of them are subsumed by the far more general, and more abstract, idea of harmonic analysis. === Inversion === Still with ξ = σ + i τ {\displaystyle \xi =\sigma +i\tau } , if f ^ {\displaystyle {\widehat {f}}} is complex analytic for a ≤ τ ≤ b, then ∫ − ∞ ∞ f ^ ( σ + i a ) e i 2 π ξ t d σ = ∫ − ∞ ∞ f ^ ( σ + i b ) e i 2 π ξ t d σ {\displaystyle \int _{-\infty }^{\infty }{\hat {f}}(\sigma +ia)e^{i2\pi \xi t}\,d\sigma =\int _{-\infty }^{\infty }{\hat {f}}(\sigma +ib)e^{i2\pi \xi t}\,d\sigma } by Cauchy's integral theorem. Therefore, the Fourier inversion formula can use integration along different lines, parallel to the real axis.
Theorem: If f(t) = 0 for t < 0, and |f(t)| < Cea|t| for some constants C, a > 0, then f ( t ) = ∫ − ∞ ∞ f ^ ( σ + i τ ) e i 2 π ξ t d σ , {\displaystyle f(t)=\int _{-\infty }^{\infty }{\hat {f}}(\sigma +i\tau )e^{i2\pi \xi t}\,d\sigma ,} for any τ < −a/2π. This theorem implies the Mellin inversion formula for the Laplace transformation, f ( t ) = 1 i 2 π ∫ b − i ∞ b + i ∞ F ( s ) e s t d s {\displaystyle f(t)={\frac {1}{i2\pi }}\int _{b-i\infty }^{b+i\infty }F(s)e^{st}\,ds} for any b > a, where F(s) is the Laplace transform of f(t). The hypotheses can be weakened, as in the results of Carleson and Hunt, to f(t) e−at being L1, provided that f be of bounded variation in a closed neighborhood of t (cf. Dini test), the value of f at t be taken to be the arithmetic mean of the left and right limits, and that the integrals be taken in the sense of Cauchy principal values. L2 versions of these inversion formulas are also available. == Fourier transform on Euclidean space == The Fourier transform can be defined in any number of dimensions n. As with the one-dimensional case, there are many conventions. For an integrable function f(x), this article takes the definition: f ^ ( ξ ) = F ( f ) ( ξ ) = ∫ R n f ( x ) e − i 2 π ξ ⋅ x d x {\displaystyle {\hat {f}}({\boldsymbol {\xi }})={\mathcal {F}}(f)({\boldsymbol {\xi }})=\int _{\mathbb {R} ^{n}}f(\mathbf {x} )e^{-i2\pi {\boldsymbol {\xi }}\cdot \mathbf {x} }\,d\mathbf {x} } where x and ξ are n-dimensional vectors, and x · ξ is the dot product of the vectors. Alternatively, ξ can be viewed as belonging to the dual vector space R n ⋆ {\displaystyle \mathbb {R} ^{n\star }} , in which case the dot product becomes the contraction of x and ξ, usually written as ⟨x, ξ⟩. All of the basic properties listed above hold for the n-dimensional Fourier transform, as do Plancherel's and Parseval's theorems. When the function is integrable, the Fourier transform is still uniformly continuous and the Riemann–Lebesgue lemma holds. === Uncertainty principle === Generally speaking, the more concentrated f(x) is, the more spread out its Fourier transform f̂(ξ) must be. In particular, the scaling property of the Fourier transform may be seen as saying: if we squeeze a function in x, its Fourier transform stretches out in ξ. It is not possible to arbitrarily concentrate both a function and its Fourier transform. The trade-off between the compaction of a function and its Fourier transform can be formalized in the form of an uncertainty principle by viewing a function and its Fourier transform as conjugate variables with respect to the symplectic form on the time–frequency domain: from the point of view of the linear canonical transformation, the Fourier transform is rotation by 90° in the time–frequency domain, and preserves the symplectic form. Suppose f(x) is an integrable and square-integrable function. Without loss of generality, assume that f(x) is normalized: ∫ − ∞ ∞ | f ( x ) | 2 d x = 1. {\displaystyle \int _{-\infty }^{\infty }|f(x)|^{2}\,dx=1.} It follows from the Plancherel theorem that f̂(ξ) is also normalized. The spread around x = 0 may be measured by the dispersion about zero defined by D 0 ( f ) = ∫ − ∞ ∞ x 2 | f ( x ) | 2 d x . {\displaystyle D_{0}(f)=\int _{-\infty }^{\infty }x^{2}|f(x)|^{2}\,dx.} In probability terms, this is the second moment of |f(x)|2 about zero. The uncertainty principle states that, if f(x) is absolutely continuous and the functions x·f(x) and f′(x) are square integrable, then D 0 ( f ) D 0 ( f ^ ) ≥ 1 16 π 2 . {\displaystyle D_{0}(f)D_{0}({\hat {f}})\geq {\frac {1}{16\pi ^{2}}}.} The equality is attained only in the case f ( x ) = C 1 e − π x 2 σ 2 ∴ f ^ ( ξ ) = σ C 1 e − π σ 2 ξ 2 {\displaystyle {\begin{aligned}f(x)&=C_{1}\,e^{-\pi {\frac {x^{2}}{\sigma ^{2}}}}\\\therefore {\hat {f}}(\xi )&=\sigma C_{1}\,e^{-\pi \sigma ^{2}\xi ^{2}}\end{aligned}}} where σ > 0 is arbitrary and C1 = ∜2/√σ so that f is L2-normalized. In other words, f is a (normalized) Gaussian function of variance σ2/(2π), centered at zero, and its Fourier transform is a Gaussian function of variance σ−2/(2π). Gaussian functions are examples of Schwartz functions (see the discussion on tempered distributions below).
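A numerical illustration (NumPy assumed; all grid constants are arbitrary choices): the dispersion product equals the bound 1/(16π²) for a Gaussian and exceeds it for other profiles, here a two-sided exponential, for which the product is about twice the bound.

```python
import numpy as np

L, n = 80.0, 2 ** 14
x, dx = np.linspace(-L / 2, L / 2, n, endpoint=False, retstep=True)
xi = np.fft.fftfreq(n, d=dx)               # frequency grid matching np.fft.fft

def dispersion_product(f):
    f = f / np.sqrt(np.sum(np.abs(f) ** 2) * dx)    # L2-normalize
    fhat = np.fft.fft(f) * dx                       # discrete approximation of Eq.1
    D_x = np.sum(x ** 2 * np.abs(f) ** 2) * dx
    D_xi = np.sum(xi ** 2 * np.abs(fhat) ** 2) / L
    return D_x * D_xi

print(1 / (16 * np.pi ** 2))                        # the bound, ~0.00633
print(dispersion_product(np.exp(-np.pi * x ** 2)))  # Gaussian: attains it
print(dispersion_product(np.exp(-np.abs(x))))       # two-sided exponential: ~2x the bound
```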
In fact, this inequality implies that: ( ∫ − ∞ ∞ ( x − x 0 ) 2 | f ( x ) | 2 d x ) ( ∫ − ∞ ∞ ( ξ − ξ 0 ) 2 | f ^ ( ξ ) | 2 d ξ ) ≥ 1 16 π 2 , ∀ x 0 , ξ 0 ∈ R . {\displaystyle \left(\int _{-\infty }^{\infty }(x-x_{0})^{2}|f(x)|^{2}\,dx\right)\left(\int _{-\infty }^{\infty }(\xi -\xi _{0})^{2}\left|{\hat {f}}(\xi )\right|^{2}\,d\xi \right)\geq {\frac {1}{16\pi ^{2}}},\quad \forall x_{0},\xi _{0}\in \mathbb {R} .} In quantum mechanics, the momentum and position wave functions are Fourier transform pairs, up to a factor of the Planck constant. With this constant properly taken into account, the inequality above becomes the statement of the Heisenberg uncertainty principle. A stronger uncertainty principle is the Hirschman uncertainty principle, which is expressed as: H ( | f | 2 ) + H ( | f ^ | 2 ) ≥ log ⁡ ( e 2 ) {\displaystyle H\left(\left|f\right|^{2}\right)+H\left(\left|{\hat {f}}\right|^{2}\right)\geq \log \left({\frac {e}{2}}\right)} where H(p) is the differential entropy of the probability density function p(x): H ( p ) = − ∫ − ∞ ∞ p ( x ) log ⁡ ( p ( x ) ) d x {\displaystyle H(p)=-\int _{-\infty }^{\infty }p(x)\log {\bigl (}p(x){\bigr )}\,dx} where the logarithms may be in any base that is consistent. The equality is attained for a Gaussian, as in the previous case. === Sine and cosine transforms === Fourier's original formulation of the transform did not use complex numbers, but rather sines and cosines. Statisticians and others still use this form. An absolutely integrable function f for which Fourier inversion holds can be expanded in terms of genuine frequencies (avoiding negative frequencies, which are sometimes considered hard to interpret physically) λ by f ( t ) = ∫ 0 ∞ ( a ( λ ) cos ⁡ ( 2 π λ t ) + b ( λ ) sin ⁡ ( 2 π λ t ) ) d λ . {\displaystyle f(t)=\int _{0}^{\infty }{\bigl (}a(\lambda )\cos(2\pi \lambda t)+b(\lambda )\sin(2\pi \lambda t){\bigr )}\,d\lambda .} This is called an expansion as a trigonometric integral, or a Fourier integral expansion. The coefficient functions a and b can be found by using variants of the Fourier cosine transform and the Fourier sine transform (the normalisations are, again, not standardised): a ( λ ) = 2 ∫ − ∞ ∞ f ( t ) cos ⁡ ( 2 π λ t ) d t {\displaystyle a(\lambda )=2\int _{-\infty }^{\infty }f(t)\cos(2\pi \lambda t)\,dt} and b ( λ ) = 2 ∫ − ∞ ∞ f ( t ) sin ⁡ ( 2 π λ t ) d t . {\displaystyle b(\lambda )=2\int _{-\infty }^{\infty }f(t)\sin(2\pi \lambda t)\,dt.} Older literature refers to the two transform functions, the Fourier cosine transform, a, and the Fourier sine transform, b. The function f can be recovered from the sine and cosine transforms, together with trigonometric identities, using f ( t ) = 2 ∫ 0 ∞ ∫ − ∞ ∞ f ( τ ) cos ⁡ ( 2 π λ ( τ − t ) ) d τ d λ . {\displaystyle f(t)=2\int _{0}^{\infty }\int _{-\infty }^{\infty }f(\tau )\cos {\bigl (}2\pi \lambda (\tau -t){\bigr )}\,d\tau \,d\lambda .} This is referred to as Fourier's integral formula.
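The coefficient functions and the reconstruction can be approximated by quadrature. The following sketch is illustrative only (NumPy assumed; the test function e−π(t − 1/2)², the frequency cutoff and the grids are arbitrary choices):

```python
import numpy as np

t, dt = np.linspace(-10, 10, 4001, retstep=True)
lam = np.linspace(0, 6, 1201)                  # genuine (non-negative) frequencies
f = np.exp(-np.pi * (t - 0.5) ** 2)            # neither even nor odd, so a and b both matter

# Cosine and sine coefficients a(lambda), b(lambda) by trapezoidal quadrature.
a = 2 * np.array([np.trapz(f * np.cos(2 * np.pi * l * t), dx=dt) for l in lam])
b = 2 * np.array([np.trapz(f * np.sin(2 * np.pi * l * t), dx=dt) for l in lam])

# Reconstruct f at a few points from the trigonometric integral.
for t0 in (-0.5, 0.0, 0.5, 1.0):
    rec = np.trapz(a * np.cos(2 * np.pi * lam * t0) + b * np.sin(2 * np.pi * lam * t0), x=lam)
    print(t0, rec, np.exp(-np.pi * (t0 - 0.5) ** 2))   # reconstruction matches the exact value
```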
=== Spherical harmonics === Let the set of homogeneous harmonic polynomials of degree k on Rn be denoted by Ak. The set Ak consists of the solid spherical harmonics of degree k. The solid spherical harmonics play a similar role in higher dimensions to the Hermite polynomials in dimension one. Specifically, if f(x) = e−π|x|2P(x) for some P(x) in Ak, then f̂(ξ) = i−k f(ξ). Let the set Hk be the closure in L2(Rn) of linear combinations of functions of the form f(|x|)P(x) where P(x) is in Ak. The space L2(Rn) is then a direct sum of the spaces Hk, the Fourier transform maps each space Hk to itself, and it is possible to characterize the action of the Fourier transform on each space Hk. Let f(x) = f0(|x|)P(x) (with P(x) in Ak), then f ^ ( ξ ) = F 0 ( | ξ | ) P ( ξ ) {\displaystyle {\hat {f}}(\xi )=F_{0}(|\xi |)P(\xi )} where F 0 ( r ) = 2 π i − k r − n + 2 k − 2 2 ∫ 0 ∞ f 0 ( s ) J n + 2 k − 2 2 ( 2 π r s ) s n + 2 k 2 d s . {\displaystyle F_{0}(r)=2\pi i^{-k}r^{-{\frac {n+2k-2}{2}}}\int _{0}^{\infty }f_{0}(s)J_{\frac {n+2k-2}{2}}(2\pi rs)s^{\frac {n+2k}{2}}\,ds.} Here J(n + 2k − 2)/2 denotes the Bessel function of the first kind with order (n + 2k − 2)/2. When k = 0 this gives a useful formula for the Fourier transform of a radial function. This is essentially the Hankel transform. Moreover, there is a simple recursion relating the cases n + 2 and n allowing one to compute, e.g., the three-dimensional Fourier transform of a radial function from the one-dimensional one. === Restriction problems === In higher dimensions it becomes interesting to study restriction problems for the Fourier transform. The Fourier transform of an integrable function is continuous and the restriction of this function to any set is defined. But the Fourier transform of a square-integrable function can be an arbitrary square-integrable function. As such, the restriction of the Fourier transform of an L2(Rn) function cannot be defined on sets of measure 0. It is still an active area of study to understand restriction problems in Lp for 1 < p < 2. It is possible in some cases to define the restriction of a Fourier transform to a set S, provided S has non-zero curvature. The case when S is the unit sphere in Rn is of particular interest. In this case the Tomas–Stein restriction theorem states that the restriction of the Fourier transform to the unit sphere in Rn is a bounded operator on Lp provided 1 ≤ p ≤ (2n + 2)/(n + 3). One notable difference between the Fourier transform in 1 dimension versus higher dimensions concerns the partial sum operator. Consider an increasing collection of measurable sets ER indexed by R ∈ (0,∞), such as balls of radius R centered at the origin, or cubes of side 2R. For a given integrable function f, consider the function fR defined by: f R ( x ) = ∫ E R f ^ ( ξ ) e i 2 π x ⋅ ξ d ξ , x ∈ R n . {\displaystyle f_{R}(x)=\int _{E_{R}}{\hat {f}}(\xi )e^{i2\pi x\cdot \xi }\,d\xi ,\quad x\in \mathbb {R} ^{n}.} Suppose in addition that f ∈ Lp(Rn). For n = 1 and 1 < p < ∞, if one takes ER = (−R, R), then fR converges to f in Lp as R tends to infinity, by the boundedness of the Hilbert transform. Naively one may hope the same holds true for n > 1. In the case that ER is taken to be a cube with side length R, then convergence still holds.
Another natural candidate is the Euclidean ball ER = {ξ : |ξ| < R}. In order for this partial sum operator to converge, it is necessary that the multiplier for the unit ball be bounded in Lp(Rn). For n ≥ 2 it is a celebrated theorem of Charles Fefferman that the multiplier for the unit ball is never bounded unless p = 2. In fact, when p ≠ 2, this shows that not only may fR fail to converge to f in Lp, but for some functions f ∈ Lp(Rn), fR is not even an element of Lp. == Fourier transform on function spaces == The definition of the Fourier transform naturally extends from L 1 ( R ) {\displaystyle L^{1}(\mathbb {R} )} to L 1 ( R n ) {\displaystyle L^{1}(\mathbb {R} ^{n})} . That is, if f ∈ L 1 ( R n ) {\displaystyle f\in L^{1}(\mathbb {R} ^{n})} then the Fourier transform F : L 1 ( R n ) → L ∞ ( R n ) {\displaystyle {\mathcal {F}}:L^{1}(\mathbb {R} ^{n})\to L^{\infty }(\mathbb {R} ^{n})} is given by f ( x ) ↦ f ^ ( ξ ) = ∫ R n f ( x ) e − i 2 π ξ ⋅ x d x , ∀ ξ ∈ R n . {\displaystyle f(x)\mapsto {\hat {f}}(\xi )=\int _{\mathbb {R} ^{n}}f(x)e^{-i2\pi \xi \cdot x}\,dx,\quad \forall \xi \in \mathbb {R} ^{n}.} This operator is bounded as sup ξ ∈ R n | f ^ ( ξ ) | ≤ ∫ R n | f ( x ) | d x , {\displaystyle \sup _{\xi \in \mathbb {R} ^{n}}\left\vert {\hat {f}}(\xi )\right\vert \leq \int _{\mathbb {R} ^{n}}\vert f(x)\vert \,dx,} which shows that its operator norm is bounded by 1. The Riemann–Lebesgue lemma shows that if f ∈ L 1 ( R n ) {\displaystyle f\in L^{1}(\mathbb {R} ^{n})} then its Fourier transform actually belongs to the space of continuous functions which vanish at infinity, i.e., f ^ ∈ C 0 ( R n ) ⊂ L ∞ ( R n ) {\displaystyle {\hat {f}}\in C_{0}(\mathbb {R} ^{n})\subset L^{\infty }(\mathbb {R} ^{n})} . Furthermore, the image of L 1 {\displaystyle L^{1}} under F {\displaystyle {\mathcal {F}}} is a strict subset of C 0 ( R n ) {\displaystyle C_{0}(\mathbb {R} ^{n})} . Similarly to the case of one variable, the Fourier transform can be defined on L 2 ( R n ) {\displaystyle L^{2}(\mathbb {R} ^{n})} . The Fourier transform in L 2 ( R n ) {\displaystyle L^{2}(\mathbb {R} ^{n})} is no longer given by an ordinary Lebesgue integral, although it can be computed by an improper integral, i.e., f ^ ( ξ ) = lim R → ∞ ∫ | x | ≤ R f ( x ) e − i 2 π ξ ⋅ x d x {\displaystyle {\hat {f}}(\xi )=\lim _{R\to \infty }\int _{|x|\leq R}f(x)e^{-i2\pi \xi \cdot x}\,dx} where the limit is taken in the L2 sense. Furthermore, F : L 2 ( R n ) → L 2 ( R n ) {\displaystyle {\mathcal {F}}:L^{2}(\mathbb {R} ^{n})\to L^{2}(\mathbb {R} ^{n})} is a unitary operator. For an operator to be unitary it is sufficient to show that it is bijective and preserves the inner product, so in this case these follow from the Fourier inversion theorem combined with the fact that for any f, g ∈ L2(Rn) we have ∫ R n f ( x ) F g ( x ) d x = ∫ R n F f ( x ) g ( x ) d x . {\displaystyle \int _{\mathbb {R} ^{n}}f(x){\mathcal {F}}g(x)\,dx=\int _{\mathbb {R} ^{n}}{\mathcal {F}}f(x)g(x)\,dx.} In particular, the image of L2(Rn) under the Fourier transform is L2(Rn) itself. === On other Lp === For 1 < p < 2 {\displaystyle 1<p<2} , the Fourier transform can be defined on L p ( R ) {\displaystyle L^{p}(\mathbb {R} )} by Marcinkiewicz interpolation, which amounts to decomposing such functions into a fat tail part in L2 plus a fat body part in L1. In each of these spaces, the Fourier transform of a function in Lp(Rn) is in Lq(Rn), where q = p/(p − 1) is the Hölder conjugate of p (by the Hausdorff–Young inequality).
However, except for p = 2, the image is not easily characterized. Further extensions become more technical. The Fourier transform of functions in Lp for the range 2 < p < ∞ requires the study of distributions. In fact, it can be shown that there are functions in Lp with p > 2 so that the Fourier transform is not defined as a function. === Tempered distributions === One might consider enlarging the domain of the Fourier transform from L 1 + L 2 {\displaystyle L^{1}+L^{2}} by considering generalized functions, or distributions. A distribution on R n {\displaystyle \mathbb {R} ^{n}} is a continuous linear functional on the space C c ∞ ( R n ) {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})} of compactly supported smooth functions (i.e. bump functions), equipped with a suitable topology. Since C c ∞ ( R n ) {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})} is dense in L 2 ( R n ) {\displaystyle L^{2}(\mathbb {R} ^{n})} , the Plancherel theorem allows one to extend the definition of the Fourier transform to general functions in L 2 ( R n ) {\displaystyle L^{2}(\mathbb {R} ^{n})} by continuity arguments. The strategy is then to consider the action of the Fourier transform on C c ∞ ( R n ) {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})} and pass to distributions by duality. The obstruction to doing this is that the Fourier transform does not map C c ∞ ( R n ) {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})} to C c ∞ ( R n ) {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})} . In fact the Fourier transform of an element in C c ∞ ( R n ) {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})} cannot vanish on an open set; see the above discussion on the uncertainty principle. The Fourier transform can also be defined for tempered distributions S ′ ( R n ) {\displaystyle {\mathcal {S}}'(\mathbb {R} ^{n})} , dual to the space of Schwartz functions S ( R n ) {\displaystyle {\mathcal {S}}(\mathbb {R} ^{n})} . A Schwartz function is a smooth function that decays at infinity, along with all of its derivatives, hence C c ∞ ( R n ) ⊂ S ( R n ) {\displaystyle C_{c}^{\infty }(\mathbb {R} ^{n})\subset {\mathcal {S}}(\mathbb {R} ^{n})} and: F : C c ∞ ( R n ) → S ( R n ) ∖ C c ∞ ( R n ) . {\displaystyle {\mathcal {F}}:C_{c}^{\infty }(\mathbb {R} ^{n})\rightarrow S(\mathbb {R} ^{n})\setminus C_{c}^{\infty }(\mathbb {R} ^{n}).} The Fourier transform is an automorphism of the Schwartz space and, by duality, also an automorphism of the space of tempered distributions. The tempered distributions include well-behaved functions of polynomial growth, distributions of compact support as well as all the integrable functions mentioned above. For the definition of the Fourier transform of a tempered distribution, let f {\displaystyle f} and g {\displaystyle g} be integrable functions, and let f ^ {\displaystyle {\hat {f}}} and g ^ {\displaystyle {\hat {g}}} be their Fourier transforms respectively. Then the Fourier transform obeys the following multiplication formula, ∫ R n f ^ ( x ) g ( x ) d x = ∫ R n f ( x ) g ^ ( x ) d x . {\displaystyle \int _{\mathbb {R} ^{n}}{\hat {f}}(x)g(x)\,dx=\int _{\mathbb {R} ^{n}}f(x){\hat {g}}(x)\,dx.}
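The multiplication formula is easy to test numerically because Gaussian transform pairs are known in closed form. A hedged sketch (NumPy assumed; the two widths are arbitrary illustrative choices):

```python
import numpy as np

# Check of the multiplication formula using closed-form transform pairs:
# f(x) = exp(-pi x^2)     => f^(xi) = exp(-pi xi^2)
# g(x) = exp(-2 pi x^2)   => g^(xi) = exp(-pi xi^2 / 2) / sqrt(2)
x, dx = np.linspace(-20, 20, 8001, retstep=True)
f, fhat = np.exp(-np.pi * x ** 2), np.exp(-np.pi * x ** 2)
g, ghat = np.exp(-2 * np.pi * x ** 2), np.exp(-np.pi * x ** 2 / 2) / np.sqrt(2)

print(np.sum(fhat * g) * dx)   # integral of f^ g, ~0.5774 (= 1/sqrt(3))
print(np.sum(f * ghat) * dx)   # integral of f g^, the same value
```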
Every integrable function f {\displaystyle f} defines (induces) a distribution T f {\displaystyle T_{f}} by the relation T f ( ϕ ) = ∫ R n f ( x ) ϕ ( x ) d x , ∀ ϕ ∈ S ( R n ) . {\displaystyle T_{f}(\phi )=\int _{\mathbb {R} ^{n}}f(x)\phi (x)\,dx,\quad \forall \phi \in {\mathcal {S}}(\mathbb {R} ^{n}).} So it makes sense to define the Fourier transform of a tempered distribution T f ∈ S ′ ( R n ) {\displaystyle T_{f}\in {\mathcal {S}}'(\mathbb {R} ^{n})} by the duality: ⟨ T ^ f , ϕ ⟩ = ⟨ T f , ϕ ^ ⟩ , ∀ ϕ ∈ S ( R n ) . {\displaystyle \langle {\widehat {T}}_{f},\phi \rangle =\langle T_{f},{\widehat {\phi }}\rangle ,\quad \forall \phi \in {\mathcal {S}}(\mathbb {R} ^{n}).} Extending this to all tempered distributions T {\displaystyle T} gives the general definition of the Fourier transform. Distributions can be differentiated and the above-mentioned compatibility of the Fourier transform with differentiation and convolution remains true for tempered distributions. == Generalizations == === Fourier–Stieltjes transform on measurable spaces === The Fourier transform of a finite Borel measure μ on Rn is given by the continuous function: μ ^ ( ξ ) = ∫ R n e − i 2 π x ⋅ ξ d μ , {\displaystyle {\hat {\mu }}(\xi )=\int _{\mathbb {R} ^{n}}e^{-i2\pi x\cdot \xi }\,d\mu ,} and called the Fourier–Stieltjes transform due to its connection with the Riemann–Stieltjes integral representation of (Radon) measures. If μ {\displaystyle \mu } is the probability distribution of a random variable X {\displaystyle X} then its Fourier–Stieltjes transform is, by definition, a characteristic function. If, in addition, the probability distribution has a probability density function, this definition reduces to the usual Fourier transform applied to the density. Stated more generally, when μ {\displaystyle \mu } is absolutely continuous with respect to the Lebesgue measure, i.e., d μ = f ( x ) d x , {\displaystyle d\mu =f(x)dx,} then μ ^ ( ξ ) = f ^ ( ξ ) , {\displaystyle {\hat {\mu }}(\xi )={\hat {f}}(\xi ),} and the Fourier–Stieltjes transform reduces to the usual definition of the Fourier transform. A notable difference from the Fourier transform of integrable functions, however, is that the Fourier–Stieltjes transform need not vanish at infinity, i.e., the Riemann–Lebesgue lemma fails for measures. Bochner's theorem characterizes which functions may arise as the Fourier–Stieltjes transform of a positive measure on the circle. One example of a finite Borel measure that is not a function is the Dirac measure. Its Fourier transform is a constant function (whose value depends on the form of the Fourier transform used). === Locally compact abelian groups === The Fourier transform may be generalized to any locally compact abelian group, i.e., an abelian group that is also a locally compact Hausdorff space such that the group operation is continuous. If G is a locally compact abelian group, it has a translation invariant measure μ, called Haar measure. For a locally compact abelian group G, the irreducible (i.e., one-dimensional) unitary representations are called its characters. With its natural group structure and the topology of uniform convergence on compact sets (that is, the topology induced by the compact-open topology on the space of all continuous functions from G {\displaystyle G} to the circle group), the set of characters Ĝ is itself a locally compact abelian group, called the Pontryagin dual of G. For a function f in L1(G), its Fourier transform is defined by f ^ ( ξ ) = ∫ G ξ ( x ) f ( x ) d μ for any ξ ∈ G ^ . {\displaystyle {\hat {f}}(\xi )=\int _{G}\xi (x)f(x)\,d\mu \quad {\text{for any }}\xi \in {\hat {G}}.} The Riemann–Lebesgue lemma holds in this case; f̂(ξ) is a function vanishing at infinity on Ĝ.
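For a finite abelian group the integral above becomes a finite sum. A small illustrative sketch (assuming NumPy, and adopting the same sign convention as Eq.1 for the character paired against f): on G = Z/NZ with counting measure, the recipe reproduces exactly the discrete Fourier transform.

```python
import numpy as np

# Fourier transform on the finite abelian group G = Z/NZ.
# The characters of G are n -> exp(i 2 pi k n / N), one for each k in the
# dual group (itself isomorphic to Z/NZ); pairing f against the conjugate
# character, as in Eq.1, gives the DFT.
N = 8
rng = np.random.default_rng(0)
f = rng.standard_normal(N)

k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
characters = np.exp(-1j * 2 * np.pi * k * n / N)   # one row per dual-group element
fhat = characters @ f                              # Fourier transform on G

print(np.max(np.abs(fhat - np.fft.fft(f))))        # 0 up to rounding: same as the DFT
```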
The Fourier transform on T = R/Z is an example; here T is a locally compact abelian group, and the Haar measure μ on T can be thought of as the Lebesgue measure on [0,1). Consider the representation of T on the complex plane C that is a 1-dimensional complex vector space. There is a family of representations (all irreducible, since C is 1-dimensional) { e k : T → G L 1 ( C ) = C ∗ ∣ k ∈ Z } {\displaystyle \{e_{k}:T\rightarrow GL_{1}(C)=C^{*}\mid k\in Z\}} where e k ( x ) = e i 2 π k x {\displaystyle e_{k}(x)=e^{i2\pi kx}} for x ∈ T {\displaystyle x\in T} . The character of such a representation, that is the trace of e k ( x ) {\displaystyle e_{k}(x)} for each x ∈ T {\displaystyle x\in T} and k ∈ Z {\displaystyle k\in Z} , is e i 2 π k x {\displaystyle e^{i2\pi kx}} itself. In the case of a representation of a finite group, the character table of the group G consists of rows of vectors, each row being the character of one irreducible representation of G, and these vectors form an orthonormal basis of the space of class functions that map from G to C by Schur's lemma. Now the group T is no longer finite but still compact, and it preserves the orthonormality of the character table. Each row of the table is the function e k ( x ) {\displaystyle e_{k}(x)} of x ∈ T , {\displaystyle x\in T,} and the inner product between two class functions (all functions being class functions since T is abelian) f , g ∈ L 2 ( T , d μ ) {\displaystyle f,g\in L^{2}(T,d\mu )} is defined as ⟨ f , g ⟩ = 1 | T | ∫ [ 0 , 1 ) f ( y ) g ¯ ( y ) d μ ( y ) {\textstyle \langle f,g\rangle ={\frac {1}{|T|}}\int _{[0,1)}f(y){\overline {g}}(y)d\mu (y)} with the normalizing factor | T | = 1 {\displaystyle |T|=1} . The sequence { e k ∣ k ∈ Z } {\displaystyle \{e_{k}\mid k\in Z\}} is an orthonormal basis of the space of class functions L 2 ( T , d μ ) {\displaystyle L^{2}(T,d\mu )} . For any representation V of a finite group G, χ v {\displaystyle \chi _{v}} can be expressed as the sum ∑ i ⟨ χ v , χ v i ⟩ χ v i {\textstyle \sum _{i}\left\langle \chi _{v},\chi _{v_{i}}\right\rangle \chi _{v_{i}}} (where the V i {\displaystyle V_{i}} are the irreducible representations of G), such that ⟨ χ v , χ v i ⟩ = 1 | G | ∑ g ∈ G χ v ( g ) χ ¯ v i ( g ) {\textstyle \left\langle \chi _{v},\chi _{v_{i}}\right\rangle ={\frac {1}{|G|}}\sum _{g\in G}\chi _{v}(g){\overline {\chi }}_{v_{i}}(g)} . Similarly for G = T {\displaystyle G=T} and f ∈ L 2 ( T , d μ ) {\displaystyle f\in L^{2}(T,d\mu )} , f ( x ) = ∑ k ∈ Z f ^ ( k ) e k {\textstyle f(x)=\sum _{k\in Z}{\hat {f}}(k)e_{k}} . The Pontryagin dual T ^ {\displaystyle {\hat {T}}} is { e k } ( k ∈ Z ) {\displaystyle \{e_{k}\}(k\in Z)} and for f ∈ L 2 ( T , d μ ) {\displaystyle f\in L^{2}(T,d\mu )} , f ^ ( k ) = 1 | T | ∫ [ 0 , 1 ) f ( y ) e − i 2 π k y d y {\textstyle {\hat {f}}(k)={\frac {1}{|T|}}\int _{[0,1)}f(y)e^{-i2\pi ky}dy} is its Fourier transform for e k ∈ T ^ {\displaystyle e_{k}\in {\hat {T}}} . === Gelfand transform === The Fourier transform is also a special case of the Gelfand transform. In this particular context, it is closely related to the Pontryagin duality map defined above. Given an abelian locally compact Hausdorff topological group G, as before we consider the space L1(G), defined using a Haar measure. With convolution as multiplication, L1(G) is an abelian Banach algebra.
It also has an involution * given by f ∗ ( g ) = f ( g − 1 ) ¯ . {\displaystyle f^{*}(g)={\overline {f\left(g^{-1}\right)}}.} Taking the completion with respect to the largest possible C*-norm gives its enveloping C*-algebra, called the group C*-algebra C*(G) of G. (Any C*-norm on L1(G) is bounded by the L1 norm, therefore their supremum exists.) Given any abelian C*-algebra A, the Gelfand transform gives an isomorphism between A and C0(A^), where A^ is the set of multiplicative linear functionals (i.e., one-dimensional representations) on A with the weak-* topology. The map is simply given by a ↦ ( φ ↦ φ ( a ) ) {\displaystyle a\mapsto {\bigl (}\varphi \mapsto \varphi (a){\bigr )}} It turns out that the multiplicative linear functionals of C*(G), after suitable identification, are exactly the characters of G, and the Gelfand transform, when restricted to the dense subset L1(G), is the Fourier–Pontryagin transform. === Compact non-abelian groups === The Fourier transform can also be defined for functions on a non-abelian group, provided that the group is compact. Once the assumption that the underlying group is abelian is removed, irreducible unitary representations need not be one-dimensional. This means the Fourier transform on a non-abelian group takes values that are Hilbert space operators. The Fourier transform on compact groups is a major tool in representation theory and non-commutative harmonic analysis. Let G be a compact Hausdorff topological group. Let Σ denote the collection of all isomorphism classes of finite-dimensional irreducible unitary representations, along with a definite choice of representation U(σ) on the Hilbert space Hσ of finite dimension dσ for each σ ∈ Σ. If μ is a finite Borel measure on G, then the Fourier–Stieltjes transform of μ is the operator on Hσ defined by ⟨ μ ^ ξ , η ⟩ H σ = ∫ G ⟨ U ¯ g ( σ ) ξ , η ⟩ d μ ( g ) {\displaystyle \left\langle {\hat {\mu }}\xi ,\eta \right\rangle _{H_{\sigma }}=\int _{G}\left\langle {\overline {U}}_{g}^{(\sigma )}\xi ,\eta \right\rangle \,d\mu (g)} where U̅(σ) is the complex-conjugate representation of U(σ) acting on Hσ. If μ is absolutely continuous with respect to the left-invariant probability measure λ on G, represented as d μ = f d λ {\displaystyle d\mu =f\,d\lambda } for some f ∈ L1(λ), one identifies the Fourier transform of f with the Fourier–Stieltjes transform of μ. The mapping μ ↦ μ ^ {\displaystyle \mu \mapsto {\hat {\mu }}} defines an isomorphism between the Banach space M(G) of finite Borel measures (see rca space) and a closed subspace of the Banach space C∞(Σ) consisting of all sequences E = (Eσ) indexed by Σ of (bounded) linear operators Eσ : Hσ → Hσ for which the norm ‖ E ‖ = sup σ ∈ Σ ‖ E σ ‖ {\displaystyle \|E\|=\sup _{\sigma \in \Sigma }\left\|E_{\sigma }\right\|} is finite. The "convolution theorem" asserts that, furthermore, this isomorphism of Banach spaces is in fact an isometric isomorphism of C*-algebras into a subspace of C∞(Σ). Multiplication on M(G) is given by convolution of measures and the involution * defined by f ∗ ( g ) = f ( g − 1 ) ¯ , {\displaystyle f^{*}(g)={\overline {f\left(g^{-1}\right)}},} and C∞(Σ) has a natural C*-algebra structure as Hilbert space operators.
The Peter–Weyl theorem holds, and a version of the Fourier inversion formula (Plancherel's theorem) follows: if f ∈ L2(G), then f ( g ) = ∑ σ ∈ Σ d σ tr ⁡ ( f ^ ( σ ) U g ( σ ) ) {\displaystyle f(g)=\sum _{\sigma \in \Sigma }d_{\sigma }\operatorname {tr} \left({\hat {f}}(\sigma )U_{g}^{(\sigma )}\right)} where the summation is understood as convergent in the L2 sense. The generalization of the Fourier transform to the noncommutative situation has also in part contributed to the development of noncommutative geometry. In this context, a categorical generalization of the Fourier transform to noncommutative groups is Tannaka–Krein duality, which replaces the group of characters with the category of representations. However, this loses the connection with harmonic functions. == Alternatives == In signal processing terms, a function (of time) is a representation of a signal with perfect time resolution, but no frequency information, while the Fourier transform has perfect frequency resolution, but no time information: the magnitude of the Fourier transform at a point is how much frequency content there is, but location is only given by phase (argument of the Fourier transform at a point), and standing waves are not localized in time – a sine wave continues out to infinity, without decaying. This limits the usefulness of the Fourier transform for analyzing signals that are localized in time, notably transients, or any signal of finite extent. As alternatives to the Fourier transform, in time–frequency analysis, one uses time–frequency transforms or time–frequency distributions to represent signals in a form that has some time information and some frequency information – by the uncertainty principle, there is a trade-off between these. These can be generalizations of the Fourier transform, such as the short-time Fourier transform, fractional Fourier transform, Synchrosqueezing Fourier transform, or other functions to represent signals, as in wavelet transforms and chirplet transforms, with the wavelet analog of the (continuous) Fourier transform being the continuous wavelet transform. == Example == The following figures provide a visual illustration of how the Fourier transform's integral measures whether a frequency is present in a particular function. The first image depicts the function f ( t ) = cos ⁡ ( 2 π 3 t ) e − π t 2 , {\displaystyle f(t)=\cos(2\pi \ 3t)\ e^{-\pi t^{2}},} which is a 3 Hz cosine wave (the first term) shaped by a Gaussian envelope function (the second term) that smoothly turns the wave on and off. The next 2 images show the product f ( t ) e − i 2 π 3 t , {\displaystyle f(t)e^{-i2\pi 3t},} which must be integrated to calculate the Fourier transform at +3 Hz. The real part of the integrand has a non-negative average value, because the alternating signs of f ( t ) {\displaystyle f(t)} and Re ⁡ ( e − i 2 π 3 t ) {\displaystyle \operatorname {Re} (e^{-i2\pi 3t})} oscillate at the same rate and in phase, whereas f ( t ) {\displaystyle f(t)} and Im ⁡ ( e − i 2 π 3 t ) {\displaystyle \operatorname {Im} (e^{-i2\pi 3t})} oscillate at the same rate but with orthogonal phase. The absolute value of the Fourier transform at +3 Hz is 0.5, which is relatively large. When added to the Fourier transform at -3 Hz (which is identical because we started with a real signal), we find that the amplitude of the 3 Hz frequency component is 1. 
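The values quoted in this example are easy to check numerically. The following is a minimal sketch, not part of the original illustration, that approximates the transform integral by a Riemann sum on a fine grid; the step size and the truncation window [−6, 6] are ad hoc choices that comfortably resolve this particular f ( t ) {\displaystyle f(t)} .

```python
# Approximate the Fourier transform of f(t) = cos(2 pi 3 t) exp(-pi t^2)
# by a Riemann sum; the grid and window are ad hoc but more than adequate here.
import numpy as np

dt = 1e-4
t = np.arange(-6.0, 6.0, dt)          # f is negligible outside this window
f = np.cos(2*np.pi*3*t) * np.exp(-np.pi*t**2)

def ft(xi):
    """Riemann-sum approximation of the integral of f(t) exp(-i 2 pi xi t) dt."""
    return np.sum(f * np.exp(-1j*2*np.pi*xi*t)) * dt

print(abs(ft(3.0)))   # ~0.5, the value stated above
print(abs(ft(5.0)))   # ~1.7e-6: essentially no 5 Hz component
```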
However, when you try to measure a frequency that is not present, both the real and imaginary components of the integral vary rapidly between positive and negative values. For instance, the red curve is looking for 5 Hz. The absolute value of its integral is nearly zero, indicating that almost no 5 Hz component was in the signal. The general situation is usually more complicated than this, but heuristically this is how the Fourier transform measures how much of an individual frequency is present in a function f ( t ) . {\displaystyle f(t).} To reinforce an earlier point, the reason for the response at ξ = − 3 {\displaystyle \xi =-3} Hz is that cos ⁡ ( 2 π 3 t ) {\displaystyle \cos(2\pi 3t)} and cos ⁡ ( 2 π ( − 3 ) t ) {\displaystyle \cos(2\pi (-3)t)} are indistinguishable. The transform of e i 2 π 3 t ⋅ e − π t 2 {\displaystyle e^{i2\pi 3t}\cdot e^{-\pi t^{2}}} would have just one response, whose amplitude is the integral of the smooth envelope: e − π t 2 , {\displaystyle e^{-\pi t^{2}},} whereas Re ⁡ ( f ( t ) ⋅ e − i 2 π 3 t ) {\displaystyle \operatorname {Re} (f(t)\cdot e^{-i2\pi 3t})} is e − π t 2 ( 1 + cos ⁡ ( 2 π 6 t ) ) / 2. {\displaystyle e^{-\pi t^{2}}(1+\cos(2\pi 6t))/2.} == Applications == Linear operations performed in one domain (time or frequency) have corresponding operations in the other domain, which are sometimes easier to perform. The operation of differentiation in the time domain corresponds to multiplication by the frequency, so some differential equations are easier to analyze in the frequency domain. Also, convolution in the time domain corresponds to ordinary multiplication in the frequency domain (see Convolution theorem). After performing the desired operations, the result can be transformed back to the time domain. Harmonic analysis is the systematic study of the relationship between the frequency and time domains, including the kinds of functions or operations that are "simpler" in one or the other, and has deep connections to many areas of modern mathematics. === Analysis of differential equations === Perhaps the most important use of the Fourier transformation is to solve partial differential equations. Many of the equations of the mathematical physics of the nineteenth century can be treated this way. Fourier studied the heat equation, which in one dimension and in dimensionless units is ∂ 2 y ( x , t ) ∂ x 2 = ∂ y ( x , t ) ∂ t . {\displaystyle {\frac {\partial ^{2}y(x,t)}{\partial x^{2}}}={\frac {\partial y(x,t)}{\partial t}}.} The example we will give, a slightly more difficult one, is the wave equation in one dimension, ∂ 2 y ( x , t ) ∂ x 2 = ∂ 2 y ( x , t ) ∂ t 2 . {\displaystyle {\frac {\partial ^{2}y(x,t)}{\partial x^{2}}}={\frac {\partial ^{2}y(x,t)}{\partial t^{2}}}.} As usual, the problem is not to find a solution: there are infinitely many. The problem is that of the so-called "boundary problem": find a solution which satisfies the "boundary conditions" y ( x , 0 ) = f ( x ) , ∂ y ( x , 0 ) ∂ t = g ( x ) . {\displaystyle y(x,0)=f(x),\qquad {\frac {\partial y(x,0)}{\partial t}}=g(x).} Here, f and g are given functions. For the heat equation, only one boundary condition can be required (usually the first one). But for the wave equation, there are still infinitely many solutions y which satisfy the first boundary condition. But when one imposes both conditions, there is only one possible solution. It is easier to find the Fourier transform ŷ of the solution than to find the solution directly.
This is because the Fourier transformation takes differentiation into multiplication by the Fourier-dual variable, and so a partial differential equation applied to the original function is transformed into multiplication by polynomial functions of the dual variables applied to the transformed function. After ŷ is determined, we can apply the inverse Fourier transformation to find y. Fourier's method is as follows. First, note that any function of the forms cos ⁡ ( 2 π ξ ( x ± t ) ) or sin ⁡ ( 2 π ξ ( x ± t ) ) {\displaystyle \cos {\bigl (}2\pi \xi (x\pm t){\bigr )}{\text{ or }}\sin {\bigl (}2\pi \xi (x\pm t){\bigr )}} satisfies the wave equation. These are called the elementary solutions. Second, note that therefore any integral y ( x , t ) = ∫ 0 ∞ d ξ [ a + ( ξ ) cos ⁡ ( 2 π ξ ( x + t ) ) + a − ( ξ ) cos ⁡ ( 2 π ξ ( x − t ) ) + b + ( ξ ) sin ⁡ ( 2 π ξ ( x + t ) ) + b − ( ξ ) sin ⁡ ( 2 π ξ ( x − t ) ) ] {\displaystyle {\begin{aligned}y(x,t)=\int _{0}^{\infty }d\xi {\Bigl [}&a_{+}(\xi )\cos {\bigl (}2\pi \xi (x+t){\bigr )}+a_{-}(\xi )\cos {\bigl (}2\pi \xi (x-t){\bigr )}+{}\\&b_{+}(\xi )\sin {\bigl (}2\pi \xi (x+t){\bigr )}+b_{-}(\xi )\sin \left(2\pi \xi (x-t)\right){\Bigr ]}\end{aligned}}} satisfies the wave equation for arbitrary a+, a−, b+, b−. This integral may be interpreted as a continuous linear combination of solutions for the linear equation. Now this resembles the formula for the Fourier synthesis of a function. In fact, this is the real inverse Fourier transform of a± and b± in the variable x. The third step is to examine how to find the specific unknown coefficient functions a± and b± that will lead to y satisfying the boundary conditions. We are interested in the values of these solutions at t = 0. So we will set t = 0. Assuming that the conditions needed for Fourier inversion are satisfied, we can then find the Fourier sine and cosine transforms (in the variable x) of both sides and obtain 2 ∫ − ∞ ∞ y ( x , 0 ) cos ⁡ ( 2 π ξ x ) d x = a + + a − {\displaystyle 2\int _{-\infty }^{\infty }y(x,0)\cos(2\pi \xi x)\,dx=a_{+}+a_{-}} and 2 ∫ − ∞ ∞ y ( x , 0 ) sin ⁡ ( 2 π ξ x ) d x = b + + b − . {\displaystyle 2\int _{-\infty }^{\infty }y(x,0)\sin(2\pi \xi x)\,dx=b_{+}+b_{-}.} Similarly, taking the derivative of y with respect to t and then applying the Fourier sine and cosine transformations yields 2 ∫ − ∞ ∞ ∂ y ( x , 0 ) ∂ t sin ⁡ ( 2 π ξ x ) d x = ( 2 π ξ ) ( − a + + a − ) {\displaystyle 2\int _{-\infty }^{\infty }{\frac {\partial y(x,0)}{\partial t}}\sin(2\pi \xi x)\,dx=(2\pi \xi )\left(-a_{+}+a_{-}\right)} and 2 ∫ − ∞ ∞ ∂ y ( x , 0 ) ∂ t cos ⁡ ( 2 π ξ x ) d x = ( 2 π ξ ) ( b + − b − ) . {\displaystyle 2\int _{-\infty }^{\infty }{\frac {\partial y(x,0)}{\partial t}}\cos(2\pi \xi x)\,dx=(2\pi \xi )\left(b_{+}-b_{-}\right).} These are four linear equations for the four unknowns a± and b±, in terms of the Fourier sine and cosine transforms of the boundary conditions, which are easily solved by elementary algebra, provided that these transforms can be found. In summary, we chose a set of elementary solutions, parametrized by ξ, of which the general solution would be a (continuous) linear combination in the form of an integral over the parameter ξ. But this integral was in the form of a Fourier integral. The next step was to express the boundary conditions in terms of these integrals, and set them equal to the given functions f and g. But these expressions also took the form of a Fourier integral because of the properties of the Fourier transform of a derivative.
The last step was to exploit Fourier inversion by applying the Fourier transformation to both sides, thus obtaining expressions for the coefficient functions a± and b± in terms of the given boundary conditions f and g. From a higher point of view, Fourier's procedure can be reformulated more conceptually. Since there are two variables, we will use the Fourier transformation in both x and t rather than operate as Fourier did, who only transformed in the spatial variables. Note that ŷ must be considered in the sense of a distribution since y(x, t) is not going to be L1: as a wave, it will persist through time and thus is not a transient phenomenon. But it will be bounded and so its Fourier transform can be defined as a distribution. The operational properties of the Fourier transformation that are relevant to this equation are that it takes differentiation in x to multiplication by i2πξ and differentiation with respect to t to multiplication by i2πf where f is the frequency. Then the wave equation becomes an algebraic equation in ŷ: ξ 2 y ^ ( ξ , f ) = f 2 y ^ ( ξ , f ) . {\displaystyle \xi ^{2}{\hat {y}}(\xi ,f)=f^{2}{\hat {y}}(\xi ,f).} This is equivalent to requiring ŷ(ξ, f) = 0 unless ξ = ±f. Right away, this explains why the choice of elementary solutions we made earlier worked so well: obviously ŷ = δ(ξ ± f) will be solutions. Applying Fourier inversion to these delta functions, we obtain the elementary solutions we picked earlier. But from the higher point of view, one does not pick elementary solutions, but rather considers the space of all distributions which are supported on the (degenerate) conic ξ2 − f2 = 0. We may as well consider the distributions supported on the conic that are given by distributions of one variable on the line ξ = f plus distributions on the line ξ = −f as follows: if ϕ is any test function, ∬ y ^ ϕ ( ξ , f ) d ξ d f = ∫ s + ϕ ( ξ , ξ ) d ξ + ∫ s − ϕ ( ξ , − ξ ) d ξ , {\displaystyle \iint {\hat {y}}\phi (\xi ,f)\,d\xi \,df=\int s_{+}\phi (\xi ,\xi )\,d\xi +\int s_{-}\phi (\xi ,-\xi )\,d\xi ,} where s+ and s− are distributions of one variable. Then Fourier inversion gives, for the boundary conditions, something very similar to what we had more concretely above (put ϕ(ξ, f) = e i 2 π ( x ξ + t f ) , which is clearly of polynomial growth): y ( x , 0 ) = ∫ { s + ( ξ ) + s − ( ξ ) } e i 2 π ξ x d ξ {\displaystyle y(x,0)=\int {\bigl \{}s_{+}(\xi )+s_{-}(\xi ){\bigr \}}e^{i2\pi \xi x}\,d\xi } and ∂ y ( x , 0 ) ∂ t = ∫ { s + ( ξ ) − s − ( ξ ) } i 2 π ξ e i 2 π ξ x d ξ . {\displaystyle {\frac {\partial y(x,0)}{\partial t}}=\int {\bigl \{}s_{+}(\xi )-s_{-}(\xi ){\bigr \}}i2\pi \xi e^{i2\pi \xi x}\,d\xi .} Now, as before, applying the one-variable Fourier transformation in the variable x to these functions of x yields two equations in the two unknown distributions s± (which can be taken to be ordinary functions if the boundary conditions are L1 or L2). From a calculational point of view, the drawback of course is that one must first calculate the Fourier transforms of the boundary conditions, then assemble the solution from these, and then calculate an inverse Fourier transform. Closed-form formulas are rare, except when there is some geometric symmetry that can be exploited, and the numerical calculations are difficult because of the oscillatory nature of the integrals, which makes convergence slow and hard to estimate. For practical calculations, other methods are often used.
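In the most computation-friendly setting, a spatially periodic version of the problem, the method above reduces to evolving each discrete Fourier mode as an independent harmonic oscillator. The following is a minimal sketch under that assumption; the periodic boundary conditions, the Gaussian initial displacement, and the grid size are choices made for the illustration and are not part of the discussion above.

```python
# Solve y_xx = y_tt spectrally with periodic boundary conditions: each Fourier
# mode y_hat(k, t) satisfies y_hat_tt = -k^2 y_hat, a harmonic oscillator.
import numpy as np

N, L = 256, 2*np.pi
x = np.linspace(0, L, N, endpoint=False)
f = np.exp(-10*(x - L/2)**2)          # initial displacement y(x, 0)
g = np.zeros(N)                       # initial velocity dy/dt (x, 0)

k = 2*np.pi*np.fft.fftfreq(N, d=L/N)  # spatial frequencies (radians per unit length)
fh, gh = np.fft.fft(f), np.fft.fft(g)

def y(t):
    # y_hat(k, t) = f_hat cos(k t) + g_hat sin(k t)/k; the k = 0 mode is f_hat + g_hat t
    w = np.where(k == 0, 1.0, k)      # placeholder to avoid division by zero
    yh = fh*np.cos(k*t) + np.where(k == 0, gh*t, gh*np.sin(w*t)/w)
    return np.fft.ifft(yh).real

print(np.allclose(y(0.0), f))         # True: the first boundary condition is recovered
```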
The twentieth century has seen the extension of these methods to all linear partial differential equations with polynomial coefficients, and by extending the notion of Fourier transformation to include Fourier integral operators, some non-linear equations as well. === Fourier-transform spectroscopy === The Fourier transform is also used in nuclear magnetic resonance (NMR) and in other kinds of spectroscopy, e.g. infrared (FTIR). In NMR an exponentially shaped free induction decay (FID) signal is acquired in the time domain and Fourier-transformed to a Lorentzian line-shape in the frequency domain. The Fourier transform is also used in magnetic resonance imaging (MRI) and mass spectrometry. === Quantum mechanics === The Fourier transform is useful in quantum mechanics in at least two different ways. To begin with, the basic conceptual structure of quantum mechanics postulates the existence of pairs of complementary variables, connected by the Heisenberg uncertainty principle. For example, in one dimension, the spatial variable q of, say, a particle, can only be measured by the quantum mechanical "position operator" at the cost of losing information about the momentum p of the particle. Therefore, the physical state of the particle can either be described by a function, called "the wave function", of q or by a function of p but not by a function of both variables. The variable p is called the conjugate variable to q. In classical mechanics, the physical state of a particle (existing in one dimension, for simplicity of exposition) would be given by assigning definite values to both p and q simultaneously. Thus, the set of all possible physical states is the two-dimensional real vector space with a p-axis and a q-axis called the phase space. In contrast, quantum mechanics chooses a polarisation of this space in the sense that it picks a subspace of one-half the dimension, for example, the q-axis alone, but instead of considering only points, takes the set of all complex-valued "wave functions" on this axis. Nevertheless, choosing the p-axis is an equally valid polarisation, yielding a different representation of the set of possible physical states of the particle. Both representations of the wavefunction are related by a Fourier transform, such that ϕ ( p ) = ∫ d q ψ ( q ) e − i p q / h , {\displaystyle \phi (p)=\int dq\,\psi (q)e^{-ipq/h},} or, equivalently, ψ ( q ) = ∫ d p ϕ ( p ) e i p q / h . {\displaystyle \psi (q)=\int dp\,\phi (p)e^{ipq/h}.} Physically realisable states are L2, and so by the Plancherel theorem, their Fourier transforms are also L2. (Note that since q is in units of distance and p is in units of momentum, the presence of the Planck constant in the exponent makes the exponent dimensionless, as it should be.) Therefore, the Fourier transform can be used to pass from one way of representing the state of the particle, by a wave function of position, to another way of representing the state of the particle: by a wave function of momentum. Infinitely many different polarisations are possible, and all are equally valid. Being able to transform states from one representation to another by the Fourier transform is not only convenient but also the underlying reason of the Heisenberg uncertainty principle. The other use of the Fourier transform in both quantum mechanics and quantum field theory is to solve the applicable wave equation. 
In non-relativistic quantum mechanics, the Schrödinger equation for a time-varying wave function in one dimension, not subject to external forces, is − ∂ 2 ∂ x 2 ψ ( x , t ) = i h 2 π ∂ ∂ t ψ ( x , t ) . {\displaystyle -{\frac {\partial ^{2}}{\partial x^{2}}}\psi (x,t)=i{\frac {h}{2\pi }}{\frac {\partial }{\partial t}}\psi (x,t).} This is the same as the heat equation except for the presence of the imaginary unit i. Fourier methods can be used to solve this equation. In the presence of a potential, given by the potential energy function V(x), the equation becomes − ∂ 2 ∂ x 2 ψ ( x , t ) + V ( x ) ψ ( x , t ) = i h 2 π ∂ ∂ t ψ ( x , t ) . {\displaystyle -{\frac {\partial ^{2}}{\partial x^{2}}}\psi (x,t)+V(x)\psi (x,t)=i{\frac {h}{2\pi }}{\frac {\partial }{\partial t}}\psi (x,t).} The "elementary solutions", as we referred to them above, are the so-called "stationary states" of the particle, and Fourier's algorithm, as described above, can still be used to solve the boundary value problem of the future evolution of ψ given its values for t = 0. Neither of these approaches is of much practical use in quantum mechanics. Boundary value problems and the time-evolution of the wave function are not of much practical interest: it is the stationary states that are most important. In relativistic quantum mechanics, the Schrödinger equation becomes a wave equation as was usual in classical physics, except that complex-valued waves are considered. A simple example, in the absence of interactions with other particles or fields, is the free one-dimensional Klein–Gordon–Schrödinger–Fock equation, this time in dimensionless units, ( ∂ 2 ∂ x 2 + 1 ) ψ ( x , t ) = ∂ 2 ∂ t 2 ψ ( x , t ) . {\displaystyle \left({\frac {\partial ^{2}}{\partial x^{2}}}+1\right)\psi (x,t)={\frac {\partial ^{2}}{\partial t^{2}}}\psi (x,t).} This is, from the mathematical point of view, the same as the wave equation of classical physics solved above (but with a complex-valued wave, which makes no difference in the methods). This is of great use in quantum field theory: each separate Fourier component of a wave can be treated as a separate harmonic oscillator and then quantized, a procedure known as "second quantization". Fourier methods have been adapted to also deal with non-trivial interactions. Finally, the number operator of the quantum harmonic oscillator can be interpreted, for example via the Mehler kernel, as the generator of the Fourier transform F {\displaystyle {\mathcal {F}}} . === Signal processing === The Fourier transform is used for the spectral analysis of time-series. The subject of statistical signal processing does not, however, usually apply the Fourier transformation to the signal itself. Even if a real signal is indeed transient, it has been found in practice advisable to model a signal by a function (or, alternatively, a stochastic process) which is stationary in the sense that its characteristic properties are constant over all time. The Fourier transform of such a function does not exist in the usual sense, and it has been found more useful for the analysis of signals to instead take the Fourier transform of its autocorrelation function. The autocorrelation function R of a function f is defined by R f ( τ ) = lim T → ∞ 1 2 T ∫ − T T f ( t ) f ( t + τ ) d t . {\displaystyle R_{f}(\tau )=\lim _{T\rightarrow \infty }{\frac {1}{2T}}\int _{-T}^{T}f(t)f(t+\tau )\,dt.} This function is a function of the time-lag τ elapsing between the values of f to be correlated.
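For a concrete illustration, the sketch below estimates the autocorrelation of a noisy sinusoid, replacing the limit in the definition by a single long, finite averaging window; the signal, the noise level, and the window length are arbitrary choices made for the example.

```python
# Estimate the autocorrelation R_f(tau) of a noisy sinusoid by time averaging.
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.01, 200.0
t = np.arange(0, T, dt)
f = np.sin(2*np.pi*t) + rng.normal(0, 0.5, t.size)

def R(tau):
    m = int(round(tau/dt))
    # finite-window approximation of (1/2T) int f(t) f(t + tau) dt
    return np.mean(f[:f.size - m] * f[m:])

print(R(0.0), R(0.25), R(0.5))   # maximum at tau = 0; roughly 0.75, 0, -0.5 here
```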
For most functions f that occur in practice, R is a bounded even function of the time-lag τ and for typical noisy signals it turns out to be uniformly continuous with a maximum at τ = 0. The autocorrelation function, more properly called the autocovariance function unless it is normalized in some appropriate fashion, measures the strength of the correlation between the values of f separated by a time lag. This is a way of searching for the correlation of f with its own past. It is useful even for other statistical tasks besides the analysis of signals. For example, if f(t) represents the temperature at time t, one expects a strong correlation with the temperature at a time lag of 24 hours. It possesses a Fourier transform, P f ( ξ ) = ∫ − ∞ ∞ R f ( τ ) e − i 2 π ξ τ d τ . {\displaystyle P_{f}(\xi )=\int _{-\infty }^{\infty }R_{f}(\tau )e^{-i2\pi \xi \tau }\,d\tau .} This Fourier transform is called the power spectral density function of f. (Unless all periodic components are first filtered out from f, this integral will diverge, but it is easy to filter out such periodicities.) The power spectrum, as indicated by this density function P, measures the amount of variance contributed to the data by the frequency ξ. In electrical signals, the variance is proportional to the average power (energy per unit time), and so the power spectrum describes how much the different frequencies contribute to the average power of the signal. This process is called the spectral analysis of time-series and is analogous to the usual analysis of variance of data that is not a time-series (ANOVA). Knowledge of which frequencies are "important" in this sense is crucial for the proper design of filters and for the proper evaluation of measuring apparatuses. It can also be useful for the scientific analysis of the phenomena responsible for producing the data. The power spectrum of a signal can also be approximately measured directly by measuring the average power that remains in a signal after all the frequencies outside a narrow band have been filtered out. Spectral analysis is carried out for visual signals as well. The power spectrum ignores all phase relations, which is good enough for many purposes, but for video signals other types of spectral analysis must also be employed, still using the Fourier transform as a tool. == Other notations == Other common notations for f ^ ( ξ ) {\displaystyle {\hat {f}}(\xi )} include: f ~ ( ξ ) , F ( ξ ) , F ( f ) ( ξ ) , ( F f ) ( ξ ) , F ( f ) , F { f } , F ( f ( t ) ) , F { f ( t ) } . {\displaystyle {\tilde {f}}(\xi ),\ F(\xi ),\ {\mathcal {F}}\left(f\right)(\xi ),\ \left({\mathcal {F}}f\right)(\xi ),\ {\mathcal {F}}(f),\ {\mathcal {F}}\{f\},\ {\mathcal {F}}{\bigl (}f(t){\bigr )},\ {\mathcal {F}}{\bigl \{}f(t){\bigr \}}.} In the sciences and engineering it is also common to make substitutions like these: ξ → f , x → t , f → x , f ^ → X . {\displaystyle \xi \rightarrow f,\quad x\rightarrow t,\quad f\rightarrow x,\quad {\hat {f}}\rightarrow X.} So the transform pair f ( x ) ⟺ F f ^ ( ξ ) {\displaystyle f(x)\ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ {\hat {f}}(\xi )} can become x ( t ) ⟺ F X ( f ) {\displaystyle x(t)\ {\stackrel {\mathcal {F}}{\Longleftrightarrow }}\ X(f)} A disadvantage of the capital letter notation is when expressing a transform such as f ⋅ g ^ {\displaystyle {\widehat {f\cdot g}}} or f ′ ^ , {\displaystyle {\widehat {f'}},} which become the more awkward F { f ⋅ g } {\displaystyle {\mathcal {F}}\{f\cdot g\}} and F { f ′ } . 
{\displaystyle {\mathcal {F}}\{f'\}.} In some contexts such as particle physics, the same symbol f {\displaystyle f} may be used for both a function and its Fourier transform, with the two distinguished only by their argument: f ( k 1 + k 2 ) {\displaystyle f(k_{1}+k_{2})} would refer to the Fourier transform because of the momentum argument, while f ( x 0 + π r → ) {\displaystyle f(x_{0}+\pi {\vec {r}})} would refer to the original function because of the positional argument. Although tildes may be used as in f ~ {\displaystyle {\tilde {f}}} to indicate Fourier transforms, tildes may also be used to indicate a modification of a quantity with a more Lorentz invariant form, such as d k ~ = d k ( 2 π ) 3 2 ω {\displaystyle {\tilde {dk}}={\frac {dk}{(2\pi )^{3}2\omega }}} , so care must be taken. Similarly, f ^ {\displaystyle {\hat {f}}} often denotes the Hilbert transform of f {\displaystyle f} . The interpretation of the complex function f̂(ξ) may be aided by expressing it in polar coordinate form f ^ ( ξ ) = A ( ξ ) e i φ ( ξ ) {\displaystyle {\hat {f}}(\xi )=A(\xi )e^{i\varphi (\xi )}} in terms of the two real functions A(ξ) and φ(ξ) where: A ( ξ ) = | f ^ ( ξ ) | , {\displaystyle A(\xi )=\left|{\hat {f}}(\xi )\right|,} is the amplitude and φ ( ξ ) = arg ⁡ ( f ^ ( ξ ) ) , {\displaystyle \varphi (\xi )=\arg \left({\hat {f}}(\xi )\right),} is the phase (see arg function). Then the inverse transform can be written: f ( x ) = ∫ − ∞ ∞ A ( ξ ) e i ( 2 π ξ x + φ ( ξ ) ) d ξ , {\displaystyle f(x)=\int _{-\infty }^{\infty }A(\xi )\ e^{i{\bigl (}2\pi \xi x+\varphi (\xi ){\bigr )}}\,d\xi ,} which is a recombination of all the frequency components of f(x). Each component is a complex sinusoid of the form e i 2 π x ξ whose amplitude is A(ξ) and whose initial phase angle (at x = 0) is φ(ξ). The Fourier transform may be thought of as a mapping on function spaces. This mapping is here denoted F and F(f) is used to denote the Fourier transform of the function f. This mapping is linear, which means that F can also be seen as a linear transformation on the function space and implies that the standard notation in linear algebra of applying a linear transformation to a vector (here the function f) can be used to write F f instead of F(f). Since the result of applying the Fourier transform is again a function, we can be interested in the value of this function evaluated at the value ξ for its variable, and this is denoted either as F f(ξ) or as (F f)(ξ). Notice that in the former case, it is implicitly understood that F is applied first to f and then the resulting function is evaluated at ξ, not the other way around. In mathematics and various applied sciences, it is often necessary to distinguish between a function f and the value of f when its variable equals x, denoted f(x). This means that a notation like F(f(x)) formally can be interpreted as the Fourier transform of the values of f at x. Despite this flaw, the previous notation appears frequently, often when a particular function or a function of a particular variable is to be transformed.
For example, F ( rect ⁡ ( x ) ) = sinc ⁡ ( ξ ) {\displaystyle {\mathcal {F}}{\bigl (}\operatorname {rect} (x){\bigr )}=\operatorname {sinc} (\xi )} is sometimes used to express that the Fourier transform of a rectangular function is a sinc function, or F ( f ( x + x 0 ) ) = F ( f ( x ) ) e i 2 π x 0 ξ {\displaystyle {\mathcal {F}}{\bigl (}f(x+x_{0}){\bigr )}={\mathcal {F}}{\bigl (}f(x){\bigr )}\,e^{i2\pi x_{0}\xi }} is used to express the shift property of the Fourier transform. Notice that the last example is only correct under the assumption that the transformed function is a function of x, not of x0. As discussed above, the characteristic function of a random variable is the same as the Fourier–Stieltjes transform of its distribution measure, but in this context it is typical to take a different convention for the constants. Typically the characteristic function is defined E ( e i t ⋅ X ) = ∫ e i t ⋅ x d μ X ( x ) . {\displaystyle E\left(e^{it\cdot X}\right)=\int e^{it\cdot x}\,d\mu _{X}(x).} As in the case of the "non-unitary angular frequency" convention above, the factor of 2π appears in neither the normalizing constant nor the exponent. Unlike any of the conventions appearing above, this convention takes the opposite sign in the exponent. == Computation methods == The appropriate computation method largely depends on how the original mathematical function is represented and the desired form of the output function. In this section we consider both functions of a continuous variable, f ( x ) , {\displaystyle f(x),} and functions of a discrete variable (i.e. ordered pairs of x {\displaystyle x} and f {\displaystyle f} values). For discrete-valued x , {\displaystyle x,} the transform integral becomes a summation of sinusoids, which is still a continuous function of frequency ( ξ {\displaystyle \xi } or ω {\displaystyle \omega } ). When the sinusoids are harmonically related (i.e. when the x {\displaystyle x} -values are spaced at integer multiples of an interval), the transform is called the discrete-time Fourier transform (DTFT). === Discrete Fourier transforms and fast Fourier transforms === Sampling the DTFT at equally-spaced values of frequency is the most common modern method of computation. Efficient procedures, depending on the frequency resolution needed, are described at Discrete-time Fourier transform § Sampling the DTFT. The discrete Fourier transform (DFT), used there, is usually computed by a fast Fourier transform (FFT) algorithm. === Analytic integration of closed-form functions === Tables of closed-form Fourier transforms, such as § Square-integrable functions, one-dimensional and § Table of discrete-time Fourier transforms, are created by mathematically evaluating the Fourier analysis integral (or summation) into another closed-form function of frequency ( ξ {\displaystyle \xi } or ω {\displaystyle \omega } ). When mathematically possible, this provides a transform for a continuum of frequency values. Many computer algebra systems, such as MATLAB and Mathematica, that are capable of symbolic integration can compute Fourier transforms analytically. For example, to compute the Fourier transform of cos(6πt) e^−πt² one might enter the command integrate cos(6*pi*t) exp(−pi*t^2) exp(-i*2*pi*f*t) from -inf to inf into Wolfram Alpha. === Numerical integration of closed-form continuous functions === Discrete sampling of the Fourier transform can also be done by numerical integration of the definition at each value of frequency for which the transform is desired.
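A minimal sketch of this approach, assuming the example function f(t) = e^−|t|, whose transform under the present convention is 2/(1 + (2πξ)²), is:

```python
# Numerically integrate the transform definition at a few chosen frequencies
# and compare with the known closed form 2/(1 + (2 pi xi)^2).
import numpy as np
from scipy.integrate import quad

def ft(f, xi):
    re, _ = quad(lambda t: f(t)*np.cos(2*np.pi*xi*t), -np.inf, np.inf)
    im, _ = quad(lambda t: -f(t)*np.sin(2*np.pi*xi*t), -np.inf, np.inf)
    return complex(re, im)

f = lambda t: np.exp(-abs(t))
for xi in (0.0, 0.25, 1.0):
    print(xi, ft(f, xi).real, 2/(1 + (2*np.pi*xi)**2))
```

Each frequency value requires a separate quadrature, which is one reason sampling the DTFT with an FFT is usually preferred when many frequencies are needed.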
The numerical integration approach works on a much broader class of functions than the analytic approach. === Numerical integration of a series of ordered pairs === If the input function is a series of ordered pairs, numerical integration reduces to just a summation over the set of data pairs. The DTFT is a common subcase of this more general situation. == Tables of important Fourier transforms == The following tables record some closed-form Fourier transforms. For functions f(x) and g(x) denote their Fourier transforms by f̂ and ĝ. Only the three most common conventions are included. It may be useful to notice that entry 105 gives a relationship between the Fourier transform of a function and the original function, which can be seen as relating the Fourier transform and its inverse. === Functional relationships, one-dimensional === The Fourier transforms in this table may be found in Erdélyi (1954) or Kammler (2000, appendix). === Square-integrable functions, one-dimensional === The Fourier transforms in this table may be found in Campbell & Foster (1948), Erdélyi (1954), or Kammler (2000, appendix). === Distributions, one-dimensional === The Fourier transforms in this table may be found in Erdélyi (1954) or Kammler (2000, appendix). === Two-dimensional functions === === Formulas for general n-dimensional functions === == See also == == Notes == == Citations == == References == == External links == Media related to Fourier transformation at Wikimedia Commons Encyclopedia of Mathematics Weisstein, Eric W. "Fourier Transform". MathWorld. Fourier Transform in Crystallography
Wikipedia/Continuous-time_Fourier_transform
In mathematics a transformation of a sequence's generating function provides a method of converting the generating function for one sequence into a generating function enumerating another. These transformations typically involve integral formulas applied to a sequence generating function (see integral transformations) or weighted sums over the higher-order derivatives of these functions (see derivative transformations). Given a sequence, { f n } n = 0 ∞ {\displaystyle \{f_{n}\}_{n=0}^{\infty }} , the ordinary generating function (OGF) of the sequence, denoted F ( z ) {\displaystyle F(z)} , and the exponential generating function (EGF) of the sequence, denoted F ^ ( z ) {\displaystyle {\widehat {F}}(z)} , are defined by the formal power series F ( z ) = ∑ n = 0 ∞ f n z n = f 0 + f 1 z + f 2 z 2 + ⋯ {\displaystyle F(z)=\sum _{n=0}^{\infty }f_{n}z^{n}=f_{0}+f_{1}z+f_{2}z^{2}+\cdots } F ^ ( z ) = ∑ n = 0 ∞ f n n ! z n = f 0 0 ! + f 1 1 ! z + f 2 2 ! z 2 + ⋯ . {\displaystyle {\widehat {F}}(z)=\sum _{n=0}^{\infty }{\frac {f_{n}}{n!}}z^{n}={\frac {f_{0}}{0!}}+{\frac {f_{1}}{1!}}z+{\frac {f_{2}}{2!}}z^{2}+\cdots .} In this article, we use the convention that the ordinary (exponential) generating function for a sequence { f n } {\displaystyle \{f_{n}\}} is denoted by the uppercase function F ( z ) {\displaystyle F(z)} / F ^ ( z ) {\displaystyle {\widehat {F}}(z)} for some fixed or formal z {\displaystyle z} when the context of this notation is clear. Additionally, we use the bracket notation for coefficient extraction from the Concrete Mathematics reference which is given by [ z n ] F ( z ) := f n {\displaystyle [z^{n}]F(z):=f_{n}} . The main article gives examples of generating functions for many sequences. Other examples of generating function variants include Dirichlet generating functions (DGFs), Lambert series, and Newton series. In this article we focus on transformations of generating functions in mathematics and keep a running list of useful transformations and transformation formulas. == Extracting arithmetic progressions of a sequence == Series multisection provides formulas for generating functions enumerating the sequence { f a n + b } {\displaystyle \{f_{an+b}\}} given an ordinary generating function F ( z ) {\displaystyle F(z)} where a , b ∈ N {\displaystyle a,b\in \mathbb {N} } , a ≥ 2 {\displaystyle a\geq 2} , and 0 ≤ b < a {\displaystyle 0\leq b<a} . In the first two cases where ( a , b ) := ( 2 , 0 ) , ( 2 , 1 ) {\displaystyle (a,b):=(2,0),(2,1)} , we can expand these arithmetic progression generating functions directly in terms of F ( z ) {\displaystyle F(z)} : ∑ n ≥ 0 f 2 n z 2 n = 1 2 ( F ( z ) + F ( − z ) ) {\displaystyle \sum _{n\geq 0}f_{2n}z^{2n}={\frac {1}{2}}\left(F(z)+F(-z)\right)} ∑ n ≥ 0 f 2 n + 1 z 2 n + 1 = 1 2 ( F ( z ) − F ( − z ) ) . {\displaystyle \sum _{n\geq 0}f_{2n+1}z^{2n+1}={\frac {1}{2}}\left(F(z)-F(-z)\right).} More generally, suppose that a ≥ 3 {\displaystyle a\geq 3} and that ω a := exp ⁡ ( 2 π ı a ) {\displaystyle \omega _{a}:=\exp \left({\frac {2\pi \imath }{a}}\right)} denotes the a t h {\displaystyle a^{th}} primitive root of unity. Then we have the following formula, often known as the root of unity filter: ∑ n ≥ 0 f a n + b z a n + b = 1 a × ∑ m = 0 a − 1 ω a − m b F ( ω a m z ) . 
{\displaystyle \sum _{n\geq 0}f_{an+b}z^{an+b}={\frac {1}{a}}\times \sum _{m=0}^{a-1}\omega _{a}^{-mb}F\left(\omega _{a}^{m}z\right).} For integers m ≥ 1 {\displaystyle m\geq 1} , another useful formula, generating the somewhat reversed floored arithmetic progressions f ⌊ n / m ⌋ {\displaystyle f_{\lfloor n/m\rfloor }} , is given by the identity ∑ n ≥ 0 f ⌊ n m ⌋ z n = 1 − z m 1 − z F ( z m ) = ( 1 + z + ⋯ + z m − 2 + z m − 1 ) F ( z m ) . {\displaystyle \sum _{n\geq 0}f_{\lfloor {\frac {n}{m}}\rfloor }z^{n}={\frac {1-z^{m}}{1-z}}F(z^{m})=\left(1+z+\cdots +z^{m-2}+z^{m-1}\right)F(z^{m}).} == Powers of an OGF and composition with functions == The exponential Bell polynomials, B n , k ( x 1 , … , x n ) := n ! ⋅ [ t n u k ] Φ ( t , u ) {\displaystyle B_{n,k}(x_{1},\ldots ,x_{n}):=n!\cdot [t^{n}u^{k}]\Phi (t,u)} , are defined by the exponential generating function Φ ( t , u ) = exp ⁡ ( u × ∑ m ≥ 1 x m t m m ! ) = 1 + ∑ n ≥ 1 { ∑ k = 1 n B n , k ( x 1 , x 2 , … ) u k } t n n ! . {\displaystyle \Phi (t,u)=\exp \left(u\times \sum _{m\geq 1}x_{m}{\frac {t^{m}}{m!}}\right)=1+\sum _{n\geq 1}\left\{\sum _{k=1}^{n}B_{n,k}(x_{1},x_{2},\ldots )u^{k}\right\}{\frac {t^{n}}{n!}}.} The next formulas for powers, logarithms, and compositions of formal power series are expanded in terms of these polynomials, with variables given by the coefficients of the original generating functions. The formula for the exponential of a generating function is given implicitly through the Bell polynomials by the EGF for these polynomials defined in the previous formula for a given sequence { x i } {\displaystyle \{x_{i}\}} . === Reciprocals of an OGF (special case of the powers formula) === The power series for the reciprocal of a generating function, F ( z ) {\displaystyle F(z)} , is expanded by 1 F ( z ) = 1 f 0 − f 1 f 0 2 z + ( f 1 2 − f 0 f 2 ) f 0 3 z 2 − f 1 3 − 2 f 0 f 1 f 2 + f 0 2 f 3 f 0 4 z 3 + ⋯ . {\displaystyle {\frac {1}{F(z)}}={\frac {1}{f_{0}}}-{\frac {f_{1}}{f_{0}^{2}}}z+{\frac {\left(f_{1}^{2}-f_{0}f_{2}\right)}{f_{0}^{3}}}z^{2}-{\frac {f_{1}^{3}-2f_{0}f_{1}f_{2}+f_{0}^{2}f_{3}}{f_{0}^{4}}}z^{3}+\cdots .} If we let b n := [ z n ] 1 / F ( z ) {\displaystyle b_{n}:=[z^{n}]1/F(z)} denote the coefficients in the expansion of the reciprocal generating function, then we have the following recurrence relation: b n = − 1 f 0 ( f 1 b n − 1 + f 2 b n − 2 + ⋯ + f n b 0 ) , n ≥ 1. {\displaystyle b_{n}=-{\frac {1}{f_{0}}}\left(f_{1}b_{n-1}+f_{2}b_{n-2}+\cdots +f_{n}b_{0}\right),n\geq 1.} === Powers of an OGF === Let m ∈ C {\displaystyle m\in \mathbb {C} } be fixed, suppose that f 0 = 1 {\displaystyle f_{0}=1} , and denote b n ( m ) := [ z n ] F ( z ) m {\displaystyle b_{n}^{(m)}:=[z^{n}]F(z)^{m}} . Then we have a series expansion for F ( z ) m {\displaystyle F(z)^{m}} given by F ( z ) m = 1 + m f 1 z + m ( ( m − 1 ) f 1 2 + 2 f 2 ) z 2 2 + ( m ( m − 1 ) ( m − 2 ) f 1 3 + 6 m ( m − 1 ) f 1 f 2 + 6 m f 3 ) z 3 6 + ⋯ , {\displaystyle F(z)^{m}=1+mf_{1}z+m\left((m-1)f_{1}^{2}+2f_{2}\right){\frac {z^{2}}{2}}+\left(m(m-1)(m-2)f_{1}^{3}+6m(m-1)f_{1}f_{2}+6mf_{3}\right){\frac {z^{3}}{6}}+\cdots ,} and the coefficients b n ( m ) {\displaystyle b_{n}^{(m)}} satisfy a recurrence relation of the form n ⋅ b n ( m ) = ( m − n + 1 ) f 1 b n − 1 ( m ) + ( 2 m − n + 2 ) f 2 b n − 2 ( m ) + ⋯ + ( ( n − 1 ) m − 1 ) f n − 1 b 1 ( m ) + n m f n , n ≥ 1.
{\displaystyle n\cdot b_{n}^{(m)}=(m-n+1)f_{1}b_{n-1}^{(m)}+(2m-n+2)f_{2}b_{n-2}^{(m)}+\cdots +((n-1)m-1)f_{n-1}b_{1}^{(m)}+nmf_{n},n\geq 1.} Another formula for the coefficients, b n ( m ) {\displaystyle b_{n}^{(m)}} , is expanded by the Bell polynomials as F ( z ) m = f 0 m + ∑ n ≥ 1 ( ∑ 1 ≤ k ≤ n ( m ) k f 0 m − k B n , k ( f 1 ⋅ 1 ! , f 2 ⋅ 2 ! , … ) ) z n n ! , {\displaystyle F(z)^{m}=f_{0}^{m}+\sum _{n\geq 1}\left(\sum _{1\leq k\leq n}(m)_{k}f_{0}^{m-k}B_{n,k}(f_{1}\cdot 1!,f_{2}\cdot 2!,\ldots )\right){\frac {z^{n}}{n!}},} where ( r ) n {\displaystyle (r)_{n}} denotes the Pochhammer symbol, here the falling factorial m ( m − 1 ) ⋯ ( m − k + 1 ) {\displaystyle m(m-1)\cdots (m-k+1)} . === Logarithms of an OGF === If we let f 0 = 1 {\displaystyle f_{0}=1} and define q n := [ z n ] log ⁡ F ( z ) {\displaystyle q_{n}:=[z^{n}]\log F(z)} , then we have a power series expansion for the composite generating function given by log ⁡ F ( z ) = f 1 z + ( 2 f 2 − f 1 2 ) z 2 2 + ( 3 f 3 − 3 f 1 f 2 + f 1 3 ) z 3 3 + ⋯ , {\displaystyle \log F(z)=f_{1}z+\left(2f_{2}-f_{1}^{2}\right){\frac {z^{2}}{2}}+\left(3f_{3}-3f_{1}f_{2}+f_{1}^{3}\right){\frac {z^{3}}{3}}+\cdots ,} where the coefficients, q n {\displaystyle q_{n}} , in the previous expansion satisfy the recurrence relation given by n ⋅ q n = n f n − ( n − 1 ) f 1 q n − 1 − ( n − 2 ) f 2 q n − 2 − ⋯ − f n − 1 q 1 , {\displaystyle n\cdot q_{n}=nf_{n}-(n-1)f_{1}q_{n-1}-(n-2)f_{2}q_{n-2}-\cdots -f_{n-1}q_{1},} and a corresponding formula expanded by the Bell polynomials in the form of the power series coefficients of the following generating function: log ⁡ F ( z ) = ∑ n ≥ 1 ( ∑ 1 ≤ k ≤ n ( − 1 ) k − 1 ( k − 1 ) ! B n , k ( f 1 ⋅ 1 ! , f 2 ⋅ 2 ! , … ) ) z n n ! . {\displaystyle \log F(z)=\sum _{n\geq 1}\left(\sum _{1\leq k\leq n}(-1)^{k-1}(k-1)!B_{n,k}(f_{1}\cdot 1!,f_{2}\cdot 2!,\ldots )\right){\frac {z^{n}}{n!}}.} === Faà di Bruno's formula === Let F ^ ( z ) {\displaystyle {\widehat {F}}(z)} denote the EGF of the sequence, { f n } n ≥ 0 {\displaystyle \{f_{n}\}_{n\geq 0}} , and suppose that G ^ ( z ) {\displaystyle {\widehat {G}}(z)} is the EGF of the sequence, { g n } n ≥ 0 {\displaystyle \{g_{n}\}_{n\geq 0}} . Faà di Bruno's formula implies that the sequence, { h n } n ≥ 0 {\displaystyle \{h_{n}\}_{n\geq 0}} , generated by the composition H ^ ( z ) := F ^ ( G ^ ( z ) ) {\displaystyle {\widehat {H}}(z):={\widehat {F}}({\widehat {G}}(z))} , can be expressed in terms of the exponential Bell polynomials as follows: h n = ∑ 1 ≤ k ≤ n f k ⋅ B n , k ( g 1 , g 2 , ⋯ , g n − k + 1 ) + f 0 ⋅ δ n , 0 . {\displaystyle h_{n}=\sum _{1\leq k\leq n}f_{k}\cdot B_{n,k}(g_{1},g_{2},\cdots ,g_{n-k+1})+f_{0}\cdot \delta _{n,0}.} == Integral transformations == === OGF ⟷ EGF conversion formulas === We have the following integral formulas for a , b ∈ Z + {\displaystyle a,b\in \mathbb {Z} ^{+}} which can be applied termwise with respect to z {\displaystyle z} when z {\displaystyle z} is taken to be any formal power series variable: ∑ n ≥ 0 f n z n = ∫ 0 ∞ F ^ ( t z ) e − t d t = z − 1 L [ F ^ ] ( z − 1 ) {\displaystyle \sum _{n\geq 0}f_{n}z^{n}=\int _{0}^{\infty }{\widehat {F}}(tz)e^{-t}dt=z^{-1}{\mathcal {L}}[{\widehat {F}}](z^{-1})} ∑ n ≥ 0 Γ ( a n + b ) ⋅ f n z n = ∫ 0 ∞ t b − 1 e − t F ( t a z ) d t . {\displaystyle \sum _{n\geq 0}\Gamma (an+b)\cdot f_{n}z^{n}=\int _{0}^{\infty }t^{b-1}e^{-t}F(t^{a}z)dt.} ∑ n ≥ 0 f n n ! z n = 1 2 π ∫ − π π F ( z e − ı ϑ ) e e ı ϑ d ϑ .
{\displaystyle \sum _{n\geq 0}{\frac {f_{n}}{n!}}z^{n}={\frac {1}{2\pi }}\int _{-\pi }^{\pi }F\left(ze^{-\imath \vartheta }\right)e^{e^{\imath \vartheta }}d\vartheta .} Notice that the first and last of these integral formulas are used to convert from the EGF to the OGF of a sequence, and from the OGF to the EGF of a sequence, whenever these integrals are convergent. The first integral formula corresponds to the Laplace transform (or sometimes the formal Laplace–Borel transformation) of generating functions, denoted by L [ F ] ( z ) {\displaystyle {\mathcal {L}}[F](z)} , defined in the references. Other integral representations for the gamma function in the second of the previous formulas can of course also be used to construct similar integral transformations. One particular formula results in the case of the double factorial function example given immediately below in this section. The last integral formula may be compared with Hankel's loop integral for the reciprocal gamma function applied termwise to the power series for F ( z ) {\displaystyle F(z)} . ==== Example: A double factorial integral for the EGF of the Stirling numbers of the second kind ==== The single factorial function, ( 2 n ) ! {\displaystyle (2n)!} , is expressed as a product of two double factorial functions of the form ( 2 n ) ! = ( 2 n ) ! ! × ( 2 n − 1 ) ! ! = 4 n ⋅ n ! π × Γ ( n + 1 2 ) , {\displaystyle (2n)!=(2n)!!\times (2n-1)!!={\frac {4^{n}\cdot n!}{\sqrt {\pi }}}\times \Gamma \left(n+{\frac {1}{2}}\right),} where an integral for the double factorial function, or rational gamma function, is given by 1 2 ⋅ ( 2 n − 1 ) ! ! = 2 n 4 π Γ ( n + 1 2 ) = 1 2 π × ∫ 0 ∞ e − t 2 / 2 t 2 n d t , {\displaystyle {\frac {1}{2}}\cdot (2n-1)!!={\frac {2^{n}}{\sqrt {4\pi }}}\Gamma \left(n+{\frac {1}{2}}\right)={\frac {1}{\sqrt {2\pi }}}\times \int _{0}^{\infty }e^{-t^{2}/2}t^{2n}\,dt,} for natural numbers n ≥ 0 {\displaystyle n\geq 0} . This integral representation of ( 2 n − 1 ) ! ! {\displaystyle (2n-1)!!} then implies that for fixed non-zero q ∈ C {\displaystyle q\in \mathbb {C} } and any integral powers k ≥ 0 {\displaystyle k\geq 0} , we have the formula log ⁡ ( q ) k k ! = 1 ( 2 k ) ! × [ ∫ 0 ∞ 2 e − t 2 / 2 2 π ( 2 log ⁡ ( q ) ⋅ t ) 2 k d t ] . {\displaystyle {\frac {\log(q)^{k}}{k!}}={\frac {1}{(2k)!}}\times \left[\int _{0}^{\infty }{\frac {2e^{-t^{2}/2}}{\sqrt {2\pi }}}\left({\sqrt {2\log(q)}}\cdot t\right)^{2k}\,dt\right].} Thus for any prescribed integer j ≥ 0 {\displaystyle j\geq 0} , we can use the previous integral representation together with the formula for extracting arithmetic progressions from a sequence OGF given above to formulate the next integral representation for the so-termed modified Stirling number EGF as ∑ n ≥ 0 { 2 n j } log ⁡ ( q ) n n ! = ∫ 0 ∞ e − t 2 / 2 2 π ⋅ j ! [ ∑ b = ± 1 ( e b 2 log ⁡ ( q ) ⋅ t − 1 ) j ] d t , {\displaystyle \sum _{n\geq 0}\left\{{\begin{matrix}2n\\j\end{matrix}}\right\}{\frac {\log(q)^{n}}{n!}}=\int _{0}^{\infty }{\frac {e^{-t^{2}/2}}{{\sqrt {2\pi }}\cdot j!}}\left[\sum _{b=\pm 1}\left(e^{b{\sqrt {2\log(q)}}\cdot t}-1\right)^{j}\right]dt,} which is convergent provided that the parameter satisfies 0 < | q | < 1 {\displaystyle 0<|q|<1} .
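This integral representation lends itself to a quick numerical sanity check. The sketch below assumes the particular values q = 1/2 and j = 3, truncates the defining series at 80 terms, and uses the fact that for 0 < q < 1 the two b = ±1 terms inside the integral are complex conjugates of each other:

```python
# Numerical check of the modified Stirling number EGF integral for q = 1/2, j = 3.
import numpy as np
from scipy.integrate import quad
from sympy.functions.combinatorial.numbers import stirling
from math import factorial, log, sqrt, pi

q, j = 0.5, 3
x = log(q)                                   # log(q) < 0 for 0 < q < 1
lhs = sum(int(stirling(2*n, j)) * x**n / factorial(n) for n in range(80))

a = sqrt(2*abs(x))                           # sqrt(2 log q) = i*a with a real
def integrand(t):
    w = (np.exp(1j*a*t) - 1)**j              # the b = -1 term is the conjugate of this
    return np.exp(-t**2/2) / (sqrt(2*pi) * factorial(j)) * 2*w.real

rhs, _ = quad(integrand, 0, np.inf)
print(lhs, rhs)                              # the two values should agree
```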
==== Example: An EGF formula for the higher-order derivatives of the geometric series ==== For fixed non-zero c , z ∈ C {\displaystyle c,z\in \mathbb {C} } defined such that | c z | < 1 {\displaystyle |cz|<1} , let the geometric series in the non-negative integral powers ( c z ) n {\displaystyle (cz)^{n}} be denoted by G ( z ) := 1 / ( 1 − c z ) {\displaystyle G(z):=1/(1-cz)} . The corresponding higher-order j t h {\displaystyle j^{th}} derivatives of the geometric series with respect to z {\displaystyle z} are denoted by the sequence of functions G j ( z ) := z j 1 − c z × ( d d z ) ( j ) [ G ( z ) ] , {\displaystyle G_{j}(z):={\frac {z^{j}}{1-cz}}\times \left({\frac {d}{dz}}\right)^{(j)}\left[G(z)\right],} for non-negative integers j ≥ 0 {\displaystyle j\geq 0} . These j t h {\displaystyle j^{th}} derivatives of the ordinary geometric series can be shown, for example by induction, to satisfy an explicit closed-form formula given by G j ( z ) = ( c z ) j ⋅ j ! ( 1 − c z ) j + 2 , {\displaystyle G_{j}(z)={\frac {(cz)^{j}\cdot j!}{(1-cz)^{j+2}}},} for any j ≥ 0 {\displaystyle j\geq 0} whenever | c z | < 1 {\displaystyle |cz|<1} . As an example of the third OGF ⟼ {\displaystyle \longmapsto } EGF conversion formula cited above, we can compute the following corresponding exponential forms of the generating functions G j ( z ) {\displaystyle G_{j}(z)} : G ^ j ( z ) = 1 2 π ∫ − π + π G j ( z e − ı t ) e e ı t d t = ( c z ) j e c z ( j + 1 ) ( j + 1 + c z ) . {\displaystyle {\widehat {G}}_{j}(z)={\frac {1}{2\pi }}\int _{-\pi }^{+\pi }G_{j}\left(ze^{-\imath t}\right)e^{e^{\imath t}}dt={\frac {(cz)^{j}e^{cz}}{(j+1)}}\left(j+1+cz\right).} === Fractional integrals and derivatives === Fractional integrals and fractional derivatives (see the main article) form another generalized class of integration and differentiation operations that can be applied to the OGF of a sequence to form the corresponding OGF of a transformed sequence. For ℜ ( α ) > 0 {\displaystyle \Re (\alpha )>0} we define the fractional integral operator (of order α {\displaystyle \alpha } ) by the integral transformation I α F ( z ) = 1 Γ ( α ) ∫ 0 z ( z − t ) α − 1 F ( t ) d t , {\displaystyle I^{\alpha }F(z)={\frac {1}{\Gamma (\alpha )}}\int _{0}^{z}(z-t)^{\alpha -1}F(t)dt,} which corresponds to the (formal) power series given by I α F ( z ) = ∑ n ≥ 0 n ! Γ ( n + α + 1 ) f n z n + α . {\displaystyle I^{\alpha }F(z)=\sum _{n\geq 0}{\frac {n!}{\Gamma (n+\alpha +1)}}f_{n}z^{n+\alpha }.} For fixed α , β ∈ C {\displaystyle \alpha ,\beta \in \mathbb {C} } defined such that ℜ ( α ) , ℜ ( β ) > 0 {\displaystyle \Re (\alpha ),\Re (\beta )>0} , the operators satisfy I α I β = I α + β {\displaystyle I^{\alpha }I^{\beta }=I^{\alpha +\beta }} . Moreover, for fixed α ∈ C {\displaystyle \alpha \in \mathbb {C} } and integers n {\displaystyle n} satisfying 0 < ℜ ( α ) < n {\displaystyle 0<\Re (\alpha )<n} we can define the notion of the fractional derivative satisfying the properties that D α F ( z ) = d ( n ) d z ( n ) I n − α F ( z ) , {\displaystyle D^{\alpha }F(z)={\frac {d^{(n)}}{dz^{(n)}}}I^{n-\alpha }F(z),} and D k I α = D n I α + n − k {\displaystyle D^{k}I^{\alpha }=D^{n}I^{\alpha +n-k}} for k = 1 , 2 , … , n , {\displaystyle k=1,2,\ldots ,n,} where we have the semigroup property that D α D β = D α + β {\displaystyle D^{\alpha }D^{\beta }=D^{\alpha +\beta }} only when none of α , β , α + β {\displaystyle \alpha ,\beta ,\alpha +\beta } is integer-valued.
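The integral and series forms of the fractional integral can be compared directly. The following sketch assumes F(z) = e^z (so that f_n = 1/n!) and the non-integer order α = 3/2, for which the integrand is smooth on [0, z]:

```python
# Check the fractional integral I^alpha against its power series expansion,
# assuming F(z) = exp(z) and alpha = 3/2.
from math import gamma, exp
from scipy.integrate import quad

alpha, z = 1.5, 0.7

# integral form: (1/Gamma(alpha)) * int_0^z (z - t)^(alpha - 1) F(t) dt
integral, _ = quad(lambda t: (z - t)**(alpha - 1) * exp(t), 0, z)
integral /= gamma(alpha)

# series form: sum_n n!/Gamma(n + alpha + 1) f_n z^(n + alpha), with f_n = 1/n!
series = sum(z**(n + alpha) / gamma(n + alpha + 1) for n in range(60))

print(integral, series)   # the two evaluations should agree
```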
=== Polylogarithm series transformations === For fixed s ∈ Z + {\displaystyle s\in \mathbb {Z} ^{+}} , we have that (compare to the special case of the integral formula for the Nielsen generalized polylogarithm function defined in the references) ∑ n ≥ 0 f n ( n + 1 ) s z n = ( − 1 ) s − 1 ( s − 1 ) ! ∫ 0 1 log s − 1 ⁡ ( t ) F ( t z ) d t . {\displaystyle \sum _{n\geq 0}{\frac {f_{n}}{(n+1)^{s}}}z^{n}={\frac {(-1)^{s-1}}{(s-1)!}}\int _{0}^{1}\log ^{s-1}(t)F(tz)dt.} Notice that if we set g n ≡ f n + 1 {\displaystyle g_{n}\equiv f_{n+1}} , the integral with respect to the generating function, G ( z ) {\displaystyle G(z)} , in the last equation when z ≡ 1 {\displaystyle z\equiv 1} corresponds to the Dirichlet generating function, or DGF, F ~ ( s ) {\displaystyle {\widetilde {F}}(s)} , of the sequence { f n } {\displaystyle \{f_{n}\}} , provided that the integral converges. This class of polylogarithm-related integral transformations is related to the derivative-based zeta series transformations defined in the next sections. === Square series generating function transformations === For fixed non-zero q , c , z ∈ C {\displaystyle q,c,z\in \mathbb {C} } such that | q | < 1 {\displaystyle |q|<1} and | c z | < 1 {\displaystyle |cz|<1} , we have the following integral representations for the so-termed square series generating function associated with the sequence { f n } {\displaystyle \{f_{n}\}} , which can be integrated termwise with respect to z {\displaystyle z} : ∑ n ≥ 0 q n 2 f n ⋅ ( c z ) n = 1 2 π ∫ 0 ∞ e − t 2 / 2 [ F ( e t 2 log ⁡ ( q ) ⋅ c z ) + F ( e − t 2 log ⁡ ( q ) ⋅ c z ) ] d t . {\displaystyle \sum _{n\geq 0}q^{n^{2}}f_{n}\cdot (cz)^{n}={\frac {1}{\sqrt {2\pi }}}\int _{0}^{\infty }e^{-t^{2}/2}\left[F\left(e^{t{\sqrt {2\log(q)}}}\cdot cz\right)+F\left(e^{-t{\sqrt {2\log(q)}}}\cdot cz\right)\right]dt.} This result, which is proved in the reference, follows from a variant of the double factorial function transformation integral for the Stirling numbers of the second kind given as an example above. In particular, since q n 2 = exp ⁡ ( n 2 ⋅ log ⁡ ( q ) ) = 1 + n 2 log ⁡ ( q ) + n 4 log ⁡ ( q ) 2 2 ! + n 6 log ⁡ ( q ) 3 3 ! + ⋯ , {\displaystyle q^{n^{2}}=\exp(n^{2}\cdot \log(q))=1+n^{2}\log(q)+n^{4}{\frac {\log(q)^{2}}{2!}}+n^{6}{\frac {\log(q)^{3}}{3!}}+\cdots ,} we can use a variant of the positive-order derivative-based OGF transformations defined in the next sections involving the Stirling numbers of the second kind to obtain an integral formula for the generating function of the sequence, { S ( 2 n , j ) / n ! } {\displaystyle \left\{S(2n,j)/n!\right\}} , and then perform a sum over the j t h {\displaystyle j^{th}} derivatives of the formal OGF, F ( z ) {\displaystyle F(z)} , to obtain the result in the previous equation where the arithmetic progression generating function at hand is denoted by ∑ n ≥ 0 { 2 n j } z 2 n ( 2 n ) ! = 1 2 j ! ( ( e z − 1 ) j + ( e − z − 1 ) j ) , {\displaystyle \sum _{n\geq 0}\left\{{\begin{matrix}2n\\j\end{matrix}}\right\}{\frac {z^{2n}}{(2n)!}}={\frac {1}{2j!}}\left((e^{z}-1)^{j}+(e^{-z}-1)^{j}\right),} for each fixed j ∈ N {\displaystyle j\in \mathbb {N} } .
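The square series representation also admits a quick numerical check. The sketch below assumes f_n ≡ 1 (so F(z) = 1/(1−z)), q = 0.8, c = 1 and z = 0.3; for 0 < q < 1 the two branch terms inside the integral are complex conjugates, so their sum is twice a real part:

```python
# Numerical check of the square series transformation for F(z) = 1/(1 - z).
import numpy as np
from scipy.integrate import quad
from math import log, sqrt, pi

q, z = 0.8, 0.3
lhs = sum(q**(n*n) * z**n for n in range(200))

a = sqrt(2*abs(log(q)))          # sqrt(2 log q) = i*a for 0 < q < 1
F = lambda w: 1.0/(1.0 - w)      # OGF of the all-ones sequence; |z e^{iat}| < 1 here

def integrand(t):
    # the two branch terms are complex conjugates, so their sum is 2 Re F(...)
    return np.exp(-t**2/2) * 2 * F(z*np.exp(1j*a*t)).real

rhs, _ = quad(integrand, 0, np.inf)
print(lhs, rhs/sqrt(2*pi))       # the two values should agree
```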
== Hadamard products and diagonal generating functions == We have an integral representation for the Hadamard product of two generating functions, F ( z ) {\displaystyle F(z)} and G ( z ) {\displaystyle G(z)} , stated in the following form: ( F ⊙ G ) ( z ) := ∑ n ≥ 0 f n g n z n = 1 2 π ∫ 0 2 π F ( z e i t ) G ( z e − i t ) d t , {\displaystyle (F\odot G)(z):=\sum _{n\geq 0}f_{n}g_{n}z^{n}={\frac {1}{2\pi }}\int _{0}^{2\pi }F\left({\sqrt {z}}e^{it}\right)G\left({\sqrt {z}}e^{-it}\right)dt,} where i is the imaginary unit. More information about Hadamard products as diagonal generating functions of multivariate sequences and/or generating functions and the classes of generating functions these diagonal OGFs belong to is found in Stanley's book. The reference also provides nested coefficient extraction formulas of the form diag ⁡ ( F 1 ⋯ F k ) := ∑ n ≥ 0 f 1 , n ⋯ f k , n z n = [ x k − 1 0 ⋯ x 2 0 x 1 0 ] F k ( z x k − 1 ) F k − 1 ( x k − 1 x k − 2 ) ⋯ F 2 ( x 2 x 1 ) F 1 ( x 1 ) , {\displaystyle \operatorname {diag} \left(F_{1}\cdots F_{k}\right):=\sum _{n\geq 0}f_{1,n}\cdots f_{k,n}z^{n}=[x_{k-1}^{0}\cdots x_{2}^{0}x_{1}^{0}]F_{k}\left({\frac {z}{x_{k-1}}}\right)F_{k-1}\left({\frac {x_{k-1}}{x_{k-2}}}\right)\cdots F_{2}\left({\frac {x_{2}}{x_{1}}}\right)F_{1}(x_{1}),} which are particularly useful in the cases where the component sequence generating functions, F i ( z ) {\displaystyle F_{i}(z)} , can be expanded in a Laurent series, or fractional series, in z {\displaystyle z} , such as in the special case where all of the component generating functions are rational, which leads to an algebraic form of the corresponding diagonal generating function. === Example: Hadamard products of rational generating functions === In general, the Hadamard product of two rational generating functions is itself rational. This is seen by noticing that the coefficients of a rational generating function form quasi-polynomial terms of the form f n = p 1 ( n ) ρ 1 n + ⋯ + p ℓ ( n ) ρ ℓ n , {\displaystyle f_{n}=p_{1}(n)\rho _{1}^{n}+\cdots +p_{\ell }(n)\rho _{\ell }^{n},} where the reciprocal roots, ρ i ∈ C {\displaystyle \rho _{i}\in \mathbb {C} } , are fixed scalars and where p i ( n ) {\displaystyle p_{i}(n)} is a polynomial in n {\displaystyle n} for all 1 ≤ i ≤ ℓ {\displaystyle 1\leq i\leq \ell } . For example, the Hadamard product of the two generating functions F ( z ) := 1 1 + a 1 z + a 2 z 2 {\displaystyle F(z):={\frac {1}{1+a_{1}z+a_{2}z^{2}}}} and G ( z ) := 1 1 + b 1 z + b 2 z 2 {\displaystyle G(z):={\frac {1}{1+b_{1}z+b_{2}z^{2}}}} is given by the rational generating function formula ( F ⊙ G ) ( z ) = 1 − a 2 b 2 z 2 1 − a 1 b 1 z + ( a 2 b 1 2 + a 1 2 b 2 − a 2 b 2 ) z 2 − a 1 a 2 b 1 b 2 z 3 + a 2 2 b 2 2 z 4 . 
{\displaystyle (F\odot G)(z)={\frac {1-a_{2}b_{2}z^{2}}{1-a_{1}b_{1}z+\left(a_{2}b_{1}^{2}+a_{1}^{2}b_{2}-a_{2}b_{2}\right)z^{2}-a_{1}a_{2}b_{1}b_{2}z^{3}+a_{2}^{2}b_{2}^{2}z^{4}}}.} === Example: Factorial (approximate Laplace) transformations === Ordinary generating functions for generalized factorial functions, formed as special cases of the generalized rising factorial product functions, or Pochhammer k-symbol, defined by p n ( α , R ) := R ( R + α ) ⋯ ( R + ( n − 1 ) α ) = α n ⋅ ( R α ) n , {\displaystyle p_{n}(\alpha ,R):=R(R+\alpha )\cdots (R+(n-1)\alpha )=\alpha ^{n}\cdot \left({\frac {R}{\alpha }}\right)_{n},} where R {\displaystyle R} is fixed, α ≠ 0 {\displaystyle \alpha \neq 0} , and ( x ) n {\displaystyle (x)_{n}} denotes the Pochhammer symbol, are generated (at least formally) by the Jacobi-type J-fractions (or special forms of continued fractions) established in the reference. If we let Conv h ⁡ ( α , R ; z ) := FP h ⁡ ( α , R ; z ) / FQ h ⁡ ( α , R ; z ) {\displaystyle \operatorname {Conv} _{h}(\alpha ,R;z):=\operatorname {FP} _{h}(\alpha ,R;z)/\operatorname {FQ} _{h}(\alpha ,R;z)} denote the h th {\displaystyle h^{\text{th}}} convergent to these infinite continued fractions, where the component convergent functions are defined for all integers h ≥ 2 {\displaystyle h\geq 2} by FP h ⁡ ( α , R ; z ) = ∑ n = 0 h − 1 [ ∑ k = 0 n ( h k ) ( 1 − h − R α ) k ( R α ) n − k ] ( α z ) n , {\displaystyle \operatorname {FP} _{h}(\alpha ,R;z)=\sum _{n=0}^{h-1}\left[\sum _{k=0}^{n}{\binom {h}{k}}\left(1-h-{\frac {R}{\alpha }}\right)_{k}\left({\frac {R}{\alpha }}\right)_{n-k}\right](\alpha z)^{n},} and FQ h ⁡ ( α , R ; z ) = ( − α z ) h ⋅ h ! × L h ( R / α − 1 ) ( ( α z ) − 1 ) = ∑ k = 0 h ( h k ) [ ∏ j = 0 k − 1 ( R + ( h − 1 − j ) α ) ] ( − z ) k , {\displaystyle {\begin{aligned}\operatorname {FQ} _{h}(\alpha ,R;z)&=(-\alpha z)^{h}\cdot h!\times L_{h}^{\left(R/\alpha -1\right)}\left((\alpha z)^{-1}\right)\\&=\sum _{k=0}^{h}{\binom {h}{k}}\left[\prod _{j=0}^{k-1}(R+(h-1-j)\alpha )\right](-z)^{k},\end{aligned}}} where L n ( β ) ( x ) {\displaystyle L_{n}^{(\beta )}(x)} denotes an associated Laguerre polynomial, then we have that the h t h {\displaystyle h^{th}} convergent function, Conv h ⁡ ( α , R ; z ) {\displaystyle \operatorname {Conv} _{h}(\alpha ,R;z)} , exactly enumerates the product sequences, p n ( α , R ) {\displaystyle p_{n}(\alpha ,R)} , for all 0 ≤ n < 2 h {\displaystyle 0\leq n<2h} . For each h ≥ 2 {\displaystyle h\geq 2} , the h t h {\displaystyle h^{th}} convergent function is expanded as a finite sum involving only paired reciprocals of the Laguerre polynomials in the form of Conv h ⁡ ( α , R ; z ) = ∑ i = 0 h − 1 ( R α + i − 1 i ) × ( − α z ) − 1 ( i + 1 ) ⋅ L i ( R / α − 1 ) ( ( α z ) − 1 ) L i + 1 ( R / α − 1 ) ( ( α z ) − 1 ) {\displaystyle \operatorname {Conv} _{h}(\alpha ,R;z)=\sum _{i=0}^{h-1}{\binom {{\frac {R}{\alpha }}+i-1}{i}}\times {\frac {(-\alpha z)^{-1}}{(i+1)\cdot L_{i}^{\left(R/\alpha -1\right)}\left((\alpha z)^{-1}\right)L_{i+1}^{\left(R/\alpha -1\right)}\left((\alpha z)^{-1}\right)}}} Moreover, since the single factorial function is given by both n ! = p n ( 1 , 1 ) {\displaystyle n!=p_{n}(1,1)} and n ! = p n ( − 1 , n ) {\displaystyle n!=p_{n}(-1,n)} , we can generate the single factorial function terms using the approximate rational convergent generating functions up to order 2 h {\displaystyle 2h} .
This observation suggests an approach to approximating the exact (formal) Laplace–Borel transform usually given in terms of the integral representation from the previous section by a Hadamard product, or diagonal-coefficient, generating function. In particular, given any OGF G ( z ) {\displaystyle G(z)} we can form the approximate Laplace transform, which is 2 h {\displaystyle 2h} -order accurate, by the diagonal coefficient extraction formula stated above given by L ~ h [ G ] ( z ) := [ x 0 ] Conv h ⁡ ( 1 , 1 ; z x ) G ( x ) = 1 2 π ∫ 0 2 π Conv h ⁡ ( 1 , 1 ; z e I t ) G ( z e − I t ) d t . {\displaystyle {\begin{aligned}{\widetilde {\mathcal {L}}}_{h}[G](z)&:=[x^{0}]\operatorname {Conv} _{h}\left(1,1;{\frac {z}{x}}\right)G(x)\\&\ ={\frac {1}{2\pi }}\int _{0}^{2\pi }\operatorname {Conv} _{h}\left(1,1;{\sqrt {z}}e^{It}\right)G\left({\sqrt {z}}e^{-It}\right)dt.\end{aligned}}} Examples of sequences enumerated through these diagonal coefficient generating functions arising from the sequence factorial function multiplier provided by the rational convergent functions include n ! 2 = [ z n ] [ x 0 ] Conv h ⁡ ( − 1 , n ; z x ) Conv h ⁡ ( − 1 , n ; x ) , h ≥ n ( 2 n n ) = [ x 1 0 x 2 0 z n ] Conv h ⁡ ( − 2 , 2 n ; z x 2 ) Conv h ⁡ ( − 2 , 2 n − 1 ; x 2 x 1 ) I 0 ( 2 x 1 ) ( 3 n n ) ( 2 n n ) = [ x 1 0 x 2 0 z n ] Conv h ⁡ ( − 3 , 3 n − 1 ; 3 z x 2 ) Conv h ⁡ ( − 3 , 3 n − 2 ; x 2 x 1 ) I 0 ( 2 x 1 ) ! n = n ! × ∑ i = 0 n ( − 1 ) i i ! = [ z n x 0 ] ( e − x ( 1 − x ) Conv n ⁡ ( − 1 , n ; z x ) ) af ⁡ ( n ) = ∑ k = 1 n ( − 1 ) n − k k ! = [ z n ] ( Conv n ⁡ ( 1 , 1 ; z ) − 1 1 + z ) ( t − 1 ) n P n ( t + 1 t − 1 ) = ∑ k = 0 n ( n k ) 2 t k = [ x 1 0 x 2 0 ] [ z n ] ( Conv n ⁡ ( 1 , 1 ; z x 1 ) Conv n ⁡ ( 1 , 1 ; x 1 x 2 ) I 0 ( 2 t ⋅ x 2 ) I 0 ( 2 x 2 ) ) , n ≥ 1 ( 2 n − 1 ) ! ! = ∑ k = 1 n ( n − 1 ) ! ( k − 1 ) ! k ⋅ ( 2 k − 3 ) ! ! = [ x 1 0 x 2 0 x 3 n − 1 ] ( Conv n ⁡ ( 1 , 1 ; x 3 x 2 ) Conv n ⁡ ( 2 , 1 ; x 2 x 1 ) ( x 1 + 1 ) e x 1 ( 1 − x 2 ) ) , {\displaystyle {\begin{aligned}n!^{2}&=[z^{n}][x^{0}]\operatorname {Conv} _{h}\left(-1,n;{\frac {z}{x}}\right)\operatorname {Conv} _{h}\left(-1,n;x\right),h\geq n\\{\binom {2n}{n}}&=[x_{1}^{0}x_{2}^{0}z^{n}]\operatorname {Conv} _{h}\left(-2,2n;{\frac {z}{x_{2}}}\right)\operatorname {Conv} _{h}\left(-2,2n-1;{\frac {x_{2}}{x_{1}}}\right)I_{0}(2{\sqrt {x_{1}}})\\{\binom {3n}{n}}{\binom {2n}{n}}&=[x_{1}^{0}x_{2}^{0}z^{n}]\operatorname {Conv} _{h}\left(-3,3n-1;{\frac {3z}{x_{2}}}\right)\operatorname {Conv} _{h}\left(-3,3n-2;{\frac {x_{2}}{x_{1}}}\right)I_{0}(2{\sqrt {x_{1}}})\\!n&=n!\times \sum _{i=0}^{n}{\frac {(-1)^{i}}{i!}}=[z^{n}x^{0}]\left({\frac {e^{-x}}{(1-x)}}\operatorname {Conv} _{n}\left(-1,n;{\frac {z}{x}}\right)\right)\\\operatorname {af} (n)&=\sum _{k=1}^{n}(-1)^{n-k}k!=[z^{n}]\left({\frac {\operatorname {Conv} _{n}(1,1;z)-1}{1+z}}\right)\\(t-1)^{n}P_{n}\left({\frac {t+1}{t-1}}\right)&=\sum _{k=0}^{n}{\binom {n}{k}}^{2}t^{k}\\&=[x_{1}^{0}x_{2}^{0}][z^{n}]\left(\operatorname {Conv} _{n}\left(1,1;{\frac {z}{x_{1}}}\right)\operatorname {Conv} _{n}\left(1,1;{\frac {x_{1}}{x_{2}}}\right)I_{0}(2{\sqrt {t\cdot x_{2}}})I_{0}(2{\sqrt {x_{2}}})\right),n\geq 1\\(2n-1)!!&=\sum _{k=1}^{n}{\frac {(n-1)!}{(k-1)!}}k\cdot (2k-3)!!\\&=[x_{1}^{0}x_{2}^{0}x_{3}^{n-1}]\left(\operatorname {Conv} _{n}\left(1,1;{\frac {x_{3}}{x_{2}}}\right)\operatorname {Conv} _{n}\left(2,1;{\frac {x_{2}}{x_{1}}}\right){\frac {(x_{1}+1)e^{x_{1}}}{(1-x_{2})}}\right),\end{aligned}}} where I 0 ( z ) {\displaystyle I_{0}(z)} denotes a modified Bessel function, ! 
n {\displaystyle !n} denotes the subfactorial function, af ⁡ ( n ) {\displaystyle \operatorname {af} (n)} denotes the alternating factorial function, and P n ( x ) {\displaystyle P_{n}(x)} is a Legendre polynomial. Other examples of sequences enumerated through applications of these rational Hadamard product generating functions given in the article include the Barnes G-function, combinatorial sums involving the double factorial function, sums of powers sequences, and sequences of binomials. == Derivative transformations == === Positive and negative-order zeta series transformations === For fixed k ∈ Z + {\displaystyle k\in \mathbb {Z} ^{+}} , if the sequence OGF F ( z ) {\displaystyle F(z)} has j th {\displaystyle j^{\text{th}}} derivatives of all required orders for 1 ≤ j ≤ k {\displaystyle 1\leq j\leq k} , then the positive-order zeta series transformation is given by ∑ n ≥ 0 n k f n z n = ∑ j = 0 k { k j } z j F ( j ) ( z ) , {\displaystyle \sum _{n\geq 0}n^{k}f_{n}z^{n}=\sum _{j=0}^{k}\left\{{\begin{matrix}k\\j\end{matrix}}\right\}z^{j}F^{(j)}(z),} where { n k } {\displaystyle \scriptstyle {\left\{{\begin{matrix}n\\k\end{matrix}}\right\}}} denotes a Stirling number of the second kind. In particular, we have the following special case identity when f n ≡ 1 ∀ n {\displaystyle f_{n}\equiv 1\forall n} , where ⟨ n m ⟩ {\displaystyle \scriptstyle {\left\langle {\begin{matrix}n\\m\end{matrix}}\right\rangle }} denotes the triangle of first-order Eulerian numbers: ∑ n ≥ 0 n k z n = ∑ j = 0 k { k j } z j ⋅ j ! ( 1 − z ) j + 1 = 1 ( 1 − z ) k + 1 × ∑ 0 ≤ m < k ⟨ k m ⟩ z m + 1 . {\displaystyle \sum _{n\geq 0}n^{k}z^{n}=\sum _{j=0}^{k}\left\{{\begin{matrix}k\\j\end{matrix}}\right\}{\frac {z^{j}\cdot j!}{(1-z)^{j+1}}}={\frac {1}{(1-z)^{k+1}}}\times \sum _{0\leq m<k}\left\langle {\begin{matrix}k\\m\end{matrix}}\right\rangle z^{m+1}.} We can also expand the negative-order zeta series transformations by a similar procedure to the above expansions given in terms of the j th {\displaystyle j^{\text{th}}} -order derivatives of some F ( z ) ∈ C ∞ {\displaystyle F(z)\in C^{\infty }} and an infinite, non-triangular set of generalized Stirling numbers in reverse, or generalized Stirling numbers of the second kind defined within this context. In particular, for integers k , j ≥ 0 {\displaystyle k,j\geq 0} , define these generalized classes of Stirling numbers of the second kind by the formula { k + 2 j } ∗ := 1 j ! × ∑ m = 1 j ( j m ) ( − 1 ) j − m m k . {\displaystyle \left\{{\begin{matrix}k+2\\j\end{matrix}}\right\}_{\ast }:={\frac {1}{j!}}\times \sum _{m=1}^{j}{\binom {j}{m}}{\frac {(-1)^{j-m}}{m^{k}}}.} Then for k ∈ Z + {\displaystyle k\in \mathbb {Z} ^{+}} and some prescribed OGF, F ( z ) ∈ C ∞ {\displaystyle F(z)\in C^{\infty }} , i.e., so that the higher-order j th {\displaystyle j^{\text{th}}} derivatives of F ( z ) {\displaystyle F(z)} exist for all j ≥ 0 {\displaystyle j\geq 0} , we have that ∑ n ≥ 1 f n n k z n = ∑ j ≥ 1 { k + 2 j } ∗ z j F ( j ) ( z ) . {\displaystyle \sum _{n\geq 1}{\frac {f_{n}}{n^{k}}}z^{n}=\sum _{j\geq 1}\left\{{\begin{matrix}k+2\\j\end{matrix}}\right\}_{\ast }z^{j}F^{(j)}(z).} The first few of these zeta series transformation coefficients, { k j } ∗ {\displaystyle \scriptstyle {\left\{{\begin{matrix}k\\j\end{matrix}}\right\}_{\ast }}} , are tabulated for j ≥ 1 {\displaystyle j\geq 1} as follows: { 2 j } ∗ = ( − 1 ) j − 1 j ! , { 3 j } ∗ = ( − 1 ) j − 1 j ! H j , { 4 j } ∗ = ( − 1 ) j − 1 2 ⋅ j ! ( H j 2 + H j ( 2 ) ) , { 5 j } ∗ = ( − 1 ) j − 1 6 ⋅ j ! ( H j 3 + 3 H j H j ( 2 ) + 2 H j ( 3 ) ) , {\displaystyle {\begin{aligned}\left\{{\begin{matrix}2\\j\end{matrix}}\right\}_{\ast }&={\frac {(-1)^{j-1}}{j!}}\\\left\{{\begin{matrix}3\\j\end{matrix}}\right\}_{\ast }&={\frac {(-1)^{j-1}}{j!}}H_{j}\\\left\{{\begin{matrix}4\\j\end{matrix}}\right\}_{\ast }&={\frac {(-1)^{j-1}}{2\cdot j!}}\left(H_{j}^{2}+H_{j}^{(2)}\right)\\\left\{{\begin{matrix}5\\j\end{matrix}}\right\}_{\ast }&={\frac {(-1)^{j-1}}{6\cdot j!}}\left(H_{j}^{3}+3H_{j}H_{j}^{(2)}+2H_{j}^{(3)}\right),\end{aligned}}} where H j ( r ) := ∑ m = 1 j m − r {\displaystyle H_{j}^{(r)}:=\sum _{m=1}^{j}m^{-r}} denotes an r {\displaystyle r} -order harmonic number and H j ≡ H j ( 1 ) {\displaystyle H_{j}\equiv H_{j}^{(1)}} . These weighted-harmonic-number expansions are almost identical to the known formulas for the Stirling numbers of the first kind up to the leading sign on the weighted harmonic number terms in the expansions.
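Both transformations are straightforward to test on a concrete OGF. The following minimal Python sketch using the sympy library (our own verification aid; the choices F(z) = 1/(1 − z), k = 3, and the truncation order are arbitrary) checks the positive-order identity, the negative-order identity with the coefficients { k j }∗ defined above, and the tabulated harmonic-number form of { 4 j }∗:

import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

z = sp.symbols('z')
F = 1/(1 - z)          # so that f_n = 1 for all n
N, k = 10, 3

# Positive order: sum_{n>=0} n^k f_n z^n = sum_{j=0..k} S(k, j) z^j F^(j)(z),
# where stirling(k, j) is a Stirling number of the second kind.
lhs = sum(n**k*z**n for n in range(N))
rhs = sum(stirling(k, j)*z**j*sp.diff(F, z, j) for j in range(k + 1))
assert sp.expand(lhs - sp.series(rhs, z, 0, N).removeO()) == 0

# Negative order with the generalized coefficients {K, j}_* defined above;
# truncating at j < N suffices because z^j F^(j)(z) has valuation j.
def star(K, j):
    return sum(sp.binomial(j, m)*sp.Rational((-1)**(j - m), m**(K - 2))
               for m in range(1, j + 1))/sp.factorial(j)

lhs2 = sum(sp.Rational(1, n**k)*z**n for n in range(1, N))
rhs2 = sum(star(k + 2, j)*z**j*sp.diff(F, z, j) for j in range(1, N))
assert sp.expand(lhs2 - sp.series(rhs2, z, 0, N).removeO()) == 0

# The tabulated weighted-harmonic-number form of {4, j}_*.
H = lambda j, r: sum(sp.Rational(1, m**r) for m in range(1, j + 1))
assert all(star(4, j) == sp.Rational((-1)**(j - 1), 2*sp.factorial(j))
           *(H(j, 1)**2 + H(j, 2)) for j in range(1, 9))
print("zeta series transformation checks passed")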
==== Examples of the negative-order zeta series transformations ==== The next series related to the polylogarithm functions (the dilogarithm and trilogarithm functions, respectively), the alternating zeta function and the Riemann zeta function are formulated from the previous negative-order series results found in the references. In particular, when s := 2 {\displaystyle s:=2} (or equivalently, when k := 4 {\displaystyle k:=4} in the table above), we have the following special case series for the dilogarithm and corresponding constant value of the alternating zeta function: Li 2 ( z ) = ∑ j ≥ 1 ( − 1 ) j − 1 2 ( H j 2 + H j ( 2 ) ) z j ( 1 − z ) j + 1 ζ ∗ ( 2 ) = π 2 12 = ∑ j ≥ 1 ( H j 2 + H j ( 2 ) ) 4 ⋅ 2 j . {\displaystyle {\begin{aligned}{\text{Li}}_{2}(z)&=\sum _{j\geq 1}{\frac {(-1)^{j-1}}{2}}\left(H_{j}^{2}+H_{j}^{(2)}\right){\frac {z^{j}}{(1-z)^{j+1}}}\\\zeta ^{\ast }(2)&={\frac {\pi ^{2}}{12}}=\sum _{j\geq 1}{\frac {\left(H_{j}^{2}+H_{j}^{(2)}\right)}{4\cdot 2^{j}}}.\end{aligned}}} When s := 3 {\displaystyle s:=3} (or when k := 5 {\displaystyle k:=5} in the notation used in the previous subsection), we similarly obtain special case series for these functions given by Li 3 ( z ) = ∑ j ≥ 1 ( − 1 ) j − 1 6 ( H j 3 + 3 H j H j ( 2 ) + 2 H j ( 3 ) ) z j ( 1 − z ) j + 1 ζ ∗ ( 3 ) = 3 4 ζ ( 3 ) = ∑ j ≥ 1 ( H j 3 + 3 H j H j ( 2 ) + 2 H j ( 3 ) ) 12 ⋅ 2 j = 1 6 log ⁡ ( 2 ) 3 + ∑ j ≥ 0 H j H j ( 2 ) 2 j + 1 . {\displaystyle {\begin{aligned}{\text{Li}}_{3}(z)&=\sum _{j\geq 1}{\frac {(-1)^{j-1}}{6}}\left(H_{j}^{3}+3H_{j}H_{j}^{(2)}+2H_{j}^{(3)}\right){\frac {z^{j}}{(1-z)^{j+1}}}\\\zeta ^{\ast }(3)&={\frac {3}{4}}\zeta (3)=\sum _{j\geq 1}{\frac {\left(H_{j}^{3}+3H_{j}H_{j}^{(2)}+2H_{j}^{(3)}\right)}{12\cdot 2^{j}}}\\&={\frac {1}{6}}\log(2)^{3}+\sum _{j\geq 0}{\frac {H_{j}H_{j}^{(2)}}{2^{j+1}}}.\end{aligned}}} It is known that the first-order harmonic numbers have a closed-form exponential generating function expanded in terms of the natural logarithm, the incomplete gamma function, and the exponential integral given by ∑ n ≥ 0 H n n ! z n = e z ( E 1 ( z ) + γ + log ⁡ z ) = e z ( Γ ( 0 , z ) + γ + log ⁡ z ) . {\displaystyle \sum _{n\geq 0}{\frac {H_{n}}{n!}}z^{n}=e^{z}\left({\mbox{E}}_{1}(z)+\gamma +\log z\right)=e^{z}\left(\Gamma (0,z)+\gamma +\log z\right).} Additional series representations for the r-order harmonic number exponential generating functions for integers r ≥ 2 {\displaystyle r\geq 2} are formed as special cases of these negative-order derivative-based series transformation results. For example, the second-order harmonic numbers have a corresponding exponential generating function expanded by the series ∑ n ≥ 0 H n ( 2 ) n ! z n = ∑ j ≥ 1 H j 2 + H j ( 2 ) 2 ⋅ ( j + 1 ) ! z j e z ( j + 1 + z ) . {\displaystyle \sum _{n\geq 0}{\frac {H_{n}^{(2)}}{n!}}z^{n}=\sum _{j\geq 1}{\frac {H_{j}^{2}+H_{j}^{(2)}}{2\cdot (j+1)!}}z^{j}e^{z}\left(j+1+z\right).} === Generalized negative-order zeta series transformations === A further generalization of the negative-order series transformations defined above is related to more Hurwitz-zeta-like, or Lerch-transcendent-like, generating functions. Specifically, if we define the even more general parametrized Stirling numbers of the second kind by { k + 2 j } ( α , β ) ∗ := 1 j ! 
× ∑ 0 ≤ m ≤ j ( j m ) ( − 1 ) j − m ( α m + β ) k {\displaystyle \left\{{\begin{matrix}k+2\\j\end{matrix}}\right\}_{(\alpha ,\beta )^{\ast }}:={\frac {1}{j!}}\times \sum _{0\leq m\leq j}{\binom {j}{m}}{\frac {(-1)^{j-m}}{(\alpha m+\beta )^{k}}}} , for non-zero α , β ∈ C {\displaystyle \alpha ,\beta \in \mathbb {C} } such that − β α ∉ Z + {\displaystyle -{\frac {\beta }{\alpha }}\notin \mathbb {Z} ^{+}} , and some fixed k ≥ 1 {\displaystyle k\geq 1} , we have that ∑ n ≥ 1 f n ( α n + β ) k z n = ∑ j ≥ 1 { k + 2 j } ( α , β ) ∗ z j F ( j ) ( z ) . {\displaystyle \sum _{n\geq 1}{\frac {f_{n}}{(\alpha n+\beta )^{k}}}z^{n}=\sum _{j\geq 1}\left\{{\begin{matrix}k+2\\j\end{matrix}}\right\}_{(\alpha ,\beta )^{\ast }}z^{j}F^{(j)}(z).} Moreover, for any integers u , u 0 ≥ 0 {\displaystyle u,u_{0}\geq 0} , we have the partial series approximations to the full infinite series in the previous equation given by ∑ n = 1 u f n ( α n + β ) k z n = [ w u ] ( ∑ j = 1 u + u 0 { k + 2 j } ( α , β ) ∗ ( w z ) j F ( j ) ( w z ) 1 − w ) . {\displaystyle \sum _{n=1}^{u}{\frac {f_{n}}{(\alpha n+\beta )^{k}}}z^{n}=[w^{u}]\left(\sum _{j=1}^{u+u_{0}}\left\{{\begin{matrix}k+2\\j\end{matrix}}\right\}_{(\alpha ,\beta )^{\ast }}{\frac {(wz)^{j}F^{(j)}(wz)}{1-w}}\right).} ==== Examples of the generalized negative-order zeta series transformations ==== Series for special constants and zeta-related functions resulting from these generalized derivative-based series transformations typically involve the generalized r-order harmonic numbers defined by H n ( r ) ( α , β ) := ∑ 1 ≤ k ≤ n ( α k + β ) − r {\displaystyle H_{n}^{(r)}(\alpha ,\beta ):=\sum _{1\leq k\leq n}(\alpha k+\beta )^{-r}} for integers r ≥ 1 {\displaystyle r\geq 1} . A pair of particular series expansions for the following constants when n ∈ Z + {\displaystyle n\in \mathbb {Z} ^{+}} is fixed follow from special cases of BBP-type identities as 4 3 π 9 = ∑ j ≥ 0 8 9 j + 1 ( 2 ( j + 1 3 1 3 ) − 1 + 1 2 ( j + 2 3 2 3 ) − 1 ) log ⁡ ( n 2 − n + 1 n 2 ) = ∑ j ≥ 0 1 ( n 2 + 1 ) j + 1 ( 2 3 ⋅ ( j + 1 ) − n 2 ( j + 1 3 1 3 ) − 1 + n 2 ( j + 2 3 2 3 ) − 1 ) . {\displaystyle {\begin{aligned}{\frac {4{\sqrt {3}}\pi }{9}}&=\sum _{j\geq 0}{\frac {8}{9^{j+1}}}\left(2{\binom {j+{\frac {1}{3}}}{\frac {1}{3}}}^{-1}+{\frac {1}{2}}{\binom {j+{\frac {2}{3}}}{\frac {2}{3}}}^{-1}\right)\\\log \left({\frac {n^{2}-n+1}{n^{2}}}\right)&=\sum _{j\geq 0}{\frac {1}{(n^{2}+1)^{j+1}}}\left({\frac {2}{3\cdot (j+1)}}-n^{2}{\binom {j+{\frac {1}{3}}}{\frac {1}{3}}}^{-1}+{\frac {n}{2}}{\binom {j+{\frac {2}{3}}}{\frac {2}{3}}}^{-1}\right).\end{aligned}}} Several other series for the zeta-function-related cases of the Legendre chi function, the polygamma function, and the Riemann zeta function include χ 1 ( z ) = ∑ j ≥ 0 ( j + 1 2 1 2 ) − 1 z ⋅ ( − z 2 ) j ( 1 − z 2 ) j + 1 χ 2 ( z ) = ∑ j ≥ 0 ( j + 1 2 1 2 ) − 1 ( 1 + H j ( 1 ) ( 2 , 1 ) ) z ⋅ ( − z 2 ) j ( 1 − z 2 ) j + 1 ∑ k ≥ 0 ( − 1 ) k ( z + k ) 2 = ∑ j ≥ 0 ( j + z z ) − 1 ( 1 z 2 + 1 z H j ( 1 ) ( 2 , z ) ) 1 2 j + 1 13 18 ζ ( 3 ) = ∑ i = 1 , 2 ∑ j ≥ 0 ( j + i 3 i 3 ) − 1 ( 1 i 3 + 1 i 2 H j ( 1 ) ( 3 , i ) + 1 2 i ( H j ( 1 ) ( 3 , i ) 2 + H j ( 2 ) ( 3 , i ) ) ) ( − 1 ) i + 1 2 j + 1 . 
{\displaystyle {\begin{aligned}\chi _{1}(z)&=\sum _{j\geq 0}{\binom {j+{\frac {1}{2}}}{\frac {1}{2}}}^{-1}{\frac {z\cdot (-z^{2})^{j}}{(1-z^{2})^{j+1}}}\\\chi _{2}(z)&=\sum _{j\geq 0}{\binom {j+{\frac {1}{2}}}{\frac {1}{2}}}^{-1}\left(1+H_{j}^{(1)}(2,1)\right){\frac {z\cdot (-z^{2})^{j}}{(1-z^{2})^{j+1}}}\\\sum _{k\geq 0}{\frac {(-1)^{k}}{(z+k)^{2}}}&=\sum _{j\geq 0}{\binom {j+z}{z}}^{-1}\left({\frac {1}{z^{2}}}+{\frac {1}{z}}H_{j}^{(1)}(2,z)\right){\frac {1}{2^{j+1}}}\\{\frac {13}{18}}\zeta (3)&=\sum _{i=1,2}\sum _{j\geq 0}{\binom {j+{\frac {i}{3}}}{\frac {i}{3}}}^{-1}\left({\frac {1}{i^{3}}}+{\frac {1}{i^{2}}}H_{j}^{(1)}(3,i)+{\frac {1}{2i}}\left(H_{j}^{(1)}(3,i)^{2}+H_{j}^{(2)}(3,i)\right)\right){\frac {(-1)^{i+1}}{2^{j+1}}}.\end{aligned}}} Additionally, we can give another new explicit series representation of the inverse tangent function through its relation to the Fibonacci numbers expanded as in the references by tan − 1 ⁡ ( x ) = 5 2 ı × ∑ b = ± 1 ∑ j ≥ 0 b 5 ( j + 1 2 j ) − 1 [ ( b ı φ t / 5 ) j ( 1 − b ı φ t 5 ) j + 1 − ( b ı Φ t / 5 ) j ( 1 + b ı Φ t 5 ) j + 1 ] , {\displaystyle \tan ^{-1}(x)={\frac {\sqrt {5}}{2\imath }}\times \sum _{b=\pm 1}\sum _{j\geq 0}{\frac {b}{\sqrt {5}}}{\binom {j+{\frac {1}{2}}}{j}}^{-1}\left[{\frac {\left(b\imath \varphi t/{\sqrt {5}}\right)^{j}}{\left(1-{\frac {b\imath \varphi t}{\sqrt {5}}}\right)^{j+1}}}-{\frac {\left(b\imath \Phi t/{\sqrt {5}}\right)^{j}}{\left(1+{\frac {b\imath \Phi t}{\sqrt {5}}}\right)^{j+1}}}\right],} for t ≡ 2 x / ( 1 + 1 + 4 5 x 2 ) {\displaystyle t\equiv 2x/\left(1+{\sqrt {1+{\frac {4}{5}}x^{2}}}\right)} and where the golden ratio (and its reciprocal) are respectively defined by φ , Φ := 1 2 ( 1 ± 5 ) {\displaystyle \varphi ,\Phi :={\frac {1}{2}}\left(1\pm {\sqrt {5}}\right)} . == Inversion relations and generating function identities == === Inversion relations === An inversion relation is a pair of equations of the form g n = ∑ k = 0 n A n , k ⋅ f k ⟷ f n = ∑ k = 0 n B n , k ⋅ g k , {\displaystyle g_{n}=\sum _{k=0}^{n}A_{n,k}\cdot f_{k}\quad \longleftrightarrow \quad f_{n}=\sum _{k=0}^{n}B_{n,k}\cdot g_{k},} which is equivalent to the orthogonality relation ∑ k = j n A n , k ⋅ B k , j = δ n , j . {\displaystyle \sum _{k=j}^{n}A_{n,k}\cdot B_{k,j}=\delta _{n,j}.} Given two sequences, { f n } {\displaystyle \{f_{n}\}} and { g n } {\displaystyle \{g_{n}\}} , related by an inverse relation of the previous form, we sometimes seek to relate the OGFs and EGFs of the pair of sequences by functional equations implied by the inversion relation. This goal in some respects mirrors the more number theoretic (Lambert series) generating function relation guaranteed by the Möbius inversion formula, which provides that whenever a n = ∑ d | n b d ⟷ b n = ∑ d | n μ ( n d ) a d , {\displaystyle a_{n}=\sum _{d|n}b_{d}\quad \longleftrightarrow \quad b_{n}=\sum _{d|n}\mu \left({\frac {n}{d}}\right)a_{d},} the generating functions for the sequences, { a n } {\displaystyle \{a_{n}\}} and { b n } {\displaystyle \{b_{n}\}} , are related by the Möbius transform given by ∑ n ≥ 1 a n z n = ∑ n ≥ 1 b n z n 1 − z n . 
{\displaystyle \sum _{n\geq 1}a_{n}z^{n}=\sum _{n\geq 1}{\frac {b_{n}z^{n}}{1-z^{n}}}.} Similarly, the Euler transform of generating functions for two sequences, { a n } {\displaystyle \{a_{n}\}} and { b n } {\displaystyle \{b_{n}\}} , satisfying the relation 1 + ∑ n ≥ 1 b n z n = ∏ i ≥ 1 1 ( 1 − z i ) a i , {\displaystyle 1+\sum _{n\geq 1}b_{n}z^{n}=\prod _{i\geq 1}{\frac {1}{(1-z^{i})^{a_{i}}}},} is given in the form of 1 + B ( z ) = exp ⁡ ( ∑ k ≥ 1 A ( z k ) k ) , {\displaystyle 1+B(z)=\exp \left(\sum _{k\geq 1}{\frac {A(z^{k})}{k}}\right),} where the corresponding inversion formulas between the two sequences are given in the reference. The remainder of the results and examples given in this section sketches some of the more well-known generating function transformations provided by sequences related by inversion formulas (the binomial transform and the Stirling transform), and provides several tables of known inversion relations of various types cited in Riordan's Combinatorial Identities book. In many cases, we omit the corresponding functional equations implied by the inversion relationships between two sequences. === The binomial transform === The first inversion relation provided below, implicit in the binomial transform, is perhaps the simplest of all the inversion relations we will consider in this section. For any two sequences, { f n } {\displaystyle \{f_{n}\}} and { g n } {\displaystyle \{g_{n}\}} , related by the inversion formulas g n = ∑ k = 0 n ( n k ) ( − 1 ) k f k ⟷ f n = ∑ k = 0 n ( n k ) ( − 1 ) k g k , {\displaystyle g_{n}=\sum _{k=0}^{n}{\binom {n}{k}}(-1)^{k}f_{k}\quad \longleftrightarrow \quad f_{n}=\sum _{k=0}^{n}{\binom {n}{k}}(-1)^{k}g_{k},} we have functional equations between the OGFs and EGFs of these sequences provided by the binomial transform in the forms of G ( z ) = 1 1 − z F ( − z 1 − z ) {\displaystyle G(z)={\frac {1}{1-z}}F\left({\frac {-z}{1-z}}\right)} and G ^ ( z ) = e z F ^ ( − z ) . {\displaystyle {\widehat {G}}(z)=e^{z}{\widehat {F}}(-z).} === The Stirling transform === For any pair of sequences, { f n } {\displaystyle \{f_{n}\}} and { g n } {\displaystyle \{g_{n}\}} , related by the Stirling number inversion formula g n = ∑ k = 1 n { n k } f k ⟷ f n = ∑ k = 1 n [ n k ] ( − 1 ) n − k g k , {\displaystyle g_{n}=\sum _{k=1}^{n}\left\{{\begin{matrix}n\\k\end{matrix}}\right\}f_{k}\quad \longleftrightarrow \quad f_{n}=\sum _{k=1}^{n}\left[{\begin{matrix}n\\k\end{matrix}}\right](-1)^{n-k}g_{k},} these inversion relations between the two sequences translate into functional equations between the sequence EGFs given by the Stirling transform as G ^ ( z ) = F ^ ( e z − 1 ) {\displaystyle {\widehat {G}}(z)={\widehat {F}}\left(e^{z}-1\right)} and F ^ ( z ) = G ^ ( log ⁡ ( 1 + z ) ) . {\displaystyle {\widehat {F}}(z)={\widehat {G}}\left(\log(1+z)\right).} === Tables of inversion pairs from Riordan's book === These tables appear in chapters 2 and 3 of Riordan's book, which provides an introduction to inverse relations with many examples, though it does not stress the functional equations between the generating functions of sequences related by these inversion relations. The interested reader is encouraged to consult the original book for more details.
==== Several forms of the simplest inverse relations ==== ==== Gould classes of inverse relations ==== The terms, A n , k {\displaystyle A_{n,k}} and B n , k {\displaystyle B_{n,k}} , in the inversion formulas of the form a n = ∑ k A n , k ⋅ b k ⟷ b n = ∑ k B n , k ⋅ ( − 1 ) n − k a k , {\displaystyle a_{n}=\sum _{k}A_{n,k}\cdot b_{k}\quad \longleftrightarrow \quad b_{n}=\sum _{k}B_{n,k}\cdot (-1)^{n-k}a_{k},} forming several special cases of Gould classes of inverse relations are given in the next table. For classes 1 and 2, the range on the sum satisfies k ∈ [ 0 , n ] {\displaystyle k\in [0,n]} , and for classes 3 and 4 the bounds on the summation are given by k = n , n + 1 , … {\displaystyle k=n,n+1,\ldots } . These terms are also somewhat simplified from their original forms in the table by the identities ( p + q n − k n − k ) − q × ( p + q n − k − 1 n − k − 1 ) = p + q k − k p + q n − k ( p + q n − k n − k ) {\displaystyle {\binom {p+qn-k}{n-k}}-q\times {\binom {p+qn-k-1}{n-k-1}}={\frac {p+qk-k}{p+qn-k}}{\binom {p+qn-k}{n-k}}} ( p + q k − k n − k ) + q × ( p + q k − k n − 1 − k ) = p + q n − n + 1 p + q k − n + 1 ( p + q k − k n − k ) . {\displaystyle {\binom {p+qk-k}{n-k}}+q\times {\binom {p+qk-k}{n-1-k}}={\frac {p+qn-n+1}{p+qk-n+1}}{\binom {p+qk-k}{n-k}}.} ==== The simpler Chebyshev inverse relations ==== The so-termed simpler cases of the Chebyshev classes of inverse relations in the subsection below are given in the next table. The formulas in the table are simplified somewhat by the following identities: ( n − k k ) + ( n − k − 1 k − 1 ) = n n − k ( n − k k ) ( n k ) − ( n k − 1 ) = n + 1 − k n + 1 − 2 k ( n k ) ( n + 2 k k ) − ( n + 2 k k − 1 ) = n + 1 n + 1 + k ( n + 2 k k ) ( n + k − 1 k ) − ( n + k − 1 k − 1 ) = n − k n + k ( n + k k ) . {\displaystyle {\begin{aligned}{\binom {n-k}{k}}+{\binom {n-k-1}{k-1}}&={\frac {n}{n-k}}{\binom {n-k}{k}}\\{\binom {n}{k}}-{\binom {n}{k-1}}&={\frac {n+1-k}{n+1-2k}}{\binom {n}{k}}\\{\binom {n+2k}{k}}-{\binom {n+2k}{k-1}}&={\frac {n+1}{n+1+k}}{\binom {n+2k}{k}}\\{\binom {n+k-1}{k}}-{\binom {n+k-1}{k-1}}&={\frac {n-k}{n+k}}{\binom {n+k}{k}}.\end{aligned}}} Additionally the inversion relations given in the table also hold when n ⟼ n + p {\displaystyle n\longmapsto n+p} in any given relation. ==== Chebyshev classes of inverse relations ==== The terms, A n , k {\displaystyle A_{n,k}} and B n , k {\displaystyle B_{n,k}} , in the inversion formulas of the form a n = ∑ k A n , k ⋅ b n + c k ⟷ b n = ∑ k B n , k ⋅ ( − 1 ) k a n + c k , {\displaystyle a_{n}=\sum _{k}A_{n,k}\cdot b_{n+ck}\quad \longleftrightarrow \quad b_{n}=\sum _{k}B_{n,k}\cdot (-1)^{k}a_{n+ck},} for non-zero integers c {\displaystyle c} forming several special cases of Chebyshev classes of inverse relations are given in the next table. Additionally, these inversion relations also hold when n ⟼ n + p {\displaystyle n\longmapsto n+p} for some p = 0 , 1 , 2 , … , {\displaystyle p=0,1,2,\ldots ,} or when the sign factor of ( − 1 ) k {\displaystyle (-1)^{k}} is shifted from the terms B n , k {\displaystyle B_{n,k}} to the terms A n , k {\displaystyle A_{n,k}} . The formulas given in the previous table are simplified somewhat by the identities ( n + c k + k k ) − ( c + 1 ) ( n + c k + k − 1 k − 1 ) = n n + c k + k ( n + c k + k k ) ( n k ) + ( c + 1 ) ( n k − 1 ) = n + 1 + c k n + 1 − k ( n k ) ( n − 1 + k k ) + c ( n − 1 + k k − 1 ) = n + c k n ( n − 1 + k k ) ( n + c k k ) − ( c − 1 ) ( n + c k k − 1 ) = n + 1 n + 1 + c k − k ( n + c k k ) . 
{\displaystyle {\begin{aligned}{\binom {n+ck+k}{k}}-(c+1){\binom {n+ck+k-1}{k-1}}&={\frac {n}{n+ck+k}}{\binom {n+ck+k}{k}}\\{\binom {n}{k}}+(c+1){\binom {n}{k-1}}&={\frac {n+1+ck}{n+1-k}}{\binom {n}{k}}\\{\binom {n-1+k}{k}}+c{\binom {n-1+k}{k-1}}&={\frac {n+ck}{n}}{\binom {n-1+k}{k}}\\{\binom {n+ck}{k}}-(c-1){\binom {n+ck}{k-1}}&={\frac {n+1}{n+1+ck-k}}{\binom {n+ck}{k}}.\end{aligned}}} ==== The simpler Legendre inverse relations ==== ==== Legendre–Chebyshev classes of inverse relations ==== The Legendre–Chebyshev classes of inverse relations correspond to inversion relations of the form a n = ∑ k A n , k ⋅ b k ⟷ b n = ∑ k B n , k ⋅ ( − 1 ) n − k a k , {\displaystyle a_{n}=\sum _{k}A_{n,k}\cdot b_{k}\quad \longleftrightarrow \quad b_{n}=\sum _{k}B_{n,k}\cdot (-1)^{n-k}a_{k},} where the terms, A n , k {\displaystyle A_{n,k}} and B n , k {\displaystyle B_{n,k}} , implicitly depend on some fixed non-zero c ∈ Z {\displaystyle c\in \mathbb {Z} } . In general, given a class of Chebyshev inverse pairs of the form a n = ∑ k A n , k ⋅ b n − c k ⟷ b n = ∑ k B n , k ⋅ ( − 1 ) k a n − c k , {\displaystyle a_{n}=\sum _{k}A_{n,k}\cdot b_{n-ck}\quad \longleftrightarrow \quad b_{n}=\sum _{k}B_{n,k}\cdot (-1)^{k}a_{n-ck},} if c {\displaystyle c} is prime, the substitution of n ⟼ c n + p {\displaystyle n\longmapsto cn+p} , a c n + p ⟼ A n {\displaystyle a_{cn+p}\longmapsto A_{n}} , and b c n + p ⟼ B n {\displaystyle b_{cn+p}\longmapsto B_{n}} (possibly replacing k ⟼ n − k {\displaystyle k\longmapsto n-k} ) leads to a Legendre–Chebyshev pair of the form A n = ∑ k A c n + p , k B n − k ⟷ B n = ∑ k B c n + p , k ( − 1 ) k A n − k . {\displaystyle A_{n}=\sum _{k}A_{cn+p,k}B_{n-k}\quad \longleftrightarrow \quad B_{n}=\sum _{k}B_{cn+p,k}(-1)^{k}A_{n-k}.} Similarly, if the positive integer c := d e {\displaystyle c:=de} is composite, we can derive inversion pairs of the form A n = ∑ k A d n + p , k B n − e k ⟷ B n = ∑ k B d n + p , k ( − 1 ) k A n − e k . {\displaystyle A_{n}=\sum _{k}A_{dn+p,k}B_{n-ek}\quad \longleftrightarrow \quad B_{n}=\sum _{k}B_{dn+p,k}(-1)^{k}A_{n-ek}.} The next table summarizes several generalized classes of Legendre–Chebyshev inverse relations for some non-zero integer c {\displaystyle c} . ==== Abel inverse relations ==== Abel inverse relations correspond to Abel inverse pairs of the form a n = ∑ k = 0 n ( n k ) A n k b k ⟷ b n = ∑ k = 0 n ( n k ) B n k ( − 1 ) n − k a k , {\displaystyle a_{n}=\sum _{k=0}^{n}{\binom {n}{k}}A_{nk}b_{k}\quad \longleftrightarrow \quad b_{n}=\sum _{k=0}^{n}{\binom {n}{k}}B_{nk}(-1)^{n-k}a_{k},} where the terms, A n k {\displaystyle A_{nk}} and B n k {\displaystyle B_{nk}} , may implicitly vary with some indeterminate summation parameter x {\displaystyle x} . These relations also still hold if the binomial coefficient substitution of ( n k ) ⟼ ( n + p k + p ) {\displaystyle {\binom {n}{k}}\longmapsto {\binom {n+p}{k+p}}} is performed for some non-negative integer p {\displaystyle p} . The next table summarizes several notable forms of these Abel inverse relations.
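Inverse pairs of this kind can be validated mechanically through the orthogonality relation stated at the beginning of this section. The following minimal Python sketch using the sympy library (our own verification aid; the index ranges are arbitrary) checks orthogonality for the signed binomial pair and for the Stirling pair, and also verifies the first of the simplifying Chebyshev binomial identities quoted above:

import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

# Orthogonality: sum_{k=j..n} A(n, k) * B(k, j) = delta_{n,j}.
def orthogonal(A, B, N=8):
    return all(sum(A(n, k)*B(k, j) for k in range(j, n + 1)) == (1 if n == j else 0)
               for n in range(N) for j in range(n + 1))

# Signed binomial pair (self-inverse).
assert orthogonal(lambda n, k: (-1)**k*sp.binomial(n, k),
                  lambda n, k: (-1)**k*sp.binomial(n, k))

# Stirling pair {n k} and (-1)^(n-k) [n k].
assert orthogonal(lambda n, k: stirling(n, k, kind=2),
                  lambda n, k: (-1)**(n - k)*stirling(n, k, kind=1))

# One of the simplifying Chebyshev identities from the previous subsection.
for n in range(2, 12):
    for k in range(n):
        assert (sp.binomial(n - k, k) + sp.binomial(n - k - 1, k - 1)
                == sp.Rational(n, n - k)*sp.binomial(n - k, k))
print("inverse relation checks passed")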
==== Inverse relations derived from ordinary generating functions ==== If we let the convolved Fibonacci numbers, f k ( ± p ) {\displaystyle f_{k}^{(\pm p)}} , be defined by f n ( p ) = ∑ j ≥ 0 ( p + n − j − 1 n − j ) ( n − j j ) f n ( − p ) = ∑ j ≥ 0 ( p n + j ) ( n − j j ) ( − 1 ) n − j , {\displaystyle {\begin{aligned}f_{n}^{(p)}&=\sum _{j\geq 0}{\binom {p+n-j-1}{n-j}}{\binom {n-j}{j}}\\f_{n}^{(-p)}&=\sum _{j\geq 0}{\binom {p}{n+j}}{\binom {n-j}{j}}(-1)^{n-j},\end{aligned}}} we have the next table of inverse relations which are obtained from properties of ordinary sequence generating functions proved as in section 3.3 of Riordan's book. Note that relations 3, 4, 5, and 6 in the table may be transformed according to the substitutions a n − k ⟼ a n − q k {\displaystyle a_{n-k}\longmapsto a_{n-qk}} and b n − k ⟼ b n − q k {\displaystyle b_{n-k}\longmapsto b_{n-qk}} for some fixed non-zero integer q ≥ 1 {\displaystyle q\geq 1} . ==== Inverse relations derived from exponential generating functions ==== Let B n {\displaystyle B_{n}} and E n {\displaystyle E_{n}} denote the Bernoulli numbers and Euler numbers, respectively, and suppose that the sequences, { d 2 n } {\displaystyle \{d_{2n}\}} , { e 2 n } {\displaystyle \{e_{2n}\}} , and { f 2 n } {\displaystyle \{f_{2n}\}} are defined by the following exponential generating functions: ∑ n ≥ 0 d 2 n z 2 n ( 2 n ) ! = 2 z e z − e − z ∑ n ≥ 0 e 2 n z 2 n ( 2 n ) ! = z 2 e z + e − z − 2 ∑ n ≥ 0 f 2 n z 2 n ( 2 n ) ! = z 3 3 ( e z − e − z − 2 z ) . {\displaystyle {\begin{aligned}\sum _{n\geq 0}{\frac {d_{2n}z^{2n}}{(2n)!}}&={\frac {2z}{e^{z}-e^{-z}}}\\\sum _{n\geq 0}{\frac {e_{2n}z^{2n}}{(2n)!}}&={\frac {z^{2}}{e^{z}+e^{-z}-2}}\\\sum _{n\geq 0}{\frac {f_{2n}z^{2n}}{(2n)!}}&={\frac {z^{3}}{3(e^{z}-e^{-z}-2z)}}.\end{aligned}}} The next table summarizes several notable cases of inversion relations obtained from exponential generating functions in section 3.4 of Riordan's book. ==== Multinomial inverses ==== The inverse relations used in formulating the binomial transform cited in the previous subsection are generalized to corresponding two-index inverse relations for sequences of two indices, and to multinomial inversion formulas for sequences of j ≥ 3 {\displaystyle j\geq 3} indices involving the binomial coefficients in Riordan. In particular, we have the form of a two-index inverse relation given by a m n = ∑ j = 0 m ∑ k = 0 n ( m j ) ( n k ) ( − 1 ) j + k b j k ⟷ b m n = ∑ j = 0 m ∑ k = 0 n ( m j ) ( n k ) ( − 1 ) j + k a j k , {\displaystyle a_{mn}=\sum _{j=0}^{m}\sum _{k=0}^{n}{\binom {m}{j}}{\binom {n}{k}}(-1)^{j+k}b_{jk}\quad \longleftrightarrow \quad b_{mn}=\sum _{j=0}^{m}\sum _{k=0}^{n}{\binom {m}{j}}{\binom {n}{k}}(-1)^{j+k}a_{jk},} and the more general form of a multinomial pair of inversion formulas given by a n 1 n 2 ⋯ n j = ∑ k 1 , … , k j ( n 1 k 1 ) ⋯ ( n j k j ) ( − 1 ) k 1 + ⋯ + k j b k 1 k 2 ⋯ k j ⟷ b n 1 n 2 ⋯ n j = ∑ k 1 , … , k j ( n 1 k 1 ) ⋯ ( n j k j ) ( − 1 ) k 1 + ⋯ + k j a k 1 k 2 ⋯ k j . {\displaystyle a_{n_{1}n_{2}\cdots n_{j}}=\sum _{k_{1},\ldots ,k_{j}}{\binom {n_{1}}{k_{1}}}\cdots {\binom {n_{j}}{k_{j}}}(-1)^{k_{1}+\cdots +k_{j}}b_{k_{1}k_{2}\cdots k_{j}}\quad \longleftrightarrow \quad b_{n_{1}n_{2}\cdots n_{j}}=\sum _{k_{1},\ldots ,k_{j}}{\binom {n_{1}}{k_{1}}}\cdots {\binom {n_{j}}{k_{j}}}(-1)^{k_{1}+\cdots +k_{j}}a_{k_{1}k_{2}\cdots k_{j}}.} == Notes == == References == Comtet, L. (1974). Advanced Combinatorics (PDF). D. Reidel Publishing Company. ISBN 9027703809. Archived from the original (PDF) on 2017-06-24. 
Retrieved 2017-02-10. Flajolet, P.; Sedgewick, R. (2010). Analytic Combinatorics. Cambridge University Press. ISBN 978-0-521-89806-5. Graham, R. L.; Knuth, D. E.; Patashnik, O. (1994). Concrete Mathematics: A Foundation for Computer Science (2nd ed.). Addison-Wesley. ISBN 0201558025. Knuth, D. E. (1997). The Art of Computer Programming: Fundamental Algorithms. Vol. 1. Addison-Wesley. ISBN 0-201-89683-4. Lando, S. K. (2002). Lectures on Generating Functions. American Mathematical Society. ISBN 0-8218-3481-9. Olver, F. W. J.; Lozier, D. W.; Boisvert, R. F.; Clark, C. W. (2010). NIST Handbook of Mathematical Functions. Cambridge University Press. ISBN 978-0-521-14063-8. Riordan, J. (1968). Combinatorial Identities. Wiley and Sons. Roman, S. (1984). The Umbral Calculus. Dover Publications. ISBN 0-486-44139-3. Schmidt, M. D. (3 Nov 2016). "Zeta Series Generating Function Transformations Related to Generalized Stirling Numbers and Partial Sums of the Hurwitz Zeta Function". arXiv:1611.00957 [math.CO]. Schmidt, M. D. (30 Oct 2016). "Zeta Series Generating Function Transformations Related to Polylogarithm Functions and the k-Order Harmonic Numbers". arXiv:1610.09666 [math.CO]. Schmidt, M. D. (2017). "Jacobi-Type Continued Fractions for the Ordinary Generating Functions of Generalized Factorial Functions". Journal of Integer Sequences. 20. arXiv:1610.09691. Schmidt, M. D. (9 Sep 2016). "Square Series Generating Function Transformations". arXiv:1609.02803 [math.NT]. Stanley, R. P. (1999). Enumerative Combinatorics. Vol. 2. Cambridge University Press. ISBN 978-0-521-78987-5. == External links == Why don't they teach Newton's calculus of 'What comes next?' - Mathologer
Wikipedia/Generating_function_transformation
In mathematics, the derivative is a fundamental tool that quantifies the sensitivity to change of a function's output with respect to its input. The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. The tangent line is the best linear approximation of the function near that input value. For this reason, the derivative is often described as the instantaneous rate of change, the ratio of the instantaneous change in the dependent variable to that of the independent variable. The process of finding a derivative is called differentiation. There are multiple different notations for differentiation. Leibniz notation, named after Gottfried Wilhelm Leibniz, is represented as the ratio of two differentials, whereas prime notation is written by adding a prime mark. Higher order notations represent repeated differentiation, and they are usually denoted in Leibniz notation by adding superscripts to the differentials, and in prime notation by adding additional prime marks. The higher order derivatives can be applied in physics; for example, while the first derivative of the position of a moving object with respect to time is the object's velocity, how the position changes as time advances, the second derivative is the object's acceleration, how the velocity changes as time advances. Derivatives can be generalized to functions of several real variables. In this case, the derivative is reinterpreted as a linear transformation whose graph is (after an appropriate translation) the best linear approximation to the graph of the original function. The Jacobian matrix is the matrix that represents this linear transformation with respect to the basis given by the choice of independent and dependent variables. It can be calculated in terms of the partial derivatives with respect to the independent variables. For a real-valued function of several variables, the Jacobian matrix reduces to the gradient vector. == Definition == === As a limit === A function of a real variable f ( x ) {\displaystyle f(x)} is differentiable at a point a {\displaystyle a} of its domain, if its domain contains an open interval containing a {\displaystyle a} , and the limit L = lim h → 0 f ( a + h ) − f ( a ) h {\displaystyle L=\lim _{h\to 0}{\frac {f(a+h)-f(a)}{h}}} exists. This means that, for every positive real number ε {\displaystyle \varepsilon } , there exists a positive real number δ {\displaystyle \delta } such that, for every h {\displaystyle h} such that | h | < δ {\displaystyle |h|<\delta } and h ≠ 0 {\displaystyle h\neq 0} , f ( a + h ) {\displaystyle f(a+h)} is defined, and | L − f ( a + h ) − f ( a ) h | < ε , {\displaystyle \left|L-{\frac {f(a+h)-f(a)}{h}}\right|<\varepsilon ,} where the vertical bars denote the absolute value. This is an example of the (ε, δ)-definition of limit. If the function f {\displaystyle f} is differentiable at a {\displaystyle a} , that is, if the limit L {\displaystyle L} exists, then this limit is called the derivative of f {\displaystyle f} at a {\displaystyle a} . Multiple notations for the derivative exist.
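Before turning to notation, the limit definition can be illustrated numerically. The short Python sketch below (our own illustration; the function f(x) = x² and the point a = 3 are arbitrary choices) tabulates the difference quotient as h shrinks toward zero:

# The difference quotient (f(a + h) - f(a)) / h should approach f'(3) = 6.
def difference_quotient(f, a, h):
    return (f(a + h) - f(a)) / h

f = lambda x: x**2
for h in [0.1, 0.01, 0.001, 1e-06]:
    print(h, difference_quotient(f, 3.0, h))
# Printed values 6.1, 6.01, 6.001, ... tend to the limit 6 as h -> 0.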
The derivative of f {\displaystyle f} at a {\displaystyle a} can be denoted ⁠ f ′ ( a ) {\displaystyle f'(a)} ⁠, read as "⁠ f {\displaystyle f} ⁠ prime of ⁠ a {\displaystyle a} ⁠"; or it can be denoted ⁠ d f d x ( a ) {\displaystyle \textstyle {\frac {df}{dx}}(a)} ⁠, read as "the derivative of f {\displaystyle f} with respect to x {\displaystyle x} at ⁠ a {\displaystyle a} ⁠" or "⁠ d f {\displaystyle df} ⁠ by (or over) d x {\displaystyle dx} at ⁠ a {\displaystyle a} ⁠". See § Notation below. If f {\displaystyle f} is a function that has a derivative at every point in its domain, then a function can be defined by mapping every point x {\displaystyle x} to the value of the derivative of f {\displaystyle f} at x {\displaystyle x} . This function is written f ′ {\displaystyle f'} and is called the derivative function or the derivative of ⁠ f {\displaystyle f} ⁠. The function f {\displaystyle f} sometimes has a derivative at most, but not all, points of its domain. The function whose value at a {\displaystyle a} equals f ′ ( a ) {\displaystyle f'(a)} whenever f ′ ( a ) {\displaystyle f'(a)} is defined and elsewhere is undefined is also called the derivative of ⁠ f {\displaystyle f} ⁠. It is still a function, but its domain may be smaller than the domain of f {\displaystyle f} . For example, let f {\displaystyle f} be the squaring function: f ( x ) = x 2 {\displaystyle f(x)=x^{2}} . Then the quotient in the definition of the derivative is f ( a + h ) − f ( a ) h = ( a + h ) 2 − a 2 h = a 2 + 2 a h + h 2 − a 2 h = 2 a + h . {\displaystyle {\frac {f(a+h)-f(a)}{h}}={\frac {(a+h)^{2}-a^{2}}{h}}={\frac {a^{2}+2ah+h^{2}-a^{2}}{h}}=2a+h.} The division in the last step is valid as long as h ≠ 0 {\displaystyle h\neq 0} . The closer h {\displaystyle h} is to ⁠ 0 {\displaystyle 0} ⁠, the closer this expression becomes to the value 2 a {\displaystyle 2a} . The limit exists, and for every input a {\displaystyle a} the limit is 2 a {\displaystyle 2a} . So, the derivative of the squaring function is the doubling function: ⁠ f ′ ( x ) = 2 x {\displaystyle f'(x)=2x} ⁠. The ratio in the definition of the derivative is the slope of the line through two points on the graph of the function ⁠ f {\displaystyle f} ⁠, specifically the points ( a , f ( a ) ) {\displaystyle (a,f(a))} and ( a + h , f ( a + h ) ) {\displaystyle (a+h,f(a+h))} . As h {\displaystyle h} is made smaller, these points grow closer together, and the slope of this line approaches the limiting value, the slope of the tangent to the graph of f {\displaystyle f} at a {\displaystyle a} . In other words, the derivative is the slope of the tangent. === Using infinitesimals === One way to think of the derivative d f d x ( a ) {\textstyle {\frac {df}{dx}}(a)} is as the ratio of an infinitesimal change in the output of the function f {\displaystyle f} to an infinitesimal change in its input. In order to make this intuition rigorous, a system of rules for manipulating infinitesimal quantities is required. The system of hyperreal numbers is a way of treating infinite and infinitesimal quantities. The hyperreals are an extension of the real numbers that contain numbers greater than anything of the form 1 + 1 + ⋯ + 1 {\displaystyle 1+1+\cdots +1} for any finite number of terms. Such numbers are infinite, and their reciprocals are infinitesimals. The application of hyperreal numbers to the foundations of calculus is called nonstandard analysis. 
This provides a way to define the basic concepts of calculus such as the derivative and integral in terms of infinitesimals, thereby giving a precise meaning to the d {\displaystyle d} in the Leibniz notation. Thus, the derivative of f ( x ) {\displaystyle f(x)} becomes f ′ ( x ) = st ⁡ ( f ( x + d x ) − f ( x ) d x ) {\displaystyle f'(x)=\operatorname {st} \left({\frac {f(x+dx)-f(x)}{dx}}\right)} for an arbitrary infinitesimal ⁠ d x {\displaystyle dx} ⁠, where st {\displaystyle \operatorname {st} } denotes the standard part function, which "rounds off" each finite hyperreal to the nearest real. Taking the squaring function f ( x ) = x 2 {\displaystyle f(x)=x^{2}} as an example again, f ′ ( x ) = st ⁡ ( x 2 + 2 x ⋅ d x + ( d x ) 2 − x 2 d x ) = st ⁡ ( 2 x ⋅ d x + ( d x ) 2 d x ) = st ⁡ ( 2 x ⋅ d x d x + ( d x ) 2 d x ) = st ⁡ ( 2 x + d x ) = 2 x . {\displaystyle {\begin{aligned}f'(x)&=\operatorname {st} \left({\frac {x^{2}+2x\cdot dx+(dx)^{2}-x^{2}}{dx}}\right)\\&=\operatorname {st} \left({\frac {2x\cdot dx+(dx)^{2}}{dx}}\right)\\&=\operatorname {st} \left({\frac {2x\cdot dx}{dx}}+{\frac {(dx)^{2}}{dx}}\right)\\&=\operatorname {st} \left(2x+dx\right)\\&=2x.\end{aligned}}} == Continuity and differentiability == If f {\displaystyle f} is differentiable at ⁠ a {\displaystyle a} ⁠, then f {\displaystyle f} must also be continuous at a {\displaystyle a} . As an example, choose a point a {\displaystyle a} and let f {\displaystyle f} be the step function that returns the value 1 for all x {\displaystyle x} less than ⁠ a {\displaystyle a} ⁠, and returns a different value 10 for all x {\displaystyle x} greater than or equal to a {\displaystyle a} . The function f {\displaystyle f} cannot have a derivative at a {\displaystyle a} . If h {\displaystyle h} is negative, then a + h {\displaystyle a+h} is on the low part of the step, so the secant line from a {\displaystyle a} to a + h {\displaystyle a+h} is very steep; as h {\displaystyle h} tends to zero, the slope tends to infinity. If h {\displaystyle h} is positive, then a + h {\displaystyle a+h} is on the high part of the step, so the secant line from a {\displaystyle a} to a + h {\displaystyle a+h} has slope zero. Consequently, the secant lines do not approach any single slope, so the limit of the difference quotient does not exist. However, even if a function is continuous at a point, it may not be differentiable there. For example, the absolute value function given by f ( x ) = | x | {\displaystyle f(x)=|x|} is continuous at ⁠ x = 0 {\displaystyle x=0} ⁠, but it is not differentiable there. If h {\displaystyle h} is positive, then the slope of the secant line from 0 to h {\displaystyle h} is one; if h {\displaystyle h} is negative, then the slope of the secant line from 0 {\displaystyle 0} to h {\displaystyle h} is ⁠ − 1 {\displaystyle -1} ⁠. This can be seen graphically as a "kink" or a "cusp" in the graph at x = 0 {\displaystyle x=0} . Even a function with a smooth graph is not differentiable at a point where its tangent is vertical: For instance, the function given by f ( x ) = x 1 / 3 {\displaystyle f(x)=x^{1/3}} is not differentiable at x = 0 {\displaystyle x=0} . In summary, a function that has a derivative is continuous, but there are continuous functions that do not have a derivative. Most functions that occur in practice have derivatives at all points or almost every point. Early in the history of calculus, many mathematicians assumed that a continuous function was differentiable at most points. 
Under mild conditions (for example, if the function is a monotone or a Lipschitz function), this is true. However, in 1872, Weierstrass found the first example of a function that is continuous everywhere but differentiable nowhere. This example is now known as the Weierstrass function. In 1931, Stefan Banach proved that the set of functions that have a derivative at some point is a meager set in the space of all continuous functions. Informally, this means that hardly any random continuous functions have a derivative at even one point. == Notation == One common way of writing the derivative of a function is Leibniz notation, introduced by Gottfried Wilhelm Leibniz in 1675, which denotes a derivative as the quotient of two differentials, such as d y {\displaystyle dy} and ⁠ d x {\displaystyle dx} ⁠. It is still commonly used when the equation y = f ( x ) {\displaystyle y=f(x)} is viewed as a functional relationship between dependent and independent variables. The first derivative is denoted by ⁠ d y d x {\displaystyle \textstyle {\frac {dy}{dx}}} ⁠, read as "the derivative of y {\displaystyle y} with respect to ⁠ x {\displaystyle x} ⁠". This derivative can alternately be treated as the application of a differential operator to a function, d y d x = d d x f ( x ) . {\textstyle {\frac {dy}{dx}}={\frac {d}{dx}}f(x).} Higher derivatives are expressed using the notation d n y d x n {\textstyle {\frac {d^{n}y}{dx^{n}}}} for the n {\displaystyle n} -th derivative of y = f ( x ) {\displaystyle y=f(x)} . These are abbreviations for multiple applications of the derivative operator; for example, d 2 y d x 2 = d d x ( d d x f ( x ) ) . {\textstyle {\frac {d^{2}y}{dx^{2}}}={\frac {d}{dx}}{\Bigl (}{\frac {d}{dx}}f(x){\Bigr )}.} Unlike some alternatives, Leibniz notation involves explicit specification of the variable for differentiation, in the denominator, which removes ambiguity when working with multiple interrelated quantities. The derivative of a composed function can be expressed using the chain rule: if u = g ( x ) {\displaystyle u=g(x)} and y = f ( g ( x ) ) {\displaystyle y=f(g(x))} then d y d x = d y d u ⋅ d u d x . {\textstyle {\frac {dy}{dx}}={\frac {dy}{du}}\cdot {\frac {du}{dx}}.} Another common notation for differentiation is by using the prime mark in the symbol of a function ⁠ f ( x ) {\displaystyle f(x)} ⁠. This notation, due to Joseph-Louis Lagrange, is now known as prime notation. The first derivative is written as ⁠ f ′ ( x ) {\displaystyle f'(x)} ⁠, read as "⁠ f {\displaystyle f} ⁠ prime of ⁠ x {\displaystyle x} ⁠", or ⁠ y ′ {\displaystyle y'} ⁠, read as "⁠ y {\displaystyle y} ⁠ prime". Similarly, the second and the third derivatives can be written as f ″ {\displaystyle f''} and ⁠ f ‴ {\displaystyle f'''} ⁠, respectively. For denoting the number of higher derivatives beyond this point, some authors use Roman numerals in superscript, whereas others place the number in parentheses, such as f i v {\displaystyle f^{\mathrm {iv} }} or ⁠ f ( 4 ) {\displaystyle f^{(4)}} ⁠. The latter notation generalizes to yield the notation f ( n ) {\displaystyle f^{(n)}} for the ⁠ n {\displaystyle n} ⁠th derivative of ⁠ f {\displaystyle f} ⁠. In Newton's notation or the dot notation, a dot is placed over a symbol to represent a time derivative. If y {\displaystyle y} is a function of ⁠ t {\displaystyle t} ⁠, then the first and second derivatives can be written as y ˙ {\displaystyle {\dot {y}}} and ⁠ y ¨ {\displaystyle {\ddot {y}}} ⁠, respectively. 
This notation is used exclusively for derivatives with respect to time or arc length. It is typically used in differential equations in physics and differential geometry. However, the dot notation becomes unmanageable for high-order derivatives (of order 4 or more) and cannot deal with multiple independent variables. Another notation is D-notation, which represents the differential operator by the symbol ⁠ D {\displaystyle D} ⁠. The first derivative is written D f ( x ) {\displaystyle Df(x)} and higher derivatives are written with a superscript, so the n {\displaystyle n} -th derivative is ⁠ D n f ( x ) {\displaystyle D^{n}f(x)} ⁠. This notation is sometimes called Euler notation, although it seems that Leonhard Euler did not use it, and the notation was introduced by Louis François Antoine Arbogast. To indicate a partial derivative, the variable differentiated by is indicated with a subscript, for example given the function ⁠ u = f ( x , y ) {\displaystyle u=f(x,y)} ⁠, its partial derivative with respect to x {\displaystyle x} can be written D x u {\displaystyle D_{x}u} or ⁠ D x f ( x , y ) {\displaystyle D_{x}f(x,y)} ⁠. Higher partial derivatives can be indicated by superscripts or multiple subscripts, e.g. D x y f ( x , y ) = ∂ ∂ y ( ∂ ∂ x f ( x , y ) ) {\textstyle D_{xy}f(x,y)={\frac {\partial }{\partial y}}{\Bigl (}{\frac {\partial }{\partial x}}f(x,y){\Bigr )}} and ⁠ D x 2 f ( x , y ) = ∂ ∂ x ( ∂ ∂ x f ( x , y ) ) {\displaystyle \textstyle D_{x}^{2}f(x,y)={\frac {\partial }{\partial x}}{\Bigl (}{\frac {\partial }{\partial x}}f(x,y){\Bigr )}} ⁠. == Rules of computation == In principle, the derivative of a function can be computed from the definition by considering the difference quotient and computing its limit. Once the derivatives of a few simple functions are known, the derivatives of other functions are more easily computed using rules for obtaining derivatives of more complicated functions from simpler ones. This process of finding a derivative is known as differentiation. === Rules for basic functions === The following are the rules for the derivatives of the most common basic functions. Here, a {\displaystyle a} is a real number, and e {\displaystyle e} is the base of the natural logarithm, approximately 2.71828. 
Derivatives of powers: d d x x a = a x a − 1 {\displaystyle {\frac {d}{dx}}x^{a}=ax^{a-1}} Functions of exponential, natural logarithm, and logarithm with general base: d d x e x = e x {\displaystyle {\frac {d}{dx}}e^{x}=e^{x}} d d x a x = a x ln ⁡ ( a ) {\displaystyle {\frac {d}{dx}}a^{x}=a^{x}\ln(a)} , for a > 0 {\displaystyle a>0} d d x ln ⁡ ( x ) = 1 x {\displaystyle {\frac {d}{dx}}\ln(x)={\frac {1}{x}}} , for x > 0 {\displaystyle x>0} d d x log a ⁡ ( x ) = 1 x ln ⁡ ( a ) {\displaystyle {\frac {d}{dx}}\log _{a}(x)={\frac {1}{x\ln(a)}}} , for x , a > 0 {\displaystyle x,a>0} Trigonometric functions: d d x sin ⁡ ( x ) = cos ⁡ ( x ) {\displaystyle {\frac {d}{dx}}\sin(x)=\cos(x)} d d x cos ⁡ ( x ) = − sin ⁡ ( x ) {\displaystyle {\frac {d}{dx}}\cos(x)=-\sin(x)} d d x tan ⁡ ( x ) = sec 2 ⁡ ( x ) = 1 cos 2 ⁡ ( x ) = 1 + tan 2 ⁡ ( x ) {\displaystyle {\frac {d}{dx}}\tan(x)=\sec ^{2}(x)={\frac {1}{\cos ^{2}(x)}}=1+\tan ^{2}(x)} Inverse trigonometric functions: d d x arcsin ⁡ ( x ) = 1 1 − x 2 {\displaystyle {\frac {d}{dx}}\arcsin(x)={\frac {1}{\sqrt {1-x^{2}}}}} , for − 1 < x < 1 {\displaystyle -1<x<1} d d x arccos ⁡ ( x ) = − 1 1 − x 2 {\displaystyle {\frac {d}{dx}}\arccos(x)=-{\frac {1}{\sqrt {1-x^{2}}}}} , for − 1 < x < 1 {\displaystyle -1<x<1} d d x arctan ⁡ ( x ) = 1 1 + x 2 {\displaystyle {\frac {d}{dx}}\arctan(x)={\frac {1}{1+x^{2}}}} === Rules for combined functions === Given functions f {\displaystyle f} and g {\displaystyle g} , the following are some of the most basic rules for deducing the derivative of functions from derivatives of basic functions. Constant rule: if f {\displaystyle f} is constant, then for all x {\displaystyle x} , f ′ ( x ) = 0. {\displaystyle f'(x)=0.} Sum rule: ( α f + β g ) ′ = α f ′ + β g ′ {\displaystyle (\alpha f+\beta g)'=\alpha f'+\beta g'} for all functions f {\displaystyle f} and g {\displaystyle g} and all real numbers α {\displaystyle \alpha } and β {\displaystyle \beta } . Product rule: ( f g ) ′ = f ′ g + f g ′ {\displaystyle (fg)'=f'g+fg'} for all functions f {\displaystyle f} and g {\displaystyle g} . As a special case, this rule includes the fact ( α f ) ′ = α f ′ {\displaystyle (\alpha f)'=\alpha f'} whenever α {\displaystyle \alpha } is a constant because α ′ f = 0 ⋅ f = 0 {\displaystyle \alpha 'f=0\cdot f=0} by the constant rule. Quotient rule: ( f g ) ′ = f ′ g − f g ′ g 2 {\displaystyle \left({\frac {f}{g}}\right)'={\frac {f'g-fg'}{g^{2}}}} for all functions f {\displaystyle f} and g {\displaystyle g} at all inputs where g ≠ 0. Chain rule for composite functions: If f ( x ) = h ( g ( x ) ) {\displaystyle f(x)=h(g(x))} , then f ′ ( x ) = h ′ ( g ( x ) ) ⋅ g ′ ( x ) . {\displaystyle f'(x)=h'(g(x))\cdot g'(x).} === Computation example === The derivative of the function given by f ( x ) = x 4 + sin ⁡ ( x 2 ) − ln ⁡ ( x ) e x + 7 {\displaystyle f(x)=x^{4}+\sin \left(x^{2}\right)-\ln(x)e^{x}+7} is f ′ ( x ) = 4 x ( 4 − 1 ) + d ( x 2 ) d x cos ⁡ ( x 2 ) − d ( ln ⁡ x ) d x e x − ln ⁡ ( x ) d ( e x ) d x + 0 = 4 x 3 + 2 x cos ⁡ ( x 2 ) − 1 x e x − ln ⁡ ( x ) e x . {\displaystyle {\begin{aligned}f'(x)&=4x^{(4-1)}+{\frac {d\left(x^{2}\right)}{dx}}\cos \left(x^{2}\right)-{\frac {d\left(\ln {x}\right)}{dx}}e^{x}-\ln(x){\frac {d\left(e^{x}\right)}{dx}}+0\\&=4x^{3}+2x\cos \left(x^{2}\right)-{\frac {1}{x}}e^{x}-\ln(x)e^{x}.\end{aligned}}} Here the second term was computed using the chain rule and the third term using the product rule.
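This computation is easy to confirm symbolically. The following minimal Python sketch using the sympy library (our own verification aid, not part of the article) differentiates the same function, compares against the result above, and isolates the chain rule factor appearing in the second term:

import sympy as sp

x = sp.symbols('x', positive=True)   # positive=True keeps log(x) real-valued
f = x**4 + sp.sin(x**2) - sp.log(x)*sp.exp(x) + 7

derivative = sp.diff(f, x)
expected = 4*x**3 + 2*x*sp.cos(x**2) - sp.exp(x)/x - sp.log(x)*sp.exp(x)
assert sp.simplify(derivative - expected) == 0

# Chain rule factor in the second term: d/dx sin(x**2) = cos(x**2) * 2x.
u = sp.symbols('u')
assert sp.diff(sp.sin(u), u).subs(u, x**2)*sp.diff(x**2, x) == 2*x*sp.cos(x**2)
print(derivative)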
The known derivatives of the elementary functions x 2 {\displaystyle x^{2}} , x 4 {\displaystyle x^{4}} , sin ⁡ ( x ) {\displaystyle \sin(x)} , ln ⁡ ( x ) {\displaystyle \ln(x)} , and exp ⁡ ( x ) = e x {\displaystyle \exp(x)=e^{x}} , as well as the constant 7 {\displaystyle 7} , were also used. == Higher-order derivatives == Higher order derivatives are the result of differentiating a function repeatedly. Given that f {\displaystyle f} is a differentiable function, the derivative of f {\displaystyle f} is the first derivative, denoted as ⁠ f ′ {\displaystyle f'} ⁠. The derivative of f ′ {\displaystyle f'} is the second derivative, denoted as ⁠ f ″ {\displaystyle f''} ⁠, and the derivative of f ″ {\displaystyle f''} is the third derivative, denoted as ⁠ f ‴ {\displaystyle f'''} ⁠. By continuing this process, if it exists, the ⁠ n {\displaystyle n} ⁠th derivative is the derivative of the ⁠ ( n − 1 ) {\displaystyle (n-1)} ⁠th derivative or the derivative of order ⁠ n {\displaystyle n} ⁠. As has been discussed above, the generalization of derivative of a function f {\displaystyle f} may be denoted as ⁠ f ( n ) {\displaystyle f^{(n)}} ⁠. A function that has k {\displaystyle k} successive derivatives is called k {\displaystyle k} times differentiable. If the k {\displaystyle k} -th derivative is continuous, then the function is said to be of differentiability class ⁠ C k {\displaystyle C^{k}} ⁠. A function that has infinitely many derivatives is called infinitely differentiable or smooth. Any polynomial function is infinitely differentiable; taking derivatives repeatedly will eventually result in a constant function, and all subsequent derivatives of that function are zero. One application of higher-order derivatives is in physics. Suppose that a function represents the position of an object at the time. The first derivative of that function is the velocity of an object with respect to time, the second derivative of the function is the acceleration of an object with respect to time, and the third derivative is the jerk. == In other dimensions == === Vector-valued functions === A vector-valued function y {\displaystyle \mathbf {y} } of a real variable sends real numbers to vectors in some vector space R n {\displaystyle \mathbb {R} ^{n}} . A vector-valued function can be split up into its coordinate functions y 1 ( t ) , y 2 ( t ) , … , y n ( t ) {\displaystyle y_{1}(t),y_{2}(t),\dots ,y_{n}(t)} , meaning that y = ( y 1 ( t ) , y 2 ( t ) , … , y n ( t ) ) {\displaystyle \mathbf {y} =(y_{1}(t),y_{2}(t),\dots ,y_{n}(t))} . This includes, for example, parametric curves in R 2 {\displaystyle \mathbb {R} ^{2}} or R 3 {\displaystyle \mathbb {R} ^{3}} . The coordinate functions are real-valued functions, so the above definition of derivative applies to them. The derivative of y ( t ) {\displaystyle \mathbf {y} (t)} is defined to be the vector, called the tangent vector, whose coordinates are the derivatives of the coordinate functions. That is, y ′ ( t ) = lim h → 0 y ( t + h ) − y ( t ) h , {\displaystyle \mathbf {y} '(t)=\lim _{h\to 0}{\frac {\mathbf {y} (t+h)-\mathbf {y} (t)}{h}},} if the limit exists. The subtraction in the numerator is the subtraction of vectors, not scalars. If the derivative of y {\displaystyle \mathbf {y} } exists for every value of ⁠ t {\displaystyle t} ⁠, then y ′ {\displaystyle \mathbf {y} '} is another vector-valued function. === Partial derivatives === Functions can depend upon more than one variable. 
A partial derivative of a function of several variables is its derivative with respect to one of those variables, with the others held constant. Partial derivatives are used in vector calculus and differential geometry. As with ordinary derivatives, multiple notations exist: the partial derivative of a function f ( x , y , … ) {\displaystyle f(x,y,\dots )} with respect to the variable x {\displaystyle x} is variously denoted by f x ′ {\displaystyle f_{x}'} , f x {\displaystyle f_{x}} , ∂ x f {\displaystyle \partial _{x}f} , or ∂ f ∂ x {\displaystyle {\frac {\partial f}{\partial x}}} , among other possibilities. It can be thought of as the rate of change of the function in the x {\displaystyle x} -direction. Here ∂ is a rounded d called the partial derivative symbol. To distinguish it from the letter d, ∂ is sometimes pronounced "der", "del", or "partial" instead of "dee". For example, let f ( x , y ) = x 2 + x y + y 2 {\displaystyle f(x,y)=x^{2}+xy+y^{2}} , then the partial derivatives of the function f {\displaystyle f} with respect to the variables x {\displaystyle x} and y {\displaystyle y} are, respectively: ∂ f ∂ x = 2 x + y , ∂ f ∂ y = x + 2 y . {\displaystyle {\frac {\partial f}{\partial x}}=2x+y,\qquad {\frac {\partial f}{\partial y}}=x+2y.} In general, the partial derivative of a function f ( x 1 , … , x n ) {\displaystyle f(x_{1},\dots ,x_{n})} in the direction x i {\displaystyle x_{i}} at the point ( a 1 , … , a n ) {\displaystyle (a_{1},\dots ,a_{n})} is defined to be: ∂ f ∂ x i ( a 1 , … , a n ) = lim h → 0 f ( a 1 , … , a i + h , … , a n ) − f ( a 1 , … , a i , … , a n ) h . {\displaystyle {\frac {\partial f}{\partial x_{i}}}(a_{1},\ldots ,a_{n})=\lim _{h\to 0}{\frac {f(a_{1},\ldots ,a_{i}+h,\ldots ,a_{n})-f(a_{1},\ldots ,a_{i},\ldots ,a_{n})}{h}}.} This is fundamental for the study of functions of several real variables. Let f ( x 1 , … , x n ) {\displaystyle f(x_{1},\dots ,x_{n})} be such a real-valued function. If all partial derivatives of f {\displaystyle f} with respect to x j {\displaystyle x_{j}} are defined at the point ( a 1 , … , a n ) {\displaystyle (a_{1},\dots ,a_{n})} , these partial derivatives define the vector ∇ f ( a 1 , … , a n ) = ( ∂ f ∂ x 1 ( a 1 , … , a n ) , … , ∂ f ∂ x n ( a 1 , … , a n ) ) , {\displaystyle \nabla f(a_{1},\ldots ,a_{n})=\left({\frac {\partial f}{\partial x_{1}}}(a_{1},\ldots ,a_{n}),\ldots ,{\frac {\partial f}{\partial x_{n}}}(a_{1},\ldots ,a_{n})\right),} which is called the gradient of f {\displaystyle f} at a {\displaystyle a} . If f {\displaystyle f} is differentiable at every point in some domain, then the gradient is a vector-valued function ∇ f {\displaystyle \nabla f} that maps the point ( a 1 , … , a n ) {\displaystyle (a_{1},\dots ,a_{n})} to the vector ∇ f ( a 1 , … , a n ) {\displaystyle \nabla f(a_{1},\dots ,a_{n})} . Consequently, the gradient determines a vector field. === Directional derivatives === If f {\displaystyle f} is a real-valued function on R n {\displaystyle \mathbb {R} ^{n}} , then the partial derivatives of f {\displaystyle f} measure its variation in the direction of the coordinate axes. For example, if f {\displaystyle f} is a function of x {\displaystyle x} and y {\displaystyle y} , then its partial derivatives measure the variation in f {\displaystyle f} in the x {\displaystyle x} and y {\displaystyle y} directions. However, they do not directly measure the variation of f {\displaystyle f} in any other direction, such as along the diagonal line y = x {\displaystyle y=x} . These are measured using directional derivatives.
Given a vector ⁠ v = ( v 1 , … , v n ) {\displaystyle \mathbf {v} =(v_{1},\ldots ,v_{n})} ⁠, the directional derivative of f {\displaystyle f} in the direction of v {\displaystyle \mathbf {v} } at the point x {\displaystyle \mathbf {x} } is: D v f ( x ) = lim h → 0 f ( x + h v ) − f ( x ) h . {\displaystyle D_{\mathbf {v} }{f}(\mathbf {x} )=\lim _{h\rightarrow 0}{\frac {f(\mathbf {x} +h\mathbf {v} )-f(\mathbf {x} )}{h}}.} If all the partial derivatives of f {\displaystyle f} exist and are continuous at ⁠ x {\displaystyle \mathbf {x} } ⁠, then they determine the directional derivative of f {\displaystyle f} in the direction v {\displaystyle \mathbf {v} } by the formula: D v f ( x ) = ∑ j = 1 n v j ∂ f ∂ x j . {\displaystyle D_{\mathbf {v} }{f}(\mathbf {x} )=\sum _{j=1}^{n}v_{j}{\frac {\partial f}{\partial x_{j}}}.} === Total derivative and Jacobian matrix === When f {\displaystyle f} is a function from an open subset of R n {\displaystyle \mathbb {R} ^{n}} to ⁠ R m {\displaystyle \mathbb {R} ^{m}} ⁠, then the directional derivative of f {\displaystyle f} in a chosen direction is the best linear approximation to f {\displaystyle f} at that point and in that direction. However, when ⁠ n > 1 {\displaystyle n>1} ⁠, no single directional derivative can give a complete picture of the behavior of f {\displaystyle f} . The total derivative gives a complete picture by considering all directions at once. That is, for any vector v {\displaystyle \mathbf {v} } starting at ⁠ a {\displaystyle \mathbf {a} } ⁠, the linear approximation formula holds: f ( a + v ) ≈ f ( a ) + f ′ ( a ) v . {\displaystyle f(\mathbf {a} +\mathbf {v} )\approx f(\mathbf {a} )+f'(\mathbf {a} )\mathbf {v} .} As with the single-variable derivative, f ′ ( a ) {\displaystyle f'(\mathbf {a} )} is chosen so that the error in this approximation is as small as possible. The total derivative of f {\displaystyle f} at a {\displaystyle \mathbf {a} } is the unique linear transformation f ′ ( a ) : R n → R m {\displaystyle f'(\mathbf {a} )\colon \mathbb {R} ^{n}\to \mathbb {R} ^{m}} such that lim h → 0 ‖ f ( a + h ) − ( f ( a ) + f ′ ( a ) h ) ‖ ‖ h ‖ = 0. {\displaystyle \lim _{\mathbf {h} \to 0}{\frac {\lVert f(\mathbf {a} +\mathbf {h} )-(f(\mathbf {a} )+f'(\mathbf {a} )\mathbf {h} )\rVert }{\lVert \mathbf {h} \rVert }}=0.} Here h {\displaystyle \mathbf {h} } is a vector in ⁠ R n {\displaystyle \mathbb {R} ^{n}} ⁠, so the norm in the denominator is the standard length on R n {\displaystyle \mathbb {R} ^{n}} . However, f ′ ( a ) h {\displaystyle f'(\mathbf {a} )\mathbf {h} } is a vector in ⁠ R m {\displaystyle \mathbb {R} ^{m}} ⁠, and the norm in the numerator is the standard length on R m {\displaystyle \mathbb {R} ^{m}} . If v {\displaystyle \mathbf {v} } is a vector starting at ⁠ a {\displaystyle \mathbf {a} } ⁠, then f ′ ( a ) v {\displaystyle f'(\mathbf {a} )\mathbf {v} } is called the pushforward of v {\displaystyle \mathbf {v} } by f {\displaystyle f} . If the total derivative exists at ⁠ a {\displaystyle \mathbf {a} } ⁠, then all the partial derivatives and directional derivatives of f {\displaystyle f} exist at ⁠ a {\displaystyle \mathbf {a} } ⁠, and for all ⁠ v {\displaystyle \mathbf {v} } ⁠, f ′ ( a ) v {\displaystyle f'(\mathbf {a} )\mathbf {v} } is the directional derivative of f {\displaystyle f} in the direction ⁠ v {\displaystyle \mathbf {v} } ⁠.
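The agreement between the limit definition of the directional derivative and the sum formula above can be checked numerically. A minimal, hedged sketch (the function name and step size are again illustrative):

```python
import numpy as np

def directional_derivative(f, x, v, h=1e-6):
    """Approximate D_v f(x) directly from the limit definition,
    via a central difference along the direction v."""
    x, v = np.asarray(x, dtype=float), np.asarray(v, dtype=float)
    return (f(x + h * v) - f(x - h * v)) / (2 * h)

# f(x, y) = x^2 + x*y + y^2 again; its gradient at (1, 2) is (4, 5).
f = lambda p: p[0]**2 + p[0] * p[1] + p[1]**2
v = np.array([1.0, 1.0])  # the diagonal direction y = x

# Should match the sum v_1 * df/dx_1 + v_2 * df/dx_2 = 4 + 5 = 9.
print(directional_derivative(f, [1.0, 2.0], v))  # approximately 9.0
```

For this quadratic function the central difference is exact up to rounding, returning 4 + 5 = 9 along the diagonal direction.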
If f {\displaystyle f} is written using coordinate functions, so that ⁠ f = ( f 1 , f 2 , … , f m ) {\displaystyle f=(f_{1},f_{2},\dots ,f_{m})} ⁠, then the total derivative can be expressed using the partial derivatives as a matrix. This matrix is called the Jacobian matrix of f {\displaystyle f} at a {\displaystyle \mathbf {a} } : f ′ ( a ) = Jac a = ( ∂ f i ∂ x j ) i j . {\displaystyle f'(\mathbf {a} )=\operatorname {Jac} _{\mathbf {a} }=\left({\frac {\partial f_{i}}{\partial x_{j}}}\right)_{ij}.} == Generalizations == The concept of a derivative can be extended to many other settings. The common thread is that the derivative of a function at a point serves as a linear approximation of the function at that point. An important generalization of the derivative concerns complex functions of complex variables, such as functions from (a domain in) the complex numbers C {\displaystyle \mathbb {C} } to ⁠ C {\displaystyle \mathbb {C} } ⁠. The notion of the derivative of such a function is obtained by replacing real variables with complex variables in the definition. If C {\displaystyle \mathbb {C} } is identified with R 2 {\displaystyle \mathbb {R} ^{2}} by writing a complex number z {\displaystyle z} as ⁠ x + i y {\displaystyle x+iy} ⁠, then a differentiable function from C {\displaystyle \mathbb {C} } to C {\displaystyle \mathbb {C} } is certainly differentiable as a function from R 2 {\displaystyle \mathbb {R} ^{2}} to R 2 {\displaystyle \mathbb {R} ^{2}} (in the sense that its partial derivatives all exist), but the converse is not true in general: the complex derivative only exists if the real derivative is complex linear, and this imposes relations between the partial derivatives called the Cauchy–Riemann equations – see holomorphic functions. Another generalization concerns functions between differentiable or smooth manifolds. Intuitively speaking, such a manifold M {\displaystyle M} is a space that can be approximated near each point x {\displaystyle x} by a vector space called its tangent space: the prototypical example is a smooth surface in ⁠ R 3 {\displaystyle \mathbb {R} ^{3}} ⁠. The derivative (or differential) of a (differentiable) map f : M → N {\displaystyle f:M\to N} between manifolds, at a point x {\displaystyle x} in ⁠ M {\displaystyle M} ⁠, is then a linear map from the tangent space of M {\displaystyle M} at x {\displaystyle x} to the tangent space of N {\displaystyle N} at ⁠ f ( x ) {\displaystyle f(x)} ⁠. The derivative function becomes a map between the tangent bundles of M {\displaystyle M} and ⁠ N {\displaystyle N} ⁠. This definition is used in differential geometry. Differentiation can also be defined for maps between vector spaces, such as Banach spaces; generalizations in that setting include the Gateaux derivative and the Fréchet derivative. One deficiency of the classical derivative is that very many functions are not differentiable. Nevertheless, there is a way of extending the notion of the derivative so that all continuous functions and many other functions can be differentiated using a concept known as the weak derivative. The idea is to embed the continuous functions in a larger space called the space of distributions and only require that a function is differentiable "on average". Properties of the derivative have inspired the introduction and study of many similar objects in algebra and topology; an example is differential algebra, in which a derivation is defined on structures from abstract algebra such as rings, ideals, and fields.
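The Cauchy–Riemann condition mentioned above can also be probed numerically: in compact form it reads ∂f/∂x + i ∂f/∂y = 0. A small sketch under the same finite-difference assumptions as before (the residual function is an illustrative construction, not a standard routine):

```python
import numpy as np

def cauchy_riemann_residual(f, z, h=1e-6):
    """Finite-difference residual of the compact Cauchy-Riemann
    condition df/dx + i*df/dy = 0 at the point z; a value near
    zero suggests the real derivative is complex linear."""
    fx = (f(z + h) - f(z - h)) / (2 * h)            # derivative along x
    fy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)  # derivative along y
    return abs(fx + 1j * fy)

print(cauchy_riemann_residual(lambda z: z**2, 1.0 + 2.0j))  # ~0: holomorphic
print(cauchy_riemann_residual(np.conj, 1.0 + 2.0j))         # ~2: not holomorphic
```

A residual near zero is consistent with complex differentiability, as for z ↦ z², while the conjugation map, though differentiable as a map of the real plane, fails the test.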
The discrete equivalent of differentiation is finite differences. The study of differential calculus is unified with the calculus of finite differences in time scale calculus. The arithmetic derivative is a function defined on the integers by means of their prime factorizations, in analogy with the product rule. == See also == Covariant derivative Derivation Exterior derivative Functional derivative Integral Lie derivative == Notes == == References == == External links == "Derivative", Encyclopedia of Mathematics, EMS Press, 2001 [1994] Khan Academy: "Newton, Leibniz, and Usain Bolt" Weisstein, Eric W. "Derivative". MathWorld. Online Derivative Calculator from Wolfram Alpha.
Wikipedia/Differentiation_(calculus)
In mathematics, time-scale calculus is a unification of the theory of difference equations with that of differential equations, unifying integral and differential calculus with the calculus of finite differences, offering a formalism for studying hybrid systems. It has applications in any field that requires simultaneous modelling of discrete and continuous data. It gives a new definition of a derivative such that if one differentiates a function defined on the real numbers then the definition is equivalent to standard differentiation, but if one uses a function defined on the integers then it is equivalent to the forward difference operator. == History == Time-scale calculus was introduced in 1988 by the German mathematician Stefan Hilger. However, similar ideas have been used before and go back at least to the introduction of the Riemann–Stieltjes integral, which unifies sums and integrals. == Dynamic equations == Many results concerning differential equations carry over quite easily to corresponding results for difference equations, while other results seem to be completely different from their continuous counterparts. The study of dynamic equations on time scales reveals such discrepancies, and helps avoid proving results twice—once for differential equations and once again for difference equations. The general idea is to prove a result for a dynamic equation where the domain of the unknown function is a so-called time scale (also known as a time-set), which may be an arbitrary closed subset of the reals. In this way, results apply not only to the set of real numbers or set of integers but to more general time scales such as a Cantor set. The three most popular examples of calculus on time scales are differential calculus, difference calculus, and quantum calculus. Dynamic equations on a time scale have a potential for applications such as in population dynamics. For example, they can model insect populations that evolve continuously while in season, die out in winter while their eggs are incubating or dormant, and then hatch in a new season, giving rise to a non-overlapping population. == Formal definitions == A time scale (or measure chain) is a closed subset of the real line R {\displaystyle \mathbb {R} } . The common notation for a general time scale is T {\displaystyle \mathbb {T} } . The two most commonly encountered examples of time scales are the real numbers R {\displaystyle \mathbb {R} } and the discrete time scale h Z {\displaystyle h\mathbb {Z} } . A single point in a time scale is defined as: t : t ∈ T {\displaystyle t:t\in \mathbb {T} } === Operations on time scales === The forward jump and backward jump operators represent the closest point in the time scale on the right and left of a given point t {\displaystyle t} , respectively. Formally: σ ( t ) = inf { s ∈ T : s > t } {\displaystyle \sigma (t)=\inf\{s\in \mathbb {T} :s>t\}} (forward shift/jump operator) ρ ( t ) = sup { s ∈ T : s < t } {\displaystyle \rho (t)=\sup\{s\in \mathbb {T} :s<t\}} (backward shift/jump operator) The graininess μ {\displaystyle \mu } is the distance from a point to the closest point on the right and is given by: μ ( t ) = σ ( t ) − t . {\displaystyle \mu (t)=\sigma (t)-t.} For a right-dense t {\displaystyle t} , σ ( t ) = t {\displaystyle \sigma (t)=t} and μ ( t ) = 0 {\displaystyle \mu (t)=0} . For a left-dense t {\displaystyle t} , ρ ( t ) = t . 
{\displaystyle \rho (t)=t.} === Classification of points === For any t ∈ T {\displaystyle t\in \mathbb {T} } , t {\displaystyle t} is: left dense if ρ ( t ) = t {\displaystyle \rho (t)=t} right dense if σ ( t ) = t {\displaystyle \sigma (t)=t} left scattered if ρ ( t ) < t {\displaystyle \rho (t)<t} right scattered if σ ( t ) > t {\displaystyle \sigma (t)>t} dense if both left dense and right dense isolated if both left scattered and right scattered For example, a single time scale can contain points of each kind: a point t 1 {\displaystyle t_{1}} that is dense, a point t 2 {\displaystyle t_{2}} that is left dense and right scattered, a point t 3 {\displaystyle t_{3}} that is isolated, and a point t 4 {\displaystyle t_{4}} that is left scattered and right dense. === Continuity === Continuity of a time scale is redefined as equivalent to density. A time scale is said to be right-continuous at point t {\displaystyle t} if it is right dense at point t {\displaystyle t} . Similarly, a time scale is said to be left-continuous at point t {\displaystyle t} if it is left dense at point t {\displaystyle t} . == Derivative == Take a function: f : T → R , {\displaystyle f:\mathbb {T} \to \mathbb {R} ,} (where R could be any Banach space, but is set to the real line for simplicity). Definition: The delta derivative (also Hilger derivative) f Δ ( t ) {\displaystyle f^{\Delta }(t)} exists if and only if: For every ε > 0 {\displaystyle \varepsilon >0} there exists a neighborhood U {\displaystyle U} of t {\displaystyle t} such that: | f ( σ ( t ) ) − f ( s ) − f Δ ( t ) ( σ ( t ) − s ) | ≤ ε | σ ( t ) − s | {\displaystyle \left|f(\sigma (t))-f(s)-f^{\Delta }(t)(\sigma (t)-s)\right|\leq \varepsilon \left|\sigma (t)-s\right|} for all s {\displaystyle s} in U {\displaystyle U} . Take T = R . {\displaystyle \mathbb {T} =\mathbb {R} .} Then σ ( t ) = t {\displaystyle \sigma (t)=t} , μ ( t ) = 0 {\displaystyle \mu (t)=0} , and f Δ = f ′ {\displaystyle f^{\Delta }=f'} , which is the derivative used in standard calculus. If T = Z {\displaystyle \mathbb {T} =\mathbb {Z} } (the integers), then σ ( t ) = t + 1 {\displaystyle \sigma (t)=t+1} , μ ( t ) = 1 {\displaystyle \mu (t)=1} , and f Δ = Δ f {\displaystyle f^{\Delta }=\Delta f} is the forward difference operator used in difference equations. == Integration == The delta integral is defined as the antiderivative with respect to the delta derivative. If F ( t ) {\displaystyle F(t)} has a continuous derivative f ( t ) = F Δ ( t ) {\displaystyle f(t)=F^{\Delta }(t)} , one sets ∫ r s f ( t ) Δ t = F ( s ) − F ( r ) . {\displaystyle \int _{r}^{s}f(t)\,\Delta t=F(s)-F(r).} == Laplace transform and z-transform == A Laplace transform can be defined for functions on time scales, which uses the same table of transforms for any arbitrary time scale. This transform can be used to solve dynamic equations on time scales. If the time scale is the non-negative integers then the transform is equal to a modified Z-transform: Z ′ { x [ z ] } = Z { x [ z + 1 ] } z + 1 {\displaystyle {\mathcal {Z}}'\{x[z]\}={\frac {{\mathcal {Z}}\{x[z+1]\}}{z+1}}} == Partial differentiation == Partial differential equations and partial difference equations are unified as partial dynamic equations on time scales. == Multiple integration == Multiple integration on time scales is treated in Bohner (2005). == Stochastic dynamic equations on time scales == Stochastic differential equations and stochastic difference equations can be generalized to stochastic dynamic equations on time scales.
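To make the jump operators and the delta derivative concrete, here is a minimal sketch for a finite, discrete time scale. The function names, and the convention that σ(t) = t at the maximum of the time scale, are assumptions of this illustration:

```python
import numpy as np

def sigma(T, t):
    """Forward jump operator: the closest point of the time scale T
    strictly to the right of t (with sigma(t) = t at the maximum)."""
    right = T[T > t]
    return right.min() if right.size else t

def graininess(T, t):
    """mu(t) = sigma(t) - t."""
    return sigma(T, t) - t

def delta_derivative(f, T, t):
    """Delta (Hilger) derivative at a right-scattered point:
    f^Delta(t) = (f(sigma(t)) - f(t)) / mu(t)."""
    mu = graininess(T, t)
    if mu == 0:
        raise ValueError("t is right-dense; use an ordinary derivative")
    return (f(sigma(T, t)) - f(t)) / mu

T = np.arange(0, 10)       # the time scale Z, truncated to 0..9
f = lambda t: t**2
print(delta_derivative(f, T, 3))  # (16 - 9) / 1 = 7 = 2t + 1 at t = 3
```

On this truncated integer time scale the sketch reproduces the forward difference Δ(t²) = 2t + 1, in agreement with the T = Z case above.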
== Measure theory on time scales == Associated with every time scale is a natural measure defined via μ Δ ( A ) = λ ( ρ − 1 ( A ) ) , {\displaystyle \mu ^{\Delta }(A)=\lambda (\rho ^{-1}(A)),} where λ {\displaystyle \lambda } denotes Lebesgue measure and ρ {\displaystyle \rho } is the backward shift operator defined on R {\displaystyle \mathbb {R} } . The delta integral turns out to be the usual Lebesgue–Stieltjes integral with respect to this measure ∫ r s f ( t ) Δ t = ∫ [ r , s ) f ( t ) d μ Δ ( t ) {\displaystyle \int _{r}^{s}f(t)\Delta t=\int _{[r,s)}f(t)d\mu ^{\Delta }(t)} and the delta derivative turns out to be the Radon–Nikodym derivative with respect to this measure f Δ ( t ) = d f d μ Δ ( t ) . {\displaystyle f^{\Delta }(t)={\frac {df}{d\mu ^{\Delta }}}(t).} == Distributions on time scales == The Dirac delta and Kronecker delta are unified on time scales as the Hilger delta: δ a H ( t ) = { 1 μ ( a ) , t = a 0 , t ≠ a {\displaystyle \delta _{a}^{\mathbb {H} }(t)={\begin{cases}{\dfrac {1}{\mu (a)}},&t=a\\0,&t\neq a\end{cases}}} == Fractional calculus on time scales == Fractional calculus on time scales is treated in Bastos, Mozyrska, and Torres. == See also == Analysis on fractals for dynamic equations on a Cantor set. Multiple-scale analysis Method of averaging Krylov–Bogoliubov averaging method == References == == Further reading == Agarwal, Ravi; Bohner, Martin; O’Regan, Donal; Peterson, Allan (2002). "Dynamic equations on time scales: a survey". Journal of Computational and Applied Mathematics. 141 (1–2): 1–26. Bibcode:2002JCoAM.141....1A. doi:10.1016/S0377-0427(01)00432-0. Dynamic Equations on Time Scales Special issue of Journal of Computational and Applied Mathematics (2002) Dynamic Equations And Applications Special Issue of Advances in Difference Equations (2006) Dynamic Equations on Time Scales: Qualitative Analysis and Applications Special issue of Nonlinear Dynamics And Systems Theory (2009) == External links == The Baylor University Time Scales Group Timescalewiki.org
Wikipedia/Time-scale_calculus
The Wigner distribution function (WDF) is used in signal processing as a transform in time-frequency analysis. The WDF was first proposed in physics to account for quantum corrections to classical statistical mechanics in 1932 by Eugene Wigner, and it is of importance in quantum mechanics in phase space (see, by way of comparison: Wigner quasi-probability distribution, also called the Wigner function or the Wigner–Ville distribution). Given the shared algebraic structure between position-momentum and time-frequency conjugate pairs, it also usefully serves in signal processing, as a transform in time-frequency analysis, the subject of this article. Compared to a short-time Fourier transform, such as the Gabor transform, the Wigner distribution function provides the finest joint time and frequency resolution that is mathematically possible within the limitations of the uncertainty principle. The downside is the introduction of large cross terms between every pair of signal components and between positive and negative frequencies, which makes the original formulation of the function a poor fit for most analysis applications. Subsequent modifications have been proposed which preserve the sharpness of the Wigner distribution function but largely suppress cross terms. == Mathematical definition == There are several different definitions for the Wigner distribution function. The definition given here is specific to time-frequency analysis. Given the time series x [ t ] {\displaystyle x[t]} , its non-stationary auto-covariance function is given by C x ( t 1 , t 2 ) = ⟨ ( x [ t 1 ] − μ [ t 1 ] ) ( x [ t 2 ] − μ [ t 2 ] ) ∗ ⟩ , {\displaystyle C_{x}(t_{1},t_{2})=\left\langle \left(x[t_{1}]-\mu [t_{1}]\right)\left(x[t_{2}]-\mu [t_{2}]\right)^{*}\right\rangle ,} where ⟨ ⋯ ⟩ {\displaystyle \langle \cdots \rangle } denotes the average over all possible realizations of the process and μ ( t ) {\displaystyle \mu (t)} is the mean, which may or may not be a function of time. The Wigner function W x ( t , f ) {\displaystyle W_{x}(t,f)} is then given by first expressing the autocorrelation function in terms of the average time t = ( t 1 + t 2 ) / 2 {\displaystyle t=(t_{1}+t_{2})/2} and time lag τ = t 1 − t 2 {\displaystyle \tau =t_{1}-t_{2}} , and then Fourier transforming the lag. W x ( t , f ) = ∫ − ∞ ∞ C x ( t + τ 2 , t − τ 2 ) e − 2 π i τ f d τ . {\displaystyle W_{x}(t,f)=\int _{-\infty }^{\infty }C_{x}\left(t+{\frac {\tau }{2}},t-{\frac {\tau }{2}}\right)\,e^{-2\pi i\tau f}\,d\tau .} So for a single (mean-zero) time series, the Wigner function is simply given by W x ( t , f ) = ∫ − ∞ ∞ x ( t + τ 2 ) x ∗ ( t − τ 2 ) e − 2 π i τ f d τ . {\displaystyle W_{x}(t,f)=\int _{-\infty }^{\infty }x\left(t+{\frac {\tau }{2}}\right)\,x^{*}\left(t-{\frac {\tau }{2}}\right)\,e^{-2\pi i\tau f}\,d\tau .} The motivation for the Wigner function is that it reduces to the spectral density function at all times t {\displaystyle t} for stationary processes, yet it is fully equivalent to the non-stationary autocorrelation function. Therefore, the Wigner function tells us (roughly) how the spectral density changes in time. == Time-frequency analysis example == Here are some examples illustrating how the WDF is used in time-frequency analysis. === Constant input signal === When the input signal is constant, its time-frequency distribution is a horizontal line along the time axis. For example, if x(t) = 1, then W x ( t , f ) = ∫ − ∞ ∞ e − i 2 π τ f d τ = δ ( f ) .
{\displaystyle W_{x}(t,f)=\int _{-\infty }^{\infty }e^{-i2\pi \tau \,f}\,d\tau =\delta (f).} === Sinusoidal input signal === When the input signal is a sinusoidal function, its time-frequency distribution is a horizontal line parallel to the time axis, displaced from it by the sinusoidal signal's frequency. For example, if x(t) = e i2πkt, then W x ( t , f ) = ∫ − ∞ ∞ e i 2 π k ( t + τ 2 ) e − i 2 π k ( t − τ 2 ) e − i 2 π τ f d τ = ∫ − ∞ ∞ e − i 2 π τ ( f − k ) d τ = δ ( f − k ) . {\displaystyle {\begin{aligned}W_{x}(t,f)&=\int _{-\infty }^{\infty }e^{i2\pi k\left(t+{\frac {\tau }{2}}\right)}e^{-i2\pi k\left(t-{\frac {\tau }{2}}\right)}e^{-i2\pi \tau \,f}\,d\tau \\&=\int _{-\infty }^{\infty }e^{-i2\pi \tau \left(f-k\right)}\,d\tau \\&=\delta (f-k).\end{aligned}}} === Chirp input signal === When the input signal is a linear chirp function, the instantaneous frequency is a linear function. This means that the time frequency distribution should be a straight line. For example, if x ( t ) = e i 2 π k t 2 {\displaystyle x(t)=e^{i2\pi kt^{2}}} , then its instantaneous frequency is 1 2 π d ( 2 π k t 2 ) d t = 2 k t , {\displaystyle {\frac {1}{2\pi }}{\frac {d(2\pi kt^{2})}{dt}}=2kt~,} and its WDF W x ( t , f ) = ∫ − ∞ ∞ e i 2 π k ( t + τ 2 ) 2 e − i 2 π k ( t − τ 2 ) 2 e − i 2 π τ f d τ = ∫ − ∞ ∞ e i 4 π k t τ e − i 2 π τ f d τ = ∫ − ∞ ∞ e − i 2 π τ ( f − 2 k t ) d τ = δ ( f − 2 k t ) . {\displaystyle {\begin{aligned}W_{x}(t,f)&=\int _{-\infty }^{\infty }e^{i2\pi k\left(t+{\frac {\tau }{2}}\right)^{2}}e^{-i2\pi k\left(t-{\frac {\tau }{2}}\right)^{2}}e^{-i2\pi \tau \,f}\,d\tau \\&=\int _{-\infty }^{\infty }e^{i4\pi kt\tau }e^{-i2\pi \tau f}\,d\tau \\&=\int _{-\infty }^{\infty }e^{-i2\pi \tau (f-2kt)}\,d\tau \\&=\delta (f-2kt)~.\end{aligned}}} === Delta input signal === When the input signal is a delta function, since it is only non-zero at t=0 and contains infinite frequency components, its time-frequency distribution should be a vertical line across the origin. This means that the time frequency distribution of the delta function should also be a delta function. By WDF W x ( t , f ) = ∫ − ∞ ∞ δ ( t + τ 2 ) δ ( t − τ 2 ) e − i 2 π τ f d τ = 4 ∫ − ∞ ∞ δ ( 2 t + τ ) δ ( 2 t − τ ) e − i 2 π τ f d τ = 4 δ ( 4 t ) e i 4 π t f = δ ( t ) e i 4 π t f = δ ( t ) . {\displaystyle {\begin{aligned}W_{x}(t,f)&=\int _{-\infty }^{\infty }\delta \left(t+{\frac {\tau }{2}}\right)\delta \left(t-{\frac {\tau }{2}}\right)e^{-i2\pi \tau \,f}\,d\tau \\&=4\int _{-\infty }^{\infty }\delta (2t+\tau )\delta (2t-\tau )e^{-i2\pi \tau f}\,d\tau \\&=4\delta (4t)e^{i4\pi tf}\\&=\delta (t)e^{i4\pi tf}\\&=\delta (t).\end{aligned}}} The Wigner distribution function is best suited for time-frequency analysis when the input signal's phase is 2nd order or lower. For those signals, WDF can exactly generate the time frequency distribution of the input signal. === Boxcar function === x ( t ) = { 1 | t | < 1 / 2 0 otherwise {\displaystyle x(t)={\begin{cases}1&|t|<1/2\\0&{\text{otherwise}}\end{cases}}\qquad } , the rectangular function ⇒ W x ( t , f ) = { 1 π f sin ⁡ ( 2 π f { 1 − 2 | t | } ) | t | < 1 / 2 0 otherwise {\displaystyle W_{x}(t,f)={\begin{cases}{\frac {1}{\pi f}}\sin(2\pi f\{1-2|t|\})&|t|<1/2\\0&{\mbox{otherwise}}\end{cases}}} == Cross term property == The Wigner distribution function is not a linear transform. A cross term ("time beats") occurs when there is more than one component in the input signal, analogous in time to frequency beats. 
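The cross terms just described can be reproduced with a naive discrete approximation of the WDF. This is a sketch rather than an optimized implementation; the edge truncation of the lag range, the factor-of-two frequency-bin scaling, and all names are assumptions of this illustration:

```python
import numpy as np

def wigner(x):
    """Naive discrete Wigner distribution of a complex signal x.
    Row n holds W_x(n, k) over DFT frequency bins k; the lag range
    is truncated near the record edges (an assumption of this sketch)."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        tau_max = min(n, N - 1 - n)        # stay inside the record
        acf = np.zeros(N, dtype=complex)
        for tau in range(-tau_max, tau_max + 1):
            acf[tau % N] = x[n + tau] * np.conj(x[n - tau])
        W[n] = np.fft.fft(acf).real        # real by conjugate symmetry of acf
    return W

t = np.arange(256) / 256.0
x = np.exp(2j * np.pi * 32 * t) + np.exp(2j * np.pi * 96 * t)  # two tones
W = wigner(x)
# The kernel uses lag 2*tau, so a tone at 32 cycles per record lands
# in bin 64, the tone at 96 lands in bin 192, and the oscillating
# cross term sits midway between them, in bin 128.
print(np.argsort(np.abs(W).sum(axis=0))[-3:])  # the three strongest bins
```

The strongest frequency bins are typically the two auto-term ridges and the oscillating cross term midway between them, which is exactly the "beats" behaviour discussed here.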
In the ancestral physics Wigner quasi-probability distribution, this term has important and useful physics consequences, required for faithful expectation values. By contrast, the short-time Fourier transform does not have this feature. Negative features of the WDF are reflective of the Gabor limit of the classical signal and physically unrelated to any possible underlay of quantum structure. The following are some examples that exhibit the cross-term feature of the Wigner distribution function. x ( t ) = { cos ⁡ ( 2 π t ) t ≤ − 2 cos ⁡ ( 4 π t ) − 2 < t ≤ 2 cos ⁡ ( 3 π t ) t > 2 {\displaystyle x(t)={\begin{cases}\cos(2\pi t)&t\leq -2\\\cos(4\pi t)&-2<t\leq 2\\\cos(3\pi t)&t>2\end{cases}}} x ( t ) = e i t 3 {\displaystyle x(t)=e^{it^{3}}} In order to reduce the cross-term difficulty, several approaches have been proposed in the literature, some of them leading to new transforms such as the modified Wigner distribution function, the Gabor–Wigner transform, the Choi–Williams distribution function and Cohen's class distribution. == Properties of the Wigner distribution function == The Wigner distribution function has several evident properties listed in the following table. Projection property | x ( t ) | 2 = ∫ − ∞ ∞ W x ( t , f ) d f | X ( f ) | 2 = ∫ − ∞ ∞ W x ( t , f ) d t {\displaystyle {\begin{aligned}|x(t)|^{2}&=\int _{-\infty }^{\infty }W_{x}(t,f)\,df\\|X(f)|^{2}&=\int _{-\infty }^{\infty }W_{x}(t,f)\,dt\end{aligned}}} Energy property ∫ − ∞ ∞ ∫ − ∞ ∞ W x ( t , f ) d f d t = ∫ − ∞ ∞ | x ( t ) | 2 d t = ∫ − ∞ ∞ | X ( f ) | 2 d f {\displaystyle \int _{-\infty }^{\infty }\int _{-\infty }^{\infty }W_{x}(t,f)\,df\,dt=\int _{-\infty }^{\infty }|x(t)|^{2}\,dt=\int _{-\infty }^{\infty }|X(f)|^{2}\,df} Recovery property ∫ − ∞ ∞ W x ( t 2 , f ) e i 2 π f t d f = x ( t ) x ∗ ( 0 ) ∫ − ∞ ∞ W x ( t , f 2 ) e i 2 π f t d t = X ( f ) X ∗ ( 0 ) {\displaystyle {\begin{aligned}\int _{-\infty }^{\infty }W_{x}\left({\frac {t}{2}},f\right)e^{i2\pi ft}\,df&=x(t)x^{*}(0)\\\int _{-\infty }^{\infty }W_{x}\left(t,{\frac {f}{2}}\right)e^{i2\pi ft}\,dt&=X(f)X^{*}(0)\end{aligned}}} Mean condition frequency and mean condition time X ( f ) = | X ( f ) | e i 2 π ψ ( f ) , x ( t ) = | x ( t ) | e i 2 π ϕ ( t ) , if ϕ ′ ( t ) = | x ( t ) | − 2 ∫ − ∞ ∞ f W x ( t , f ) d f and − ψ ′ ( f ) = | X ( f ) | − 2 ∫ − ∞ ∞ t W x ( t , f ) d t {\displaystyle {\begin{aligned}X(f)&=|X(f)|e^{i2\pi \psi (f)},\quad x(t)=|x(t)|e^{i2\pi \phi (t)},\\{\text{if }}\phi '(t)&=|x(t)|^{-2}\int _{-\infty }^{\infty }fW_{x}(t,f)\,df\\{\text{ and }}-\psi '(f)&=|X(f)|^{-2}\int _{-\infty }^{\infty }tW_{x}(t,f)\,dt\end{aligned}}} Moment properties ∫ − ∞ ∞ ∫ − ∞ ∞ t n W x ( t , f ) d t d f = ∫ − ∞ ∞ t n | x ( t ) | 2 d t ∫ − ∞ ∞ ∫ − ∞ ∞ f n W x ( t , f ) d t d f = ∫ − ∞ ∞ f n | X ( f ) | 2 d f {\displaystyle {\begin{aligned}\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }t^{n}W_{x}(t,f)\,dt\,df&=\int _{-\infty }^{\infty }t^{n}|x(t)|^{2}\,dt\\\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }f^{n}W_{x}(t,f)\,dt\,df&=\int _{-\infty }^{\infty }f^{n}|X(f)|^{2}\,df\end{aligned}}} Real properties W x ∗ ( t , f ) = W x ( t , f ) {\displaystyle W_{x}^{*}(t,f)=W_{x}(t,f)} Region properties If x ( t ) = 0 for t > t 0 then W x ( t , f ) = 0 for t > t 0 If x ( t ) = 0 for t < t 0 then W x ( t , f ) = 0 for t < t 0 {\displaystyle {\begin{aligned}{\text{If }}x(t)&=0{\text{ for }}t>t_{0}{\text{ then }}W_{x}(t,f)=0{\text{ for }}t>t_{0}\\{\text{If }}x(t)&=0{\text{ for }}t<t_{0}{\text{ then }}W_{x}(t,f)=0{\text{ for }}t<t_{0}\end{aligned}}} Multiplication theorem If y (
t ) = x ( t ) h ( t ) then W y ( t , f ) = ∫ − ∞ ∞ W x ( t , ρ ) W h ( t , f − ρ ) d ρ {\displaystyle {\begin{aligned}{\text{If }}y(t)&=x(t)h(t)\\{\text{then }}W_{y}(t,f)&=\int _{-\infty }^{\infty }W_{x}(t,\rho )W_{h}(t,f-\rho )\,d\rho \end{aligned}}} Convolution theorem If y ( t ) = ∫ − ∞ ∞ x ( t − τ ) h ( τ ) d τ then W y ( t , f ) = ∫ − ∞ ∞ W x ( ρ , f ) W h ( t − ρ , f ) d ρ {\displaystyle {\begin{aligned}{\text{If }}y(t)&=\int _{-\infty }^{\infty }x(t-\tau )h(\tau )\,d\tau \\{\text{then }}W_{y}(t,f)&=\int _{-\infty }^{\infty }W_{x}(\rho ,f)W_{h}(t-\rho ,f)\,d\rho \end{aligned}}} Correlation theorem If y ( t ) = ∫ − ∞ ∞ x ( t + τ ) h ∗ ( τ ) d τ then W y ( t , ω ) = ∫ − ∞ ∞ W x ( ρ , ω ) W h ( − t + ρ , ω ) d ρ {\displaystyle {\begin{aligned}{\text{If }}y(t)&=\int _{-\infty }^{\infty }x(t+\tau )h^{*}(\tau )\,d\tau {\text{ then }}\\W_{y}(t,\omega )&=\int _{-\infty }^{\infty }W_{x}(\rho ,\omega )W_{h}(-t+\rho ,\omega )\,d\rho \end{aligned}}} Time-shifting covariance If y ( t ) = x ( t − t 0 ) then W y ( t , f ) = W x ( t − t 0 , f ) {\displaystyle {\begin{aligned}{\text{If }}y(t)&=x(t-t_{0})\\{\text{then }}W_{y}(t,f)&=W_{x}(t-t_{0},f)\end{aligned}}} Modulation covariance If y ( t ) = e i 2 π f 0 t x ( t ) then W y ( t , f ) = W x ( t , f − f 0 ) {\displaystyle {\begin{aligned}{\text{If }}y(t)&=e^{i2\pi f_{0}t}x(t)\\{\text{then }}W_{y}(t,f)&=W_{x}(t,f-f_{0})\end{aligned}}} Scale covariance If y ( t ) = a x ( a t ) for some a > 0 , then W y ( t , f ) = W x ( a t , f a ) {\displaystyle {\begin{aligned}{\text{If }}y(t)&={\sqrt {a}}x(at){\text{ for some }}a>0,\\{\text{then }}W_{y}(t,f)&=W_{x}(at,{\frac {f}{a}})\end{aligned}}} == Windowed Wigner Distribution Function == When a signal is not time limited, its Wigner distribution function is hard to implement. Thus, we add a new function (mask) to its integration part, so that we only have to implement part of the original function instead of integrating all the way from negative infinity to positive infinity.
Original function: W x ( t , f ) = ∫ − ∞ ∞ x ( t + τ 2 ) ⋅ x ∗ ( t − τ 2 ) e − j 2 π τ f ⋅ d τ {\displaystyle W_{x}(t,f)=\int _{-\infty }^{\infty }x\left(t+{\frac {\tau }{2}}\right)\cdot x^{*}\left(t-{\frac {\tau }{2}}\right)e^{-j2\pi \tau f}\cdot d\tau } Function with mask: W x ( t , f ) = ∫ − ∞ ∞ w ( τ ) x ( t + τ 2 ) ⋅ x ∗ ( t − τ 2 ) e − j 2 π τ f ⋅ d τ {\displaystyle W_{x}(t,f)=\int _{-\infty }^{\infty }w(\tau )x\left(t+{\frac {\tau }{2}}\right)\cdot x^{*}\left(t-{\frac {\tau }{2}}\right)e^{-j2\pi \tau f}\cdot d\tau } where w ( τ ) {\displaystyle w(\tau )} is real and time-limited. === Implementation === According to the definition: W x ( t , f ) = ∫ − ∞ ∞ w ( τ ) x ( t + τ 2 ) ⋅ x ∗ ( t − τ 2 ) e − j 2 π τ f ⋅ d τ W x ( t , f ) = 2 ∫ − ∞ ∞ w ( 2 τ ′ ) x ( t + τ ′ ) ⋅ x ∗ ( t − τ ′ ) e − j 4 π τ ′ f ⋅ d τ ′ W x ( n Δ t , m Δ f ) = 2 ∑ p = − ∞ ∞ w ( 2 p Δ t ) x ( ( n + p ) Δ t ) x ∗ ( ( n − p ) Δ t ) e − j 4 π m p Δ t Δ f Δ t {\displaystyle {\begin{aligned}W_{x}(t,f)=\int _{-\infty }^{\infty }w(\tau )x\left(t+{\frac {\tau }{2}}\right)\cdot x^{*}\left(t-{\frac {\tau }{2}}\right)e^{-j2\pi \tau f}\cdot d\tau \\W_{x}(t,f)=2\int _{-\infty }^{\infty }w(2\tau ')x\left(t+\tau '\right)\cdot x^{*}\left(t-\tau '\right)e^{-j4\pi \tau 'f}\cdot d\tau '\\W_{x}(n\Delta _{t},m\Delta _{f})=2\sum _{p=-\infty }^{\infty }w(2p\Delta _{t})x((n+p)\Delta _{t})x^{\ast }((n-p)\Delta _{t})e^{-j4\pi mp\Delta _{t}\Delta _{f}}\Delta _{t}\end{aligned}}} Suppose that w ( t ) = 0 {\displaystyle w(t)=0} for | t | > B → w ( 2 p Δ t ) = 0 {\displaystyle |t|>B\rightarrow w(2p\Delta _{t})=0} for p < − Q {\displaystyle p<-Q} and p > Q {\displaystyle p>Q} W x ( n Δ t , m Δ f ) = 2 ∑ p = − Q Q w ( 2 p Δ t ) x ( ( n + p ) Δ t ) x ∗ ( ( n − p ) Δ t ) e − j 4 π m p Δ t Δ f Δ t {\displaystyle {\begin{aligned}W_{x}(n\Delta _{t},m\Delta _{f})=2\sum _{p=-Q}^{Q}w(2p\Delta _{t})x((n+p)\Delta _{t})x^{\ast }((n-p)\Delta _{t})e^{-j4\pi mp\Delta _{t}\Delta _{f}}\Delta _{t}\end{aligned}}} We take x ( t ) = δ ( t − t 1 ) + δ ( t − t 2 ) {\displaystyle x(t)=\delta (t-t_{1})+\delta (t-t_{2})} as an example: W x ( t , f ) = ∫ − ∞ ∞ w ( τ ) x ( t + τ 2 ) ⋅ x ∗ ( t − τ 2 ) e − j 2 π τ f ⋅ d τ , {\displaystyle {\begin{aligned}W_{x}(t,f)=\int _{-\infty }^{\infty }w(\tau )x\left(t+{\frac {\tau }{2}}\right)\cdot x^{*}\left(t-{\frac {\tau }{2}}\right)e^{-j2\pi \tau f}\cdot d\tau \,,\end{aligned}}} where w ( τ ) {\displaystyle w(\tau )} is a real function. Then we compare the two conditions. Ideal: W x ( t , f ) = 0 , for t ≠ t 2 , t 1 {\displaystyle W_{x}(t,f)=0,{\text{ for }}t\neq t_{2},t_{1}} When the mask function is w ( τ ) = 1 {\displaystyle w(\tau )=1} , i.e., there is no mask:
y ( t , τ ) = x ( t + τ 2 ) {\displaystyle y(t,\tau )=x(t+{\frac {\tau }{2}})} y ∗ ( t , − τ ) = x ∗ ( t − τ 2 ) {\displaystyle y^{*}(t,-\tau )=x^{*}(t-{\frac {\tau }{2}})} W x ( t , f ) = ∫ − ∞ ∞ x ( t + τ 2 ) x ∗ ( t − τ 2 ) e − j 2 π τ f d τ {\displaystyle W_{x}(t,f)=\int _{-\infty }^{\infty }x(t+{\frac {\tau }{2}})x^{*}(t-{\frac {\tau }{2}})e^{-j2\pi \tau f}d\tau } = ∫ − ∞ ∞ [ δ ( t + τ 2 − t 1 ) + δ ( t + τ 2 − t 2 ) ] [ δ ( t − τ 2 − t 1 ) + δ ( t − τ 2 − t 2 ) ] e − j 2 π τ f ⋅ d τ {\displaystyle =\int _{-\infty }^{\infty }[\delta (t+{\frac {\tau }{2}}-t_{1})+\delta (t+{\frac {\tau }{2}}-t_{2})][\delta (t-{\frac {\tau }{2}}-t_{1})+\delta (t-{\frac {\tau }{2}}-t_{2})]e^{-j2\pi \tau f}\cdot d\tau } = 4 ∫ − ∞ ∞ [ δ ( 2 t + τ − 2 t 1 ) + δ ( 2 t + τ − 2 t 2 ) ] [ δ ( 2 t − τ − 2 t 1 ) + δ ( 2 t − τ − 2 t 2 ) ] e − j 2 π τ f ⋅ d τ {\displaystyle =4\int _{-\infty }^{\infty }[\delta (2t+\tau -2t_{1})+\delta (2t+\tau -2t_{2})][\delta (2t-\tau -2t_{1})+\delta (2t-\tau -2t_{2})]e^{-j2\pi \tau f}\cdot d\tau } === 3 Conditions === Now consider the condition with a mask function: since w ( τ ) {\displaystyle w(\tau )} is nonzero only between −B and B, windowing with w ( τ ) {\displaystyle w(\tau )} can remove the cross terms of the function. However, if x(t) is not a delta function or a narrow-band function, but instead a function with wide bandwidth or ripple, the edges of the signal components may still overlap between −B and B, which still causes the cross-term problem. == See also == Time-frequency representation Short-time Fourier transform Spectrogram Gabor transform Autocorrelation Gabor–Wigner transform Modified Wigner distribution function Optical equivalence theorem Polynomial Wigner–Ville distribution Cohen's class distribution function Wigner quasi-probability distribution Transformation between distributions in time-frequency analysis Bilinear time–frequency distribution == References == == Further reading == Wigner, E. (1932). "On the Quantum Correction for Thermodynamic Equilibrium" (PDF). Physical Review. 40 (5): 749–759. Bibcode:1932PhRv...40..749W. doi:10.1103/PhysRev.40.749. hdl:10338.dmlcz/141466. J. Ville, 1948. "Théorie et Applications de la Notion de Signal Analytique", Câbles et Transmission, 2, 61–74. T. A. C. M. Claasen and W. F. G. Mecklenbräuker, 1980. "The Wigner distribution – a tool for time-frequency signal analysis; Part I," Philips J. Res., vol. 35, pp. 217–250. L. Cohen (1989). "Time-frequency distributions – a review", Proceedings of the IEEE, 77, pp. 941–981. L. Cohen, Time-Frequency Analysis, Prentice-Hall, New York, 1995. ISBN 978-0135945322 S. Qian and D. Chen, Joint Time-Frequency Analysis: Methods and Applications, Chap. 5, Prentice Hall, N.J., 1996. B. Boashash, "Note on the Use of the Wigner Distribution for Time Frequency Signal Analysis", IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. 36, No. 9, pp. 1518–1521, Sept. 1988. doi:10.1109/29.90380. B. Boashash, editor, Time-Frequency Signal Analysis and Processing – A Comprehensive Reference, Elsevier Science, Oxford, 2003, ISBN 0-08-044335-4. F. Hlawatsch, G. F. Boudreaux-Bartels: "Linear and quadratic time-frequency signal representation," IEEE Signal Processing Magazine, pp. 21–67, Apr. 1992. R. L. Allen and D. W. Mills, Signal Analysis: Time, Frequency, Scale, and Structure, Wiley-Interscience, NJ, 2004. Jian-Jiun Ding, Time frequency analysis and wavelet transform class notes, the Department of Electrical Engineering, National Taiwan University (NTU), Taipei, Taiwan, 2015.
Kakofengitis, D. & Steuernagel, O. (2017). "Wigner's quantum phase space current in weakly anharmonic weakly excited two-state systems". European Physical Journal Plus. == External links == Sonogram Visible Speech, GPL-licensed freeware for the visual extraction of the Wigner distribution.
Wikipedia/Wigner_distribution_function
In optics, the Fresnel diffraction equation for near-field diffraction is an approximation of the Kirchhoff–Fresnel diffraction that can be applied to the propagation of waves in the near field. It is used to calculate the diffraction pattern created by waves passing through an aperture or around an object, when viewed from relatively close to the object. In contrast, the diffraction pattern in the far-field region is given by the Fraunhofer diffraction equation. The near field can be specified by the Fresnel number, F, of the optical arrangement. When F ≪ 1 {\displaystyle F\ll 1} the diffracted wave is considered to be in the Fraunhofer field. However, the validity of the Fresnel diffraction integral is deduced by the approximations derived below. Specifically, the phase terms of third order and higher must be negligible, a condition that may be written as F θ 2 4 ≪ 1 , {\displaystyle {\frac {F\theta ^{2}}{4}}\ll 1,} where θ {\displaystyle \theta } is the maximal angle described by θ ≈ a / L , {\displaystyle \theta \approx a/L,} with a and L the same as in the definition of the Fresnel number. Hence this condition can be approximated as a 4 4 L 3 λ ≪ 1 {\textstyle {\frac {a^{4}}{4L^{3}\lambda }}\ll 1} . Multiple Fresnel diffraction at closely spaced periodic ridges (a ridged mirror) causes specular reflection; this effect can be used for atomic mirrors. == Early treatments of this phenomenon == Some of the earliest work on what would become known as Fresnel diffraction was carried out by Francesco Maria Grimaldi in Italy in the 17th century. In his monograph entitled "Light", Richard C. MacLaurin explains Fresnel diffraction by asking what happens when light propagates, and how that process is affected when a barrier with a slit or hole in it is interposed in the beam produced by a distant source of light. He uses the Principle of Huygens to investigate, in classical terms, what transpires. The wave front that proceeds from the slit and on to a detection screen some distance away very closely approximates a wave front originating across the area of the gap without regard to any minute interactions with the actual physical edge. The result is that if the gap is very narrow only diffraction patterns with bright centers can occur. If the gap is made progressively wider, then diffraction patterns with dark centers will alternate with diffraction patterns with bright centers. As the gap becomes larger, the differentials between dark and light bands decrease until a diffraction effect can no longer be detected. MacLaurin does not mention the possibility that the center of the series of diffraction rings produced when light is shone through a small hole may be black, but he does point to the inverse situation wherein the shadow produced by a small circular object can paradoxically have a bright center. (p. 219) In his Optics, Francis Weston Sears offers a mathematical approximation suggested by Fresnel that predicts the main features of diffraction patterns and uses only simple mathematics. By considering the perpendicular distance from the hole in a barrier screen to a nearby detection screen along with the wavelength of the incident light, it is possible to compute a number of regions called half-period elements or Fresnel zones. The inner zone is a circle and each succeeding zone is a concentric annular ring.
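The zone construction can be made concrete with a short sketch. For plane-wave illumination, the outer radius of the n-th half-period zone at screen distance L and wavelength λ is approximately √(n λ L), valid while the radius is much smaller than L; the numerical values below are arbitrary illustrations, and the one-zone and two-zone cases computed this way correspond to the bright-center and dark-center patterns described next:

```python
import numpy as np

wavelength = 550e-9  # green light, in metres (arbitrary illustration)
L = 1.0              # distance from barrier to detection screen, metres

# Outer radii of the first few half-period (Fresnel) zones for
# plane-wave illumination, r_n ~ sqrt(n * wavelength * L), valid
# while r_n is much smaller than L.
n = np.arange(1, 6)
r = np.sqrt(n * wavelength * L)
print(r * 1e3)  # in millimetres: about [0.74, 1.05, 1.28, 1.48, 1.66]
```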
If the diameter of the circular hole in the screen is sufficient to expose the first or central Fresnel zone, the amplitude of light at the center of the detection screen will be double what it would be if the detection screen were not obstructed. If the diameter of the circular hole in the screen is sufficient to expose two Fresnel zones, then the amplitude at the center is almost zero. That means that a Fresnel diffraction pattern can have a dark center. These patterns can be seen and measured, and correspond well to the values calculated for them. == The Fresnel diffraction integral == According to the Rayleigh–Sommerfeld diffraction theory, the electric-field diffraction pattern at a point (x, y, z) is given by the following solution to the Helmholtz equation: E ( x , y , z ) = 1 i λ ∬ − ∞ + ∞ E ( x ′ , y ′ , 0 ) e i k r r z r ( 1 + i k r ) d x ′ d y ′ , {\displaystyle E(x,y,z)={\frac {1}{i\lambda }}\iint _{-\infty }^{+\infty }E(x',y',0){\frac {e^{ikr}}{r}}{\frac {z}{r}}\left(1+{\frac {i}{kr}}\right)\,dx'dy',} where E ( x ′ , y ′ , 0 ) {\displaystyle E(x',y',0)} is the electric field at the aperture, r = ( x − x ′ ) 2 + ( y − y ′ ) 2 + z 2 , {\displaystyle r={\sqrt {(x-x')^{2}+(y-y')^{2}+z^{2}}},} k {\displaystyle k} is the wavenumber 2 π / λ , {\displaystyle 2\pi /\lambda ,} and i {\displaystyle i} is the imaginary unit. The analytical solution of this integral quickly becomes impractically complex for all but the simplest diffraction geometries. Therefore, it is usually calculated numerically. === The Fresnel approximation === The main problem for solving the integral is the expression of r. First, we can simplify the algebra by introducing the substitution ρ 2 = ( x − x ′ ) 2 + ( y − y ′ ) 2 . {\displaystyle \rho ^{2}=(x-x')^{2}+(y-y')^{2}.} Substituting into the expression for r, we find r = ρ 2 + z 2 = z 1 + ρ 2 z 2 . {\displaystyle r={\sqrt {\rho ^{2}+z^{2}}}=z{\sqrt {1+{\frac {\rho ^{2}}{z^{2}}}}}.} Next, by the binomial expansion, 1 + u = ( 1 + u ) 1 2 = 1 + u 2 − u 2 8 + ⋯ {\displaystyle {\sqrt {1+u}}=(1+u)^{\frac {1}{2}}=1+{\frac {u}{2}}-{\frac {u^{2}}{8}}+\cdots } we can express r {\displaystyle r} as r = z 1 + ρ 2 z 2 = z [ 1 + ρ 2 2 z 2 − 1 8 ( ρ 2 z 2 ) 2 + ⋯ ] = z + ρ 2 2 z − ρ 4 8 z 3 + ⋯ {\displaystyle {\begin{aligned}r&=z{\sqrt {1+{\frac {\rho ^{2}}{z^{2}}}}}\\&=z\left[1+{\frac {\rho ^{2}}{2z^{2}}}-{\frac {1}{8}}\left({\frac {\rho ^{2}}{z^{2}}}\right)^{2}+\cdots \right]\\&=z+{\frac {\rho ^{2}}{2z}}-{\frac {\rho ^{4}}{8z^{3}}}+\cdots \end{aligned}}} If all the terms of the binomial series are kept, there is no approximation. Let us substitute this expression in the argument of the exponential within the integral; the key to the Fresnel approximation is to assume that the third term, and likewise all higher-order terms, can be ignored. For this to be possible, the third term's contribution to the phase of the exponential must be nearly negligible; in other words, it must be much smaller than the period of the complex exponential, i.e., 2 π {\displaystyle 2\pi } : k ρ 4 8 z 3 ≪ 2 π . {\displaystyle k{\frac {\rho ^{4}}{8z^{3}}}\ll 2\pi .} Expressing k in terms of the wavelength, k = 2 π λ , {\displaystyle k={\frac {2\pi }{\lambda }},} we get the following relationship: ρ 4 z 3 λ ≪ 8.
{\displaystyle {\frac {\rho ^{4}}{z^{3}\lambda }}\ll 8.} Multiplying both sides by z 3 / λ 3 , {\displaystyle z^{3}/\lambda ^{3},} we have ρ 4 λ 4 ≪ 8 z 3 λ 3 , {\displaystyle {\frac {\rho ^{4}}{\lambda ^{4}}}\ll 8{\frac {z^{3}}{\lambda ^{3}}},} or, substituting the earlier expression for ρ 2 , {\displaystyle \rho ^{2},} 1 λ 4 [ ( x − x ′ ) 2 + ( y − y ′ ) 2 ] 2 ≪ 8 z 3 λ 3 . {\displaystyle {\frac {1}{\lambda ^{4}}}\left[(x-x')^{2}+(y-y')^{2}\right]^{2}\ll 8{\frac {z^{3}}{\lambda ^{3}}}.} If this condition holds true for all values of x, x', y and y', then we can ignore the third term in the Taylor expression. Furthermore, if the third term is negligible, then all terms of higher order will be even smaller, so we can ignore them as well. For applications involving optical wavelengths, the wavelength λ is typically many orders of magnitude smaller than the relevant physical dimensions. In particular, λ ≪ z , {\displaystyle \lambda \ll z,} and λ ≪ ρ . {\displaystyle \lambda \ll \rho .} Thus, as a practical matter, the required inequality will always hold true as long as ρ ≪ z . {\displaystyle \rho \ll z.} We can then approximate the expression with only the first two terms: r ≈ z + ρ 2 2 z = z + ( x − x ′ ) 2 + ( y − y ′ ) 2 2 z . {\displaystyle r\approx z+{\frac {\rho ^{2}}{2z}}=z+{\frac {(x-x')^{2}+(y-y')^{2}}{2z}}.} This equation is the Fresnel approximation, and the inequality stated above is a condition for the approximation's validity. === Fresnel diffraction === The condition for validity is fairly weak, and it allows all length parameters to take comparable values, provided the aperture is small compared to the path length. For the r in the denominator we go one step further and approximate it with only the first term, r ≈ z . {\displaystyle r\approx z.} This is valid in particular if we are interested in the behaviour of the field only in a small area close to the origin, where the values of x and y are much smaller than z. In general, Fresnel diffraction is valid if the Fresnel number is approximately 1. For Fresnel diffraction the electric field at point ( x , y , z ) {\displaystyle (x,y,z)} is then given by E ( x , y , z ) = e i k z i λ z ∬ − ∞ + ∞ E ( x ′ , y ′ , 0 ) e i k 2 z [ ( x − x ′ ) 2 + ( y − y ′ ) 2 ] d x ′ d y ′ . {\displaystyle E(x,y,z)={\frac {e^{ikz}}{i\lambda z}}\iint _{-\infty }^{+\infty }E(x',y',0)e^{{\frac {ik}{2z}}\left[(x-x')^{2}+(y-y')^{2}\right]}\,dx'dy'.} This is the Fresnel diffraction integral; it means that, if the Fresnel approximation is valid, the propagating field is a spherical wave, originating at the aperture and moving along z. The integral modulates the amplitude and phase of the spherical wave. Analytical solution of this expression is still only possible in rare cases. For a further simplified case, valid only for much larger distances from the diffraction source, see Fraunhofer diffraction. Unlike Fraunhofer diffraction, Fresnel diffraction accounts for the curvature of the wavefront, in order to correctly calculate the relative phase of interfering waves. == Alternative forms == === Convolution === The integral can be expressed in other ways in order to calculate it using some mathematical properties. 
If we define the function h ( x , y , z ) = e i k z i λ z e i k 2 z ( x 2 + y 2 ) , {\displaystyle h(x,y,z)={\frac {e^{ikz}}{i\lambda z}}e^{i{\frac {k}{2z}}(x^{2}+y^{2})},} then the integral can be expressed in terms of a convolution: E ( x , y , z ) = E ( x , y , 0 ) ∗ h ( x , y , z ) ; {\displaystyle E(x,y,z)=E(x,y,0)*h(x,y,z);} in other words, we are representing the propagation as a linear filter. That is why we might call the function h ( x , y , z ) {\displaystyle h(x,y,z)} the impulse response of free-space propagation. === Fourier transform === Another possible way is through the Fourier transform. If in the integral we express k in terms of the wavelength: k = 2 π λ {\displaystyle k={\frac {2\pi }{\lambda }}} and expand each component of the transverse displacement: ( x − x ′ ) 2 = x 2 + x ′ 2 − 2 x x ′ , ( y − y ′ ) 2 = y 2 + y ′ 2 − 2 y y ′ , {\displaystyle {\begin{aligned}\left(x-x'\right)^{2}&=x^{2}+x'^{2}-2xx',\\\left(y-y'\right)^{2}&=y^{2}+y'^{2}-2yy',\end{aligned}}} then we can express the integral in terms of the two-dimensional Fourier transform. Let us use the following definition: G ( p , q ) = F { g ( x , y ) } ≡ ∬ − ∞ ∞ g ( x , y ) e − i 2 π ( p x + q y ) d x d y , {\displaystyle G(p,q)={\mathcal {F}}\{g(x,y)\}\equiv \iint _{-\infty }^{\infty }g(x,y)e^{-i2\pi (px+qy)}\,dx\,dy,} where p and q are spatial frequencies (wavenumbers). The Fresnel integral can be expressed as E ( x , y , z ) = e i k z i λ z e i π λ z ( x 2 + y 2 ) F { E ( x ′ , y ′ , 0 ) e i π λ z ( x ′ 2 + y ′ 2 ) } | p = x λ z , q = y λ z = h ( x , y ) ⋅ G ( p , q ) | p = x λ z , q = y λ z . {\displaystyle {\begin{aligned}E(x,y,z)&=\left.{\frac {e^{ikz}}{i\lambda z}}e^{i{\frac {\pi }{\lambda z}}(x^{2}+y^{2})}{\mathcal {F}}\left\{E(x',y',0)e^{i{\frac {\pi }{\lambda z}}(x'^{2}+y'^{2})}\right\}\right|_{p={\frac {x}{\lambda z}},\ q={\frac {y}{\lambda z}}}\\&=h(x,y)\cdot G(p,q){\big |}_{p={\frac {x}{\lambda z}},\ q={\frac {y}{\lambda z}}}.\end{aligned}}} That is, first multiply the field to be propagated by a complex exponential, calculate its two-dimensional Fourier transform, replace ( p , q ) {\displaystyle (p,q)} with ( x λ z , y λ z ) {\displaystyle \left({\tfrac {x}{\lambda z}},{\tfrac {y}{\lambda z}}\right)} and multiply it by another factor. This expression is better than the others when the process leads to a known Fourier transform, and the connection with the Fourier transform is tightened in the linear canonical transformation, discussed below. === Linear canonical transformation === From the point of view of the linear canonical transformation, Fresnel diffraction can be seen as a shear in the time–frequency domain, corresponding to how the Fourier transform is a rotation in the time–frequency domain. == See also == Fraunhofer diffraction Fresnel integral Fresnel zone Fresnel number Augustin-Jean Fresnel Ridged mirror Fresnel imager Euler spiral == Notes == == References ==
Wikipedia/Fresnel_transform
For digital image processing, focus recovery from a defocused image is an ill-posed problem, since the high-frequency components of the image are lost. Most of the methods for focus recovery are based on depth estimation theory. The linear canonical transform (LCT) gives a scalable kernel that fits many well-known optical effects. Using LCTs to approximate an optical system for imaging, and then inverting this system, theoretically permits recovery of a defocused image. == Depth of field and perceptual focus == In photography, depth of field (DOF) is the range of distances over which objects appear acceptably sharp. It is usually used for stressing an object and deemphasizing the background (and/or the foreground). The important measure related to DOF is the lens aperture. Decreasing the diameter of the aperture increases the depth of focus but lowers the resolution, and vice versa. == The Huygens–Fresnel principle and DOF == The Huygens–Fresnel principle describes diffraction of wave propagation between two fields. It belongs to Fourier optics rather than geometric optics. The disturbance due to diffraction depends on two parameters: the size of the aperture and the distance between the two fields. Consider a source field and a destination field, field 1 and field 0, respectively. P1(x1,y1) is the position in the source field, P0(x0,y0) is the position in the destination field. The Huygens–Fresnel principle gives the diffraction formula for two fields U(x0,y0), U(x1,y1) as following: U ( x 0 , y 0 ) = 1 j λ ∫ ∫ U ( x 1 , y 1 ) e j k r 01 r 01 cos ⁡ θ d x 1 d y 1 {\displaystyle \mathbf {U} (x_{0},y_{0})={\frac {1}{j\lambda }}\int \!\int \mathbf {U} (x_{1},y_{1}){\frac {e^{jkr_{01}}}{r_{01}}}\cos \theta dx_{1}dy_{1}} where θ denotes the angle between r 01 {\displaystyle r_{01}} and z {\displaystyle z} . Replacing cos ⁡ θ by z r 01 {\displaystyle {\frac {z}{r_{01}}}} and r 01 {\displaystyle r_{01}} by [ ( x 0 − x 1 ) 2 + ( y 0 − y 1 ) 2 + z 2 ] 1 / 2 {\displaystyle [(x_{0}-x_{1})^{2}+(y_{0}-y_{1})^{2}+z^{2}]^{1/2}} , we get U ( x 0 , y 0 ) = 1 j λ z ∫ ∫ U ( x 1 , y 1 ) exp ⁡ ( j k z [ 1 + ( x 0 − x 1 z ) 2 + ( y 0 − y 1 z ) 2 ] 1 / 2 ) 1 + ( x 0 − x 1 z ) 2 + ( y 0 − y 1 z ) 2 d x 1 d y 1 {\displaystyle \mathbf {U} (x_{0},y_{0})={\frac {1}{j\lambda z}}\int \!\int \mathbf {U} (x_{1},y_{1}){\frac {\exp(jkz[1+({\frac {x_{0}-x_{1}}{z}})^{2}+({\frac {y_{0}-y_{1}}{z}})^{2}]^{1/2})}{1+({\frac {x_{0}-x_{1}}{z}})^{2}+({\frac {y_{0}-y_{1}}{z}})^{2}}}dx_{1}dy_{1}} A greater distance z or a smaller aperture (x1,y1) causes greater diffraction. A larger DOF can lead to a more effectively focused wave distribution. This seems to be a conflict. Here are the notions involved: Diffraction: in a real imaging environment, the depths of objects compared to the aperture are usually not large enough to lead to serious diffraction; however, a sufficiently large object depth can truly blur the image. Effective focus: a small aperture gives a small blurring radius but captures less wave information, losing details in comparison to a large aperture. In conclusion, diffraction explains a micro behavior whereas DOF shows a macro behavior. Both of them are related to aperture size. == Linear canonical transform == As the word "canonical" suggests, the linear canonical transform (LCT) is a scalable transform that connects to many important kernels such as the Fresnel transform, the Fraunhofer transform, and the fractional Fourier transform. It can be easily controlled by its four parameters, a, b, c, d (3 degrees of freedom).
The definition: L M ( f ( u ) ) = ∫ L M ( u , u ′ ) f ( u ′ ) d u ′ {\displaystyle L_{M}(f(u))=\int L_{M}(u,u')f(u')du'} where L M ( u , u ′ ) = { 1 b e − j π / 4 e j π ( d b u 2 − 2 b u u ′ + a b u ′ 2 ) , if b ≠ 0 d e j 2 c d u 2 δ ( u ′ − d u ) , if b = 0 {\displaystyle L_{M}(u,u')={\begin{cases}{\sqrt {\frac {1}{b}}}e^{-j\pi /4}e^{j\pi \left({\frac {d}{b}}u^{2}-{\frac {2}{b}}uu'+{\frac {a}{b}}u'^{2}\right)},&{\mbox{if }}b\neq 0\\{\sqrt {d}}e^{{\frac {j}{2}}cdu^{2}}\delta (u'-du),&{\mbox{if }}b=0\end{cases}}} Consider a general imaging system with object distance z0, focal length of the thin lens f, and an imaging distance z1. The effect of propagation in free space acts approximately as a chirp convolution, that is, the diffraction formula above. The effect of propagation through the thin lens acts as a chirp multiplication. The parameters are all simplified under the paraxial approximation for the free-space propagation; the aperture size is not considered. From the properties of the LCT, it is possible to obtain those 4 parameters for this optical system as: [ 1 − z 1 f λ z 0 − λ z 0 z 1 f + λ z 1 − 1 λ f 1 − z 0 f ] {\displaystyle {\begin{bmatrix}1-{\frac {z_{1}}{f}}\quad &\lambda z_{0}-{\frac {\lambda z_{0}z_{1}}{f}}+\lambda z_{1}\\-{\frac {1}{\lambda f}}\quad &1-{\frac {z_{0}}{f}}\end{bmatrix}}} Once the values of z1, z0 and f are known, the LCT can simulate any optical system. == Notes == == References == Haldun M. Ozaktas; Zeev Zalevsky; M. Alper Kutay (2001). The fractional Fourier transform with applications in optics and signal processing. New York: John Wiley & Sons. ISBN 978-0-471-96346-2. M. Sorel and J. Flusser, "Space-variant restoration of images degraded by camera motion blur", IEEE Transactions on Image Processing, vol. 17, pp. 105–116, Feb. 2008. "The way a zoom lens works". Jos. Schneider Optische Werke GmbH. February 2008. Archived from the original on 2012-05-08. B. Barshan, M. Alper Kutay and H. M. Ozaktas, "Optimal filtering with linear canonical transformations", Optics Communications, vol. 135, pp. 32–36, Feb. 1997.
Wikipedia/Focus_recovery_based_on_the_linear_canonical_transform
In evolutionary biology, mimicry in vertebrates is mimicry by a vertebrate of some model (an animal, not necessarily a vertebrate), deceiving some other animal, the dupe. Mimicry differs from camouflage as it is meant to be seen, while animals use camouflage to remain hidden. Visual, olfactory, auditory, biochemical, and behavioral modalities of mimicry have been documented in vertebrates. There are few well-studied examples of mimicry in vertebrates. Still, many of the basic types of mimicry apply to vertebrates, especially among snakes. Batesian mimicry is rare among vertebrates but found in some reptiles (particularly snakes) and amphibians. Müllerian mimicry is found in some snakes, birds, amphibians, and fish. Aggressive mimicry is known in some vertebrate predators and parasites, while certain forms of sexual mimicry are distinctly more complex than in invertebrates. == Classification == === Defensive === ==== Batesian ==== Batesian mimicry is a form of defense that allows a harmless species to mimic the appearance of a toxic, noxious, or harmful species to protect itself from predators. When a harmless species mimics the appearance of a harmful one, a predator that recognizes the warning color pattern is less likely to attack it. Batesian mimicry occurs in multiple vertebrates, but is less prevalent in mammals due to a relative rarity of well-marked harmful models. However, this form of mimicry is prevalent in snakes and frogs, where chemical defense has coevolved with distinct coloration. Still, mammals have evolved Batesian mimicry systems where particularly powerful or harmful models exist. For example, Batesian mimicry may occur in cheetah cubs. They replicate the appearance of a sympatric species, the honey badger (Mellivora capensis). The honey badger has a white or silvery back with a black or brownish underbelly and grows to a body length of about three feet and a height of about ten inches. As cubs, cheetahs have the same reverse-countershading color pattern and are roughly the same size. Due to this conspicuous coloration, potential predators like lions and birds of prey are less likely to hunt cheetah cubs, as from a distance they appear to be honey badgers. Honey badgers make an effective model because their aggressive nature and the glands on their tails that produce a noxious fluid enable them to deter predators up to ten times their size. Batesian mimicry also occurs in the scarlet kingsnake. This species resembles the venomous coral snake, sharing a pattern of red, black, and yellow bands. Although the order of the color rings differs between the two snakes, from a distance a predator can easily mistake the scarlet kingsnake for its venomous model. ==== Müllerian ==== Müllerian mimicry is another form of defensive mimicry, except that the system involves two or more species that are all toxic, noxious, or harmful. These species develop similar appearances to collectively protect against predators. This adaptation is said to have evolved due to the additive protection of many species that look the same and reliably have harmful defenses. That is to say, this mimicry system evolves convergently. If a predator is aware of the potential threat of one species, the predator will also avoid any species with a similar appearance, creating the Müllerian mimicry effect. Again, the relative lack of noxious models limits most examples to systems that involve reptiles or amphibians. Müllerian mimicry is found in many pit vipers.
All pit vipers are capable of delivering a life-threatening venomous bite. In Asia, several species found in different regions have evolved separately to a very similar appearance: each occupies a different area, but all share the same green coloration with a reddish tail tip. These shared colorations are warning signals for predators. A predator that has learned these warning signals will avoid all species with this color pattern. Species that benefit from this system include Trimeresurus macrops, T. purpureomaculatus, T. septentrionalis, T. flavomaculatus, and T. hageni. Müllerian mimicry is also found in a ring of poisonous frog species in Peru. The mimic poison frog (Dendrobates imitator) mimics three similarly poisonous frogs of the same genus that live in different areas: D. variabilis, D. fantasticus, and D. ventrimaculatus. D. imitator can replicate the different appearances of all three species, with color patterns ranging from black spots on a yellow back with bluish-green limbs, to larger black spots with a yellow outline, to black linear spots with a yellow and bluish-green outline. The slow loris is one of the few known venomous mammals, and appears to use Müllerian mimicry for protection. It is hypothesized that this venom may have allowed it to develop a system of Müllerian mimicry with the Indian cobra. Slow lorises resemble the cobras, with "facial markings undeniably akin to the eyespots and accompanying stripes of the spectacled cobra". Dark contrasting dorsal stripes are also apparent in both species, helping to confuse predators from above. In aggressive encounters, slow lorises make a grunting noise that mimics the hiss of a cobra. This example of Müllerian mimicry is likely unique among vertebrates due to its multiple modalities: biochemical, behavioral, visual, and auditory. Since the cobra is undoubtedly more dangerous to predators (and prey, as the loris eats predominantly fruits, gums, and insects), it is unclear whether the benefit from this system is mutual; still, both species are dangerous in their own right, and the system can therefore most accurately be classified as Müllerian. === Aggressive === Aggressive mimicry is a form of mimicry, opposite in principle to defensive mimicry, that occurs in certain predators, parasites, or parasitoids. These organisms benefit by sharing some of the characteristics of a harmless species in order to deceive their prey or host. Most examples of aggressive mimicry involve the predator employing a signal to lure its prey towards it under the promise of food, sex, or other rewards, much like the proverbial wolf in sheep's clothing. ==== In predators ==== Some predators pretend to be prey or a third-party organism that the prey beneficially interacts with. In either situation, the mimicry increases the predator's chances of catching its prey. One form of predatory mimicry, lingual luring, involves wriggling the tongue to attract prey, duping them into believing the tongue is a small worm, an unusual case of a vertebrate mimicking an invertebrate. In the puff adder Bitis arietans, lingual luring occurs only when attracting amphibian prey, suggesting that puff adders distinguish between prey types when selecting a display of aggressive mimicry. Another form of aggressive mimicry is caudal luring, in which the tail is waved to mimic prey. By mimicking invertebrate larvae, the predator attracts small vertebrate prey such as frogs, lizards, and birds.
Male puff adders have longer, more conspicuous tails. Sidewinder rattlesnakes, puff adders, lanceheads, and multiple other ambush-predatory snakes use caudal luring to attract prey. Complicated forms of aggressive mimicry have also been observed in fish, creating a system that resembles Batesian mimicry. The false cleanerfish, Aspidontus taeniatus, is a fin-eating blenny that has evolved to resemble a local species of cleaner wrasse, Labroides dimidiatus, which engages in mutualistic cleaning with larger fish. By closely mimicking the cleaner fish's coloration and distinctive dancing display, false cleanerfish are able to remain in close quarters with large predatory reef fish, and gain access to victims during foraging. Some aggressive mimics switch rapidly between aggressive mimicry and defensive behavior depending on whether they are in the presence of prey or of a potential predator. For example, the sidewinder rattlesnake ceases aggressive behavior upon the arrival of a predatory toad and begins species-typical defensive displays. ==== Host-parasite ==== Host-parasite mimicry is a form of aggressive mimicry in which a parasite mimics its own host. Brood parasitism is a common form of parasitic aggressive mimicry that occurs in vertebrates, with cuckoos being a notable example. Brood parasite mothers leave their offspring to be raised by another animal, of either the same or a different species, without the host's knowledge. This allows the progeny to be nurtured without energy expenditure or parental care by the true parent. Cuckoos are brood parasites that lay eggs matching the color and pattern of their host's own eggs. Hatchlings of different cuckoo species are known to mimic both the calls (such as begging calls) and the appearance of the host's offspring. Unlike most vertebrates that perform aggressive mimicry, certain brood parasitic birds display signals of two distinct modalities at once. For example, Horsfield's bronze cuckoo nestlings have been found to employ acoustic and visual modalities simultaneously to increase the efficiency and success of their mimicry. However, host-parasite systems are not always as precise. Great spotted cuckoos are brood parasites that lay eggs that can successfully dupe other birds such as the magpie, pied starling, and black crow, despite having different egg color, egg size, and offspring features. It is hypothesized that these differences in characteristics evolved after the mimicry system arose, owing to genetic isolation, as the eggs laid by European and African great spotted cuckoos differ in appearance. Evidence also exists for other forms of parasitic mimicry in vertebrates. One such form is interspecific social dominance mimicry, a type of social parasitism in which a subordinate species (usually determined by size) evolves over time to mimic its dominant ecological competitor, thereby competing more effectively with its socially dominant opponent. One example is found in the tyrant flycatcher family, in which birds of similar appearance occur across six different genera. Smaller-bodied species from four genera have been found to mimic the appearance of the larger species of the other two genera, suggesting that an avian mimicry complex has contributed to convergent evolution, providing a competitive advantage in the same ecological niche.
=== Automimicry === Automimicry is a type of mimicry that occurs within a single species, in which an individual mimics either a different member of its own species or a different part of its own body. In some cases, it is considered a form of Batesian mimicry, and it is exhibited by a wide variety of vertebrates. Many of the basic strategies used by invertebrate automimics, such as eyespots, are repeated in vertebrates. ==== Sexual ==== In sexual mimicry, an organism mimics the behaviors or physical traits of the opposite sex within its species. Spotted hyenas are one of the few vertebrate examples. In spotted hyenas, females have a pseudo-penis, which is highly erectile clitoral tissue, as well as a false scrotum. Females have evolved to mimic or exceed the testosterone levels of males. This is advantageous because it lends females heightened aggression and dominance over the males in a highly competitive environment. Alternatively, it may have evolved for the advantage it bestows upon sexually indistinguishable cubs, which face a high level of female-targeted infanticide. Another example occurs in flat lizards, where some males imitate female coloring to sneak around more dominant males and achieve copulation with females. ==== Anatomical ==== Some vertebrate species mimic their own body parts, through the use of color patterns or actual anatomy. Two widespread examples of this are eyespots and false heads, both of which can misdirect, confuse, or intimidate potential predators. Eyespots are a form of automimicry in which an organism displays false eyes on another part of its body; these are thought to deter predators, which believe the prey animal has spotted them or is behaving aggressively, even when it is actually facing the other direction and unaware. If an attack does occur, eyespots may also redirect damage away from the true head. Eyespots can be seen across vertebrate taxa, from the four-eyed butterflyfish to pygmy owls. False-head mimicry occurs when an organism displays a different body part that has evolved to look like a head, achieving the same scare tactic as eyespots while also protecting the vulnerable and important real head. For example, rubber boas coil up and hide their heads, instead displaying their tails, which morphologically resemble their heads, as a defensive behavior. == Evolution == Mimicry, in vertebrates or otherwise, is widely hypothesized to follow patterns of directional selection. However, it is argued that, while positive selection might stabilize mimetic forms, other evolutionary factors such as random mutation can create mimetic forms simply by coincidence. Vertebrate evolution often operates under distinct selective pressures, resulting in the quantitative and qualitative differences observed between mimicry in vertebrates and in other animals. The primary difference between mimicry in vertebrates and in insects is a decreased diversity and frequency. The roughly 50,000 extant vertebrate species are dwarfed by the more than 1 million known invertebrate species. Vertebrates seem to have multiple barriers to precise mimicry that invertebrates do not. Due to the drastic difference in average body size between the two groups, vertebrates tend to mimic other living things, while invertebrates are much better able to mimic inanimate objects. Large size makes any imprecision much more noticeable to the naked eye, slowing or preventing the evolution of mimicry.
However, when potential prey is highly noxious, as in snakes, predators that avoid even poor mimics gain a strong selective advantage; insects, by contrast, are rarely able to deliver enough toxin to threaten vertebrate predators, and so would need precise mimicry to avoid detection. The apparent scarcity of mimetic resemblances among vertebrates may be largely an artifact of human perception: humans perceive visual mimicry systems most readily, and so find them the most abundant. However, olfactory, biochemical, and even electroreceptive forms of mimicry are likely to be much more common than currently accounted for. == References ==
Wikipedia/Mimicry_in_vertebrates
A computed tomography scan (CT scan), formerly called computed axial tomography scan (CAT scan), is a medical imaging technique used to obtain detailed internal images of the body. The personnel who perform CT scans are called radiographers or radiology technologists. CT scanners use a rotating X-ray tube and a row of detectors placed in a gantry to measure X-ray attenuations by different tissues inside the body. The multiple X-ray measurements taken from different angles are then processed on a computer using tomographic reconstruction algorithms to produce tomographic (cross-sectional) images (virtual "slices") of a body. CT scans can be used in patients with metallic implants or pacemakers, for whom magnetic resonance imaging (MRI) is contraindicated. Since its development in the 1970s, CT scanning has proven to be a versatile imaging technique. While CT is most prominently used in medical diagnosis, it can also be used to form images of non-living objects. The 1979 Nobel Prize in Physiology or Medicine was awarded jointly to South African-American physicist Allan MacLeod Cormack and British electrical engineer Godfrey Hounsfield "for the development of computer-assisted tomography". == Types == On the basis of image acquisition and procedure, various types of scanners are available on the market. === Sequential CT === Sequential CT, also known as step-and-shoot CT, is a scanning method in which the CT table moves stepwise. The table moves to a particular location and then stops; the X-ray tube then rotates and a slice is acquired. The table increments again, and another slice is taken. Because table movement stops while each slice is taken, the total scanning time is increased. === Spiral CT === Spinning tube, commonly called spiral CT or helical CT, is an imaging technique in which an entire X-ray tube is spun around the central axis of the area being scanned. These are the dominant type of scanners on the market because they have been manufactured longer and offer a lower cost of production and purchase. The main limitation of this type of CT is the bulk and inertia of the equipment (the X-ray tube assembly and the detector array on the opposite side of the circle), which limit the speed at which the equipment can spin. Some designs use two X-ray sources and detector arrays offset by an angle, as a technique to improve temporal resolution. === Electron beam tomography === Electron beam tomography (EBT) is a specific form of CT in which an X-ray tube is built large enough that only the path of the electrons, travelling between the cathode and anode of the tube, is swept using deflection coils. This type had a major advantage in that sweep speeds can be much faster, allowing for less blurry imaging of moving structures, such as the heart and arteries. Fewer scanners of this design have been produced when compared with spinning tube types, mainly due to the higher cost associated with building a much larger X-ray tube and detector array, and due to limited anatomical coverage. === Dual energy CT === Dual energy CT, also known as spectral CT, is an advancement of computed tomography in which two distinct X-ray energies are used to create two sets of data. A dual energy CT may employ a dual source, a single source with a dual detector layer, or a single source with energy switching to acquire two different sets of data. Dual source CT is an advanced scanner with two X-ray tube-detector systems, unlike conventional single-tube systems.
These two tube-detector systems are mounted on a single gantry at 90° in the same plane. Dual source CT scanners allow fast scanning with higher temporal resolution by acquiring a full CT slice in only half a rotation. Fast imaging reduces motion blurring at high heart rates and potentially allows for a shorter breath-hold time. This is particularly useful for ill patients who have difficulty holding their breath or who are unable to take heart-rate-lowering medication. Single source with energy switching is another mode of dual energy CT, in which a single tube is operated at two different energies by switching rapidly between them. === CT perfusion imaging === CT perfusion imaging is a specific form of CT used to assess flow through blood vessels while injecting a contrast agent. Blood flow, blood transit time, and organ blood volume can all be calculated with reasonable sensitivity and specificity. This type of CT may be used on the heart, although sensitivity and specificity for detecting abnormalities are still lower than for other forms of CT. It may also be used on the brain, where CT perfusion imaging can often detect poor brain perfusion well before it is detected using a conventional spiral CT scan, making it better suited to stroke diagnosis than other CT types. === PET CT === Positron emission tomography–computed tomography is a hybrid CT modality which combines, in a single gantry, a positron emission tomography (PET) scanner and an X-ray computed tomography (CT) scanner, to acquire sequential images from both devices in the same session, which are combined into a single superposed (co-registered) image. Thus, functional imaging obtained by PET, which depicts the spatial distribution of metabolic or biochemical activity in the body, can be more precisely aligned or correlated with anatomic imaging obtained by CT scanning. PET-CT gives both anatomical and functional details of an organ under examination and is helpful in detecting different types of cancer. == Medical use == Since its introduction in the 1970s, CT has become an important tool in medical imaging to supplement conventional X-ray imaging and medical ultrasonography. It has more recently been used for preventive medicine or screening for disease, for example, CT colonography for people with a high risk of colon cancer, or full-motion heart scans for people with a high risk of heart disease. Several institutions offer full-body scans for the general population, although this practice goes against the advice and official position of many professional organizations in the field, primarily due to the radiation dose applied. The use of CT scans has increased dramatically over the last two decades in many countries. An estimated 72 million scans were performed in the United States in 2007 and more than 80 million in 2015. === Head === CT scanning of the head is typically used to detect infarction (stroke), tumors, calcifications, haemorrhage, and bone trauma. Of the above, hypodense (dark) structures can indicate edema and infarction; hyperdense (bright) structures indicate calcifications and haemorrhage; and bone trauma can be seen as disjunction in bone windows. Tumors can be detected by the swelling and anatomical distortion they cause, or by surrounding edema. CT scanning of the head is also used in CT-guided stereotactic surgery and radiosurgery for treatment of intracranial tumors, arteriovenous malformations, and other surgically treatable conditions using a device known as the N-localizer.
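The blood-flow, transit-time, and blood-volume quantities in the CT perfusion section above are linked by the central volume principle (flow = volume / mean transit time), a standard relationship in perfusion analysis that this text does not spell out. The sketch below is background under that stated assumption; the variable names and numbers are hypothetical, not measured values.

```python
# A minimal sketch of the central volume principle used in perfusion analysis:
# blood flow = blood volume / mean transit time. All numbers are hypothetical
# and the relationship is offered as background, not as the article's method.

cbv_ml_per_100g = 4.0   # hypothetical cerebral blood volume, ml per 100 g tissue
mtt_seconds = 4.8       # hypothetical mean transit time, seconds

cbf_ml_per_100g_min = cbv_ml_per_100g / (mtt_seconds / 60.0)
print(f"cerebral blood flow ~ {cbf_ml_per_100g_min:.0f} ml/100 g/min")  # ~50
```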
=== Neck === Contrast CT is generally the initial study of choice for neck masses in adults. CT of the thyroid plays an important role in the evaluation of thyroid cancer. CT scanning often finds thyroid abnormalities incidentally, and so it is often the first investigation modality for thyroid abnormalities. === Lungs === A CT scan can be used for detecting both acute and chronic changes in the lung parenchyma, the tissue of the lungs. It is particularly relevant here because normal two-dimensional X-rays do not show such defects. A variety of techniques are used, depending on the suspected abnormality. For evaluation of chronic interstitial processes such as emphysema and fibrosis, thin sections with high-spatial-frequency reconstructions are used; often scans are performed both on inspiration and expiration. This special technique, called high-resolution CT, produces a sampling of the lung rather than continuous images. Bronchial wall thickening can be seen on lung CTs and generally (but not always) implies inflammation of the bronchi. An incidentally found nodule in the absence of symptoms (sometimes referred to as an incidentaloma) may raise concerns that it might represent a tumor, either benign or malignant. Perhaps persuaded by fear, patients and doctors sometimes agree to an intensive schedule of CT scans, sometimes as often as every three months, beyond the recommended guidelines, in an attempt to keep the nodules under surveillance. However, established guidelines advise that patients without a prior history of cancer and whose solid nodules have not grown over a two-year period are unlikely to have any malignant cancer. For this reason, and because no research provides supporting evidence that intensive surveillance gives better outcomes, and because of risks associated with having CT scans, patients should not receive CT screening in excess of that recommended by established guidelines. === Angiography === Computed tomography angiography (CTA) is a type of contrast CT used to visualize the arteries and veins throughout the body, ranging from arteries serving the brain to those bringing blood to the lungs, kidneys, arms, and legs. An example of this type of exam is the CT pulmonary angiogram (CTPA), used to diagnose pulmonary embolism (PE). It employs computed tomography and an iodine-based contrast agent to obtain an image of the pulmonary arteries. CT scans can reduce the risk of angiography by providing clinicians with more information about the positioning and number of clots prior to the procedure. === Cardiac === A CT scan of the heart is performed to gain knowledge about cardiac or coronary anatomy. Traditionally, cardiac CT scans are used to detect, diagnose, or follow up coronary artery disease. More recently, CT has played a key role in the fast-evolving field of transcatheter structural heart interventions, more specifically in the transcatheter repair and replacement of heart valves. The main forms of cardiac CT scanning are: Coronary CT angiography (CCTA): the use of CT to assess the coronary arteries of the heart. The subject receives an intravenous injection of radiocontrast, and then the heart is scanned using a high-speed CT scanner, allowing radiologists to assess the extent of occlusion in the coronary arteries, usually to diagnose coronary artery disease. Coronary CT calcium scan: also used for the assessment of severity of coronary artery disease.
Specifically, it looks for calcium deposits in the coronary arteries that can narrow the arteries and increase the risk of a heart attack. A typical coronary CT calcium scan is done without the use of radiocontrast, but it can also be derived from contrast-enhanced images. To better visualize the anatomy, post-processing of the images is common. Most common are multiplanar reconstructions (MPR) and volume rendering. For more complex anatomies and procedures, such as heart valve interventions, a true 3D reconstruction or a 3D print is created from these CT images to gain a deeper understanding. === Abdomen and pelvis === CT is an accurate technique for diagnosis of abdominal diseases such as Crohn's disease and gastrointestinal bleeding, and for the diagnosis and staging of cancer, as well as for follow-up after cancer treatment to assess response. It is commonly used to investigate acute abdominal pain. Non-contrast-enhanced CT scans are the gold standard for diagnosing kidney stone disease. They allow clinicians to estimate the size, volume, and density of stones, helping to guide further treatment; size is especially important in predicting the time to spontaneous passage of a stone. === Axial skeleton and extremities === For the axial skeleton and extremities, CT is often used to image complex fractures, especially ones around joints, because of its ability to reconstruct the area of interest in multiple planes. Fractures, ligamentous injuries, and dislocations can easily be recognized with a 0.2 mm resolution. With modern dual-energy CT scanners, new areas of use have been established, such as aiding in the diagnosis of gout. === Biomechanical use === CT is used in biomechanics to quickly reveal the geometry, anatomy, density, and elastic moduli of biological tissues. == Other uses == === Industrial use === Industrial CT scanning (industrial computed tomography) is a process which uses X-ray equipment to produce 3D representations of components both externally and internally. Industrial CT scanning has been used in many areas of industry for internal inspection of components. Some of the key uses for CT scanning have been flaw detection, failure analysis, metrology, assembly analysis, image-based finite element methods, and reverse engineering applications. CT scanning is also employed in the imaging and conservation of museum artifacts. === Aviation security === CT scanning has also found an application in transport security (predominantly airport security), where it is currently used in a materials-analysis context for explosives detection (the CTX explosive-detection device) and is also under consideration for automated baggage/parcel security scanning using computer-vision-based object recognition algorithms that target the detection of specific threat items based on 3D appearance (e.g. guns, knives, liquid containers). Its use in airport security, pioneered at Shannon Airport in March 2022, ended the ban on liquids over 100 ml there; Heathrow Airport planned a full roll-out on 1 December 2022, and the TSA spent $781.2 million on an order for over 1,000 scanners expected to go live in the summer. === Geological use === X-ray CT is used in geological studies to quickly reveal materials inside a drill core. Dense minerals such as pyrite and barite appear brighter, and less dense components such as clay appear dull, in CT images. === Paleontological use === Traditional methods of studying fossils are often destructive, such as the use of thin sections and physical preparation.
X-ray CT is used in paleontology to non-destructively visualize fossils in 3D. This has many advantages: for example, fragile structures that might never otherwise be studied can be examined, and models of fossils can be moved freely in virtual 3D space and inspected without damaging the fossil. === Cultural heritage use === X-ray CT and micro-CT can also be used for the conservation and preservation of objects of cultural heritage. For many fragile objects, direct research and observation can be damaging and can degrade the object over time. Using CT scans, conservators and researchers are able to determine the material composition of the objects they are exploring, such as the position of ink along the layers of a scroll, without any additional harm. These scans have been optimal for research focused on the workings of the Antikythera mechanism or the text hidden inside the charred outer layers of the En-Gedi Scroll. However, they are not optimal for every object subject to these kinds of research questions, as there are certain artifacts, like the Herculaneum papyri, in which the material composition has very little variation along the inside of the object. After scanning these objects, computational methods can be employed to examine their insides, as was the case with the virtual unwrapping of the En-Gedi scroll and the Herculaneum papyri. Micro-CT has also proved useful for analyzing more recent artifacts such as still-sealed historic correspondence that employed the technique of letterlocking (complex folding and cuts) that provided a "tamper-evident locking mechanism". Further archaeological use cases include imaging the contents of sarcophagi and ceramics. Recently, CWI in Amsterdam has collaborated with the Rijksmuseum to investigate the internal details of art objects, in a framework called IntACT. === Microorganism research === Various types of fungus can degrade wood to different degrees; using three-dimensional X-ray CT with sub-micron resolution, one Belgian research group showed that fungi can penetrate micropores of 0.6 μm under certain conditions. === Timber sawmill === Sawmills use industrial CT scanners to detect internal defects, for instance knots, to improve the total value of timber production. Many sawmills plan to incorporate this robust detection tool to improve productivity in the long run; however, the initial investment cost is high. == Interpretation of results == === Presentation === The result of a CT scan is a volume of voxels, which may be presented to a human observer by various methods that broadly fit into the following categories: slices (of varying thickness), projections, and volume rendering. Thin slice is generally regarded as planes representing a thickness of less than 3 mm; thick slice as planes representing a thickness between 3 mm and 5 mm. Projections include maximum intensity projection and average intensity projection. Technically, all volume renderings (VR) become projections when viewed on a 2-dimensional display, making the distinction between projections and volume renderings a bit vague. Typical volume rendering models combine, for example, coloring and shading to create realistic and readable representations. Two-dimensional CT images are conventionally rendered so that the view is as though looking up at it from the patient's feet.
Hence, the left side of the image is to the patient's right and vice versa, while anterior in the image is also the patient's anterior and vice versa. This left-right interchange corresponds to the view that physicians generally have in reality when positioned in front of patients. ==== Grayscale ==== Pixels in an image obtained by CT scanning are displayed in terms of relative radiodensity. The pixel itself is displayed according to the mean attenuation of the tissue(s) that it corresponds to, on a scale from +3,071 (most attenuating) to −1,024 (least attenuating) on the Hounsfield scale. A pixel is a two-dimensional unit based on the matrix size and the field of view. When the CT slice thickness is also factored in, the unit is known as a voxel, which is a three-dimensional unit. Water has an attenuation of 0 Hounsfield units (HU), while air is −1,000 HU, cancellous bone is typically +400 HU, and cranial bone can reach 2,000 HU. The attenuation of metallic implants depends on the atomic number of the element used: titanium typically measures about +1,000 HU, while steel can completely block the X-rays and is therefore responsible for the well-known line artifacts in computed tomograms. Artifacts are caused by abrupt transitions between low- and high-density materials, which result in data values that exceed the dynamic range of the processing electronics. ==== Windowing ==== CT data sets have a very high dynamic range which must be reduced for display or printing. This is typically done via a process of "windowing", which maps a range (the "window") of pixel values to a grayscale ramp. For example, CT images of the brain are commonly viewed with a window extending from 0 HU to 80 HU. Pixel values of 0 and lower are displayed as black; values of 80 and higher are displayed as white; values within the window are displayed as a gray intensity proportional to position within the window. The window used for display must be matched to the X-ray density of the object of interest, in order to optimize the visible detail. Window width and window level parameters are used to control the windowing of a scan. ==== Multiplanar reconstruction and projections ==== Multiplanar reconstruction (MPR) is the process of converting data from one anatomical plane (usually transverse) to other planes. It can be used for thin slices as well as projections. Multiplanar reconstruction is possible because present CT scanners provide almost isotropic resolution. MPR is used in almost every scan; the spine is frequently examined with it. An image of the spine in the axial plane can only show one vertebral bone at a time and cannot show its relation to the other vertebral bones. By reformatting the data in other planes, visualization of the relative positions can be achieved in the sagittal and coronal planes. New software allows the reconstruction of data in non-orthogonal (oblique) planes, which helps in the visualization of organs that do not lie in orthogonal planes; it is better suited to visualizing the anatomical structure of the bronchi, as these do not lie orthogonal to the direction of the scan. Curved-plane reconstruction (or curved planar reformation, CPR) is performed mainly for the evaluation of vessels. This type of reconstruction helps to straighten the bends in a vessel, thereby helping to visualize a whole vessel in a single image or in multiple images. After a vessel has been "straightened", measurements such as cross-sectional area and length can be made.
This is helpful in the preoperative assessment of a surgical procedure. For 2D projections used in radiation therapy for quality assurance and planning of external beam radiotherapy, including digitally reconstructed radiographs, see Beam's eye view. ==== Volume rendering ==== A threshold value of radiodensity is set by the operator (e.g., a level that corresponds to bone). With the help of edge-detection image processing algorithms, a 3D model can be constructed from the initial data and displayed on screen. Various thresholds can be used to obtain multiple models; each anatomical component, such as muscle, bone, and cartilage, can be differentiated by the different colours assigned to it. However, this mode of operation cannot show interior structures. Surface rendering is a limited technique because it displays only the surfaces that meet a particular threshold density and that face the viewer. In volume rendering, by contrast, transparency, colours, and shading are used, which makes it possible to present a volume in a single image. For example, pelvic bones could be displayed as semi-transparent, so that even at an oblique viewing angle one part of the image does not hide another. === Image quality === ==== Dose versus image quality ==== An important issue within radiology today is how to reduce the radiation dose during CT examinations without compromising the image quality. In general, higher radiation doses result in higher-resolution images, while lower doses lead to increased image noise and unsharp images. However, increased dosage raises the adverse side effects, including the risk of radiation-induced cancer: a four-phase abdominal CT gives the same radiation dose as 300 chest X-rays. Several methods exist that can reduce the exposure to ionizing radiation during a CT scan. New software technology can significantly reduce the required radiation dose. New iterative tomographic reconstruction algorithms (e.g., iterative Sparse Asymptotic Minimum Variance) could offer super-resolution without requiring a higher radiation dose. The examination can also be individualized, adjusting the radiation dose to the body type and the organ examined; different body types and organs require different amounts of radiation, and higher resolution is not always needed, as in the detection of small pulmonary masses. ==== Artifacts ==== Although images produced by CT are generally faithful representations of the scanned volume, the technique is susceptible to a number of artifacts, such as the following: Streak artifact Streaks are often seen around materials that block most X-rays, such as metal or bone. Numerous factors contribute to these streaks: undersampling, photon starvation, motion, beam hardening, and Compton scatter. This type of artifact commonly occurs in the posterior fossa of the brain, or if there are metal implants. The streaks can be reduced using newer reconstruction techniques. Approaches such as metal artifact reduction (MAR) can also reduce this artifact. MAR techniques include spectral imaging, where CT images are taken with photons of different energy levels, and then synthesized into monochromatic images with special software such as GSI (Gemstone Spectral Imaging). Partial volume effect This appears as "blurring" of edges. It is due to the scanner being unable to differentiate between a small amount of high-density material (e.g., bone) and a larger amount of lower density (e.g., cartilage).
The reconstruction assumes that the X-ray attenuation within each voxel is homogeneous; this may not be the case at sharp edges. This is most commonly seen in the z-direction (craniocaudal direction), due to the conventional use of highly anisotropic voxels, which have a much lower out-of-plane resolution than in-plane resolution. It can be partially overcome by scanning using thinner slices, or by an isotropic acquisition on a modern scanner. Ring artifact Probably the most common mechanical artifact, the image of one or many "rings" appears within an image. Rings are usually caused by variations in the response of individual elements in a two-dimensional X-ray detector due to defect or miscalibration. Ring artifacts can largely be reduced by intensity normalization, also referred to as flat-field correction. Remaining rings can be suppressed by a transformation to polar space, where they become linear stripes. A comparative evaluation of ring artefact reduction on X-ray tomography images showed that the method of Sijbers and Postnov can effectively suppress ring artefacts. Noise This appears as grain on the image and is caused by a low signal-to-noise ratio. It occurs more commonly when a thin slice thickness is used. It can also occur when the power supplied to the X-ray tube is insufficient to penetrate the anatomy. Windmill Streaking appearances can occur when the detectors intersect the reconstruction plane. This can be reduced with filters or a reduction in pitch. Beam hardening This can give a "cupped appearance" when grayscale is visualized as height. It occurs because conventional sources, like X-ray tubes, emit a polychromatic spectrum. Photons of higher energy are typically attenuated less. Because of this, the mean energy of the spectrum increases as the beam passes through the object, often described as the beam getting "harder". If not corrected, this leads to an increasing underestimation of material thickness. Many algorithms exist to correct for this artifact; they can be divided into mono- and multi-material methods. == Advantages == CT scanning has several advantages over traditional two-dimensional medical radiography. First, CT eliminates the superimposition of images of structures outside the area of interest. Second, CT scans have greater image resolution, enabling examination of finer details. CT can distinguish between tissues that differ in radiographic density by 1% or less. Third, CT scanning enables multiplanar reformatted imaging: scan data can be visualized in the transverse (or axial), coronal, or sagittal plane, depending on the diagnostic task. The improved resolution of CT has permitted the development of new investigations. For example, CT angiography avoids the invasive insertion of a catheter. CT scanning can perform a virtual colonoscopy with greater accuracy and less discomfort for the patient than a traditional colonoscopy. Virtual colonography is far more accurate than a barium enema for detection of tumors and uses a lower radiation dose. CT is a moderate-to-high radiation diagnostic technique. The radiation dose for a particular examination depends on multiple factors: volume scanned, patient build, number and type of scan protocols, and desired resolution and image quality. Two helical CT scanning parameters, tube current and pitch, can be adjusted easily and have a profound effect on radiation.
CT scanning is more accurate than two-dimensional radiographs in evaluating anterior interbody fusion, although it may still over-read the extent of fusion. == Adverse effects == === Cancer === The radiation used in CT scans can damage body cells, including DNA molecules, which can lead to radiation-induced cancer. The radiation doses received from CT scans are variable. Compared to the lowest-dose X-ray techniques, CT scans can involve 100 to 1,000 times the dose of conventional X-rays. However, a lumbar spine X-ray has a similar dose to a head CT. Articles in the media often exaggerate the relative dose of CT by comparing the lowest-dose X-ray techniques (chest X-ray) with the highest-dose CT techniques. In general, a routine abdominal CT has a radiation dose similar to three years of average background radiation. Large-scale population-based studies have consistently demonstrated that low-dose radiation from CT scans affects cancer incidence across a variety of cancers. For example, in a large population-based Australian cohort it was found that up to 3.7% of brain cancers were caused by CT scan radiation. Some experts project that in the future, between three and five percent of all cancers would result from medical imaging. An Australian study of 10.9 million people reported that the increased incidence of cancer after CT scan exposure in this cohort was mostly due to irradiation. In this group, one in every 1,800 CT scans was followed by an excess cancer. If the lifetime risk of developing cancer is 40%, then the absolute risk rises to 40.05% after a CT. The risks of CT scan radiation are especially important in patients undergoing recurrent CT scans within a short time span of one to five years. Some experts note that CT scans are known to be "overused," and "there is distressingly little evidence of better health outcomes associated with the current high rate of scans." On the other hand, a recent paper analyzing the data of patients who received high cumulative doses showed a high degree of appropriate use. This creates an important issue of cancer risk to these patients. Moreover, a highly significant finding that was previously unreported is that some patients received a dose of more than 100 mSv from CT scans in a single day, which counters criticisms some investigators have raised regarding the effects of protracted versus acute exposure. There are contrarian views and the debate is ongoing. Some studies have shown that publications indicating an increased risk of cancer from typical doses of body CT scans are plagued with serious methodological limitations and several highly improbable results, concluding that no evidence indicates such low doses cause any long-term harm. One study estimated that as many as 0.4% of cancers in the United States resulted from CT scans, and that this may have increased to as much as 1.5 to 2% based on the rate of CT use in 2007. Others dispute this estimate, as there is no consensus that the low levels of radiation used in CT scans cause damage. Lower radiation doses are used in many cases, such as in the investigation of renal colic. A person's age plays a significant role in the subsequent risk of cancer. The estimated lifetime cancer mortality risk from an abdominal CT in a one-year-old is 0.1%, or 1 in 1,000 scans. The risk for someone who is 40 years old is half that of someone who is 20 years old, with substantially less risk in the elderly.
The International Commission on Radiological Protection estimates that exposing a fetus to 10 mGy (a unit of radiation exposure) increases the rate of cancer before 20 years of age from 0.03% to 0.04% (for reference, a CT pulmonary angiogram exposes a fetus to 4 mGy). A 2012 review did not find an association between medical radiation and cancer risk in children, noting, however, limitations in the evidence on which the review was based. CT scans can be performed with different settings for lower exposure in children, and as of 2007 most manufacturers of CT scanners had this function built in. Furthermore, certain conditions can require children to be exposed to multiple CT scans. Current recommendations are to inform patients of the risks of CT scanning. However, employees of imaging centers tend not to communicate such risks unless patients ask. === Contrast reactions === In the United States half of CT scans are contrast CTs using intravenously injected radiocontrast agents. The most common reactions from these agents are mild, including nausea, vomiting, and an itching rash. Severe life-threatening reactions may rarely occur. Overall, reactions occur in 1 to 3% of people with nonionic contrast and 4 to 12% of people with ionic contrast. Skin rashes may appear within a week in 3% of people. The older radiocontrast agents caused anaphylaxis in 1% of cases, while the newer, low-osmolar agents cause reactions in 0.01–0.04% of cases. Death occurs in about 2 to 30 people per 1,000,000 administrations, with newer agents being safer. There is a higher risk of mortality in those who are female, elderly, or in poor health, usually secondary to either anaphylaxis or acute kidney injury. The contrast agent may induce contrast-induced nephropathy. This occurs in 2 to 7% of people who receive these agents, with greater risk in those who have preexisting kidney failure, preexisting diabetes, or reduced intravascular volume. People with mild kidney impairment are usually advised to ensure full hydration for several hours before and after the injection. For moderate kidney failure, the use of iodinated contrast should be avoided; this may mean using an alternative technique instead of CT. Those with severe kidney failure requiring dialysis require less strict precautions, as their kidneys have so little function remaining that any further damage would not be noticeable and the dialysis will remove the contrast agent; it is normally recommended, however, to arrange dialysis as soon as possible following contrast administration to minimize any adverse effects of the contrast. In addition to the use of intravenous contrast, orally administered contrast agents are frequently used when examining the abdomen. These are frequently the same as the intravenous contrast agents, merely diluted to approximately 10% of the concentration. However, oral alternatives to iodinated contrast exist, such as very dilute (0.5–1% w/v) barium sulfate suspensions. Dilute barium sulfate has the advantage that it does not cause allergic-type reactions or kidney failure, but it cannot be used in patients with suspected bowel perforation or suspected bowel injury, as leakage of barium sulfate from damaged bowel can cause fatal peritonitis. Side effects from contrast agents, administered intravenously in some CT scans, might impair kidney performance in patients with kidney disease, although this risk is now believed to be lower than previously thought.
=== Scan dose === The table reports average radiation exposures; however, there can be wide variation in radiation doses between similar scan types, where the highest dose can be as much as 22 times higher than the lowest. A typical plain film X-ray involves a radiation dose of 0.01 to 0.15 mGy, while a typical CT can involve 10–20 mGy for specific organs, and can go up to 80 mGy for certain specialized CT scans. For purposes of comparison, the world average dose rate from naturally occurring sources of background radiation is 2.4 mSv per year, equal for practical purposes in this application to 2.4 mGy per year. While there is some variation, most people (99%) receive less than 7 mSv per year as background radiation. Medical imaging as of 2007 accounted for half of the radiation exposure of those in the United States, with CT scans making up two-thirds of this amount. In the United Kingdom it accounts for 15% of radiation exposure. The average radiation dose from medical sources is ≈0.6 mSv per person globally as of 2007. Those in the nuclear industry in the United States are limited to doses of 50 mSv a year and 100 mSv every 5 years. Lead is the main material used by radiography personnel for shielding against scattered X-rays. ==== Radiation dose units ==== The radiation dose reported in the gray or mGy unit is proportional to the amount of energy that the irradiated body part is expected to absorb, and the physical effect (such as DNA double strand breaks) on the cells' chemical bonds by X-ray radiation is proportional to that energy. The sievert unit is used in the report of the effective dose. In the context of CT scans, the sievert unit does not correspond to the actual radiation dose that the scanned body part absorbs; instead, it refers to a hypothetical scenario in which the whole body absorbs a radiation dose whose magnitude is estimated to carry the same probability of inducing cancer as the CT scan. Thus, as is shown in the table above, the actual radiation that is absorbed by a scanned body part is often much larger than the effective dose suggests. A specific measure, termed the computed tomography dose index (CTDI), is commonly used as an estimate of the radiation absorbed dose for tissue within the scan region, and is automatically computed by medical CT scanners. The equivalent dose is the effective dose for the case in which the whole body would actually absorb the same radiation dose; it too is reported in sieverts. In the case of non-uniform radiation, or radiation given to only part of the body, which is common for CT examinations, using the local equivalent dose alone would overstate the biological risks to the entire organism. ==== Effects of radiation ==== Most adverse health effects of radiation exposure may be grouped in two general categories: deterministic effects (harmful tissue reactions), due in large part to the killing or malfunction of cells following high doses; and stochastic effects, i.e., cancer and heritable effects, involving either cancer development in exposed individuals owing to mutation of somatic cells or heritable disease in their offspring owing to mutation of reproductive (germ) cells. The added lifetime risk of developing cancer from a single abdominal CT of 8 mSv is estimated to be 0.05%, or one in 2,000. Because of the increased susceptibility of fetuses to radiation exposure, the radiation dosage of a CT scan is an important consideration in the choice of medical imaging in pregnancy.
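As a concrete illustration of how these dose metrics relate in practice, the following sketch uses a common estimation convention in which a dose-length product (DLP) is formed from the CTDI-style volume dose and the scan length, and a per-region conversion coefficient (k-factor) yields an approximate effective dose. The convention and the k value are typical published ones, not taken from this text, and the input numbers are hypothetical.

```python
# A minimal sketch of a common effective-dose estimation convention:
# DLP = CTDIvol x scan length, then E ~ k x DLP, where k is a published
# per-region conversion coefficient. All numbers below are illustrative.

ctdi_vol_mgy = 12.0     # hypothetical volume CT dose index (mGy)
scan_length_cm = 45.0   # hypothetical abdominal scan length (cm)
k_abdomen = 0.015       # typical published k-factor, mSv per mGy*cm

dlp = ctdi_vol_mgy * scan_length_cm    # dose-length product (mGy*cm)
effective_dose_msv = k_abdomen * dlp   # ~8 mSv, in line with the abdominal CT above
print(f"DLP = {dlp:.0f} mGy*cm, effective dose ~ {effective_dose_msv:.1f} mSv")
```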
==== Excess doses ==== In October 2009, the US Food and Drug Administration (FDA) initiated an investigation of brain perfusion CT (PCT) scans, based on radiation burns caused by incorrect settings at one particular facility for this particular type of CT scan. Over 200 patients were exposed over an 18-month period to approximately eight times the expected dose of radiation; over 40% of them lost patches of hair. This event prompted a call for increased CT quality assurance programs. It was noted that "while unnecessary radiation exposure should be avoided, a medically needed CT scan obtained with appropriate acquisition parameter has benefits that outweigh the radiation risks." Similar problems have been reported at other centers. These incidents are believed to be due to human error. == Procedure == The CT scan procedure varies according to the type of study and the organ being imaged. The patient lies on the CT table, and the table is centered according to the body part. An IV line is established in the case of contrast-enhanced CT. After the proper amount and rate of contrast are selected on the pressure injector, a scout image is taken to localize and plan the scan. Once the plan is selected, the contrast is given. The raw data are processed according to the study, and proper windowing is applied to make the scans easy to diagnose. === Preparation === Patient preparation may vary according to the type of scan. The general patient preparation includes: signing the informed consent; removal of metallic objects and jewelry from the region of interest; changing into a hospital gown according to hospital protocol; and checking of kidney function, especially creatinine and urea levels (in the case of contrast-enhanced CT, CECT). == Mechanism == Computed tomography operates by using an X-ray generator that rotates around the object; X-ray detectors are positioned on the opposite side of the circle from the X-ray source. As the X-rays pass through the patient, they are attenuated differently by various tissues according to the tissue density. A visual representation of the raw data obtained is called a sinogram, yet it is not sufficient for interpretation. Once the scan data have been acquired, the data must be processed using a form of tomographic reconstruction, which produces a series of cross-sectional images. These cross-sectional images are made up of small units of pixels or voxels. Pixels in an image obtained by CT scanning are displayed in terms of relative radiodensity. The pixel itself is displayed according to the mean attenuation of the tissue(s) that it corresponds to, on a scale from +3,071 (most attenuating) to −1,024 (least attenuating) on the Hounsfield scale. A pixel is a two-dimensional unit based on the matrix size and the field of view. When the CT slice thickness is also factored in, the unit is known as a voxel, which is a three-dimensional unit. Water has an attenuation of 0 Hounsfield units (HU), while air is −1,000 HU, cancellous bone is typically +400 HU, and cranial bone can reach 2,000 HU or more (os temporale) and can cause artifacts. The attenuation of metallic implants depends on the atomic number of the element used: titanium typically measures about +1,000 HU, while steel can completely extinguish the X-rays and is therefore responsible for the well-known line artifacts in computed tomograms. Artifacts are caused by abrupt transitions between low- and high-density materials, which result in data values that exceed the dynamic range of the processing electronics.
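As a minimal illustration of the Hounsfield scale and the windowing described earlier, the NumPy sketch below maps HU values to 8-bit grayscale for a given window level and width; the function name and the example window values are illustrative assumptions, not a vendor implementation.

```python
import numpy as np

def apply_window(hu: np.ndarray, level: float, width: float) -> np.ndarray:
    """Map Hounsfield units to 8-bit grayscale for a given window level/width."""
    lo, hi = level - width / 2.0, level + width / 2.0
    clipped = np.clip(hu, lo, hi)  # values outside the window saturate to black/white
    return np.round(255.0 * (clipped - lo) / (hi - lo)).astype(np.uint8)

# Air, water, soft tissue, and dense bone under a brain window (0..80 HU).
hu_values = np.array([-1000.0, 0.0, 40.0, 2000.0])
print(apply_window(hu_values, level=40.0, width=80.0))  # -> [  0   0 128 255]
```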
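The tomographic-reconstruction step can likewise be sketched for the simple parallel-beam case. The toy below forward-projects a block phantom into a sinogram and applies unfiltered back projection; it is a conceptual demonstration only (clinical scanners use filtered back projection or iterative methods), and all names and sizes are illustrative.

```python
import numpy as np
from scipy.ndimage import rotate

def forward_project(image, angles_deg):
    """Toy parallel-beam projection: sum image columns at each gantry angle."""
    return np.stack([rotate(image, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def back_project(sinogram, angles_deg):
    """Unfiltered back projection: smear each profile back across the image."""
    size = sinogram.shape[1]
    recon = np.zeros((size, size))
    for profile, a in zip(sinogram, angles_deg):
        recon += rotate(np.tile(profile, (size, 1)), -a, reshape=False, order=1)
    return recon / len(angles_deg)

angles = np.linspace(0.0, 180.0, 60, endpoint=False)
phantom = np.zeros((64, 64))
phantom[24:40, 28:36] = 1.0                      # a simple rectangular "organ"
sinogram = forward_project(phantom, angles)      # each row is one projection
reconstruction = back_project(sinogram, angles)  # blurred estimate of the phantom
```

Unfiltered back projection recovers only a blurred version of the object; the ramp filtering used in filtered back projection, or the iterative algorithms mentioned earlier, removes that blur.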
Two-dimensional CT images are conventionally rendered so that the view is as though looking up at it from the patient's feet. Hence, the left side of the image is to the patient's right and vice versa, while anterior in the image is also the patient's anterior and vice versa. This left-right interchange corresponds to the view that physicians generally have in reality when positioned in front of patients. Initially, the images generated in CT scans were in the transverse (axial) anatomical plane, perpendicular to the long axis of the body. Modern scanners allow the scan data to be reformatted as images in other planes. Digital geometry processing can generate a three-dimensional image of an object inside the body from a series of two-dimensional radiographic images taken by rotation around a fixed axis. These cross-sectional images are widely used for medical diagnosis and therapy. === Contrast === Contrast media used for X-ray CT, as well as for plain film X-ray, are called radiocontrasts. Radiocontrasts for CT are, in general, iodine-based. This is useful to highlight structures such as blood vessels that otherwise would be difficult to delineate from their surroundings. Using contrast material can also help to obtain functional information about tissues. Often, images are taken both with and without radiocontrast. == History == The history of X-ray computed tomography goes back to at least 1917 with the mathematical theory of the Radon transform. In October 1963, William H. Oldendorf received a U.S. patent for a "radiant energy apparatus for investigating selected areas of interior objects obscured by dense material". The first commercially viable CT scanner was invented by Godfrey Hounsfield in 1972. It is often claimed that revenues from the sales of The Beatles' records in the 1960s helped fund the development of the first CT scanner at EMI. The first production X-ray CT machines were in fact called EMI scanners. === Etymology === The word tomography is derived from the Greek tome 'slice' and graphein 'to write'. Computed tomography was originally known as the "EMI scan" as it was developed in the early 1970s at a research branch of EMI, a company best known today for its music and recording business. It was later known as computed axial tomography (CAT or CT scan) and body section röntgenography. The term CAT scan is no longer in technical use because current CT scans enable multiplanar reconstructions, making CT scan the most appropriate term, which is used by radiologists in common vernacular as well as in textbooks and scientific papers. In Medical Subject Headings (MeSH), computed axial tomography was used from 1977 to 1979, but the current indexing explicitly includes X-ray in the title. The term sinogram was introduced by Paul Edholm and Bertil Jacobson in 1975. == Society and culture == === Campaigns === In response to increased concern by the public and the ongoing progress of best practices, the Alliance for Radiation Safety in Pediatric Imaging was formed within the Society for Pediatric Radiology. In concert with the American Society of Radiologic Technologists, the American College of Radiology, and the American Association of Physicists in Medicine, the Society for Pediatric Radiology developed and launched the Image Gently Campaign, which is designed to maintain high-quality imaging studies while using the lowest doses and best radiation-safety practices available for pediatric patients.
This initiative has been endorsed and applied by a growing list of professional medical organizations around the world and has received support and assistance from companies that manufacture equipment used in radiology. Following the success of the Image Gently campaign, the American College of Radiology, the Radiological Society of North America, the American Association of Physicists in Medicine, and the American Society of Radiologic Technologists have launched a similar campaign to address this issue in the adult population, called Image Wisely. The World Health Organization and the International Atomic Energy Agency (IAEA) of the United Nations have also been working in this area and have ongoing projects designed to broaden best practices and lower patient radiation dose. === Prevalence === Use of CT has increased dramatically over the last two decades. An estimated 72 million scans were performed in the United States in 2007, accounting for close to half of the total per-capita dose rate from radiologic and nuclear medicine procedures. Of the CT scans, six to eleven percent are done in children, a seven- to eightfold increase from 1980. Similar increases have been seen in Europe and Asia. In Calgary, Canada, 12.1% of people who presented to the emergency department with an urgent complaint received a CT scan, most commonly either of the head or of the abdomen. The percentage who received CT, however, varied markedly by the emergency physician who saw them, from 1.8% to 25%. In the emergency department in the United States, CT or MRI imaging was done in 15% of people who presented with injuries as of 2007 (up from 6% in 1998). The increased use of CT scans has been greatest in two fields: screening of adults (screening CT of the lung in smokers, virtual colonoscopy, CT cardiac screening, and whole-body CT in asymptomatic patients) and CT imaging of children. Shortening of the scanning time to around 1 second, eliminating the strict need for the subject to remain still or be sedated, is one of the main reasons for the large increase in the pediatric population (especially for the diagnosis of appendicitis). As of 2007, in the United States a proportion of CT scans are performed unnecessarily; some estimates place this number at 30%. There are a number of reasons for this, including legal concerns, financial incentives, and desire by the public. For example, some healthy people avidly pay to receive full-body CT scans as screening, though it is not at all clear that the benefits outweigh the risks and costs. Deciding whether and how to treat incidentalomas is complex, radiation exposure is not negligible, and the money for the scans involves opportunity cost. == Manufacturers == Major manufacturers of CT scanning devices and equipment are: Canon Medical Systems Corporation, Fujifilm Healthcare, GE HealthCare, Neusoft Medical Systems, Philips, Siemens Healthineers, and United Imaging. == Research == Photon-counting computed tomography is a CT technique currently under development. Typical CT scanners use energy-integrating detectors; photons are measured as a voltage on a capacitor that is proportional to the X-rays detected. However, this technique is susceptible to noise and other factors which can affect the linearity of the voltage-to-X-ray-intensity relationship. Photon-counting detectors (PCDs) are still affected by noise, but the noise does not change the measured counts of photons.
PCDs have several potential advantages, including improving signal-to-noise (and contrast-to-noise) ratios, reducing doses, improving spatial resolution, and, through the use of several energies, distinguishing multiple contrast agents. PCDs have only recently become feasible in CT scanners due to improvements in detector technologies that can cope with the volume and rate of data required. As of February 2016, photon-counting CT is in use at three sites. Some early research has found the dose reduction potential of photon-counting CT for breast imaging to be very promising. In view of recent findings of high cumulative doses to patients from recurrent CT scans, there has been a push for scanning technologies and techniques that reduce ionising radiation doses to patients to sub-millisievert (sub-mSv in the literature) levels during the CT scan process, a long-standing goal. == See also == == References == == External links == Development of CT imaging CT Artefacts—PPT by David Platten Filler A (2009-06-30). "The History, Development and Impact of Computed Imaging in Neurological Diagnosis and Neurosurgery: CT, MRI, and DTI". Nature Precedings: 1. doi:10.1038/npre.2009.3267.4. ISSN 1756-0357. Boone JM, McCollough CH (2021). "Computed tomography turns 50". Physics Today. 74 (9): 34–40. Bibcode:2021PhT....74i..34B. doi:10.1063/PT.3.4834. ISSN 0031-9228. S2CID 239718717.
Wikipedia/X-ray_Tomography
The World-Wide Standardized Seismograph Network (WWSSN) – originally the World-Wide Network of Seismograph Stations (WWNSS) – was a global network of about 120 seismograph stations built in the 1960s that generated an unprecedented collection of high-quality seismic data. This data enabled seismology to become a quantitative science, elucidated the focal mechanisms of earthquakes and the structure of the Earth's crust, and contributed to the development of plate tectonic theory. The WWSSN is credited with spurring a renaissance in seismological research. The WWSSN also "created a global network infrastructure, including the data-exchange procedures and station technical capabilities needed to support the establishment of the more advanced networks in operation today", and has been the model for every global seismic network since then. A principal feature of the WWSSN was that each station had identical equipment, uniformly calibrated. These consisted of three short-period (~1 second) seismographs (oriented north–south, east–west, and vertically), three long-period (~15 seconds) seismographs, and an accurate radio-synchronized crystal-controlled clock. The seismograms were produced on photographic drum recorders, developed on-site, then sent to a Data Center for copying onto 70-mm and 35-mm film (until 1978, and thereafter onto microfiche). The WWSSN also featured a data distribution system that made this data available to anyone at nominal cost from a single location, providing the basis for much research. The WWSSN arose from a political concern. In the 1950s, concerns about radioactive fallout from above-ground testing of nuclear weapons prompted the leadership of the three leading nuclear nations (President Eisenhower of the United States, General Secretary Khrushchev of the Soviet Union, and Prime Minister Macmillan of the United Kingdom) to seek a ban on further testing of nuclear weapons. However, there was a hitch. The United States would not agree to banning kinds of nuclear tests where there was no capability to detect and identify any violations, and for smaller, underground tests seismology was not sufficiently developed to have that capability. The Eisenhower Administration therefore convened the Berkner panel to recommend ways to improve the nation's seismic detection abilities. The Berkner report, issued in 1959, was the basis of a comprehensive research and development program known as Project Vela Uniform, funded through the U.S. Department of Defense's Defense Advanced Research Projects Agency (DARPA). DARPA then funded the U.S. Coast and Geodetic Survey (C&GS) to implement one of the Berkner Report recommendations, designing and building what became the WWSSN. Performance specifications and a request for proposals were published in November 1960, a contract awarded in early 1961, and the first station was installed in the C&GS Albuquerque (New Mexico) Seismological Laboratory (ASL) in October 1961. An additional 89 stations were installed by the end of 1963, and the network was essentially complete by the end of 1967 with 117 stations; 121 stations were eventually installed. These were mostly outside the U.S., but not in Canada (they had their own system), the Soviet-bloc countries, China or France (they were building their own nuclear weapons and wanted to retain an option for testing), or French-speaking countries.
DARPA funding ended in fiscal year 1967 (July 1966–June 1967), and plans for transferring funding responsibilities to the Commerce Department were blocked by an impasse in Congress. Though other agencies contributed partial funding (mainly for purchase and shipping of photographic supplies), permanent funding was not obtained, and routine maintenance and training were suspended. In 1973 ASL and the WWSSN were transferred to the United States Geological Survey, and operation of the network continued at a reduced level of support until it was terminated in 1996. In the late 1970s digital recorders were added to 13 WWSSN stations; these "DWWSSN" stations operated as part of the Global Digital Seismograph Network (GDSN). The successor to the WWSSN is the Global Seismographic Network (GSN), operated by the Incorporated Research Institutions for Seismology, now EarthScope Consortium. A similar system, the Unified System of Seismic Stations (ESSN, transliterated from Russian), was built in the USSR with 168 stations using Kirnos seismographs. == See also == Partial Nuclear Test Ban Treaty of 1963 Project Vela == Notes == == Sources == == Further reading == The VELA Program. A Twenty-Five Year Review of Basic Research has much detail about the WWSSN.
Wikipedia/World-Wide_Standardized_Seismograph_Network
The banana doughnut theory (also sometimes known as Born–Fréchet kernel theory, or finite-frequency theory) is a model in seismic tomography that describes the shape of the Fresnel zone along the entire ray path of a body wave. This theory suggests that the region that influences the ray velocity is the surrounding material, not the infinitesimally small ray path. This surrounding material forms a tube enclosing the ray, but does not incorporate the ray path itself. The name was coined by Princeton University postdoc Henk Marquering. This theory gets the name "banana" because the tube of influence along the entire ray path from source to receiver is an arc resembling the fruit. The "doughnut" part of the name comes from the ring shape of the cross-section. The ray path is a hollow banana, or a banana-shaped doughnut. Mohammad Youssof and colleagues (Youssof et al., 2015) of Rice University and the University of Copenhagen conducted one of the studies comparing the Born–Fréchet kernel theory with infinitesimal geometrical ray theory, applying both to the same real datasets from the South African Seismic Array (SASE) in the Kalahari (Carlson et al., 1996) to assess their resolving power. They compared their results, obtained using one and multiple frequencies, to previous studies by Fouch et al. (2004), Priestley et al. (2006), and Silver et al. (2001). The Youssof et al. (2015) models are similar in some ways, but they also have significant differences, which include new results on cratonic boundaries, the depth of the keels, and their structures. == References ==
Wikipedia/Banana_Doughnut_theory
Electronic filter topology defines electronic filter circuits without taking note of the values of the components used, but only the manner in which those components are connected. Filter design characterises filter circuits primarily by their transfer function rather than their topology. Transfer functions may be linear or nonlinear. Common types of linear filter transfer function are: high-pass, low-pass, bandpass, band-reject (or notch) and all-pass. Once the transfer function for a filter is chosen, the particular topology to implement such a prototype filter can be selected so that, for example, one might choose to design a Butterworth filter using the Sallen–Key topology. Filter topologies may be divided into passive and active types. Passive topologies are composed exclusively of passive components: resistors, capacitors, and inductors. Active topologies also include active components (such as transistors, op amps, and other integrated circuits) that require power. Further, topologies may be implemented either in unbalanced form or else in balanced form when employed in balanced circuits. Implementations such as electronic mixers and stereo sound may require arrays of identical circuits. == Passive topologies == Passive filters have long been in development and use. Most are built from simple two-port networks called "sections". There is no formal definition of a section except that it must have at least one series component and one shunt component. Sections are invariably connected in a "cascade" or "daisy-chain" topology, consisting of additional copies of the same section or of completely different sections. Under the rules of series and parallel impedance, two adjacent sections consisting only of series components or only of shunt components would combine into a single section. Some passive filters, consisting of only one or two filter sections, are given special names including the L-section, T-section and Π-section, which are unbalanced filters, and the C-section, H-section and box-section, which are balanced. All are built upon a very simple "ladder" topology (see below). The chart at the bottom of the page shows these various topologies in terms of general constant k filters. Filters designed using network synthesis usually repeat the simplest form of L-section topology, though component values may change in each section. Image designed filters, on the other hand, keep the same basic component values from section to section, though the topology may vary, and tend to make use of more complex sections. L-sections are never symmetrical, but two L-sections back-to-back form a symmetrical topology, and many other sections are symmetrical in form. === Ladder topologies === Ladder topology, often called Cauer topology after Wilhelm Cauer (inventor of the elliptic filter), was in fact first used by George Campbell (inventor of the constant k filter). Campbell published in 1922 but had clearly been using the topology for some time before this. Cauer first picked up on ladders (published 1926), inspired by the work of Foster (1924). There are two forms of basic ladder topologies: unbalanced and balanced. Cauer topology is usually thought of as an unbalanced ladder topology. A ladder network consists of cascaded asymmetrical L-sections (unbalanced) or C-sections (balanced). In low-pass form the topology would consist of series inductors and shunt capacitors. Other bandforms would have an equally simple topology transformed from the low-pass topology. The transformed network will have shunt admittances that are dual networks of the series impedances if they were duals in the starting network, which is the case with series inductors and shunt capacitors.
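As a concrete illustration of such a low-pass ladder section, the following Python sketch computes element values using the standard constant k design equations. The formulas are the well-established constant k relations, but the specific 600 Ω, 1 kHz example is purely illustrative and not taken from the article:

```python
import math

def constant_k_lowpass(r0, fc):
    """Full-section element values for a constant-k low-pass ladder:
    series inductor L = R0 / (pi * fc), shunt capacitor C = 1 / (pi * fc * R0).
    A half (L-) section uses L/2 and C/2."""
    L = r0 / (math.pi * fc)
    C = 1.0 / (math.pi * fc * r0)
    return L, C

# Hypothetical example: a 600-ohm, 1 kHz cutoff low-pass prototype.
L, C = constant_k_lowpass(600.0, 1000.0)
print(f"L = {L * 1e3:.1f} mH, C = {C * 1e9:.1f} nF")  # L = 191.0 mH, C = 530.5 nF
```

Cascading several such sections, in the daisy-chain manner described above, increases the stopband attenuation while keeping the same nominal impedance.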
=== Modified ladder topologies === Image filter design commonly uses modifications of the basic ladder topology. These topologies, invented by Otto Zobel, have the same passbands as the ladder on which they are based, but their transfer functions are modified to improve some parameter such as impedance matching, stopband rejection or passband-to-stopband transition steepness. Usually the design applies some transform to a simple ladder topology: the resulting topology is ladder-like but no longer obeys the rule that shunt admittances are the dual network of series impedances: it invariably becomes more complex with higher component count. Such topologies include the m-derived filter, the mm'-type filter, and the general mn-type filter. The m-type (m-derived) filter is by far the most commonly used modified image ladder topology. There are two m-type topologies for each of the basic ladder topologies: the series-derived and shunt-derived topologies. These have identical transfer functions to each other but different image impedances. Where a filter is being designed with more than one passband, the m-type topology will result in a filter where each passband has an analogous frequency-domain response. It is possible to generalise the m-type topology for filters with more than one passband using parameters m1, m2, m3 etc., which are not equal to each other, resulting in general mn-type filters which have bandforms that can differ in different parts of the frequency spectrum. The mm'-type topology can be thought of as a double m-type design. Like the m-type it has the same bandform but offers further improved transfer characteristics. It is, however, a rarely used design due to increased component count and complexity, as well as its normally requiring basic ladder and m-type sections in the same filter for impedance matching reasons. It is normally only found in a composite filter. === Bridged-T topologies === Zobel constant resistance filters use a topology that is somewhat different from other filter types, distinguished by having a constant input resistance at all frequencies and in that they use resistive components in the design of their sections. The higher component and section count of these designs usually limits their use to equalisation applications. The topologies usually associated with constant resistance filters are the bridged-T and its variants, all described in the Zobel network article: the bridged-T topology, the balanced bridged-T topology, the open-circuit and short-circuit L-section topologies, and the balanced open-circuit and short-circuit C-section topologies. The bridged-T topology is also used in sections intended to produce a signal delay, but in this case no resistive components are used in the design. === Lattice topology === Both the T-section (from ladder topology) and the bridged-T (from Zobel topology) can be transformed into a lattice topology filter section, but in both cases this results in high component count and complexity. The most common application of lattice filters (X-sections) is in all-pass filters used for phase equalisation. Although T and bridged-T sections can always be transformed into X-sections, the reverse is not always possible because of the possibility of negative values of inductance and capacitance arising in the transform.
Lattice topology is identical to the more familiar bridge topology, the difference being merely the drawn representation on the page rather than any real difference in topology, circuitry or function. == Active topologies == === Elementary feedback topology === The elementary feedback topology is based on the simple inverting amplifier configuration. The transfer function is: H ( s ) = V o V i = − Z 2 Z 1 {\displaystyle H(s)={\frac {V_{o}}{V_{i}}}=-{Z_{2} \over Z_{1}}} === Multiple feedback topology === Multiple feedback topology is an electronic filter topology that implements a second-order filter, adding two poles to the transfer function. A diagram of the circuit topology for a second-order low-pass filter is shown in the figure on the right. The transfer function of the multiple feedback topology circuit, like all second-order linear filters, is: H ( s ) = V o V i = − 1 A s 2 + B s + C = K ω 0 2 s 2 + ω 0 Q s + ω 0 2 {\displaystyle H(s)={\frac {V_{o}}{V_{i}}}=-{\frac {1}{As^{2}+Bs+C}}={\frac {K{\omega _{0}}^{2}}{s^{2}+{\frac {\omega _{0}}{Q}}s+{\omega _{0}}^{2}}}} . In an MF filter, A = ( R 1 R 3 C 2 C 5 ) {\displaystyle A=(R_{1}R_{3}C_{2}C_{5})\,} B = R 3 C 5 + R 1 C 5 + R 1 R 3 C 5 / R 4 {\displaystyle B=R_{3}C_{5}+R_{1}C_{5}+R_{1}R_{3}C_{5}/R_{4}\,} C = R 1 / R 4 {\displaystyle C=R_{1}/R_{4}\,} Q = R 3 R 4 C 2 C 5 ( R 4 + R 3 + | K | R 3 ) C 5 {\displaystyle Q={\frac {\sqrt {R_{3}R_{4}C_{2}C_{5}}}{(R_{4}+R_{3}+|K|R_{3})C_{5}}}} is the quality factor, K = − R 4 / R 1 {\displaystyle K=-R_{4}/R_{1}\,} is the DC voltage gain, and ω 0 = 2 π f 0 = 1 / R 3 R 4 C 2 C 5 {\displaystyle \omega _{0}=2\pi f_{0}=1/{\sqrt {R_{3}R_{4}C_{2}C_{5}}}} is the corner frequency. To find suitable component values that achieve the desired filter properties, an approach similar to that in the Design choices section of the alternative Sallen–Key topology can be followed. === Biquad filter topology === For the digital implementation of a biquad filter, see Digital biquad filter. A biquad filter is a type of linear filter that implements a transfer function that is the ratio of two quadratic functions. The name biquad is short for biquadratic. Any second-order filter topology can be referred to as a biquad, such as the MFB or Sallen–Key. However, there is also a specific "biquad" topology. It is also sometimes called the 'ring of 3' circuit. Biquad filters are typically active and implemented with a single-amplifier biquad (SAB) or two-integrator-loop topology. The SAB topology uses feedback to generate complex poles and possibly complex zeros. In particular, the feedback moves the real poles of an RC circuit in order to generate the proper filter characteristics. The two-integrator-loop topology is derived from rearranging a biquadratic transfer function. The rearrangement will equate one signal with the sum of another signal, its integral, and the integral's integral. In other words, the rearrangement reveals a state variable filter structure. By using different states as outputs, any kind of second-order filter can be implemented. The SAB topology is sensitive to component choice and can be more difficult to adjust. Hence, usually the term biquad refers to the two-integrator-loop state variable filter topology.
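Returning to the multiple feedback design equations above, the short Python sketch below evaluates K, ω0 and Q directly from component values. The formulas are those just given; the component values themselves are purely illustrative assumptions, not from the article:

```python
import math

def mfb_lowpass_params(r1, r3, r4, c2, c5):
    """Evaluate the multiple-feedback low-pass design equations given above."""
    k = -r4 / r1                                 # DC voltage gain K = -R4/R1
    w0 = 1.0 / math.sqrt(r3 * r4 * c2 * c5)      # corner frequency in rad/s
    q = math.sqrt(r3 * r4 * c2 * c5) / ((r4 + r3 + abs(k) * r3) * c5)
    return k, w0, q

# Hypothetical component values: R1 = R3 = R4 = 10 kOhm, C2 = 10 nF, C5 = 1 nF.
k, w0, q = mfb_lowpass_params(10e3, 10e3, 10e3, 10e-9, 1e-9)
print(f"K = {k:.1f}, f0 = {w0 / (2 * math.pi):.0f} Hz, Q = {q:.2f}")
# K = -1.0, f0 = 5033 Hz, Q = 1.05
```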
==== Tow-Thomas filter ==== For example, the basic configuration in Figure 1 can be used as either a low-pass or bandpass filter depending on where the output signal is taken from. The second-order low-pass transfer function is given by H ( s ) = G l p f ω 0 2 s 2 + ω 0 Q s + ω 0 2 {\displaystyle H(s)={\frac {G_{\mathrm {lpf} }{\omega _{0}}^{2}}{s^{2}+{\frac {\omega _{0}}{Q}}s+{\omega _{0}}^{2}}}} where the low-pass gain is G l p f = − R 2 / R 1 {\displaystyle G_{\mathrm {lpf} }=-R_{2}/R_{1}} . The second-order bandpass transfer function is given by H ( s ) = G b p f ω 0 Q s s 2 + ω 0 Q s + ω 0 2 {\displaystyle H(s)={\frac {G_{\mathrm {bpf} }{\frac {\omega _{0}}{Q}}s}{s^{2}+{\frac {\omega _{0}}{Q}}s+{\omega _{0}}^{2}}}} , with bandpass gain G b p f = − R 3 / R 1 {\displaystyle G_{\mathrm {bpf} }=-R_{3}/R_{1}} . In both cases, the natural frequency is ω 0 = 1 / R 2 R 4 C 1 C 2 {\displaystyle \omega _{0}=1/{\sqrt {R_{2}R_{4}C_{1}C_{2}}}} and the quality factor is Q = R 3 2 C 1 R 2 R 4 C 2 {\displaystyle Q={\sqrt {\frac {{R_{3}}^{2}C_{1}}{R_{2}R_{4}C_{2}}}}} . The bandwidth is approximated by B = ω 0 / Q {\displaystyle B=\omega _{0}/Q} , and Q is sometimes expressed as a damping constant ζ = 1 / ( 2 Q ) {\displaystyle \zeta =1/(2Q)} . If a noninverting low-pass filter is required, the output can be taken at the output of the second operational amplifier, after the order of the second integrator and the inverter has been switched. If a noninverting bandpass filter is required, the order of the second integrator and the inverter can be switched, and the output taken at the output of the inverter's operational amplifier. ==== Akerberg-Mossberg filter ==== Figure 2 shows a variant of the Tow-Thomas topology, known as the Akerberg-Mossberg topology, that uses an actively compensated Miller integrator, which improves filter performance. === Sallen–Key topology === The Sallen–Key design is a non-inverting second-order filter with the option of high Q and passband gain. == See also == Prototype filter Topology (electronics) Linear filter State variable filter == Notes == == References == == External links == Media related to Electronic filter topology at Wikimedia Commons
Wikipedia/Electronic_filter_topology
CV/gate (an abbreviation of control voltage/gate) is an analog method of controlling synthesizers, drum machines, and similar equipment with external sequencers. The control voltage typically controls pitch and the gate signal controls note on-off. This method was widely used in the era of analog modular synthesizers and CV/gate music sequencers, from the introduction of the Roland MC-8 Microcomposer in 1977 through to the 1980s, when it was eventually superseded by the MIDI protocol (introduced in 1983), which is more feature-rich, easier to configure reliably, and more readily supports polyphony. The advent of digital synthesizers also made it possible to store and retrieve voice "patches" – eliminating patch cables and (for the most part) control voltages. However, numerous companies – including Doepfer (who designed a modular system for Kraftwerk in 1992), Buchla, MOTM, Analogue Systems, and others – continue to manufacture modular synthesizers that are increasingly popular and rely primarily on analog CV/gate signals for communication. Additionally, some recent non-modular synthesizers (such as the Alesis Andromeda) and many effects devices (including the Moogerfooger pedals by Moog as well as many guitar-oriented devices) include CV/gate connectivity. Many modern studios use a hybrid of MIDI and CV/gate to allow synchronization of older and newer equipment. == Basic usage == In modular synthesizers, each synthesizer component (e.g., low-frequency oscillator (LFO), voltage controlled filter (VCF), etc.) can be connected to another component by means of a patch cable that transmits voltage. Changes in that voltage cause changes to one or more parameters of the component. This frequently involved a keyboard transmitting two types of data (CV and gate), or control modules such as LFOs and envelope generators transmitting CV data: Control voltage (CV) indicates which note (event) to play: a different voltage for each key pressed; those voltages are typically connected to one or more oscillators, thus producing the different pitches required. Such a method implies that the synthesizer is monophonic. CV can also control parameters such as rate, depth and duration of a control module. A trigger indicates when a note should start: a pulse that is used to initiate an event, typically an ADSR envelope. In the case of triggering a drum machine, a clock signal or LFO square wave could be employed to signal the next beat. The trigger can be a specific part of an electronic pulse, such as the rising slope of an electronic signal. A gate is related to a trigger, but sustains the signal throughout the event. It turns on when the signal goes high, and turns off when the signal goes low. === CV === The concept of CV is fairly standard on analog synthesizers, but not its implementation. For pitch control via CV, there are two prominent implementations: Volts per octave was popularized by Bob Moog in the 1960s, and was widely adopted for control interfacing. One volt represents one octave, so the pitch produced by a voltage of 3 V is one octave lower than that produced by a voltage of 4 V. Each 1 V octave can be divided linearly into 12 semitones. Companies using this CV method included Roland, Moog, Sequential Circuits, Oberheim, ARP and the Eurorack standard from Doepfer, including more than 7000 modules from at least 316 manufacturers. This convention typically had control modules carry the source voltage (B+, 5 V) on the ring of a TRS jack, with the processed voltage returning on the tip.
However, other manufacturers have used different implementations, with voltages including –5 V to 5 V, 0 V to 5 V, and 0 V to 10 V, with the B+ possibly on the tip. This makes interoperability of modules problematic. Hertz per volt, used by most but not all Korg and Yamaha synthesizers, represents an octave of pitch by doubling voltage, so the pitch represented by 2 V is one octave lower than that represented by 4 V, and one higher than that represented by 1 V. The table compares notes and corresponding voltage levels in both implementations (this example uses 1 V per octave and 55 Hz/V): The voltages are linked by the formula V h z = 2 V o c t − 1 {\displaystyle V_{hz}=2^{V_{oct}-1}} , which can also be written V o c t = log 2 ⁡ ( V h z ) + 1 {\displaystyle V_{oct}=\log _{2}(V_{hz})+1} . These two implementations are not critically incompatible: the voltage levels used are comparable and there are no other safety concerns. So, for example, connecting a Hz/volt keyboard to a volts/octave synthesizer will likely produce some sound, but it will be completely out of tune. At least one commercial interface has been created to solve the problem, the Korg MS-02 CV/trigger interface. On synthesizers the CV signal may be labelled "CV", "VCO in", "keyboard in", "OSC" or "keyboard voltage". CV control of parameters other than pitch usually follows the same pattern of minimum to maximum voltage. For example, Moog modular synthesizers use the 0 V - 5 V control voltage for all other parameters. Such parameters are represented on the front panel of many synthesizers as knobs, but often a patch bay allows the input or output of the related CV to synchronize multiple modules together. The pitch voltage from a keyboard could also be used to control the rate of an LFO, which could be applied to the volume of the oscillator output, creating a tremolo that becomes faster as the pitch rises. Modules that can be controlled by CV include VCF, VCA, high and low frequency oscillators, ring modulators, sample and hold circuits and noise injection. === Trigger === Trigger also has two implementations: V-trigger, "voltage trigger", or "positive trigger" normally holds voltage low (around 0 V) and at trigger produces a fixed positive voltage to switch a note on. The trigger voltage level differs among brands, from 2 V to 10 V. V-trigger is used by Roland and Sequential Circuits synthesizers, among others. S-trigger, "short circuit trigger", or "negative trigger" normally holds voltage high, shorting the trigger to ground when the note should play. S-triggers were used in the early Moog Modular systems; however, they are rarely used nowadays. This is not to be confused with the inverted gate signals used in Korg and Yamaha synthesizers. Depending on the voltage level, connecting an incompatible triggering system will either yield no sound at all or reverse all keypress events (i.e. sound will be produced with no keys pressed and muted on keypress). On older equipment, the gate/trigger signal may be labelled "gate", "trig" or "S-trig". === Gate === Like trigger voltages, gate signal voltages may vary among brands. In some implementations, gate signals may even dip into negative voltage ranges. Gate inputs are typically isolated, or "buffered", to prevent damage to some equipment that cannot handle excessive or negative voltages.
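The relationship between the two pitch standards can be made concrete with a short Python sketch. This is a minimal illustration assuming the example scalings above (1 V per octave and 55 Hz/V); the reference point of 1 V at 55 Hz follows the example table and is not a universal convention:

```python
import math

def volts_per_octave(freq_hz, f_ref=55.0, v_ref=1.0):
    """Volts/octave: each doubling of frequency (one octave) adds 1 V.
    Here 55 Hz is assigned 1 V, matching the example table."""
    return v_ref + math.log2(freq_hz / f_ref)

def hertz_per_volt(freq_hz, hz_per_volt=55.0):
    """Hertz/volt: voltage is proportional to frequency, so each octave
    doubles the voltage instead of adding a fixed step."""
    return freq_hz / hz_per_volt

for f in (55.0, 110.0, 220.0, 440.0):  # successive octaves of A
    print(f"{f:6.1f} Hz -> {volts_per_octave(f):.0f} V (V/oct), "
          f"{hertz_per_volt(f):.0f} V (Hz/V)")
# 55 Hz -> 1 V / 1 V; 110 Hz -> 2 V / 2 V; 220 Hz -> 3 V / 4 V; 440 Hz -> 4 V / 8 V
```

The printed pairs satisfy the conversion formula given above, V_hz = 2^(V_oct − 1).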
== Modern usage == Since the publication of the MIDI standard in 1983, usage of CV/gate to control synthesizers has decreased dramatically. The most criticized aspect of the CV/gate interface is that it allows only a single note to sound at a time. Shortly after the MIDI standard came out, Roland introduced the Roland MPU-101, a MIDI-to-CV/gate converter that takes input from four MIDI channels (a variable base MIDI channel plus the next three consecutive MIDI channels) and converts them into four separate CV/gate outputs, able to control four separate CV/gate synthesizers or a four-voice synthesizer like the Oberheim Four Voice analog synthesizer, which is made up of four separate monophonic SEM modules. However, the 1990s saw renewed interest in analog synthesizers and various other equipment. In order to facilitate synchronization between these older instruments and newer MIDI-enabled equipment, some companies produced several models of CV/gate-MIDI interfaces. Some models target controlling a single type of synthesizer and have a fixed CV and gate implementation, while other models are more customizable and include methods to switch between implementations. CV/gate is also very easy to implement, and it remains a simpler alternative for homemade and modern modular synthesizers. Also, various equipment, such as stage lighting, sometimes uses a CV/gate interface. For example, a strobe light can be controlled using CV to set light intensity or color and gate to turn an effect on and off. With the resurgence of non-modular analog synthesizers, the exposure of synthesizer parameters via CV/gate provided a way to achieve some of the flexibility of modular synthesizers. Some synthesizers could also generate CV/gate signals and be used to control other synthesizers. One of the main advantages of CV/gate over MIDI is in the resolution. The fundamental MIDI control message uses seven bits, or 128 possible steps, for resolution. Thirty-two controls per channel allow MSB (Most Significant Byte) and LSB (Least Significant Byte) together to specify 14 bits, or 16,384 possible steps, of total resolution. Control voltage is analog and by extension continuously variable. There is less likelihood of hearing the zipper effect or noticeable steps in resolution over large parameter sweeps. Human hearing is especially sensitive to pitch changes, and for this reason MIDI pitch bend uses 14 bits fundamentally. Beyond the 512 directly defined 14-bit controls, MIDI also defines tens of thousands of 14-bit RPNs (Registered Parameter Numbers) and NRPNs (Non-Registered Parameter Numbers), but there is no method described for going beyond 14 bits. A major difference between CV/gate and MIDI is that in many analog synthesizers no distinction is made between voltages that represent control and voltages that represent audio. This means that audio signals can be used to modify control voltages and vice versa. In MIDI, however, they are completely separate, and additional software such as that from Expert Sleepers is required to convert analog CV signals into numerical MIDI control data. Some software synthesizers emulate control voltages to allow their virtual modules to be controlled as early analog synthesizers were. For example, Reason allows myriad connection possibilities with CV, and allows gate signals to have a "level" rather than a simple on-off (for example, to trigger not just a note, but the velocity of that note). In 2009, Mark of the Unicorn (MOTU) released a virtual instrument plug-in, Volta, allowing Mac-based audio workstations with Audio Units support to control some hardware devices.
Volta's CV control is based on the audio interface's line-level outputs, and as such only supports a limited number of synthesizers. In recent years, many guitar effects processors have been designed with CV input. Implementations vary widely and are not compatible with one another, so it is critical to understand how a manufacturer is producing the CV before attempting to use multiple processors in a system. Moog has facilitated this by producing two interfaces designed to receive and transmit CV in a system, the MP-201 (which includes MIDI) and the CP-251. Examples of effects allowing the use of CV include delays (Electro-Harmonix DMB and DMTT, Toneczar Echoczar, Line6, Strymon and others), tremolo (Goatkeeper), flange (Foxrox Paradox), envelope generators/lowpass filters/ring modulators (Big Briar, WMD) and distortion (WMD). == See also == DIN sync DCB Open Sound Control == References == == External links == Gates and Triggers tutorial at Synthesizers.com Archived 2013-07-24 at the Wayback Machine Analogue Solutions' Beginner's guide to MIDI-CV conversion — a detailed article on all aspects of MIDI-CV conversion.
Wikipedia/Control_voltage
Information theory is the mathematical study of the quantification, storage, and communication of information. The field was established and formalized by Claude Shannon in the 1940s, though early contributions were made in the 1920s through the works of Harry Nyquist and Ralph Hartley. It is at the intersection of electrical engineering, mathematics, statistics, computer science, neurobiology, and physics. A key measure in information theory is entropy. Entropy quantifies the amount of uncertainty involved in the value of a random variable or the outcome of a random process. For example, identifying the outcome of a fair coin flip (which has two equally likely outcomes) provides less information (lower entropy, less uncertainty) than identifying the outcome from a roll of a die (which has six equally likely outcomes). Some other important measures in information theory are mutual information, channel capacity, error exponents, and relative entropy. Important sub-fields of information theory include source coding, algorithmic complexity theory, algorithmic information theory and information-theoretic security. Applications of fundamental topics of information theory include source coding/data compression (e.g. for ZIP files) and channel coding/error detection and correction (e.g. for DSL). Its impact has been crucial to the success of the Voyager missions to deep space, the invention of the compact disc, the feasibility of mobile phones and the development of the Internet and artificial intelligence. The theory has also found applications in other areas, including statistical inference, cryptography, neurobiology, perception, signal processing, linguistics, the evolution and function of molecular codes (bioinformatics), thermal physics, molecular dynamics, black holes, quantum computing, information retrieval, intelligence gathering, plagiarism detection, pattern recognition, anomaly detection, the analysis of music, art creation, imaging system design, the study of outer space, the dimensionality of space, and epistemology. == Overview == Information theory studies the transmission, processing, extraction, and utilization of information. Abstractly, information can be thought of as the resolution of uncertainty. In the case of communication of information over a noisy channel, this abstract concept was formalized in 1948 by Claude Shannon in a paper entitled A Mathematical Theory of Communication, in which information is thought of as a set of possible messages, and the goal is to send these messages over a noisy channel, and to have the receiver reconstruct the message with low probability of error, in spite of the channel noise. Shannon's main result, the noisy-channel coding theorem, showed that, in the limit of many channel uses, the rate of information that is asymptotically achievable is equal to the channel capacity, a quantity dependent merely on the statistics of the channel over which the messages are sent. Coding theory is concerned with finding explicit methods, called codes, for increasing the efficiency and reducing the error rate of data communication over noisy channels to near the channel capacity. These codes can be roughly subdivided into data compression (source coding) and error-correction (channel coding) techniques. In the latter case, it took many years to find the methods Shannon's work proved were possible. A third class of information theory codes are cryptographic algorithms (both codes and ciphers).
Concepts, methods and results from coding theory and information theory are widely used in cryptography and cryptanalysis, such as the unit ban. == Historical background == The landmark event establishing the discipline of information theory and bringing it to immediate worldwide attention was the publication of Claude E. Shannon's classic paper "A Mathematical Theory of Communication" in the Bell System Technical Journal in July and October 1948. Historian James Gleick rated the paper as the most important development of 1948, noting that the paper was "even more profound and more fundamental" than the transistor. Shannon came to be known as the "father of information theory". Shannon outlined some of his initial ideas of information theory as early as 1939 in a letter to Vannevar Bush. Prior to this paper, limited information-theoretic ideas had been developed at Bell Labs, all implicitly assuming events of equal probability. Harry Nyquist's 1924 paper, Certain Factors Affecting Telegraph Speed, contains a theoretical section quantifying "intelligence" and the "line speed" at which it can be transmitted by a communication system, giving the relation W = K log m (recalling the Boltzmann constant), where W is the speed of transmission of intelligence, m is the number of different voltage levels to choose from at each time step, and K is a constant. Ralph Hartley's 1928 paper, Transmission of Information, uses the word information as a measurable quantity, reflecting the receiver's ability to distinguish one sequence of symbols from any other, thus quantifying information as H = log S^n = n log S, where S was the number of possible symbols, and n the number of symbols in a transmission. The unit of information was therefore the decimal digit, which since has sometimes been called the hartley in his honor as a unit or scale or measure of information. Alan Turing in 1940 used similar ideas as part of the statistical analysis of the breaking of the German Second World War Enigma ciphers. Much of the mathematics behind information theory with events of different probabilities was developed for the field of thermodynamics by Ludwig Boltzmann and J. Willard Gibbs. Connections between information-theoretic entropy and thermodynamic entropy, including the important contributions by Rolf Landauer in the 1960s, are explored in Entropy in thermodynamics and information theory. In Shannon's revolutionary and groundbreaking paper, the work for which had been substantially completed at Bell Labs by the end of 1944, Shannon for the first time introduced the qualitative and quantitative model of communication as a statistical process underlying information theory, opening with the assertion: "The fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point." With it came the ideas of: the information entropy and redundancy of a source, and its relevance through the source coding theorem; the mutual information, and the channel capacity of a noisy channel, including the promise of perfect loss-free communication given by the noisy-channel coding theorem; the practical result of the Shannon–Hartley law for the channel capacity of a Gaussian channel; as well as the bit—a new way of seeing the most fundamental unit of information.
Information theory often concerns itself with measures of information of the distributions associated with random variables. One of the most important measures is called entropy, which forms the building block of many other measures. Entropy allows quantification of the amount of information in a single random variable. Another useful concept is mutual information, defined on two random variables, which describes the measure of information in common between those variables, which can be used to describe their correlation. The former quantity is a property of the probability distribution of a random variable and gives a limit on the rate at which data generated by independent samples with the given distribution can be reliably compressed. The latter is a property of the joint distribution of two random variables, and is the maximum rate of reliable communication across a noisy channel in the limit of long block lengths, when the channel statistics are determined by the joint distribution. The choice of logarithmic base in the following formulae determines the unit of information entropy that is used. A common unit of information is the bit or shannon, based on the binary logarithm. Other units include the nat, which is based on the natural logarithm, and the decimal digit, which is based on the common logarithm. In what follows, an expression of the form p log p is considered by convention to be equal to zero whenever p = 0. This is justified because lim p → 0 + p log ⁡ p = 0 {\displaystyle \lim _{p\rightarrow 0+}p\log p=0} for any logarithmic base. === Entropy of an information source === Based on the probability mass function of each source symbol to be communicated, the Shannon entropy H, in units of bits (per symbol), is given by H = − ∑ i p i log 2 ⁡ ( p i ) {\displaystyle H=-\sum _{i}p_{i}\log _{2}(p_{i})} where pi is the probability of occurrence of the i-th possible value of the source symbol. This equation gives the entropy in the units of "bits" (per symbol) because it uses a logarithm of base 2, and this base-2 measure of entropy has sometimes been called the shannon in his honor. Entropy is also commonly computed using the natural logarithm (base e, where e is Euler's number), which produces a measurement of entropy in nats per symbol and sometimes simplifies the analysis by avoiding the need to include extra constants in the formulas. Other bases are also possible, but less commonly used. For example, a logarithm of base 2^8 = 256 will produce a measurement in bytes per symbol, and a logarithm of base 10 will produce a measurement in decimal digits (or hartleys) per symbol. Intuitively, the entropy H(X) of a discrete random variable X is a measure of the amount of uncertainty associated with the value of X when only its distribution is known. The entropy of a source that emits a sequence of N symbols that are independent and identically distributed (iid) is N ⋅ H bits (per message of N symbols). If the source data symbols are identically distributed but not independent, the entropy of a message of length N will be less than N ⋅ H. If one transmits 1000 bits (0s and 1s), and the value of each of these bits is known to the receiver (has a specific value with certainty) ahead of transmission, it is clear that no information is transmitted. If, however, each bit is independently equally likely to be 0 or 1, 1000 shannons of information (more often called bits) have been transmitted.
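As a concrete illustration of the entropy formula, here is a minimal Python sketch (not from the original article) that reproduces the coin and die examples mentioned earlier:

```python
import math

def shannon_entropy(probs, base=2):
    """H = -sum p_i * log(p_i); terms with p = 0 are dropped by convention."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))   # fair coin: 1.0 bit
print(shannon_entropy([1/6] * 6))    # fair die: log2(6), about 2.585 bits
print(shannon_entropy([0.9, 0.1]))   # biased coin: about 0.469 bits
print(shannon_entropy([1.0]))        # certain outcome: 0 bits, no information
```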
Between these two extremes, information can be quantified as follows. If X {\displaystyle \mathbb {X} } is the set of all messages {x1, ..., xn} that X could be, and p(x) is the probability of some x ∈ X {\displaystyle x\in \mathbb {X} } , then the entropy, H, of X is defined: H ( X ) = E X [ I ( x ) ] = − ∑ x ∈ X p ( x ) log ⁡ p ( x ) . {\displaystyle H(X)=\mathbb {E} _{X}[I(x)]=-\sum _{x\in \mathbb {X} }p(x)\log p(x).} (Here, I(x) is the self-information, which is the entropy contribution of an individual message, and E X {\displaystyle \mathbb {E} _{X}} is the expected value.) A property of entropy is that it is maximized when all the messages in the message space are equiprobable, p(x) = 1/n, i.e., most unpredictable, in which case H(X) = log n. The special case of information entropy for a random variable with two outcomes is the binary entropy function, usually taken to the logarithmic base 2, thus having the shannon (Sh) as unit: H b ( p ) = − p log 2 ⁡ p − ( 1 − p ) log 2 ⁡ ( 1 − p ) . {\displaystyle H_{\mathrm {b} }(p)=-p\log _{2}p-(1-p)\log _{2}(1-p).} === Joint entropy === The joint entropy of two discrete random variables X and Y is merely the entropy of their pairing: (X, Y). This implies that if X and Y are independent, then their joint entropy is the sum of their individual entropies. For example, if (X, Y) represents the position of a chess piece—X the row and Y the column, then the joint entropy of the row of the piece and the column of the piece will be the entropy of the position of the piece. H ( X , Y ) = E X , Y [ − log ⁡ p ( x , y ) ] = − ∑ x , y p ( x , y ) log ⁡ p ( x , y ) {\displaystyle H(X,Y)=\mathbb {E} _{X,Y}[-\log p(x,y)]=-\sum _{x,y}p(x,y)\log p(x,y)\,} Despite similar notation, joint entropy should not be confused with cross-entropy. === Conditional entropy (equivocation) === The conditional entropy or conditional uncertainty of X given random variable Y (also called the equivocation of X about Y) is the average conditional entropy over Y: H ( X | Y ) = E Y [ H ( X | y ) ] = − ∑ y ∈ Y p ( y ) ∑ x ∈ X p ( x | y ) log ⁡ p ( x | y ) = − ∑ x , y p ( x , y ) log ⁡ p ( x | y ) . {\displaystyle H(X|Y)=\mathbb {E} _{Y}[H(X|y)]=-\sum _{y\in Y}p(y)\sum _{x\in X}p(x|y)\log p(x|y)=-\sum _{x,y}p(x,y)\log p(x|y).} Because entropy can be conditioned on a random variable or on that random variable being a certain value, care should be taken not to confuse these two definitions of conditional entropy, the former of which is in more common use. A basic property of this form of conditional entropy is that: H ( X | Y ) = H ( X , Y ) − H ( Y ) . {\displaystyle H(X|Y)=H(X,Y)-H(Y).\,} === Mutual information (transinformation) === Mutual information measures the amount of information that can be obtained about one random variable by observing another. It is important in communication where it can be used to maximize the amount of information shared between sent and received signals. The mutual information of X relative to Y is given by: I ( X ; Y ) = E X , Y [ S I ( x , y ) ] = ∑ x , y p ( x , y ) log ⁡ p ( x , y ) p ( x ) p ( y ) {\displaystyle I(X;Y)=\mathbb {E} _{X,Y}[SI(x,y)]=\sum _{x,y}p(x,y)\log {\frac {p(x,y)}{p(x)\,p(y)}}} where SI (Specific mutual Information) is the pointwise mutual information. A basic property of the mutual information is that I ( X ; Y ) = H ( X ) − H ( X | Y ) . {\displaystyle I(X;Y)=H(X)-H(X|Y).\,} That is, knowing Y, we can save an average of I(X; Y) bits in encoding X compared to not knowing Y. Mutual information is symmetric: I ( X ; Y ) = I ( Y ; X ) = H ( X ) + H ( Y ) − H ( X , Y ) .
{\displaystyle I(X;Y)=I(Y;X)=H(X)+H(Y)-H(X,Y).\,} Mutual information can be expressed as the average Kullback–Leibler divergence (information gain) between the posterior probability distribution of X given the value of Y and the prior distribution on X: I ( X ; Y ) = E p ( y ) [ D K L ( p ( X | Y = y ) ‖ p ( X ) ) ] . {\displaystyle I(X;Y)=\mathbb {E} _{p(y)}[D_{\mathrm {KL} }(p(X|Y=y)\|p(X))].} In other words, this is a measure of how much, on the average, the probability distribution on X will change if we are given the value of Y. This is often restated as the divergence from the product of the marginal distributions to the actual joint distribution: I ( X ; Y ) = D K L ( p ( X , Y ) ‖ p ( X ) p ( Y ) ) . {\displaystyle I(X;Y)=D_{\mathrm {KL} }(p(X,Y)\|p(X)p(Y)).} Mutual information is closely related to the log-likelihood ratio test in the context of contingency tables and the multinomial distribution and to Pearson's χ2 test: mutual information can be considered a statistic for assessing independence between a pair of variables, and has a well-specified asymptotic distribution. === Kullback–Leibler divergence (information gain) === The Kullback–Leibler divergence (or information divergence, information gain, or relative entropy) is a way of comparing two distributions: a "true" probability distribution p ( X ) {\displaystyle p(X)} , and an arbitrary probability distribution q ( X ) {\displaystyle q(X)} . If we compress data in a manner that assumes q ( X ) {\displaystyle q(X)} is the distribution underlying some data, when, in reality, p ( X ) {\displaystyle p(X)} is the correct distribution, the Kullback–Leibler divergence is the average number of additional bits per datum necessary for compression. It is thus defined D K L ( p ( X ) ‖ q ( X ) ) = ∑ x ∈ X − p ( x ) log ⁡ q ( x ) − ∑ x ∈ X − p ( x ) log ⁡ p ( x ) = ∑ x ∈ X p ( x ) log ⁡ p ( x ) q ( x ) . {\displaystyle D_{\mathrm {KL} }(p(X)\|q(X))=\sum _{x\in X}-p(x)\log {q(x)}\,-\,\sum _{x\in X}-p(x)\log {p(x)}=\sum _{x\in X}p(x)\log {\frac {p(x)}{q(x)}}.} Although it is sometimes used as a 'distance metric', KL divergence is not a true metric since it is not symmetric and does not satisfy the triangle inequality (making it a semi-quasimetric). Another interpretation of the KL divergence is the "unnecessary surprise" introduced by a prior from the truth: suppose a number X is about to be drawn randomly from a discrete set with probability distribution p ( x ) {\displaystyle p(x)} . If Alice knows the true distribution p ( x ) {\displaystyle p(x)} , while Bob believes (has a prior) that the distribution is q ( x ) {\displaystyle q(x)} , then Bob will be more surprised than Alice, on average, upon seeing the value of X. The KL divergence is the (objective) expected value of Bob's (subjective) surprisal minus Alice's surprisal, measured in bits if the log is in base 2. In this way, the extent to which Bob's prior is "wrong" can be quantified in terms of how "unnecessarily surprised" it is expected to make him.
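The following Python sketch (an illustration, not from the article) computes entropy, KL divergence and mutual information for a small joint distribution, using the identity I(X;Y) = D_KL(p(X,Y) ‖ p(X)p(Y)) given above. The example joint distribution is an assumption chosen for illustration:

```python
import numpy as np

def entropy_bits(p):
    """H(p) = -sum p log2 p, skipping zero-probability entries."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def kl_divergence_bits(p, q):
    """D_KL(p || q) in bits; assumes q > 0 wherever p > 0."""
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

def mutual_information_bits(joint):
    """I(X;Y) as the KL divergence from p(x)p(y) to p(x,y)."""
    px = joint.sum(axis=1, keepdims=True)  # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)  # marginal p(y)
    return kl_divergence_bits(joint.ravel(), (px * py).ravel())

# X uniform on {0,1}; Y equals X except for a 10% chance of being flipped.
joint = np.array([[0.45, 0.05],
                  [0.05, 0.45]])
print(entropy_bits(joint.sum(axis=1)))  # H(X) = 1.0 bit
print(mutual_information_bits(joint))   # I(X;Y), about 0.531 bits
```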
=== Directed Information === Directed information, I ( X n → Y n ) {\displaystyle I(X^{n}\to Y^{n})} , is an information theory measure that quantifies the information flow from the random process X n = { X 1 , X 2 , … , X n } {\displaystyle X^{n}=\{X_{1},X_{2},\dots ,X_{n}\}} to the random process Y n = { Y 1 , Y 2 , … , Y n } {\displaystyle Y^{n}=\{Y_{1},Y_{2},\dots ,Y_{n}\}} . The term directed information was coined by James Massey and is defined as I ( X n → Y n ) ≜ ∑ i = 1 n I ( X i ; Y i | Y i − 1 ) {\displaystyle I(X^{n}\to Y^{n})\triangleq \sum _{i=1}^{n}I(X^{i};Y_{i}|Y^{i-1})} , where I ( X i ; Y i | Y i − 1 ) {\displaystyle I(X^{i};Y_{i}|Y^{i-1})} is the conditional mutual information I ( X 1 , X 2 , . . . , X i ; Y i | Y 1 , Y 2 , . . . , Y i − 1 ) {\displaystyle I(X_{1},X_{2},...,X_{i};Y_{i}|Y_{1},Y_{2},...,Y_{i-1})} . In contrast to mutual information, directed information is not symmetric. I ( X n → Y n ) {\displaystyle I(X^{n}\to Y^{n})} measures the information that is transmitted causally from X n {\displaystyle X^{n}} to Y n {\displaystyle Y^{n}} . Directed information has many applications in problems where causality plays an important role, such as the capacity of channels with feedback, the capacity of discrete memoryless networks with feedback, gambling with causal side information, compression with causal side information, real-time control communication settings, and statistical physics. === Other quantities === Other important information theoretic quantities include the Rényi entropy and the Tsallis entropy (generalizations of the concept of entropy), differential entropy (a generalization of quantities of information to continuous distributions), and the conditional mutual information. Also, pragmatic information has been proposed as a measure of how much information has been used in making a decision. == Coding theory == Coding theory is one of the most important and direct applications of information theory. It can be subdivided into source coding theory and channel coding theory. Using a statistical description for data, information theory quantifies the number of bits needed to describe the data, which is the information entropy of the source. Data compression (source coding): There are two formulations for the compression problem: lossless data compression, where the data must be reconstructed exactly; and lossy data compression, which allocates the bits needed to reconstruct the data within a specified fidelity level measured by a distortion function. This subset of information theory is called rate–distortion theory. Error-correcting codes (channel coding): While data compression removes as much redundancy as possible, an error-correcting code adds just the right kind of redundancy (i.e., error correction) needed to transmit the data efficiently and faithfully across a noisy channel. This division of coding theory into compression and transmission is justified by the information transmission theorems, or source–channel separation theorems, that justify the use of bits as the universal currency for information in many contexts. However, these theorems only hold in the situation where one transmitting user wishes to communicate to one receiving user. In scenarios with more than one transmitter (the multiple-access channel), more than one receiver (the broadcast channel) or intermediary "helpers" (the relay channel), or more general networks, compression followed by transmission may no longer be optimal. === Source theory === Any process that generates successive messages can be considered a source of information. A memoryless source is one in which each message is an independent identically distributed random variable, whereas the properties of ergodicity and stationarity impose less restrictive constraints. All such sources are stochastic. These terms are well studied in their own right outside information theory.
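Before the formal definition of rate in the next subsection, here is a short Python sketch (an illustration under the assumption of a memoryless source; the sample string is hypothetical) that estimates the per-symbol entropy of such a source from observed symbol frequencies:

```python
from collections import Counter
import math

def empirical_entropy_per_symbol(symbols):
    """Estimate per-symbol entropy from observed symbol frequencies.
    For a memoryless (iid) source this estimate approaches the source's rate."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

sample = "abaabbabababaaab"  # hypothetical output of a two-symbol source
print(empirical_entropy_per_symbol(sample))  # close to 1 bit/symbol here
```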
==== Rate ==== Information rate is the average entropy per symbol. For memoryless sources, this is merely the entropy of each symbol, while, in the case of a stationary stochastic process, it is: r = lim n → ∞ H ( X n | X n − 1 , X n − 2 , X n − 3 , … ) ; {\displaystyle r=\lim _{n\to \infty }H(X_{n}|X_{n-1},X_{n-2},X_{n-3},\ldots );} that is, the conditional entropy of a symbol given all the previous symbols generated. For the more general case of a process that is not necessarily stationary, the average rate is: r = lim n → ∞ 1 n H ( X 1 , X 2 , … X n ) ; {\displaystyle r=\lim _{n\to \infty }{\frac {1}{n}}H(X_{1},X_{2},\dots X_{n});} that is, the limit of the joint entropy per symbol. For stationary sources, these two expressions give the same result. The information rate is defined as: r = lim n → ∞ 1 n I ( X 1 , X 2 , … X n ; Y 1 , Y 2 , … Y n ) ; {\displaystyle r=\lim _{n\to \infty }{\frac {1}{n}}I(X_{1},X_{2},\dots X_{n};Y_{1},Y_{2},\dots Y_{n});} It is common in information theory to speak of the "rate" or "entropy" of a language. This is appropriate, for example, when the source of information is English prose. The rate of a source of information is related to its redundancy and how well it can be compressed, the subject of source coding. === Channel capacity === Communication over a channel is the primary motivation of information theory. However, channels often fail to produce exact reconstruction of a signal; noise, periods of silence, and other forms of signal corruption often degrade quality. Consider the communications process over a discrete channel. A simple model of the process is shown below: → Message W Encoder f n → E n c o d e d s e q u e n c e X n Channel p ( y | x ) → R e c e i v e d s e q u e n c e Y n Decoder g n → E s t i m a t e d m e s s a g e W ^ {\displaystyle {\xrightarrow[{\text{Message}}]{W}}{\begin{array}{|c| }\hline {\text{Encoder}}\\f_{n}\\\hline \end{array}}{\xrightarrow[{\mathrm {Encoded \atop sequence} }]{X^{n}}}{\begin{array}{|c| }\hline {\text{Channel}}\\p(y|x)\\\hline \end{array}}{\xrightarrow[{\mathrm {Received \atop sequence} }]{Y^{n}}}{\begin{array}{|c| }\hline {\text{Decoder}}\\g_{n}\\\hline \end{array}}{\xrightarrow[{\mathrm {Estimated \atop message} }]{\hat {W}}}} Here X represents the space of messages transmitted, and Y the space of messages received during a unit time over our channel. Let p(y|x) be the conditional probability distribution function of Y given X. We will consider p(y|x) to be an inherent fixed property of our communications channel (representing the nature of the noise of our channel). Then the joint distribution of X and Y is completely determined by our channel and by our choice of f(x), the marginal distribution of messages we choose to send over the channel. Under these constraints, we would like to maximize the rate of information, or the signal, we can communicate over the channel. The appropriate measure for this is the mutual information, and this maximum mutual information is called the channel capacity and is given by: C = max f I ( X ; Y ) . {\displaystyle C=\max _{f}I(X;Y).\!} This capacity has the following property related to communicating at information rate R (where R is usually bits per symbol). For any information rate R < C and coding error ε > 0, for large enough N, there exists a code of length N and rate ≥ R and a decoding algorithm, such that the maximal probability of block error is ≤ ε; that is, it is always possible to transmit with arbitrarily small block error. In addition, for any rate R > C, it is impossible to transmit with arbitrarily small block error.
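The maximization C = max_f I(X;Y) can be carried out numerically for simple channels. The Python sketch below (an illustration, not from the article; the crossover probability 0.1 is an arbitrary choice) sweeps input distributions for a binary symmetric channel and recovers the closed-form capacity 1 − Hb(p) quoted in the next paragraphs:

```python
import numpy as np

def mutual_information_bits(px, channel):
    """I(X;Y) for input distribution px and channel matrix channel[x][y] = p(y|x)."""
    joint = px[:, None] * channel             # joint p(x, y)
    py = joint.sum(axis=0, keepdims=True)     # output marginal p(y)
    mask = joint > 0
    ratio = joint / (px[:, None] * py)
    return np.sum(joint[mask] * np.log2(ratio[mask]))

p = 0.1                                       # assumed crossover probability
bsc = np.array([[1 - p, p],
                [p, 1 - p]])                  # binary symmetric channel

# Sweep input distributions and take the maximum mutual information.
alphas = np.linspace(0.001, 0.999, 999)
capacity = max(mutual_information_bits(np.array([a, 1 - a]), bsc) for a in alphas)
print(capacity)  # about 0.531 bits per channel use, i.e. 1 - H_b(0.1), at the uniform input
```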
Channel coding is concerned with finding such nearly optimal codes that can be used to transmit data over a noisy channel with a small coding error at a rate near the channel capacity. ==== Capacity of particular channel models ==== A continuous-time analog communications channel subject to Gaussian noise—see Shannon–Hartley theorem. A binary symmetric channel (BSC) with crossover probability p is a binary input, binary output channel that flips the input bit with probability p. The BSC has a capacity of 1 − Hb(p) bits per channel use, where Hb is the binary entropy function to the base-2 logarithm. A binary erasure channel (BEC) with erasure probability p is a binary input, ternary output channel. The possible channel outputs are 0, 1, and a third symbol 'e' called an erasure. The erasure represents complete loss of information about an input bit. The capacity of the BEC is 1 − p bits per channel use. ==== Channels with memory and directed information ==== In practice many channels have memory. Namely, at time i {\displaystyle i} the channel is given by the conditional probability P ( y i | x i , x i − 1 , x i − 2 , . . . , x 1 , y i − 1 , y i − 2 , . . . , y 1 ) {\displaystyle P(y_{i}|x_{i},x_{i-1},x_{i-2},...,x_{1},y_{i-1},y_{i-2},...,y_{1})} . It is often more convenient to use the notation x i = ( x i , x i − 1 , x i − 2 , . . . , x 1 ) {\displaystyle x^{i}=(x_{i},x_{i-1},x_{i-2},...,x_{1})} , and the channel becomes P ( y i | x i , y i − 1 ) {\displaystyle P(y_{i}|x^{i},y^{i-1})} . In such a case the capacity is given by the mutual information rate when there is no feedback available, and by the directed information rate whether or not there is feedback (if there is no feedback, the directed information rate equals the mutual information rate). === Fungible information === Fungible information is the information for which the means of encoding is not important. Classical information theorists and computer scientists are mainly concerned with information of this sort. It is sometimes referred to as speakable information. == Applications to other fields == === Intelligence uses and secrecy applications === Information theoretic concepts apply to cryptography and cryptanalysis. Turing's information unit, the ban, was used in the Ultra project, breaking the German Enigma machine code and hastening the end of World War II in Europe. Shannon himself defined an important concept now called the unicity distance. Based on the redundancy of the plaintext, it attempts to give a minimum amount of ciphertext necessary to ensure unique decipherability. Information theory leads us to believe it is much more difficult to keep secrets than it might first appear. A brute force attack can break systems based on asymmetric key algorithms or on most commonly used methods of symmetric key algorithms (sometimes called secret key algorithms), such as block ciphers. The security of all such methods comes from the assumption that no known attack can break them in a practical amount of time. Information theoretic security refers to methods such as the one-time pad that are not vulnerable to such brute force attacks. In such cases, the positive conditional mutual information between the plaintext and ciphertext (conditioned on the key) can ensure proper transmission, while the unconditional mutual information between the plaintext and ciphertext remains zero, resulting in absolutely secure communications.
In other words, an eavesdropper would not be able to improve his or her guess of the plaintext by gaining knowledge of the ciphertext but not of the key. However, as in any other cryptographic system, care must be taken to apply even information-theoretically secure methods correctly; the Venona project was able to crack the one-time pads of the Soviet Union due to their improper reuse of key material.

=== Pseudorandom number generation ===

Pseudorandom number generators are widely available in computer language libraries and application programs. They are, almost universally, unsuited to cryptographic use as they do not evade the deterministic nature of modern computer equipment and software. A class of improved random number generators is termed cryptographically secure pseudorandom number generators, but even they require random seeds external to the software to work as intended. These can be obtained via extractors, if done carefully. The measure of sufficient randomness in extractors is min-entropy, a value related to Shannon entropy through Rényi entropy; Rényi entropy is also used in evaluating randomness in cryptographic systems. Although related, the distinctions among these measures mean that a random variable with high Shannon entropy is not necessarily satisfactory for use in an extractor, and so for cryptographic uses.

=== Seismic exploration ===

One early commercial application of information theory was in the field of seismic oil exploration. Work in this field made it possible to strip off and separate the unwanted noise from the desired seismic signal. Information theory and digital signal processing offer a major improvement in resolution and image clarity over previous analog methods.

=== Semiotics ===

Semioticians Doede Nauta and Winfried Nöth both considered Charles Sanders Peirce as having created a theory of information in his works on semiotics. Nauta defined semiotic information theory as the study of "the internal processes of coding, filtering, and information processing." Concepts from information theory such as redundancy and code control have been used by semioticians such as Umberto Eco and Ferruccio Rossi-Landi to explain ideology as a form of message transmission whereby a dominant social class emits its message by using signs that exhibit a high degree of redundancy, such that only one message is decoded among a selection of competing ones.

=== Integrated process organization of neural information ===

Quantitative information-theoretic methods have been applied in cognitive science to analyze the integrated process organization of neural information in the context of the binding problem in cognitive neuroscience. In this context, an information-theoretic measure is defined on the basis of a reentrant process organization, i.e., the synchronization of neurophysiological activity between groups of neuronal populations; examples are functional clusters (Gerald Edelman and Giulio Tononi's functional clustering model and dynamic core hypothesis (DCH)) and effective information (Tononi's integrated information theory (IIT) of consciousness). Alternatively, the relevant measure is the minimization of free energy on the basis of statistical methods (Karl J. Friston's free energy principle (FEP), which states that every adaptive change in a self-organized system leads to a minimization of free energy, and the Bayesian brain hypothesis).
=== Miscellaneous applications ===

Information theory also has applications in the search for extraterrestrial intelligence, black holes, bioinformatics, and gambling.

== See also ==

=== History ===

Hartley, R.V.L.
History of information theory
Shannon, C.E.
Timeline of information theory
Yockey, H.P.
Andrey Kolmogorov

== External links ==

"Information", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Lambert, F. L. (1999), "Shuffled Cards, Messy Desks, and Disorderly Dorm Rooms - Examples of Entropy Increase? Nonsense!", Journal of Chemical Education
IEEE Information Theory Society and ITSOC Monographs, Surveys, and Reviews
Wikipedia/information_theory
Welch's method, named after Peter D. Welch, is an approach for spectral density estimation. It is used in physics, engineering, and applied mathematics for estimating the power of a signal at different frequencies. The method is based on periodogram spectrum estimates, which are the result of converting a signal from the time domain to the frequency domain. Welch's method is an improvement on the standard periodogram spectrum estimating method and on Bartlett's method, in that it reduces noise in the estimated power spectra in exchange for reduced frequency resolution. Because of the noise caused by imperfect and finite data, this noise reduction is often desired.

== Definition and procedure ==

The Welch method is based on Bartlett's method and differs in two ways (a minimal implementation sketch is given after the references below):

The signal is split up into overlapping segments: the original data segment is split up into L data segments of length M, overlapping by D points. If D = M / 2, the overlap is said to be 50%. If D = 0, the overlap is said to be 0%; this is the same situation as in Bartlett's method.

The overlapping segments are then windowed: after the data is split up into overlapping segments, the individual L data segments have a window applied to them (in the time domain). Most window functions afford more influence to the data at the center of the set than to data at the edges, which represents a loss of information. To mitigate that loss, the individual data sets are commonly overlapped in time (as in the above step). The windowing of the segments is what makes the Welch method a "modified" periodogram.

After doing the above, the periodogram of each segment is calculated by computing the discrete Fourier transform and then the squared magnitude of the result, yielding a power spectrum estimate for each segment. The individual spectrum estimates are then averaged, which reduces the variance of the individual power measurements. The end result is an array of power measurements vs. frequency "bin".

== Related approaches ==

Other overlapping windowed Fourier transforms include:

Modified discrete cosine transform
Short-time Fourier transform

== See also ==

Fast Fourier transform
Power spectrum
Spectral density estimation

== References ==

Welch, P. D. (1967), "The use of Fast Fourier Transform for the estimation of power spectra: A method based on time averaging over short, modified periodograms", IEEE Transactions on Audio and Electroacoustics, AU-15 (2): 70–73, Bibcode:1967ITAE...15...70W, doi:10.1109/TAU.1967.1161901
Oppenheim, Alan V.; Schafer, Ronald W. (1975), Digital Signal Processing, Englewood Cliffs, NJ: Prentice-Hall, pp. 548–554, ISBN 0-13-214635-5
Proakis, John G.; Manolakis, Dimitri G. (1996), Digital Signal Processing: Principles, Algorithms and Applications (3rd ed.), Upper Saddle River, NJ: Prentice-Hall, pp. 910–913, ISBN 9780133942897
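The following is the minimal sketch of the procedure promised above, not a reference implementation: the 50% overlap, the Hann window, the segment length, and the density-style normalization are conventional but arbitrary choices here, and the function welch_psd is our own name, not a library routine.

```python
import numpy as np

def welch_psd(x, fs=1.0, nperseg=256, noverlap=None):
    """Minimal Welch PSD estimate: split, window, FFT, square, average.

    Returns (freqs, psd) for a real-valued signal x sampled at rate fs.
    """
    if noverlap is None:
        noverlap = nperseg // 2                   # 50% overlap, as in the text
    step = nperseg - noverlap
    window = np.hanning(nperseg)                  # taper each segment
    scale = 1.0 / (fs * (window ** 2).sum())      # one-sided PSD normalization

    psds = []
    for start in range(0, len(x) - nperseg + 1, step):
        segment = x[start:start + nperseg] * window   # windowed segment
        spectrum = np.fft.rfft(segment)               # DFT of the segment
        psd = scale * np.abs(spectrum) ** 2           # "modified" periodogram
        psd[1:-1] *= 2                                # fold in negative frequencies
        psds.append(psd)

    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return freqs, np.mean(psds, axis=0)           # averaging reduces the variance

# Example: a 50 Hz sine buried in white noise, sampled at 1 kHz.
rng = np.random.default_rng(0)
fs = 1000.0
t = np.arange(10 * int(fs)) / fs
x = np.sin(2 * np.pi * 50 * t) + rng.standard_normal(t.size)
freqs, psd = welch_psd(x, fs=fs, nperseg=256)
print(freqs[np.argmax(psd)])   # ≈ 50.8 Hz, the frequency bin nearest 50 Hz
```

In practice one would typically call a library routine instead, such as scipy.signal.welch(x, fs=fs, nperseg=256), which computes the same kind of estimate and additionally offers detrending and a wide choice of windows.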
Wikipedia/Welch's_method