In mathematics, particularly linear algebra and functional analysis, a spectral theorem is a result about when a linear operator or matrix can be diagonalized (that is, represented as a diagonal matrix in some basis). This is extremely useful because computations involving a diagonalizable matrix can often be reduced to much simpler computations involving the corresponding diagonal matrix. The concept of diagonalization is relatively straightforward for operators on finite-dimensional vector spaces but requires some modification for operators on infinite-dimensional spaces.
https://en.wikipedia.org/wiki/Spectral_theorem
In general, the spectral theorem identifies a class of linear operators that can be modeled by multiplication operators, which are as simple as one can hope to find. In more abstract language, the spectral theorem is a statement about commutative C*-algebras. See also spectral theory for a historical perspective.
https://en.wikipedia.org/wiki/Spectral_theorem
Examples of operators to which the spectral theorem applies are self-adjoint operators or more generally normal operators on Hilbert spaces. The spectral theorem also provides a canonical decomposition, called the spectral decomposition, of the underlying vector space on which the operator acts. Augustin-Louis Cauchy proved the spectral theorem for symmetric matrices, i.e., that every real, symmetric matrix is diagonalizable.
https://en.wikipedia.org/wiki/Spectral_theorem
In addition, Cauchy was the first to be systematic about determinants. The spectral theorem as generalized by John von Neumann is today perhaps the most important result of operator theory. This article mainly focuses on the simplest kind of spectral theorem, that for a self-adjoint operator on a Hilbert space. However, as noted above, the spectral theorem also holds for normal operators on a Hilbert space.
https://en.wikipedia.org/wiki/Spectral_theorem
In mathematics, particularly linear algebra and numerical analysis, the Gram–Schmidt process or Gram-Schmidt algorithm is a method for orthonormalizing a set of vectors in an inner product space, most commonly the Euclidean space Rn equipped with the standard inner product. The Gram–Schmidt process takes a finite, linearly independent set of vectors S = {v1, ..., vk} for k ≤ n and generates an orthogonal set S′ = {u1, ..., uk} that spans the same k-dimensional subspace of Rn as S. The method is named after Jørgen Pedersen Gram and Erhard Schmidt, but Pierre-Simon Laplace had been familiar with it before Gram and Schmidt. In the theory of Lie group decompositions, it is generalized by the Iwasawa decomposition. The application of the Gram–Schmidt process to the column vectors of a full column rank matrix yields the QR decomposition (it is decomposed into an orthogonal and a triangular matrix).
https://en.wikipedia.org/wiki/Gram-Schmidt_theorem
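The process described above can be sketched in a few lines of plain Python. This is a minimal illustration of classical Gram–Schmidt (function and variable names are my own, not from any library); numerically stable variants such as modified Gram–Schmidt are preferred in practice.

```python
def dot(u, v):
    # Standard inner product on R^n.
    return sum(ui * vi for ui, vi in zip(u, v))

def gram_schmidt(vectors):
    """Orthogonalize a linearly independent list of vectors in R^n."""
    basis = []
    for v in vectors:
        # Subtract from v its projection onto each vector already in the basis.
        w = list(v)
        for u in basis:
            coeff = dot(v, u) / dot(u, u)
            w = [wi - coeff * ui for wi, ui in zip(w, u)]
        basis.append(w)
    return basis

def normalize(u):
    # Dividing each orthogonal vector by its norm yields an orthonormal set.
    norm = dot(u, u) ** 0.5
    return [ui / norm for ui in u]

orthogonal = gram_schmidt([[3.0, 1.0], [2.0, 2.0]])
orthonormal = [normalize(u) for u in orthogonal]
```

Applying the same projections to the columns of a full-column-rank matrix is exactly what produces the Q factor in the QR decomposition mentioned above.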
In mathematics, particularly linear algebra, a zero matrix is a matrix with all its entries being zero. It is alternately denoted by the symbol O. Some examples of zero matrices are $0_{1,1}=\begin{bmatrix}0\end{bmatrix}$, $0_{2,2}=\begin{bmatrix}0&0\\0&0\end{bmatrix}$, $0_{2,3}=\begin{bmatrix}0&0&0\\0&0&0\end{bmatrix}$. The set of m × n matrices with entries in a ring K forms a module $K_{m,n}$. The zero matrix $0_{K_{m,n}}$ in $K_{m,n}$ is the matrix with all entries equal to $0_K$, where $0_K$ is the additive identity in K.
https://en.wikipedia.org/wiki/Zero_tensor
$0_{K_{m,n}}=\begin{bmatrix}0_K&0_K&\cdots&0_K\\0_K&0_K&\cdots&0_K\\\vdots&\vdots&&\vdots\\0_K&0_K&\cdots&0_K\end{bmatrix}$ The zero matrix is the additive identity in $K_{m,n}$. That is, for all $A\in K_{m,n}$: $0_{K_{m,n}}+A=A+0_{K_{m,n}}=A$. There is exactly one zero matrix of any given size m × n (with entries from a given ring), so when the context is clear, one often refers to the zero matrix.
https://en.wikipedia.org/wiki/Zero_tensor
In general, the zero element of a ring is unique, and typically denoted as 0 without any subscript to indicate the parent ring. Hence the examples above represent zero matrices over any ring. The zero matrix also represents the linear transformation which sends all vectors to the zero vector.
https://en.wikipedia.org/wiki/Zero_tensor
In mathematics, particularly linear algebra, a zero matrix or null matrix is a matrix all of whose entries are zero. It also serves as the additive identity of the additive group of m × n matrices, and is denoted by the symbol O or 0 followed by subscripts corresponding to the dimension of the matrix as the context sees fit. Some examples of zero matrices are $0_{1,1}=\begin{bmatrix}0\end{bmatrix}$, $0_{2,2}=\begin{bmatrix}0&0\\0&0\end{bmatrix}$, $0_{2,3}=\begin{bmatrix}0&0&0\\0&0&0\end{bmatrix}$.
https://en.wikipedia.org/wiki/Zero_matrix
In mathematics, particularly linear algebra, an orthogonal basis for an inner product space V is a basis for V whose vectors are mutually orthogonal. If the vectors of an orthogonal basis are normalized, the resulting basis is an orthonormal basis.
https://en.wikipedia.org/wiki/Orthogonal_basis
In mathematics, particularly linear algebra, an orthonormal basis for an inner product space V with finite dimension is a basis for V whose vectors are orthonormal, that is, they are all unit vectors and orthogonal to each other. For example, the standard basis for a Euclidean space $\mathbb{R}^n$ is an orthonormal basis, where the relevant inner product is the dot product of vectors. The image of the standard basis under a rotation or reflection (or any orthogonal transformation) is also orthonormal, and every orthonormal basis for $\mathbb{R}^n$ arises in this fashion. For a general inner product space V, an orthonormal basis can be used to define normalized orthogonal coordinates on V.
https://en.wikipedia.org/wiki/Complete_orthonormal_basis
Under these coordinates, the inner product becomes a dot product of vectors. Thus the presence of an orthonormal basis reduces the study of a finite-dimensional inner product space to the study of $\mathbb{R}^n$ under the dot product.
https://en.wikipedia.org/wiki/Complete_orthonormal_basis
Every finite-dimensional inner product space has an orthonormal basis, which may be obtained from an arbitrary basis using the Gram–Schmidt process. In functional analysis, the concept of an orthonormal basis can be generalized to arbitrary (infinite-dimensional) inner product spaces. Given a pre-Hilbert space H, an orthonormal basis for H is an orthonormal set of vectors with the property that every vector in H can be written as an infinite linear combination of the vectors in the basis.
https://en.wikipedia.org/wiki/Complete_orthonormal_basis
In this case, the orthonormal basis is sometimes called a Hilbert basis for H.
https://en.wikipedia.org/wiki/Complete_orthonormal_basis
Note that an orthonormal basis in this sense is not generally a Hamel basis, since infinite linear combinations are required. Specifically, the linear span of the basis must be dense in H, but it may not be the entire space. If we go on to Hilbert spaces, a non-orthonormal set of vectors having the same linear span as an orthonormal basis may not be a basis at all.
https://en.wikipedia.org/wiki/Complete_orthonormal_basis
For instance, any square-integrable function on the interval [−1, 1] can be expressed (almost everywhere) as an infinite sum of Legendre polynomials (an orthonormal basis), but not necessarily as an infinite sum of the monomials $x^n$. A different generalisation is to pseudo-inner product spaces, finite-dimensional vector spaces M equipped with a non-degenerate symmetric bilinear form known as the metric tensor. In an orthonormal basis for such a space, the metric takes the form $\mathrm{diag}(+1,\cdots,+1,-1,\cdots,-1)$ with p positive ones and q negative ones.
https://en.wikipedia.org/wiki/Complete_orthonormal_basis
In mathematics, particularly linear algebra, the Schur–Horn theorem, named after Issai Schur and Alfred Horn, characterizes the diagonal of a Hermitian matrix with given eigenvalues. It has inspired investigations and substantial generalizations in the setting of symplectic geometry. A few important generalizations are Kostant's convexity theorem, Atiyah–Guillemin–Sternberg convexity theorem, Kirwan convexity theorem.
https://en.wikipedia.org/wiki/Schur–Horn_theorem
In mathematics, particularly matrix theory and combinatorics, a Pascal matrix is a matrix (possibly infinite) containing the binomial coefficients as its elements. It is thus an encoding of Pascal's triangle in matrix form. There are three natural ways to achieve this: as a lower-triangular matrix, an upper-triangular matrix, or a symmetric matrix. For example, the 5 × 5 matrices are $L_5=\begin{bmatrix}1&0&0&0&0\\1&1&0&0&0\\1&2&1&0&0\\1&3&3&1&0\\1&4&6&4&1\end{bmatrix}$, $U_5=L_5^{\mathrm{T}}$, and $S_5=\begin{bmatrix}1&1&1&1&1\\1&2&3&4&5\\1&3&6&10&15\\1&4&10&20&35\\1&5&15&35&70\end{bmatrix}$. There are other ways in which Pascal's triangle can be put into matrix form, but these are not easily extended to infinity.
https://en.wikipedia.org/wiki/Pascal_matrix
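The three natural encodings above can be sketched directly from the binomial coefficients. This is an illustrative implementation (names are my own), using the fact that the symmetric Pascal matrix satisfies $S_{ij} = \binom{i+j}{i}$ with 0-based indices and equals the product of the lower- and upper-triangular forms:

```python
from math import comb

def pascal_lower(n):
    # L[i][j] = C(i, j) on and below the diagonal, 0 above.
    return [[comb(i, j) if j <= i else 0 for j in range(n)] for i in range(n)]

def pascal_upper(n):
    # The transpose of the lower-triangular form.
    return [[comb(j, i) if i <= j else 0 for j in range(n)] for i in range(n)]

def pascal_symmetric(n):
    # S[i][j] = C(i + j, i); by Vandermonde's identity, S = L U.
    return [[comb(i + j, i) for j in range(n)] for i in range(n)]
```

Multiplying `pascal_lower(5)` by `pascal_upper(5)` reproduces `pascal_symmetric(5)`, a Cholesky-like factorization of the symmetric Pascal matrix.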
In mathematics, particularly matrix theory, a Stieltjes matrix, named after Thomas Joannes Stieltjes, is a real symmetric positive definite matrix with nonpositive off-diagonal entries. A Stieltjes matrix is necessarily an M-matrix. Every n×n Stieltjes matrix is invertible, and its inverse is a nonsingular symmetric nonnegative matrix, though the converse of this statement is not true in general for n > 2. From the above definition, a Stieltjes matrix is a symmetric invertible Z-matrix whose eigenvalues have positive real parts. As it is a Z-matrix, its off-diagonal entries are less than or equal to zero.
https://en.wikipedia.org/wiki/Stieltjes_matrix
In mathematics, particularly matrix theory, a band matrix or banded matrix is a sparse matrix whose non-zero entries are confined to a diagonal band, comprising the main diagonal and zero or more diagonals on either side.
https://en.wikipedia.org/wiki/Bandwidth_(linear_algebra)
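As an illustration of the definition above, the extent of the band can be measured by two numbers, the lower and upper bandwidths, i.e. how far nonzero entries reach below and above the main diagonal. This sketch (names are my own) computes them for a dense list-of-lists matrix:

```python
def bandwidths(A):
    """Return (lower, upper) bandwidths of a matrix given as nested lists."""
    lower = upper = 0
    for i, row in enumerate(A):
        for j, x in enumerate(row):
            if x != 0:
                if i > j:
                    lower = max(lower, i - j)   # distance below the diagonal
                elif j > i:
                    upper = max(upper, j - i)   # distance above the diagonal
    return lower, upper

# A tridiagonal matrix has lower and upper bandwidth 1.
T = [[2, 1, 0, 0],
     [1, 2, 1, 0],
     [0, 1, 2, 1],
     [0, 0, 1, 2]]
```

A diagonal matrix has bandwidths (0, 0); a full matrix of size n has bandwidths (n − 1, n − 1).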
In mathematics, particularly matrix theory, the n×n Lehmer matrix (named after Derrick Henry Lehmer) is the constant symmetric matrix defined by $A_{ij}=\begin{cases}i/j,&j\geq i\\j/i,&j<i.\end{cases}$
https://en.wikipedia.org/wiki/Lehmer_matrix
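The case analysis in the definition above is equivalent to $A_{ij} = \min(i,j)/\max(i,j)$ with 1-based indices, which gives a one-line construction (an illustrative sketch, not library code):

```python
def lehmer(n):
    # A_ij = min(i, j) / max(i, j), indices running from 1 to n.
    return [[min(i, j) / max(i, j) for j in range(1, n + 1)]
            for i in range(1, n + 1)]
```

By construction the matrix is symmetric with ones on the diagonal.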
In mathematics, particularly measure theory, a 𝜎-ideal, or sigma ideal, of a sigma-algebra (𝜎, read "sigma," means countable in this context) is a subset with certain desirable closure properties. It is a special type of ideal. Its most frequent application is in probability theory. Let (X, Σ) be a measurable space (meaning Σ is a 𝜎-algebra of subsets of X). A subset N of Σ is a 𝜎-ideal if the following properties are satisfied: (i) ∅ ∈ N; (ii) when A ∈ N and B ∈ Σ, then B ⊆ A implies B ∈ N; (iii) if $\{A_n\}_{n\in\mathbb{N}}\subseteq N$ then $\bigcup_{n\in\mathbb{N}}A_n\in N$.
https://en.wikipedia.org/wiki/Sigma-ideal
Briefly, a sigma-ideal must contain the empty set and contain the subsets and countable unions of its elements.
https://en.wikipedia.org/wiki/Sigma-ideal
The concept of 𝜎-ideal is dual to that of a countably complete (𝜎-) filter. If a measure μ is given on (X, Σ), the set of μ-negligible sets (S ∈ Σ such that μ(S) = 0) is a 𝜎-ideal. The notion can be generalized to preorders (P, ≤, 0) with a bottom element 0 as follows: I is a 𝜎-ideal of P just when (i') 0 ∈ I, (ii') x ≤ y and y ∈ I implies x ∈ I, and (iii') given a sequence $x_1, x_2, \ldots \in I$, there exists some y ∈ I such that $x_n \leq y$ for each n.
https://en.wikipedia.org/wiki/Sigma-ideal
Thus I contains the bottom element, is downward closed, and satisfies a countable analogue of the property of being upwards directed. A 𝜎-ideal of a set X is a 𝜎-ideal of the power set of X.
https://en.wikipedia.org/wiki/Sigma-ideal
That is, when no 𝜎-algebra is specified, then one simply takes the full power set of the underlying set. For example, the meager subsets of a topological space are those in the 𝜎-ideal generated by the collection of closed subsets with empty interior.
https://en.wikipedia.org/wiki/Sigma-ideal
In mathematics, particularly multivariable calculus, a surface integral is a generalization of multiple integrals to integration over surfaces. It can be thought of as the double integral analogue of the line integral. Given a surface, one may integrate a scalar field (that is, a function of position which returns a scalar as a value) over the surface, or a vector field (that is, a function which returns a vector as value). If a region R is not flat, then it is called a surface as shown in the illustration. Surface integrals have applications in physics, particularly with the theories of classical electromagnetism.
https://en.wikipedia.org/wiki/Surface_integral
In mathematics, particularly numerical analysis, the Bramble–Hilbert lemma, named after James H. Bramble and Stephen Hilbert, bounds the error of an approximation of a function u by a polynomial of order at most m − 1 in terms of derivatives of u of order m. Both the error of the approximation and the derivatives of u are measured by $L^p$ norms on a bounded domain in $\mathbb{R}^n$. This is similar to classical numerical analysis, where, for example, the error of linear interpolation of u can be bounded using the second derivative of u. However, the Bramble–Hilbert lemma applies in any number of dimensions, not just one dimension, and the approximation error and the derivatives of u are measured by more general norms involving averages, not just the maximum norm.
https://en.wikipedia.org/wiki/Bramble–Hilbert_lemma
Additional assumptions on the domain are needed for the Bramble–Hilbert lemma to hold. Essentially, the boundary of the domain must be "reasonable". For example, domains that have a spike or a slit with zero angle at the tip are excluded.
https://en.wikipedia.org/wiki/Bramble–Hilbert_lemma
Lipschitz domains are reasonable enough, a class that includes convex domains and domains with continuously differentiable boundary. The main use of the Bramble–Hilbert lemma is to prove bounds on the error of interpolation of a function u by an operator that preserves polynomials of order up to m − 1, in terms of the derivatives of u of order m. This is an essential step in error estimates for the finite element method. The Bramble–Hilbert lemma is applied there on the domain consisting of one element (or, in some superconvergence results, a small number of elements).
https://en.wikipedia.org/wiki/Bramble–Hilbert_lemma
In mathematics, particularly p-adic analysis, the p-adic exponential function is a p-adic analogue of the usual exponential function on the complex numbers. As in the complex case, it has an inverse function, named the p-adic logarithm.
https://en.wikipedia.org/wiki/P-adic_exponential_function
In mathematics, particularly q-analog theory, the Ramanujan theta function generalizes the form of the Jacobi theta functions, while capturing their general properties. In particular, the Jacobi triple product takes on a particularly elegant form when written in terms of the Ramanujan theta. The function is named after mathematician Srinivasa Ramanujan.
https://en.wikipedia.org/wiki/Ramanujan_theta_function
In mathematics, particularly set theory, a finite set is a set that has a finite number of elements. Informally, a finite set is a set which one could in principle count and finish counting. For example, {2, 4, 6, 8, 10} is a finite set with five elements. The number of elements of a finite set is a natural number (possibly zero) and is called the cardinality (or the cardinal number) of the set.
https://en.wikipedia.org/wiki/Tarski_finiteness
A set that is not a finite set is called an infinite set. For example, the set of all positive integers is infinite: {1, 2, 3, …}.
https://en.wikipedia.org/wiki/Tarski_finiteness
Finite sets are particularly important in combinatorics, the mathematical study of counting. Many arguments involving finite sets rely on the pigeonhole principle, which states that there cannot exist an injective function from a larger finite set to a smaller finite set.
https://en.wikipedia.org/wiki/Tarski_finiteness
In mathematics, particularly set theory, non-recursive ordinals are large countable ordinals greater than all the recursive ordinals, and therefore can not be expressed using recursive ordinal notations.
https://en.wikipedia.org/wiki/Nonrecursive_ordinals
In mathematics, particularly the area of topology, a simple-homotopy equivalence is a refinement of the concept of homotopy equivalence. Two CW-complexes are simple-homotopy equivalent if they are related by a sequence of collapses and expansions (inverses of collapses), and a homotopy equivalence is a simple homotopy equivalence if it is homotopic to such a map. The obstruction to a homotopy equivalence being a simple homotopy equivalence is the Whitehead torsion, τ(f). A homotopy theory that studies simple-homotopy types is called simple homotopy theory.
https://en.wikipedia.org/wiki/Simple_homotopy
In mathematics, particularly the branch called category theory, a 2-group is a groupoid with a way to multiply objects, making it resemble a group. They are part of a larger hierarchy of n-groups. They were introduced by Hoàng Xuân Sính in the late 1960s under the name gr-categories, and they are also known as categorical groups.
https://en.wikipedia.org/wiki/2-group
In mathematics, particularly the field of calculus and Fourier analysis, the Fourier sine and cosine series are two mathematical series named after Joseph Fourier.
https://en.wikipedia.org/wiki/Fourier_cosine_series
In mathematics, particularly the study of Lie groups, a Dunkl operator is a certain kind of mathematical operator, involving differential operators but also reflections in an underlying space. Formally, let G be a Coxeter group with reduced root system R and $k_v$ an arbitrary "multiplicity" function on R (so $k_u = k_v$ whenever the reflections $\sigma_u$ and $\sigma_v$ corresponding to the roots u and v are conjugate in G). Then, the Dunkl operator is defined by: $T_i f(x) = \frac{\partial}{\partial x_i} f(x) + \sum_{v\in R_+} k_v \frac{f(x)-f(x\sigma_v)}{\langle x,v\rangle} v_i$ where $v_i$ is the i-th component of v, 1 ≤ i ≤ N, x in $\mathbb{R}^N$, and f a smooth function on $\mathbb{R}^N$.
https://en.wikipedia.org/wiki/Dunkl_operator
Dunkl operators were introduced by Charles Dunkl (1989). One of Dunkl's major results was that Dunkl operators "commute," that is, they satisfy $T_i(T_j f(x)) = T_j(T_i f(x))$ just as partial derivatives do. Thus Dunkl operators represent a meaningful generalization of partial derivatives.
https://en.wikipedia.org/wiki/Dunkl_operator
In mathematics, particularly topology, a Gδ space is a topological space in which closed sets are in a way ‘separated’ from their complements using only countably many open sets. A Gδ space may thus be regarded as a space satisfying a different kind of separation axiom. In fact normal Gδ spaces are referred to as perfectly normal spaces, and satisfy the strongest of separation axioms. Gδ spaces are also called perfect spaces. The term perfect is also used, incompatibly, to refer to a space with no isolated points; see Perfect set.
https://en.wikipedia.org/wiki/G-delta_space
In mathematics, particularly topology, a comb space is a particular subspace of $\mathbb{R}^2$ that resembles a comb. The comb space has properties that serve as a number of counterexamples. The topologist's sine curve has similar properties to the comb space. The deleted comb space is a variation on the comb space.
https://en.wikipedia.org/wiki/Comb_space
In mathematics, particularly topology, a cosmic space is any topological space that is a continuous image of some separable metric space. Equivalently (for regular T1 spaces but not in general), a space is cosmic if and only if it has a countable network; namely a countable collection of subsets of the space such that any open set is the union of a subcollection of these sets. Cosmic spaces have several interesting properties. There are a number of unsolved problems about them.
https://en.wikipedia.org/wiki/Cosmic_space
In mathematics, particularly topology, a topological space X is locally normal if intuitively it looks locally like a normal space. More precisely, a locally normal space satisfies the property that each point of the space belongs to a neighbourhood of the space that is normal under the subspace topology.
https://en.wikipedia.org/wiki/Locally_normal_space
In mathematics, particularly topology, an atlas is a concept used to describe a manifold. An atlas consists of individual charts that, roughly speaking, describe individual regions of the manifold. If the manifold is the surface of the Earth, then an atlas has its more common meaning. In general, the notion of atlas underlies the formal definition of a manifold and related structures such as vector bundles and other fiber bundles.
https://en.wikipedia.org/wiki/Coordinate_map
In mathematics, particularly topology, collections of subsets are said to be locally discrete if they look like they have precisely one element from a local point of view. The study of locally discrete collections is worthwhile as Bing's metrization theorem shows.
https://en.wikipedia.org/wiki/Locally_discrete_collection
In mathematics, particularly topology, the K-topology is a topology that one can impose on the set of all real numbers which has some interesting properties. Relative to the set of all real numbers carrying the standard topology, the set K = {1/n | n is a positive integer} is not closed since it doesn't contain its (only) limit point 0. Relative to the K-topology however, the set K is automatically decreed to be closed by adding ‘more’ basis elements to the standard topology on R. Basically, the K-topology on R is strictly finer than the standard topology on R. It is mostly useful for counterexamples in basic topology.
https://en.wikipedia.org/wiki/K-topology
In mathematics, particularly topology, the homeomorphism group of a topological space is the group consisting of all homeomorphisms from the space to itself with function composition as the group operation. Homeomorphism groups are very important in the theory of topological spaces and in general are examples of automorphism groups. Homeomorphism groups are topological invariants in the sense that the homeomorphism groups of homeomorphic topological spaces are isomorphic as groups.
https://en.wikipedia.org/wiki/Homeomorphism_group
In mathematics, particularly topology, the tube lemma, also called Wallace's theorem, is a useful tool in order to prove that the finite product of compact spaces is compact.
https://en.wikipedia.org/wiki/Tube_lemma
In mathematics, particularly in analysis, Carleman's condition gives a sufficient condition for the determinacy of the moment problem. That is, if a measure μ satisfies Carleman's condition, there is no other measure ν having the same moments as μ. The condition was discovered by Torsten Carleman in 1922.
https://en.wikipedia.org/wiki/Carleman's_condition
In mathematics, particularly in asymptotic convex geometry, Milman's reverse Brunn–Minkowski inequality is a result due to Vitali Milman that provides a reverse inequality to the famous Brunn–Minkowski inequality for convex bodies in n-dimensional Euclidean space Rn. Namely, it bounds the volume of the Minkowski sum of two bodies from above in terms of the volumes of the bodies.
https://en.wikipedia.org/wiki/Milman's_reverse_Brunn–Minkowski_inequality
In mathematics, pentation (or hyper-5) is the next hyperoperation after tetration and before hexation. It is defined as iterated (repeated) tetration (assuming right-associativity), just as tetration is iterated right-associative exponentiation. It is a binary operation defined with two numbers a and b, where a is tetrated to itself b − 1 times. For instance, using square-bracket hyperoperation notation for pentation and tetration, $2[5]3$ means tetrating 2 to itself 2 times, or $2[4](2[4]2)$. This can then be reduced to $2[4](2[4]2)=2[4]4=2^{2^{2^{2}}}=2^{2^{4}}=2^{16}=65{,}536$.
https://en.wikipedia.org/wiki/Pentation
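The worked example above can be checked directly. This is a sketch (function names are my own) of right-associative tetration and of pentation as iterated tetration, valid only for small positive integer arguments since the values grow explosively:

```python
def tetration(a, b):
    # a[4]b: a power tower of b copies of a, evaluated right to left.
    result = 1
    for _ in range(b):
        result = a ** result
    return result

def pentation(a, b):
    # a[5]b: a tetrated to itself b - 1 times.
    result = a
    for _ in range(b - 1):
        result = tetration(a, result)
    return result
```

With these definitions, `pentation(2, 3)` first computes `tetration(2, 2) = 4` and then `tetration(2, 4) = 65536`, matching the reduction in the text.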
In mathematics, perfectoid spaces are adic spaces of special kind, which occur in the study of problems of "mixed characteristic", such as local fields of characteristic zero which have residue fields of prime characteristic p. A perfectoid field is a complete topological field K whose topology is induced by a nondiscrete valuation of rank 1, such that the Frobenius endomorphism Φ is surjective on K°/p where K° denotes the ring of power-bounded elements. Perfectoid spaces may be used to (and were invented in order to) compare mixed characteristic situations with purely finite characteristic ones. Technical tools for making this precise are the tilting equivalence and the almost purity theorem. The notions were introduced in 2012 by Peter Scholze.
https://en.wikipedia.org/wiki/Perfectoid_space
In mathematics, persymmetric matrix may refer to: a square matrix which is symmetric with respect to the northeast-to-southwest diagonal; or a square matrix such that the values on each line perpendicular to the main diagonal are the same for a given line. The first definition is the most common in the recent literature. The designation "Hankel matrix" is often used for matrices satisfying the property in the second definition.
https://en.wikipedia.org/wiki/Persymmetric_matrix
In mathematics, perturbation theory typically works by expanding an unknown quantity in a power series in a small parameter. However, in a perturbation problem beyond all orders, all coefficients of the perturbation expansion vanish and the difference between the function and the constant function 0 cannot be detected by a power series. A simple example is the attempt to expand $e^{-1/\epsilon}$ in a Taylor series in $\epsilon > 0$ about 0. All terms in a naïve Taylor expansion are identically zero.
https://en.wikipedia.org/wiki/Perturbation_problem_beyond_all_orders
This is because the function $e^{-1/z}$ possesses an essential singularity at z = 0 in the complex z-plane, and therefore the function is most appropriately modeled by a Laurent series; a Taylor series about this point has a zero radius of convergence. Thus, if a physical problem possesses a solution of this nature, possibly in addition to an analytic part that may be modeled by a power series, the perturbative analysis fails to recover the singular part. Terms of a nature similar to $e^{-1/\epsilon}$ are considered to be "beyond all orders" of the standard perturbative power series.
https://en.wikipedia.org/wiki/Perturbation_problem_beyond_all_orders
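The "beyond all orders" behaviour can be seen numerically: for small $\epsilon > 0$, $e^{-1/\epsilon}$ is smaller than any fixed power $\epsilon^n$, so no term $c_n \epsilon^n$ of a power series can capture it. A short numerical sketch (the chosen values of eps and n are illustrative):

```python
import math

eps = 1e-2  # small perturbation parameter
for n in (1, 5, 20):
    # Ratio of the exponentially small term to a power-series term eps^n.
    ratio = math.exp(-1.0 / eps) / eps ** n
    # e^(-100) is about 3.7e-44, so even against eps^20 = 1e-40 the ratio is tiny.
    print(n, ratio)
```

The ratio tends to 0 as eps tends to 0 for every fixed n, which is exactly the statement that $e^{-1/\epsilon}$ vanishes to all orders in $\epsilon$.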
In mathematics, physics and chemistry, a space group is the symmetry group of a repeating pattern in space, usually in three dimensions. The elements of a space group (its symmetry operations) are the rigid transformations of the pattern that leave it unchanged. In three dimensions, space groups are classified into 219 distinct types, or 230 types if chiral copies are considered distinct. Space groups are discrete cocompact groups of isometries of an oriented Euclidean space in any number of dimensions.
https://en.wikipedia.org/wiki/Space_groups
In dimensions other than 3, they are sometimes called Bieberbach groups. In crystallography, space groups are also called the crystallographic or Fedorov groups, and represent a description of the symmetry of the crystal. A definitive source regarding 3-dimensional space groups is the International Tables for Crystallography Hahn (2002).
https://en.wikipedia.org/wiki/Space_groups
In mathematics, physics and engineering, the sinc function, denoted by sinc(x), has two forms, normalized and unnormalized. In mathematics, the historical unnormalized sinc function is defined for x ≠ 0 by sinc(x) = sin(x)/x. Alternatively, the unnormalized sinc function is often called the sampling function, indicated as Sa(x). In digital signal processing and information theory, the normalized sinc function is commonly defined for x ≠ 0 by sinc(x) = sin(πx)/(πx). In either case, the value at x = 0 is defined to be the limiting value sinc(0) = 1 (the limit can be proven using the squeeze theorem). The normalization causes the definite integral of the function over the real numbers to equal 1 (whereas the same integral of the unnormalized sinc function has a value of π). As a further useful property, the zeros of the normalized sinc function are the nonzero integer values of x. The normalized sinc function is the Fourier transform of the rectangular function with no scaling.
https://en.wikipedia.org/wiki/Sinc_function
It is used in the concept of reconstructing a continuous bandlimited signal from uniformly spaced samples of that signal. The only difference between the two definitions is in the scaling of the independent variable (the x axis) by a factor of π. In both cases, the value of the function at the removable singularity at zero is understood to be the limit value 1.
https://en.wikipedia.org/wiki/Sinc_function
The sinc function is then analytic everywhere and hence an entire function. The function has also been called the cardinal sine or sine cardinal function. The term sinc was introduced by Philip M. Woodward in his 1952 article "Information theory and inverse probability in telecommunication", in which he said that the function "occurs so often in Fourier analysis and its applications that it does seem to merit some notation of its own", and his 1953 book Probability and Information Theory, with Applications to Radar. The function itself was first mathematically derived in this form by Lord Rayleigh in his expression (Rayleigh's Formula) for the zeroth-order spherical Bessel function of the first kind.
https://en.wikipedia.org/wiki/Sinc_function
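The two definitions above, with the removable singularity filled in at x = 0, can be sketched numerically. This is a minimal illustration, not part of the source article; the function names are my own:

```python
import math

def sinc_unnormalized(x):
    """Unnormalized sinc: sin(x)/x, with the removable singularity sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(x) / x

def sinc_normalized(x):
    """Normalized sinc: sin(pi x)/(pi x); its zeros are the nonzero integers."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

# The zeros of the normalized sinc are at x = +/-1, +/-2, ...
for n in range(1, 5):
    assert abs(sinc_normalized(n)) < 1e-12

# Near x = 0 both forms approach 1 (the squeeze-theorem limit).
assert abs(sinc_unnormalized(1e-8) - 1.0) < 1e-9
```

The scaling by π in the argument is the only difference between the two forms, which is why the normalized version picks up its zeros exactly at the nonzero integers.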
In mathematics, physics, and art, moiré patterns (UK: MWAR-ay, US: mwar-AY) or moiré fringes are large-scale interference patterns that can be produced when a partially opaque ruled pattern with transparent gaps is overlaid on another similar pattern. For the moiré interference pattern to appear, the two patterns must not be completely identical, but rather displaced, rotated, or have slightly different pitch. Moiré patterns appear in many situations. In printing, the printed pattern of dots can interfere with the image.
https://en.wikipedia.org/wiki/Moiré_effect
In television and digital photography, a pattern on an object being photographed can interfere with the shape of the light sensors to generate unwanted artifacts. They are also sometimes created deliberately – in micrometers they are used to amplify the effects of very small movements. In physics, its manifestation is wave interference such as that seen in the double-slit experiment and the beat phenomenon in acoustics.
https://en.wikipedia.org/wiki/Moiré_effect
In mathematics, physics, and engineering, a Euclidean vector (sometimes called a geometric or spatial vector, or – as here – simply a vector) is a geometric object that has both a magnitude (or length) and direction. A vector is what is needed to "carry" the point A to the point B; the Latin word vector means "one who carries". The magnitude of the vector is the distance between the two points and the direction refers to the direction of displacement from A to B. Many algebraic operations on real numbers such as addition, subtraction, multiplication, and negation have close analogues for vectors, operations which obey the familiar algebraic laws of commutativity, associativity, and distributivity.
https://en.wikipedia.org/wiki/Introduction_to_the_mathematics_of_general_relativity
In mathematics, physics, and engineering, a Euclidean vector or simply a vector (sometimes called a geometric vector or spatial vector) is a geometric object that has magnitude (or length) and direction. Vectors can be added to other vectors according to vector algebra. A Euclidean vector is frequently represented by a directed line segment, or graphically as an arrow connecting an initial point A with a terminal point B, and denoted by $\overrightarrow{AB}$. A vector is what is needed to "carry" the point A to the point B; the Latin word vector means "carrier".
https://en.wikipedia.org/wiki/Vector_direction
It was first used by 18th century astronomers investigating planetary revolution around the Sun. The magnitude of the vector is the distance between the two points, and the direction refers to the direction of displacement from A to B. Many algebraic operations on real numbers such as addition, subtraction, multiplication, and negation have close analogues for vectors, operations which obey the familiar algebraic laws of commutativity, associativity, and distributivity. These operations and associated laws qualify Euclidean vectors as an example of the more generalized concept of vectors defined simply as elements of a vector space.
https://en.wikipedia.org/wiki/Vector_direction
Vectors play an important role in physics: the velocity and acceleration of a moving object and the forces acting on it can all be described with vectors. Many other physical quantities can be usefully thought of as vectors. Although most of them do not represent distances (except, for example, position or displacement), their magnitude and direction can still be represented by the length and direction of an arrow. The mathematical representation of a physical vector depends on the coordinate system used to describe it. Other vector-like objects that describe physical quantities and transform in a similar way under changes of the coordinate system include pseudovectors and tensors.
https://en.wikipedia.org/wiki/Vector_direction
In mathematics, physics, and engineering, spatial frequency is a characteristic of any structure that is periodic across position in space. The spatial frequency is a measure of how often sinusoidal components (as determined by the Fourier transform) of the structure repeat per unit of distance. The SI unit of spatial frequency is the reciprocal meter (m⁻¹), i.e. cycles per meter.
https://en.wikipedia.org/wiki/Spatial_frequency
In image-processing applications, spatial frequency is often expressed in units of cycles per millimeter (mm⁻¹) or equivalently line pairs per mm. In wave propagation, the spatial frequency is also known as wavenumber. Ordinary wavenumber is defined as the reciprocal of wavelength $\lambda$ and is commonly denoted by $\xi$ or sometimes $\nu$: $\xi = \frac{1}{\lambda}$. Angular wavenumber $k$, expressed in radians per meter, is related to ordinary wavenumber and wavelength by $k = 2\pi\xi = \frac{2\pi}{\lambda}$.
https://en.wikipedia.org/wiki/Spatial_frequency
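The relations between wavelength, ordinary wavenumber, and angular wavenumber can be spot-checked directly. A minimal sketch (the function names and the sample wavelength are illustrative):

```python
import math

def ordinary_wavenumber(wavelength_m):
    """Ordinary wavenumber xi = 1/lambda, in cycles per meter."""
    return 1.0 / wavelength_m

def angular_wavenumber(wavelength_m):
    """Angular wavenumber k = 2*pi/lambda, in radians per meter."""
    return 2.0 * math.pi / wavelength_m

lam = 0.5                               # meters (illustrative value)
xi = ordinary_wavenumber(lam)           # 2.0 cycles per meter
k = angular_wavenumber(lam)             # 4*pi radians per meter

# The two wavenumbers differ by exactly a factor of 2*pi.
assert abs(k - 2.0 * math.pi * xi) < 1e-12
```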
In mathematics, physics, and theoretical computer graphics, tapering is a kind of shape deformation. Just as an affine transformation, such as scaling or shearing, is a first-order model of shape deformation, tapering is a higher-order deformation, as are twisting and bending. Tapering can be thought of as non-constant scaling by a given tapering function. The resulting deformations can be linear or nonlinear.
https://en.wikipedia.org/wiki/Tapering_(mathematics)
To create a nonlinear taper, instead of scaling in x and y for all z with constants as in $q=\begin{bmatrix}a&0&0\\0&b&0\\0&0&1\end{bmatrix}p$, let a and b be functions of z, so that $q=\begin{bmatrix}a(p_{z})&0&0\\0&b(p_{z})&0\\0&0&1\end{bmatrix}p$. An example of a linear taper is $a(z)=\alpha_{0}+\alpha_{1}z$, and of a quadratic taper $a(z)=\alpha_{0}+\alpha_{1}z+\alpha_{2}z^{2}$. As another example, if the parametric equation of a cube were given by ƒ(t) = (x(t), y(t), z(t)), a nonlinear taper could be applied so that the cube's volume slowly decreases (or tapers) as the function moves in the positive z direction. For the given cube, an example of a nonlinear taper along z would be if, for instance, the function T(z) = 1/(a + bz) were applied to the cube's equation such that ƒ(t) = (T(z)x(t), T(z)y(t), T(z)z(t)), for some real constants a and b.
https://en.wikipedia.org/wiki/Tapering_(mathematics)
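The z-dependent scaling matrix above amounts to multiplying x and y by a(z) and b(z) while leaving z fixed. A minimal sketch with illustrative taper coefficients (the α values below are arbitrary, not from the source):

```python
def taper(point, a, b):
    """Apply the taper q = diag(a(z), b(z), 1) * p to a 3D point."""
    x, y, z = point
    return (a(z) * x, b(z) * y, z)

# Linear taper a(z) = alpha0 + alpha1 * z  (illustrative coefficients).
a = lambda z: 1.0 - 0.5 * z
# Quadratic taper b(z) = alpha0 + alpha1*z + alpha2*z**2.
b = lambda z: 1.0 + 0.0 * z - 0.25 * z ** 2

assert taper((2.0, 2.0, 0.0), a, b) == (2.0, 2.0, 0.0)  # unchanged at z = 0
assert taper((2.0, 2.0, 1.0), a, b) == (1.0, 1.5, 1.0)  # narrower at z = 1
```

Because the scale factors depend on the point's own z-coordinate, a shape such as a cube narrows progressively along z, which is exactly the tapering effect described above.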
In mathematics, physics, electronics, control systems engineering, and statistics, the frequency domain refers to the analysis of mathematical functions or signals with respect to frequency, rather than time. Put simply, a time-domain graph shows how a signal changes over time, whereas a frequency-domain graph shows how the signal is distributed within different frequency bands over a range of frequencies. A frequency-domain representation consists of both the magnitude and the phase of a set of sinusoids (or other basis waveforms) at the frequency components of the signal.
https://en.wikipedia.org/wiki/Frequency_space
Although it is common to refer to the magnitude portion as the frequency response of a signal, the phase portion is required to uniquely define the signal. A given function or signal can be converted between the time and frequency domains with a pair of mathematical operators called transforms. An example is the Fourier transform, which converts a time function into a complex valued sum or integral of sine waves of different frequencies, with amplitudes and phases, each of which represents a frequency component.
https://en.wikipedia.org/wiki/Frequency_space
The "spectrum" of frequency components is the frequency-domain representation of the signal. The inverse Fourier transform converts the frequency-domain function back to the time-domain function. A spectrum analyzer is a tool commonly used to visualize electronic signals in the frequency domain.
https://en.wikipedia.org/wiki/Frequency_space
A frequency-domain representation may describe either a static function or a particular time period of a dynamic function (signal or system). The frequency transform of a dynamic function is performed over a finite time period of that function and assumes the function repeats infinitely outside of that time period. Some specialized signal processing techniques for dynamic functions use transforms that result in a joint time–frequency domain, with the instantaneous frequency response being a key link between the time domain and the frequency domain.
https://en.wikipedia.org/wiki/Frequency_space
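The time/frequency conversion described above can be demonstrated with a naive discrete Fourier transform pair. This sketch is illustrative only (a real implementation would use an FFT):

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform: time domain -> frequency domain."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT: frequency domain -> time domain."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

# A pure sinusoid at 2 cycles per frame concentrates its magnitude in
# frequency bins 2 and N-2; every other bin is (numerically) zero.
N = 8
signal = [math.cos(2 * math.pi * 2 * n / N) for n in range(N)]
mags = [abs(c) for c in dft(signal)]
assert mags[2] > 3.9 and mags[6] > 3.9
assert all(m < 1e-9 for i, m in enumerate(mags) if i not in (2, 6))

# The inverse transform recovers the time-domain signal (round trip).
recovered = idft(dft(signal))
assert all(abs(r - s) < 1e-9 for r, s in zip(recovered, signal))
```

The magnitudes of the bins are the "spectrum" of the signal; the complex phase of each bin, discarded in `mags` above, is the part that is still needed to uniquely reconstruct the signal.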
In mathematics, piecewise syndeticity is a notion of largeness of subsets of the natural numbers. A set $S\subset\mathbb{N}$ is called piecewise syndetic if there exists a finite subset G of $\mathbb{N}$ such that for every finite subset F of $\mathbb{N}$ there exists an $x\in\mathbb{N}$ such that $x+F\subset\bigcup_{n\in G}(S-n)$, where $S-n=\{m\in\mathbb{N}:m+n\in S\}$. Equivalently, S is piecewise syndetic if there is a constant b such that there are arbitrarily long intervals of $\mathbb{N}$ where the gaps in S are bounded by b.
https://en.wikipedia.org/wiki/Piecewise_syndetic_set
In mathematics, planar algebras first appeared in the work of Vaughan Jones on the standard invariant of a II1 subfactor. They also provide an appropriate algebraic framework for many knot invariants (in particular the Jones polynomial), and have been used in describing the properties of Khovanov homology with respect to tangle composition. Any subfactor planar algebra provides a family of unitary representations of Thompson groups. Any finite group (and quantum generalization) can be encoded as a planar algebra.
https://en.wikipedia.org/wiki/Planar_algebra
In mathematics, plurisubharmonic functions (sometimes abbreviated as psh, plsh, or plush functions) form an important class of functions used in complex analysis. On a Kähler manifold, plurisubharmonic functions form a subset of the subharmonic functions. However, unlike subharmonic functions (which are defined on a Riemannian manifold) plurisubharmonic functions can be defined in full generality on complex analytic spaces.
https://en.wikipedia.org/wiki/Plurisubharmonic_function
In mathematics, point-free geometry is a geometry whose primitive ontological notion is region rather than point. Two axiomatic systems are set out below, one grounded in mereology, the other in mereotopology and known as connection theory. Point-free geometry was first formulated in Whitehead (1919, 1920), not as a theory of geometry or of spacetime, but of "events" and of an "extension relation" between events. Whitehead's purposes were as much philosophical as scientific and mathematical.
https://en.wikipedia.org/wiki/Point-free_geometry
In mathematics, pointless topology, also called point-free topology (or pointfree topology) and locale theory, is an approach to topology that avoids mentioning points, and in which the lattices of open sets are the primitive notions. In this approach it becomes possible to construct topologically interesting spaces from purely algebraic data.
https://en.wikipedia.org/wiki/Pointless_topology
In mathematics, pointwise convergence is one of various senses in which a sequence of functions can converge to a particular function. It is weaker than uniform convergence, to which it is often compared.
https://en.wikipedia.org/wiki/Topology_of_pointwise_convergence
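The classic example separating the two modes of convergence is f_n(x) = xⁿ on [0, 1]: it converges pointwise to a limit that is 0 on [0, 1) and 1 at x = 1, but not uniformly, since sup |f_n − f| stays equal to 1. A numerical sketch (not from the source):

```python
def f_n(n, x):
    """The sequence f_n(x) = x**n on [0, 1]."""
    return x ** n

# Pointwise: at each fixed x in [0, 1), f_n(x) -> 0 as n grows.
for x in (0.0, 0.3, 0.9):
    assert f_n(200, x) < 1e-8
assert f_n(200, 1.0) == 1.0          # but the limit jumps to 1 at x = 1

# Not uniform: for every n there is an x with f_n(x) = 1/2,
# namely x = 2**(-1/n), so the sup-distance to the limit never shrinks.
for n in (10, 100, 1000):
    x = 0.5 ** (1.0 / n)
    assert abs(f_n(n, x) - 0.5) < 1e-9
```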
In mathematics, poly-Bernoulli numbers, denoted $B_{n}^{(k)}$, were defined by M. Kaneko as $\frac{Li_{k}(1-e^{-x})}{1-e^{-x}}=\sum_{n=0}^{\infty}B_{n}^{(k)}\frac{x^{n}}{n!}$, where Li is the polylogarithm. The $B_{n}^{(1)}$ are the usual Bernoulli numbers.
https://en.wikipedia.org/wiki/Poly-Bernoulli_number
Moreover, a generalization of the poly-Bernoulli numbers with parameters a, b, c is defined as follows: $\frac{Li_{k}(1-(ab)^{-x})}{b^{x}-a^{-x}}c^{xt}=\sum_{n=0}^{\infty}B_{n}^{(k)}(t;a,b,c)\frac{x^{n}}{n!}$,
https://en.wikipedia.org/wiki/Poly-Bernoulli_number
where Li is the polylogarithm. Kaneko also gave two combinatorial formulas: $B_{n}^{(-k)}=\sum_{m=0}^{n}(-1)^{m+n}m!\,S(n,m)(m+1)^{k}$ and $B_{n}^{(-k)}=\sum_{j=0}^{\min(n,k)}(j!)^{2}S(n+1,j+1)S(k+1,j+1)$, where $S(n,k)$ is the number of ways to partition a size $n$ set into $k$ non-empty subsets (the Stirling number of the second kind). A combinatorial interpretation is that the poly-Bernoulli numbers of negative index enumerate the set of $n$ by $k$ (0,1)-matrices uniquely reconstructible from their row and column sums.
https://en.wikipedia.org/wiki/Poly-Bernoulli_number
Also it is the number of open tours by a biased rook on a board $\underbrace{1\cdots 1}_{n}\underbrace{0\cdots 0}_{k}$ (see A329718 for definition). The poly-Bernoulli number $B_{k}^{(-k)}$ satisfies the asymptotic $B_{k}^{(-k)}\sim(k!)^{2}\sqrt{\frac{1}{k\pi(1-\log 2)}}\left(\frac{1}{\log 2}\right)^{2k+1}$ as $k\to\infty$. For a positive integer n and a prime number p, the poly-Bernoulli numbers satisfy $B_{n}^{(-p)}\equiv 2^{n}\pmod{p}$, which can be seen as an analog of Fermat's little theorem. Further, the equation $B_{x}^{(-n)}+B_{y}^{(-n)}=B_{z}^{(-n)}$ has no solution for integers x, y, z, n > 2, an analog of Fermat's Last Theorem. Moreover, there is an analogue of poly-Bernoulli numbers (like Bernoulli numbers and Euler numbers) known as poly-Euler numbers.
https://en.wikipedia.org/wiki/Poly-Bernoulli_number
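Kaneko's second combinatorial formula makes the negative-index poly-Bernoulli numbers directly computable from Stirling numbers of the second kind. A minimal sketch checking two of the properties stated above (the symmetry in n and k, and the Fermat-like congruence):

```python
from math import comb, factorial

def stirling2(n, k):
    """Stirling number of the second kind S(n, k), via the explicit sum."""
    return sum((-1) ** (k - j) * comb(k, j) * j ** n
               for j in range(k + 1)) // factorial(k)

def poly_bernoulli_neg(n, k):
    """B_n^{(-k)} via Kaneko's formula
    sum_{j=0}^{min(n,k)} (j!)^2 S(n+1, j+1) S(k+1, j+1)."""
    return sum(factorial(j) ** 2 * stirling2(n + 1, j + 1) * stirling2(k + 1, j + 1)
               for j in range(min(n, k) + 1))

# The formula is manifestly symmetric in n and k: B_n^{(-k)} = B_k^{(-n)}.
assert poly_bernoulli_neg(3, 5) == poly_bernoulli_neg(5, 3)

# Analog of Fermat's little theorem: B_n^{(-p)} = 2^n (mod p) for prime p.
for n in range(1, 6):
    assert poly_bernoulli_neg(n, 7) % 7 == pow(2, n, 7)
```

The diagonal values B_k^{(−k)} come out as 1, 2, 14, 230, ..., the sequence whose growth the asymptotic above describes.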
In mathematics, polyad is a concept of category theory introduced by Jean Bénabou in generalising monads. A polyad in a bicategory D is a bicategory morphism Φ from a locally punctual bicategory C to D, Φ: C → D. (A bicategory C is called locally punctual if all hom-categories C(X,Y) consist of one object and one morphism only.) Monads are polyads Φ: C → D where C has only one object.
https://en.wikipedia.org/wiki/Polyad
In mathematics, polynomial identity testing (PIT) is the problem of efficiently determining whether two multivariate polynomials are identical. More formally, a PIT algorithm is given an arithmetic circuit that computes a polynomial p over a field, and decides whether p is the zero polynomial. Determining the computational complexity required for polynomial identity testing is one of the most important open problems in algebraic complexity theory.
https://en.wikipedia.org/wiki/Polynomial_identity_testing
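A standard randomized approach to this problem, not spelled out in the excerpt, is based on the Schwartz–Zippel lemma: a nonzero polynomial of total degree d vanishes on a uniformly random point of Sⁿ with probability at most d/|S|, so evaluating at a few random points detects nonzero polynomials with high probability. A minimal sketch:

```python
import random

def probably_zero(poly, nvars, trials=20, field_size=10 ** 9 + 7):
    """Randomized identity test: evaluate `poly` (a callable taking a list of
    ints) at random points mod field_size. Returns False on any nonzero
    witness; True means zero with high probability."""
    for _ in range(trials):
        point = [random.randrange(field_size) for _ in range(nvars)]
        if poly(point) % field_size != 0:
            return False        # witness found: definitely not identically zero
    return True                 # no witness: identically zero w.h.p.

# (x + y)^2 - (x^2 + 2xy + y^2) is identically zero; dropping the 2xy term is not.
zero_poly = lambda p: (p[0] + p[1]) ** 2 - (p[0] ** 2 + 2 * p[0] * p[1] + p[1] ** 2)
nonzero_poly = lambda p: (p[0] + p[1]) ** 2 - p[0] ** 2 - p[1] ** 2

assert probably_zero(zero_poly, 2)
assert not probably_zero(nonzero_poly, 2)
```

The open problem referenced above is whether this randomness can be removed, i.e. whether PIT admits an efficient deterministic algorithm.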
In mathematics, positive definiteness is a property of any object to which a bilinear form or a sesquilinear form may be naturally associated, which is positive-definite. See, in particular: the positive-definite bilinear form, positive-definite function, positive-definite function on a group, positive-definite functional, positive-definite kernel, positive-definite matrix, and positive-definite quadratic form.
https://en.wikipedia.org/wiki/Positive_definite
In mathematics, potential flow around a circular cylinder is a classical solution for the flow of an inviscid, incompressible fluid around a cylinder that is transverse to the flow. Far from the cylinder, the flow is unidirectional and uniform. The flow has no vorticity and thus the velocity field is irrotational and can be modeled as a potential flow. Unlike a real fluid, this solution indicates a net zero drag on the body, a result known as d'Alembert's paradox.
https://en.wikipedia.org/wiki/Potential_flow_around_a_circular_cylinder
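The excerpt gives no formulas; the textbook form of this solution (an assumption here, not stated in the source) is the complex potential w(z) = U(z + R²/z) for free-stream speed U and cylinder radius R. A sketch verifying its two defining properties, tangency at the surface and uniformity far away:

```python
import cmath

U, R = 1.0, 1.0   # illustrative free-stream speed and cylinder radius

def velocity(z):
    """Velocity as a complex number: the conjugate of dw/dz for
    the assumed potential w(z) = U * (z + R**2 / z)."""
    dwdz = U * (1.0 - R ** 2 / z ** 2)
    return dwdz.conjugate()

# On the cylinder surface the radial (normal) velocity component vanishes,
# i.e. Re(v * conj(n)) = 0, so the flow is tangent to the body.
for deg in range(0, 360, 15):
    z = R * cmath.exp(1j * cmath.pi * deg / 180)
    n = z / abs(z)                       # outward unit normal
    assert abs((velocity(z) * n.conjugate()).real) < 1e-12

# Far from the cylinder the flow is uniform: v -> U.
assert abs(velocity(1000 + 0j) - U) < 1e-5
```

The fore-aft symmetry of this velocity field is what produces the net-zero drag of d'Alembert's paradox.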
In mathematics, power iteration (also known as the power method) is an eigenvalue algorithm: given a diagonalizable matrix $A$, the algorithm will produce a number $\lambda$, which is the greatest (in absolute value) eigenvalue of $A$, and a nonzero vector $v$, which is a corresponding eigenvector of $\lambda$, that is, $Av=\lambda v$. The algorithm is also known as the Von Mises iteration. Power iteration is a very simple algorithm, but it may converge slowly. The most time-consuming operation of the algorithm is the multiplication of matrix $A$ by a vector, so it is effective for a very large sparse matrix with appropriate implementation.
https://en.wikipedia.org/wiki/Power_method
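The algorithm itself is a few lines: repeatedly multiply a vector by A and renormalize, then read the eigenvalue off a Rayleigh quotient. A minimal pure-Python sketch for small dense matrices (illustrative, with a fixed iteration count rather than a convergence test):

```python
def power_iteration(A, num_iters=200):
    """Return (dominant eigenvalue, eigenvector estimate) for a small
    dense matrix A given as a list of row lists."""
    n = len(A)
    v = [1.0] * n                     # arbitrary nonzero starting vector
    for _ in range(num_iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]  # w = A v
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]     # renormalize to avoid overflow
    # Rayleigh quotient (v . A v) / (v . v) estimates the eigenvalue.
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    lam = sum(a * b for a, b in zip(Av, v)) / sum(x * x for x in v)
    return lam, v

# The matrix diag(2, 1) has dominant eigenvalue 2.
lam, v = power_iteration([[2.0, 0.0], [0.0, 1.0]])
assert abs(lam - 2.0) < 1e-9
```

The slow convergence mentioned above is visible in the iteration: the error shrinks like the ratio of the second-largest to largest eigenvalue magnitudes per step, so nearly-tied eigenvalues make the method crawl.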
In mathematics, precisely in the theory of functions of several complex variables, a pluriharmonic function is a real-valued function which is locally the real part of a holomorphic function of several complex variables. Sometimes such a function is referred to as an n-harmonic function, where n ≥ 2 is the dimension of the complex domain where the function is defined. However, in modern expositions of the theory of functions of several complex variables it is preferred to give an equivalent formulation of the concept, by defining a pluriharmonic function as a complex-valued function whose restriction to every complex line is a harmonic function with respect to the real and imaginary part of the complex line parameter.
https://en.wikipedia.org/wiki/Pluriharmonic_function
In mathematics, primitive recursive set functions or primitive recursive ordinal functions are analogs of primitive recursive functions, defined for sets or ordinals rather than natural numbers. They were introduced by Jensen & Karp (1971).
https://en.wikipedia.org/wiki/Primitive_recursive_ordinal_function
In mathematics, probabilistic metric spaces are a generalization of metric spaces where the distance no longer takes values in the non-negative real numbers R ≥ 0, but in distribution functions. Let D+ be the set of all probability distribution functions F such that F(0) = 0 (F is a nondecreasing, left-continuous mapping from R into [0, 1] such that max(F) = 1). Then given a non-empty set S and a function F: S × S → D+, where we denote F(p, q) by Fp,q for every (p, q) ∈ S × S, the ordered pair (S, F) is said to be a probabilistic metric space if: (1) for all u and v in S, u = v if and only if Fu,v(x) = 1 for all x > 0; (2) for all u and v in S, Fu,v = Fv,u; (3) for all u, v and w in S, Fu,v(x) = 1 and Fv,w(y) = 1 imply Fu,w(x + y) = 1 for x, y > 0.
https://en.wikipedia.org/wiki/Probabilistic_metric_space
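A concrete instance: any ordinary metric d induces a (degenerate) probabilistic metric via the step distribution F_{p,q}(x) = 1 if x > d(p, q), else 0. The three axioms can then be spot-checked numerically; this sketch uses the usual metric on the reals and a few sample points:

```python
def F(p, q, x, d):
    """Step distribution induced by an ordinary metric d."""
    return 1.0 if x > d(p, q) else 0.0

d = lambda p, q: abs(p - q)          # the usual metric on the reals
pts = [0.0, 1.0, 2.5]

for u in pts:
    for v in pts:
        # Axiom 2: symmetry, F_uv = F_vu.
        assert F(u, v, 1.7, d) == F(v, u, 1.7, d)
        # Axiom 1 (spot-checked at a tiny x): F_uv(x) = 1 for all x > 0 iff u = v.
        assert (F(u, v, 1e-9, d) == 1.0) == (u == v)

# Axiom 3 (triangle condition), checked over sample points and thresholds.
for u in pts:
    for v in pts:
        for w in pts:
            for x in (0.5, 1.0, 2.0):
                for y in (0.5, 1.0, 2.0):
                    if F(u, v, x, d) == 1.0 and F(v, w, y, d) == 1.0:
                        assert F(u, w, x + y, d) == 1.0
```

Axiom 3 holds here because x > d(u, v) and y > d(v, w) give x + y > d(u, v) + d(v, w) ≥ d(u, w) by the ordinary triangle inequality.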
In mathematics, progressive measurability is a property in the theory of stochastic processes. A progressively measurable process, while defined quite technically, is important because it implies the stopped process is measurable. Being progressively measurable is a strictly stronger property than the notion of being an adapted process. Progressively measurable processes are important in the theory of Itô integrals.
https://en.wikipedia.org/wiki/Progressively_measurable
In mathematics, projections onto convex sets (POCS), sometimes known as the alternating projection method, is a method to find a point in the intersection of two closed convex sets. It is a very simple algorithm and has been rediscovered many times. The simplest case, when the sets are affine spaces, was analyzed by John von Neumann. The case when the sets are affine spaces is special, since the iterates not only converge to a point in the intersection (assuming the intersection is non-empty) but to the orthogonal projection of the point onto the intersection.
https://en.wikipedia.org/wiki/Projections_onto_convex_sets
For general closed convex sets, the limit point need not be the projection. Classical work on the case of two closed convex sets shows that the rate of convergence of the iterates is linear. There are now extensions that consider cases when there are more than two sets, or when the sets are not convex, or that give faster convergence rates.
https://en.wikipedia.org/wiki/Projections_onto_convex_sets
Analysis of POCS and related methods attempt to show that the algorithm converges (and if so, find the rate of convergence), and whether it converges to the projection of the original point. These questions are largely known for simple cases, but a topic of active research for the extensions. There are also variants of the algorithm, such as Dykstra's projection algorithm. See the references in the further reading section for an overview of the variants, extensions and applications of the POCS method; a good historical background can be found in section III of.
https://en.wikipedia.org/wiki/Projections_onto_convex_sets
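The basic POCS iteration, alternately projecting onto each set, is easy to sketch in the plane. This toy example (not from the source) uses two intersecting closed convex sets, the unit disk and the vertical line x = 0.5, and checks that the iterates land in their intersection:

```python
import math

def project_disk(p):
    """Nearest point of the closed unit disk."""
    x, y = p
    r = math.hypot(x, y)
    return (x, y) if r <= 1.0 else (x / r, y / r)

def project_line(p):
    """Nearest point of the vertical line x = 0.5."""
    return (0.5, p[1])

p = (3.0, 4.0)                       # arbitrary starting point
for _ in range(100):
    p = project_line(project_disk(p))  # one POCS sweep per iteration

# The limit lies in the intersection: on the line and inside the disk.
assert abs(p[0] - 0.5) < 1e-9
assert math.hypot(*p) <= 1.0 + 1e-9
```

As the text notes, the limit point is not in general the projection of the starting point onto the intersection; variants such as Dykstra's algorithm modify the iteration to recover that projection.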