In mathematics, the concept of a projective space originated from the visual effect of perspective, where parallel lines seem to meet at infinity. A projective space may thus be viewed as the extension of a Euclidean space, or, more generally, an affine space, with points at infinity, in such a way that there is one point at infinity for each direction of parallel lines. This definition of a projective space has the disadvantage of not being isotropic, having two different sorts of points, which must be considered separately in proofs. Therefore, other definitions are generally preferred.
https://en.wikipedia.org/wiki/Projective_spaces
There are two classes of definitions. In synthetic geometry, point and line are primitive entities that are related by the incidence relation "a point is on a line" or "a line passes through a point", which is subject to the axioms of projective geometry. For some such set of axioms, the projective spaces that are defined have been shown to be equivalent to those resulting from the following definition, which is more often encountered in modern textbooks.
https://en.wikipedia.org/wiki/Projective_spaces
Using linear algebra, a projective space of dimension n is defined as the set of the vector lines (that is, vector subspaces of dimension one) in a vector space V of dimension n + 1. Equivalently, it is the quotient set of V \ {0} by the equivalence relation "being on the same vector line". As a vector line intersects the unit sphere of V in two antipodal points, projective spaces can be equivalently defined as spheres in which antipodal points are identified.
https://en.wikipedia.org/wiki/Projective_spaces
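The quotient construction above can be illustrated numerically: two nonzero vectors represent the same projective point exactly when they are scalar multiples of each other, so a canonical representative of each vector line identifies antipodal points on the sphere. A minimal sketch (the function name is hypothetical, not from the source):

```python
import numpy as np

def projective_rep(v, tol=1e-12):
    """Canonical representative of the vector line through v:
    scale v so its first nonzero coordinate equals 1."""
    v = np.asarray(v, dtype=float)
    nz = np.flatnonzero(np.abs(v) > tol)
    if nz.size == 0:
        raise ValueError("the zero vector represents no projective point")
    return v / v[nz[0]]

v = np.array([2.0, -4.0, 6.0])
assert np.allclose(projective_rep(v), projective_rep(-v))     # antipodal points identified
assert np.allclose(projective_rep(v), projective_rep(3 * v))  # same vector line
```

Any nonzero scalar multiple, including the antipodal vector, maps to the same representative, which is exactly the equivalence relation "being on the same vector line".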
A projective space of dimension 1 is a projective line, and a projective space of dimension 2 is a projective plane. Projective spaces are widely used in geometry, as they allow simpler statements and simpler proofs. For example, in affine geometry, two distinct lines in a plane intersect in at most one point, while, in projective geometry, they intersect in exactly one point. Also, there is only one class of conic sections, which can be distinguished only by their intersections with the line at infinity: two intersection points for hyperbolas; one for the parabola, which is tangent to the line at infinity; and no real intersection point for ellipses. In topology, and more specifically in manifold theory, projective spaces play a fundamental role, being typical examples of non-orientable manifolds.
https://en.wikipedia.org/wiki/Projective_spaces
In mathematics, the concept of a relatively hyperbolic group is an important generalization of the geometric group theory concept of a hyperbolic group. The motivating examples of relatively hyperbolic groups are the fundamental groups of complete noncompact hyperbolic manifolds of finite volume.
https://en.wikipedia.org/wiki/Relatively_hyperbolic_group
In mathematics, the concept of a residuated mapping arises in the theory of partially ordered sets. It refines the concept of a monotone function. If A, B are posets, a function f: A → B is defined to be monotone if it is order-preserving: that is, if x ≤ y implies f(x) ≤ f(y). This is equivalent to the condition that the preimage under f of every down-set of B is a down-set of A. We define a principal down-set to be one of the form ↓{b} = { b' ∈ B: b' ≤ b }.
https://en.wikipedia.org/wiki/Residuated_mapping
In general the preimage under f of a principal down-set need not be a principal down-set. If all of them are, f is called residuated.
https://en.wikipedia.org/wiki/Residuated_mapping
The notion of residuated map can be generalized to a binary operator (or any higher arity) via component-wise residuation. This approach gives rise to notions of left and right division in a partially ordered magma, additionally endowing it with a quasigroup structure. (One speaks only of residuated algebra for higher arities). A binary (or higher arity) residuated map is usually not residuated as a unary map.
https://en.wikipedia.org/wiki/Residuated_mapping
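The definition above can be checked concretely on small finite posets. A minimal sketch, assuming A = {0, …, 24} and B = {0, …, 49} with the usual order and the monotone map f(x) = 2x (the example and the name `residual` are illustrative, not from the source):

```python
A = range(25)          # poset A = {0, ..., 24} with the usual order
B = range(50)          # poset B = {0, ..., 49}

def f(x):              # monotone map A -> B: x -> 2x
    return 2 * x

def residual(y):       # candidate residual B -> A: largest x in A with f(x) <= y
    return min(y // 2, 24)

for y in B:
    # the preimage under f of the principal down-set ↓{y} of B ...
    preimage = {x for x in A if f(x) <= y}
    # ... is the principal down-set ↓{residual(y)} of A, so f is residuated;
    # equivalently: f(x) <= y iff x <= residual(y), a Galois connection
    assert preimage == {x for x in A if x <= residual(y)}
```

Since every preimage of a principal down-set is again principal, this f is residuated, with the residual acting as a kind of order-theoretic "division by 2".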
In mathematics, the concept of abelian variety is the higher-dimensional generalization of the elliptic curve. The equations defining abelian varieties are a topic of study because every abelian variety is a projective variety. In dimension d ≥ 2, however, it is no longer as straightforward to discuss such equations. There is a large classical literature on this question, which in a reformulation is, for complex algebraic geometry, a question of describing relations between theta functions. The modern geometric treatment now refers to some basic papers of David Mumford, from 1966 to 1967, which reformulated that theory in terms from abstract algebraic geometry valid over general fields.
https://en.wikipedia.org/wiki/Equations_defining_abelian_varieties
In mathematics, the concept of an inverse element generalises the concepts of opposite (−x) and reciprocal (1/x) of numbers. Given an operation denoted here ∗, and an identity element denoted e, if x ∗ y = e, one says that x is a left inverse of y, and that y is a right inverse of x. (An identity element is an element such that x ∗ e = x and e ∗ y = y for all x and y for which the left-hand sides are defined.) When the operation ∗ is associative, if an element x has both a left inverse and a right inverse, then these two inverses are equal and unique; they are called the inverse element or simply the inverse.
https://en.wikipedia.org/wiki/Left_inverse_element
Often an adjective is added for specifying the operation, such as in additive inverse, multiplicative inverse, and functional inverse. In this case (associative operation), an invertible element is an element that has an inverse. In a ring, an invertible element, also called a unit, is an element that is invertible under multiplication (this is not ambiguous, as every element is invertible under addition).
https://en.wikipedia.org/wiki/Left_inverse_element
Inverses are commonly used in groups—where every element is invertible, and rings—where invertible elements are also called units. They are also commonly used for operations that are not defined for all possible operands, such as inverse matrices and inverse functions.
https://en.wikipedia.org/wiki/Left_inverse_element
This has been generalized to category theory, where, by definition, an isomorphism is an invertible morphism. The word 'inverse' is derived from the Latin inversus, meaning 'turned upside down' or 'overturned'. This may take its origin from the case of fractions, where the (multiplicative) inverse is obtained by exchanging the numerator and the denominator (the inverse of x/y is y/x).
https://en.wikipedia.org/wiki/Left_inverse_element
In mathematics, the concept of graph dynamical systems can be used to capture a wide range of processes taking place on graphs or networks. A major theme in the mathematical and computational analysis of GDSs is to relate their structural properties (e.g. the network connectivity) and the global dynamics that result. The work on GDSs considers finite graphs and finite state spaces.
https://en.wikipedia.org/wiki/Graph_dynamical_system
As such, the research typically involves techniques from, e.g., graph theory, combinatorics, algebra, and dynamical systems rather than differential geometry. In principle, one could define and study GDSs over an infinite graph (e.g. cellular automata or probabilistic cellular automata over ℤ^k, or interacting particle systems when some randomness is included), as well as GDSs with infinite state space (e.g. ℝ, as in coupled map lattices); see, for example, Wu. In the following, everything is implicitly assumed to be finite unless stated otherwise.
https://en.wikipedia.org/wiki/Graph_dynamical_system
In mathematics, the concept of groupoid algebra generalizes the notion of group algebra.
https://en.wikipedia.org/wiki/Groupoid_algebra
In mathematics, the concept of irreducibility is used in several ways. A polynomial over a field is an irreducible polynomial if it cannot be factored into the product of two non-constant polynomials over that field. In abstract algebra, irreducible can be an abbreviation for irreducible element of an integral domain; for example an irreducible polynomial. In representation theory, an irreducible representation is a nontrivial representation with no nontrivial proper subrepresentations.
https://en.wikipedia.org/wiki/Irreducible_(mathematics)
Similarly, an irreducible module is another name for a simple module. Absolutely irreducible is a term applied to mean irreducible, even after any finite extension of the field of coefficients. It applies in various situations, for example to irreducibility of a linear representation, or of an algebraic variety; where it means just the same as irreducible over an algebraic closure.
https://en.wikipedia.org/wiki/Irreducible_(mathematics)
In commutative algebra, a commutative ring R is irreducible if its prime spectrum, that is, the topological space Spec R, is an irreducible topological space. A matrix is irreducible if it is not similar via a permutation to a block upper triangular matrix (that has more than one block of positive size). (Replacing non-zero entries in the matrix by one, and viewing the matrix as the adjacency matrix of a directed graph, the matrix is irreducible if and only if such directed graph is strongly connected.)
https://en.wikipedia.org/wiki/Irreducible_(mathematics)
Also, a Markov chain is irreducible if there is a non-zero probability of transitioning (even if in more than one step) from any state to any other state. In the theory of manifolds, an n-manifold is irreducible if any embedded (n − 1)-sphere bounds an embedded n-ball.
https://en.wikipedia.org/wiki/Irreducible_(mathematics)
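The equivalence between matrix irreducibility and strong connectivity of the associated digraph (which also covers irreducible Markov chains via their transition matrices) can be tested directly. A minimal sketch, using the standard criterion that a nonnegative pattern is irreducible iff (I + |A|)^(n−1) has no zero entry (the function name is illustrative):

```python
import numpy as np

def is_irreducible(A):
    """A square matrix is irreducible iff the digraph with an edge i -> j
    whenever A[i, j] != 0 is strongly connected; equivalently,
    (I + |A|)^(n-1) has no zero entry."""
    A = np.asarray(A)
    n = A.shape[0]
    M = np.linalg.matrix_power(np.eye(n) + (np.abs(A) > 0), n - 1)
    return bool((M > 0).all())

permutation_cycle = np.array([[0, 1, 0],
                              [0, 0, 1],
                              [1, 0, 0]])   # 0 -> 1 -> 2 -> 0: strongly connected
block_triangular = np.array([[1, 1],
                             [0, 1]])       # no path from state 1 back to state 0
assert is_irreducible(permutation_cycle)
assert not is_irreducible(block_triangular)
```

The second matrix is already block upper triangular, so it fails the definition; the first is a cyclic permutation, whose digraph visits every vertex from every other.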
Implicit in this definition is the use of a suitable category, such as the category of differentiable manifolds or the category of piecewise-linear manifolds. The notions of irreducibility in algebra and manifold theory are related. An n-manifold is called prime, if it cannot be written as a connected sum of two n-manifolds (neither of which is an n-sphere).
https://en.wikipedia.org/wiki/Irreducible_(mathematics)
An irreducible manifold is thus prime, although the converse does not hold. From an algebraist's perspective, prime manifolds should be called "irreducible"; however, the topologist (in particular the 3-manifold topologist) finds the definition above more useful. The only compact, connected 3-manifolds that are prime but not irreducible are the trivial 2-sphere bundle over S1 and the twisted 2-sphere bundle over S1.
https://en.wikipedia.org/wiki/Irreducible_(mathematics)
See, for example, Prime decomposition (3-manifold). A topological space is irreducible if it is not the union of two proper closed subsets. This notion is used in algebraic geometry, where spaces are equipped with the Zariski topology; it is not of much significance for Hausdorff spaces.
https://en.wikipedia.org/wiki/Irreducible_(mathematics)
See also irreducible component, algebraic variety. In universal algebra, irreducible can refer to the inability to represent an algebraic structure as a composition of simpler structures using a product construction; for example subdirectly irreducible. A 3-manifold is P²-irreducible if it is irreducible and contains no 2-sided ℝP² (real projective plane). An irreducible fraction (or fraction in lowest terms) is a vulgar fraction in which the numerator and denominator are smaller than those in any other equivalent fraction.
https://en.wikipedia.org/wiki/Irreducible_(mathematics)
In mathematics, the concept of quantity is an ancient one extending back to the time of Aristotle and earlier. Aristotle regarded quantity as a fundamental ontological and scientific category. In Aristotle's ontology, quantity or quantum was classified into two different types, which he characterized as follows: Quantum means that which is divisible into two or more constituent parts, of which each is by nature a one and a this. A quantum is a plurality if it is numerable, a magnitude if it is measurable.
https://en.wikipedia.org/wiki/Mathematical_quantity
Plurality means that which is divisible potentially into non-continuous parts, magnitude that which is divisible into continuous parts; of magnitude, that which is continuous in one dimension is length; in two breadth, in three depth. Of these, limited plurality is number, limited length is a line, breadth a surface, depth a solid. In his Elements, Euclid developed the theory of ratios of magnitudes without studying the nature of magnitudes, as Archimedes, but giving the following significant definitions: A magnitude is a part of a magnitude, the less of the greater, when it measures the greater; A ratio is a sort of relation in respect of size between two magnitudes of the same kind.
https://en.wikipedia.org/wiki/Mathematical_quantity
For Aristotle and Euclid, relations were conceived as whole numbers (Michell, 1993). John Wallis later conceived of ratios of magnitudes as real numbers: When a comparison in terms of ratio is made, the resultant ratio often leaves the genus of quantities compared, and passes into the numerical genus, whatever the genus of quantities compared may have been. That is, the ratio of magnitudes of any quantity, whether volume, mass, heat and so on, is a number. Following this, Newton then defined number, and the relationship between quantity and number, in the following terms: By number we understand not so much a multitude of unities, as the abstracted ratio of any quantity to another quantity of the same kind, which we take for unity.
https://en.wikipedia.org/wiki/Mathematical_quantity
In mathematics, the concept of symmetry is studied with the notion of a mathematical group. Every polyhedron has an associated symmetry group, which is the set of all transformations (Euclidean isometries) which leave the polyhedron invariant. The order of the symmetry group is the number of symmetries of the polyhedron. One often distinguishes between the full symmetry group, which includes reflections, and the proper symmetry group, which includes only rotations.
https://en.wikipedia.org/wiki/Regular_solid
The symmetry groups of the Platonic solids are a special class of three-dimensional point groups known as polyhedral groups. The high degree of symmetry of the Platonic solids can be interpreted in a number of ways. Most importantly, the vertices of each solid are all equivalent under the action of the symmetry group, as are the edges and faces.
https://en.wikipedia.org/wiki/Regular_solid
One says the action of the symmetry group is transitive on the vertices, edges, and faces. In fact, this is another way of defining regularity of a polyhedron: a polyhedron is regular if and only if it is vertex-uniform, edge-uniform, and face-uniform. There are only three symmetry groups associated with the Platonic solids rather than five, since the symmetry group of any polyhedron coincides with that of its dual.
https://en.wikipedia.org/wiki/Regular_solid
This is easily seen by examining the construction of the dual polyhedron. Any symmetry of the original must be a symmetry of the dual and vice versa. The three polyhedral groups are: the tetrahedral group T, the octahedral group O (which is also the symmetry group of the cube), and the icosahedral group I (which is also the symmetry group of the dodecahedron). The orders of the proper (rotation) groups are 12, 24, and 60 respectively – precisely twice the number of edges in the respective polyhedra.
https://en.wikipedia.org/wiki/Regular_solid
The orders of the full symmetry groups are twice as much again (24, 48, and 120). See (Coxeter 1973) for a derivation of these facts.
https://en.wikipedia.org/wiki/Regular_solid
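The numerical facts above (rotation group order = twice the edges, full group order = twice that again) can be verified from the vertex, edge, and face counts alone. A small sketch:

```python
# (vertices, edges, faces) for each Platonic solid
solids = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}

for name, (V, E, F) in solids.items():
    assert V - E + F == 2              # Euler's formula for convex polyhedra
    rotation_order = 2 * E             # proper symmetry group: twice the edges
    full_order = 2 * rotation_order    # full group: reflections double it again
    print(f"{name:12s} rotations={rotation_order:3d} full={full_order:3d}")
```

This reproduces the orders 12, 24, 60 for the rotation groups and 24, 48, 120 for the full groups, with dual pairs (cube/octahedron, dodecahedron/icosahedron) sharing the same orders.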
All Platonic solids except the tetrahedron are centrally symmetric, meaning they are preserved under reflection through the origin. The following table lists the various symmetry properties of the Platonic solids. The symmetry groups listed are the full groups, with the rotation subgroups given in parentheses (likewise for the number of symmetries). Wythoff's kaleidoscope construction is a method for constructing polyhedra directly from their symmetry groups. Wythoff's symbol for each of the Platonic solids is listed for reference.
https://en.wikipedia.org/wiki/Regular_solid
In mathematics, the concepts of essential infimum and essential supremum are related to the notions of infimum and supremum, but adapted to measure theory and functional analysis, where one often deals with statements that are not valid for all elements in a set, but rather almost everywhere, that is, except on a set of measure zero. While the exact definition is not immediately straightforward, intuitively the essential supremum of a function is the smallest value that is greater than or equal to the function values everywhere while ignoring what the function does at a set of points of measure zero. For example, if one takes the function f(x) that is equal to zero everywhere except at x = 0 where f(0) = 1, then the supremum of the function equals one. However, its essential supremum is zero because we are allowed to ignore what the function does at the single point where f is peculiar. The essential infimum is defined in a similar way.
https://en.wikipedia.org/wiki/Ess_sup
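The example above has a simple discrete analogue: with respect to a measure given by pointwise weights, the essential supremum ignores any point carrying measure zero. A minimal sketch under that discrete-measure assumption (the function name is illustrative):

```python
def ess_sup(values, weights):
    """Essential supremum of a function given pointwise, with respect to
    a discrete measure: ignore points carrying measure zero."""
    support = [v for v, w in zip(values, weights) if w > 0]
    return max(support)

# f is 0 everywhere except f = 1 at one point, and that point has measure 0
values  = [0.0, 0.0, 1.0, 0.0]
weights = [0.25, 0.25, 0.0, 0.5]        # the spike sits on a null set
assert max(values) == 1.0               # the ordinary supremum sees the spike
assert ess_sup(values, weights) == 0.0  # the essential supremum ignores it
```

This mirrors the f(0) = 1 example in the text: the supremum is 1, the essential supremum is 0.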
In mathematics, the conductor of an elliptic curve over the field of rational numbers, or more generally a local or global field, is an integral ideal analogous to the Artin conductor of a Galois representation. It is given as a product of prime ideals, together with associated exponents, which encode the ramification in the field extensions generated by the points of finite order in the group law of the elliptic curve. The primes involved in the conductor are precisely the primes of bad reduction of the curve: this is the Néron–Ogg–Shafarevich criterion. Ogg's formula expresses the conductor in terms of the discriminant and the number of components of the special fiber over a local field, which can be computed using Tate's algorithm.
https://en.wikipedia.org/wiki/Conductor_of_an_elliptic_curve
In mathematics, the conductor-discriminant formula or Führerdiskriminantenproduktformel, introduced by Hasse (1926, 1930) for abelian extensions and by Artin (1931) for Galois extensions, is a formula calculating the relative discriminant of a finite Galois extension L/K of local or global fields from the Artin conductors of the irreducible characters Irr(G) of the Galois group G = G(L/K).
https://en.wikipedia.org/wiki/Conductor-discriminant_formula
In mathematics, the cone condition is a property which may be satisfied by a subset of a Euclidean space. Informally, it requires that for each point in the subset a cone with vertex in that point must be contained in the subset itself, and so the subset is "non-flat".
https://en.wikipedia.org/wiki/Cone_condition
In mathematics, the cone of curves (sometimes the Kleiman–Mori cone) of an algebraic variety X is a combinatorial invariant of importance to the birational geometry of X.
https://en.wikipedia.org/wiki/Cone_of_curves
In mathematics, the conformal dimension of a metric space X is the infimum of the Hausdorff dimension over the conformal gauge of X, that is, the class of all metric spaces quasisymmetric to X.
https://en.wikipedia.org/wiki/Conformal_dimension
In mathematics, the conformal group of an inner product space is the group of transformations from the space to itself that preserve angles. More formally, it is the group of transformations that preserve the conformal geometry of the space. Several specific conformal groups are particularly important, including the conformal orthogonal group.
https://en.wikipedia.org/wiki/Conformal_group_of_spacetime
If V is a vector space with a quadratic form Q, then the conformal orthogonal group CO(V, Q) is the group of linear transformations T of V for which there exists a scalar λ such that Q(Tx) = λ²Q(x) for all x in V. For a definite quadratic form, the conformal orthogonal group is equal to the orthogonal group times the group of dilations. The conformal group of the sphere is generated by the inversions in circles; this group is also known as the Möbius group. In Euclidean space Eⁿ, n > 2, the conformal group is generated by inversions in hyperspheres. In a pseudo-Euclidean space Ep,q, the conformal group is Conf(p, q) ≃ O(p + 1, q + 1) / Z₂. All conformal groups are Lie groups.
https://en.wikipedia.org/wiki/Conformal_group_of_spacetime
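The defining condition Q(Tx) = λ²Q(x) can be checked numerically for the definite case: composing a rotation (an orthogonal map) with a dilation gives an element of CO(V, Q). A small sketch in ℝ² with Q the standard quadratic form:

```python
import numpy as np

theta, lam = 0.7, 3.0
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # orthogonal: preserves Q exactly
T = lam * R                                      # dilation times rotation

Q = lambda x: float(x @ x)                       # definite quadratic form on R^2
rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.standard_normal(2)
    assert np.isclose(Q(T @ x), lam**2 * Q(x))   # Q(Tx) = lambda^2 Q(x)
```

This illustrates the decomposition stated above: for a definite form, every conformal orthogonal map is an orthogonal map followed by a dilation.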
In mathematics, the conformal radius is a way to measure the size of a simply connected planar domain D viewed from a point z in it. As opposed to notions using Euclidean distance (say, the radius of the largest inscribed disk with center z), this notion is well-suited to use in complex analysis, in particular in conformal maps and conformal geometry. A closely related notion is the transfinite diameter or (logarithmic) capacity of a compact simply connected set D, which can be considered as the inverse of the conformal radius of the complement E = Dc viewed from infinity.
https://en.wikipedia.org/wiki/Conformal_radius
In mathematics, the congruence lattice problem asks whether every algebraic distributive lattice is isomorphic to the congruence lattice of some other lattice. The problem was posed by Robert P. Dilworth, and for many years it was one of the most famous and long-standing open problems in lattice theory; it had a deep impact on the development of lattice theory itself. The conjecture that every distributive lattice is a congruence lattice is true for all distributive lattices with at most ℵ1 compact elements, but F. Wehrung provided a counterexample for distributive lattices with ℵ2 compact elements using a construction based on Kuratowski's free set theorem.
https://en.wikipedia.org/wiki/Huhn's_theorem
In mathematics, the conjugate gradient method is an algorithm for the numerical solution of particular systems of linear equations, namely those whose matrix is symmetric and positive-definite. The conjugate gradient method is often implemented as an iterative algorithm, applicable to sparse systems that are too large to be handled by a direct implementation or other direct methods such as the Cholesky decomposition. Large sparse systems often arise when numerically solving partial differential equations or optimization problems.
https://en.wikipedia.org/wiki/Preconditioned_conjugate_gradient_method
The conjugate gradient method can also be used to solve unconstrained optimization problems such as energy minimization. It is commonly attributed to Magnus Hestenes and Eduard Stiefel, who programmed it on the Z4 and extensively researched it. The biconjugate gradient method provides a generalization to non-symmetric matrices. Various nonlinear conjugate gradient methods seek minima of nonlinear optimization problems.
https://en.wikipedia.org/wiki/Preconditioned_conjugate_gradient_method
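A minimal sketch of the plain (unpreconditioned) conjugate gradient iteration for a symmetric positive-definite system, checked against a dense direct solve:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve Ax = b for symmetric positive-definite A (plain CG, no preconditioner)."""
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x                      # residual
    p = r.copy()                       # first search direction
    rs = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs / (p @ Ap)          # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p      # next direction, A-conjugate to previous ones
        rs = rs_new
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)            # symmetric positive-definite test matrix
b = rng.standard_normal(5)
assert np.allclose(conjugate_gradient(A, b), np.linalg.solve(A, b))
```

In exact arithmetic CG terminates in at most n steps; in practice it is run as an iterative method and stopped once the residual norm falls below a tolerance, which is what makes it attractive for large sparse systems.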
In mathematics, the conjugate of an expression of the form a + b√d is a − b√d, provided that √d does not appear in a and b. One says also that the two expressions are conjugate. In particular, the two solutions of a quadratic equation are conjugate, as per the ± in the quadratic formula x = (−b ± √(b² − 4ac)) / (2a). Complex conjugation is the special case where the square root is i = √−1, the imaginary unit.
https://en.wikipedia.org/wiki/Conjugate_(square_roots)
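The key property of such conjugates is that their sum and product are rational, which is why they pair up as roots of a quadratic with rational coefficients. A small sketch representing elements of ℚ(√2) as pairs (a, b) meaning a + b√2 (this encoding is illustrative):

```python
from fractions import Fraction as F

d = 2  # work in Q(sqrt(2)); elements are pairs (a, b) meaning a + b*sqrt(d)

def conj(x):       # conjugate: a + b*sqrt(d) -> a - b*sqrt(d)
    a, b = x
    return (a, -b)

def mul(x, y):     # (a + b sqrt(d))(p + q sqrt(d)) = (ap + bq d) + (aq + bp) sqrt(d)
    a, b = x
    p, q = y
    return (a * p + b * q * d, a * q + b * p)

x = (F(3), F(1))                      # 3 + sqrt(2), a root of t^2 - 6t + 7
norm = mul(x, conj(x))                # x times its conjugate
assert norm == (F(7), F(0))           # product is rational: 3^2 - 1^2 * 2 = 7
trace = (x[0] + conj(x)[0], x[1] + conj(x)[1])
assert trace == (F(6), F(0))          # sum of conjugate roots is rational: 6
```

Here 3 + √2 and 3 − √2 are exactly the two roots of t² − 6t + 7 produced by the ± in the quadratic formula.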
In mathematics, the conjugate transpose, also known as the Hermitian transpose, of an m × n complex matrix A is the n × m matrix obtained by transposing A and applying complex conjugation to each entry (the complex conjugate of a + ib being a − ib, for real numbers a and b). It is often denoted as A^H or A* or A′, and very commonly in physics as A†. For real matrices, the conjugate transpose is just the transpose: A^H = A^T.
https://en.wikipedia.org/wiki/Adjoint_matrix
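In NumPy the conjugate transpose is simply the composition of `.conj()` and `.T`, in either order. A short sketch:

```python
import numpy as np

A = np.array([[1 + 2j, 3 - 1j, 0 + 0j],
              [4 + 0j, 5 + 5j, 0 - 2j]])  # a 2 x 3 complex matrix

A_H = A.conj().T                          # conjugate transpose: 3 x 2
assert A_H.shape == (3, 2)
assert A_H[0, 1] == 4                     # entry transposed and conjugated
assert np.array_equal(A_H, A.T.conj())    # conjugation and transposition commute

R = np.array([[1.0, 2.0], [3.0, 4.0]])    # for real matrices, A^H is just A^T
assert np.array_equal(R.conj().T, R.T)
```

The last assertion illustrates the statement above that for real matrices the conjugate transpose reduces to the ordinary transpose.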
In mathematics, the connective constant is a numerical quantity associated with self-avoiding walks on a lattice. It is studied in connection with the notion of universality in two-dimensional statistical physics models. While the connective constant depends on the choice of lattice so itself is not universal (similarly to other lattice-dependent quantities such as the critical probability threshold for percolation), it is nonetheless an important quantity that appears in conjectures for universal laws. Furthermore, the mathematical techniques used to understand the connective constant, for example in the recent rigorous proof by Duminil-Copin and Smirnov that the connective constant of the hexagonal lattice has the precise value √(2 + √2), may provide clues to a possible approach for attacking other important open problems in the study of self-avoiding walks, notably the conjecture that self-avoiding walks converge in the scaling limit to the Schramm–Loewner evolution.
https://en.wikipedia.org/wiki/Connective_constant
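The connective constant μ of a lattice is the limit of c_n^(1/n), where c_n counts n-step self-avoiding walks from the origin. For tiny n these counts can be enumerated by brute force; a sketch on the square lattice ℤ² (exponential-time enumeration, only feasible for small n):

```python
def count_saws(n):
    """Count n-step self-avoiding walks on the square lattice Z^2 (brute force)."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    def extend(path, visited, remaining):
        if remaining == 0:
            return 1
        x, y = path[-1]
        total = 0
        for dx, dy in steps:
            nxt = (x + dx, y + dy)
            if nxt not in visited:           # self-avoidance: never revisit a site
                visited.add(nxt)
                path.append(nxt)
                total += extend(path, visited, remaining - 1)
                path.pop()
                visited.remove(nxt)
        return total
    return extend([(0, 0)], {(0, 0)}, n)

counts = [count_saws(n) for n in range(1, 8)]
assert counts[:4] == [4, 12, 36, 100]   # known values c_1..c_4 on Z^2
# the ratios c_{n+1}/c_n approach the square-lattice connective constant
# (approximately 2.638; only the hexagonal-lattice value is known exactly)
print(counts[-1] / counts[-2])
```

The ratio estimate converges slowly; the point of the example is the definition of c_n, not an efficient computation.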
In mathematics, the constant problem is the problem of deciding whether a given expression is equal to zero.
https://en.wikipedia.org/wiki/Constant_problem
In mathematics, the constant sheaf on a topological space X associated to a set A is a sheaf of sets on X whose stalks are all equal to A. It is denoted by A̲ or A_X. The constant presheaf with value A is the presheaf that assigns to each open subset of X the value A, and all of whose restriction maps are the identity map A → A.
https://en.wikipedia.org/wiki/Constant_sheaf
The constant sheaf associated to A is the sheafification of the constant presheaf associated to A. This sheaf identifies with the sheaf of locally constant A-valued functions on X. In certain cases, the set A may be replaced with an object A in some category C (e.g. when C is the category of abelian groups, or commutative rings). Constant sheaves of abelian groups appear in particular as coefficients in sheaf cohomology.
https://en.wikipedia.org/wiki/Constant_sheaf
In mathematics, the continuous Hahn polynomials are a family of orthogonal polynomials in the Askey scheme of hypergeometric orthogonal polynomials. They are defined in terms of generalized hypergeometric functions by pn(x; a, b, c, d) = i^n ((a+c)n (a+d)n / n!) 3F2(−n, n+a+b+c+d−1, a+ix; a+c, a+d; 1), where (a)n denotes the Pochhammer symbol. Roelof Koekoek, Peter A. Lesky, and René F. Swarttouw (2010, 14) give a detailed list of their properties. Closely related polynomials include the dual Hahn polynomials Rn(x;γ,δ,N), the Hahn polynomials Qn(x;a,b,c), and the continuous dual Hahn polynomials Sn(x;a,b,c). These polynomials all have q-analogs with an extra parameter q, such as the q-Hahn polynomials Qn(x;α,β, N;q), and so on.
https://en.wikipedia.org/wiki/Continuous_Hahn_polynomials
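The 3F2 definition is a terminating series (the top parameter −n cuts it off), so it can be evaluated directly with an arbitrary-precision hypergeometric routine. A sketch using mpmath, with illustrative parameter values a = b = c = d = 1/2 (the function name is hypothetical):

```python
import mpmath as mp

def continuous_hahn(n, x, a, b, c, d):
    """p_n(x; a, b, c, d) via the terminating 3F2 definition."""
    prefactor = mp.mpc(1j) ** n * mp.rf(a + c, n) * mp.rf(a + d, n) / mp.factorial(n)
    return prefactor * mp.hyper([-n, n + a + b + c + d - 1, a + 1j * x],
                                [a + c, a + d], 1)

a = b = c = d = mp.mpf("0.5")
assert continuous_hahn(0, 1.3, a, b, c, d) == 1   # p_0 is identically 1

# degree-1 case agrees with the series written out by hand (two terms)
x = mp.mpf("0.7")
manual = 1j * mp.rf(a + c, 1) * mp.rf(a + d, 1) * (
    1 - (1 + a + b + c + d - 1) * (a + 1j * x) / ((a + c) * (a + d)))
assert mp.almosteq(continuous_hahn(1, x, a, b, c, d), manual)
```

Since −n is a nonpositive integer, `mp.hyper` sums only n + 1 terms, so the evaluation is exact up to working precision.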
In mathematics, the continuous big q-Hermite polynomials are a family of basic hypergeometric orthogonal polynomials in the basic Askey scheme. Roelof Koekoek, Peter A. Lesky, and René F. Swarttouw (2010, 14) give a detailed list of their properties.
https://en.wikipedia.org/wiki/Continuous_big_q-Hermite_polynomials
In mathematics, the continuous dual Hahn polynomials are a family of orthogonal polynomials in the Askey scheme of hypergeometric orthogonal polynomials. They are defined in terms of generalized hypergeometric functions by Sn(x²; a, b, c) = 3F2(−n, a+ix, a−ix; a+b, a+c; 1). Roelof Koekoek, Peter A. Lesky, and René F. Swarttouw (2010, 14) give a detailed list of their properties. Closely related polynomials include the dual Hahn polynomials Rn(x;γ,δ,N), the continuous Hahn polynomials pn(x; a, b, a, b), and the Hahn polynomials. These polynomials all have q-analogs with an extra parameter q, such as the q-Hahn polynomials Qn(x;α,β, N;q), and so on.
https://en.wikipedia.org/wiki/Continuous_dual_Hahn_polynomials
In mathematics, the continuous dual q-Hahn polynomials are a family of basic hypergeometric orthogonal polynomials in the basic Askey scheme. Roelof Koekoek, Peter A. Lesky, and René F. Swarttouw (2010, 14) give a detailed list of their properties.
https://en.wikipedia.org/wiki/Continuous_dual_q-Hahn_polynomials
In mathematics, the continuous q-Hahn polynomials are a family of basic hypergeometric orthogonal polynomials in the basic Askey scheme. Roelof Koekoek, Peter A. Lesky, and René F. Swarttouw (2010, 14) give a detailed list of their properties.
https://en.wikipedia.org/wiki/Continuous_q-Hahn_polynomials
In mathematics, the continuous q-Hermite polynomials are a family of basic hypergeometric orthogonal polynomials in the basic Askey scheme. Roelof Koekoek, Peter A. Lesky, and René F. Swarttouw (2010, 14) give a detailed list of their properties.
https://en.wikipedia.org/wiki/Continuous_q-Hermite_polynomials
In mathematics, the continuous q-Jacobi polynomials Pn(α,β)(x|q), introduced by Askey & Wilson (1985), are a family of basic hypergeometric orthogonal polynomials in the basic Askey scheme. Roelof Koekoek, Peter A. Lesky, and René F. Swarttouw (2010, 14) give a detailed list of their properties.
https://en.wikipedia.org/wiki/Continuous_q-Jacobi_polynomials
In mathematics, the continuous q-Laguerre polynomials are a family of basic hypergeometric orthogonal polynomials in the basic Askey scheme. Roelof Koekoek, Peter A. Lesky, and René F. Swarttouw (2010, 14) give a detailed list of their properties.
https://en.wikipedia.org/wiki/Continuous_q-Laguerre_polynomials
In mathematics, the continuous q-Legendre polynomials are a family of basic hypergeometric orthogonal polynomials in the basic Askey scheme. Koekoek, Lesky & Swarttouw (2010) give a detailed list of their properties.
https://en.wikipedia.org/wiki/Continuous_q-Legendre_polynomials
In mathematics, the continuous wavelet transform (CWT) is a formal (i.e., non-numerical) tool that provides an overcomplete representation of a signal by letting the translation and scale parameter of the wavelets vary continuously. The continuous wavelet transform of a function x(t) at a scale a > 0 and translational value b ∈ ℝ is expressed by the integral X_w(a, b) = (1/|a|^(1/2)) ∫ from −∞ to ∞ of x(t) ψ̄((t − b)/a) dt, where ψ(t) is a continuous function in both the time domain and the frequency domain called the mother wavelet and the overline represents the operation of complex conjugation. The main purpose of the mother wavelet is to provide a source function to generate the daughter wavelets, which are simply the translated and scaled versions of the mother wavelet. To recover the original signal x(t), the first inverse continuous wavelet transform can be exploited.
https://en.wikipedia.org/wiki/Continuous_wavelet_transform
x ( t ) = C ψ − 1 ∫ 0 ∞ ∫ − ∞ ∞ X w ( a , b ) 1 | a | 1 / 2 ψ ~ ( t − b a ) d b d a a 2 {\displaystyle x(t)=C_{\psi }^{-1}\int _{0}^{\infty }\int _{-\infty }^{\infty }X_{w}(a,b){\frac {1}{|a|^{1/2}}}{\tilde {\psi }}\left({\frac {t-b}{a}}\right)\,db\ {\frac {da}{a^{2}}}} ψ ~ ( t ) {\displaystyle {\tilde {\psi }}(t)} is the dual function of ψ ( t ) {\displaystyle \psi (t)} and C ψ = ∫ − ∞ ∞ ψ ^ ¯ ( ω ) ψ ~ ^ ( ω ) | ω | d ω {\displaystyle C_{\psi }=\int _{-\infty }^{\infty }{\frac {{\overline {\hat {\psi }}}(\omega ){\hat {\tilde {\psi }}}(\omega )}{|\omega |}}\,d\omega } is the admissible constant, where the hat denotes the Fourier transform. Sometimes ψ ~ ( t ) = ψ ( t ) {\displaystyle {\tilde {\psi }}(t)=\psi (t)} , in which case the admissible constant becomes C ψ = ∫ − ∞ + ∞ | ψ ^ ( ω ) | 2 | ω | d ω {\displaystyle C_{\psi }=\int _{-\infty }^{+\infty }{\frac {\left|{\hat {\psi }}(\omega )\right|^{2}}{\left|\omega \right|}}\,d\omega } Traditionally, this constant is called the wavelet admissible constant. A wavelet whose admissible constant satisfies 0 < C ψ < + ∞ {\displaystyle 0<C_{\psi }<+\infty } is called an admissible wavelet.
https://en.wikipedia.org/wiki/Continuous_wavelet_transform
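The forward transform can be illustrated by discretizing the defining integral directly. The sketch below is not from the article: the complex Morlet-style mother wavelet, the sampling grid, and the chosen scales are all illustrative assumptions. It computes CWT coefficients of a sinusoid and shows that the response magnitude peaks at the scale whose daughter wavelet oscillates at the signal's frequency.

```python
import numpy as np

def morlet(t, w0=5.0):
    # complex Morlet-style mother wavelet (admissibility only approximate)
    return np.exp(-t**2 / 2) * np.exp(1j * w0 * t)

def cwt(x, t, scales, b_values, psi=morlet):
    """Direct discretization of X_w(a,b) = (1/sqrt(a)) * integral of
    x(t) * conj(psi((t - b) / a)) dt over the sampled grid t."""
    dt = t[1] - t[0]
    out = np.empty((len(scales), len(b_values)), dtype=complex)
    for i, a in enumerate(scales):
        for j, b in enumerate(b_values):
            out[i, j] = np.sum(x * np.conj(psi((t - b) / a))) * dt / np.sqrt(a)
    return out

t = np.linspace(-10, 10, 4001)
x = np.sin(5 * t)  # oscillation at 5 rad/s matches w0 at scale a = 1
X = cwt(x, t, scales=[1.0, 3.0], b_values=[0.0])
```

The coefficient at the matching scale a = 1 dominates the one at the mismatched scale a = 3, as expected from the transform acting as a scale-selective filter.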
In mathematics, the continuum function is κ ↦ 2 κ {\displaystyle \kappa \mapsto 2^{\kappa }} , i.e. raising 2 to the power of κ using cardinal exponentiation. Given a cardinal number, it is the cardinality of the power set of a set of the given cardinality.
https://en.wikipedia.org/wiki/Continuum_function
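The finite analogue of the continuum function is easy to check: for a finite set of cardinality n, the power set has cardinality 2^n, mirroring κ ↦ 2^κ. A brief sketch:

```python
from itertools import combinations

def power_set(s):
    """All subsets of a finite set s, as a list of tuples."""
    items = list(s)
    return [subset for r in range(len(items) + 1)
            for subset in combinations(items, r)]

# |P(S)| = 2 ** |S| for finite sets, the finite case of kappa -> 2 ** kappa
sizes = [len(power_set(range(n))) for n in range(8)]
```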
In mathematics, the convergence condition by Courant–Friedrichs–Lewy is a necessary condition for convergence while solving certain partial differential equations (usually hyperbolic PDEs) numerically. It arises in the numerical analysis of explicit time integration schemes, when these are used for the numerical solution. As a consequence, the time step must be less than a certain time in many explicit time-marching computer simulations, otherwise the simulation produces incorrect results. The condition is named after Richard Courant, Kurt Friedrichs, and Hans Lewy who described it in their 1928 paper.
https://en.wikipedia.org/wiki/Courant–Friedrichs–Lewy_condition
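The effect of the condition can be seen in a minimal experiment (an illustrative sketch, not from the source): the first-order upwind scheme for the 1-D advection equation q_t + u q_x = 0 is stable only when the Courant number C = u·Δt/Δx satisfies C ≤ 1. Running the same scheme with C = 0.5 and C = 1.5 shows the bounded versus blown-up behavior the text describes.

```python
import numpy as np

def upwind_advect(q0, c, steps):
    """First-order upwind scheme for q_t + u q_x = 0 on a periodic grid,
    written in terms of the Courant number c = u * dt / dx (u > 0 assumed)."""
    q = q0.copy()
    for _ in range(steps):
        q = q - c * (q - np.roll(q, 1))  # convex combination when 0 < c <= 1
    return q

x = np.linspace(0.0, 1.0, 100, endpoint=False)
q0 = np.exp(-200.0 * (x - 0.5) ** 2)           # smooth pulse, max value 1
stable = upwind_advect(q0, c=0.5, steps=200)   # CFL satisfied: stays bounded
unstable = upwind_advect(q0, c=1.5, steps=200) # CFL violated: blows up
```

With c ≤ 1 each update is a convex combination of neighboring values, so the maximum principle keeps the solution bounded; with c > 1 high-frequency modes are amplified at every step.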
In mathematics, the converse of a theorem of the form P → Q will be Q → P. The converse may or may not be true, and even if true, the proof may be difficult. For example, the Four-vertex theorem was proved in 1912, but its converse was proved only in 1997. In practice, when determining the converse of a mathematical theorem, aspects of the antecedent may be taken as establishing context. That is, the converse of "Given P, if Q then R" will be "Given P, if R then Q". For example, the Pythagorean theorem can be stated as: Given a triangle with sides of length a {\displaystyle a} , b {\displaystyle b} , and c {\displaystyle c} , if the angle opposite the side of length c {\displaystyle c} is a right angle, then a 2 + b 2 = c 2 {\displaystyle a^{2}+b^{2}=c^{2}} . The converse, which also appears in Euclid's Elements (Book I, Proposition 48), can be stated as: Given a triangle with sides of length a {\displaystyle a} , b {\displaystyle b} , and c {\displaystyle c} , if a 2 + b 2 = c 2 {\displaystyle a^{2}+b^{2}=c^{2}} , then the angle opposite the side of length c {\displaystyle c} is a right angle.
https://en.wikipedia.org/wiki/Converse_(logic)
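The converse of the Pythagorean theorem can be checked numerically via the law of cosines (an illustrative sketch, not from the source): if a² + b² = c², then the angle opposite c computed from the side lengths is exactly a right angle.

```python
import math

def angle_opposite_c(a, b, c):
    """Angle (in radians) opposite side c, from the law of cosines:
    c**2 = a**2 + b**2 - 2*a*b*cos(theta)."""
    return math.acos((a * a + b * b - c * c) / (2 * a * b))

# converse of the Pythagorean theorem: a**2 + b**2 == c**2 forces a right angle
theta = angle_opposite_c(3.0, 4.0, 5.0)
```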
In mathematics, the converse relation, or transpose, of a binary relation is the relation that occurs when the order of the elements is switched in the relation. For example, the converse of the relation 'child of' is the relation 'parent of'. In formal terms, if X {\displaystyle X} and Y {\displaystyle Y} are sets and L ⊆ X × Y {\displaystyle L\subseteq X\times Y} is a relation from X {\displaystyle X} to Y , {\displaystyle Y,} then L T {\displaystyle L^{\operatorname {T} }} is the relation defined so that y L T x {\displaystyle yL^{\operatorname {T} }x} if and only if x L y . {\displaystyle xLy.}
https://en.wikipedia.org/wiki/Converse_relation
In set-builder notation, L T = { ( y , x ) ∈ Y × X: ( x , y ) ∈ L } . {\displaystyle L^{\operatorname {T} }=\{(y,x)\in Y\times X:(x,y)\in L\}.} The notation is analogous to that for an inverse function.
https://en.wikipedia.org/wiki/Converse_relation
Although many functions do not have an inverse, every relation does have a unique converse. The unary operation that maps a relation to the converse relation is an involution, so it induces the structure of a semigroup with involution on the binary relations on a set, or, more generally, induces a dagger category on the category of relations as detailed below. As a unary operation, taking the converse (sometimes called conversion or transposition) commutes with the order-related operations of the calculus of relations, that is, it commutes with union, intersection, and complement.
https://en.wikipedia.org/wiki/Converse_relation
Since a relation may be represented by a logical matrix, and the logical matrix of the converse relation is the transpose of the original, the converse relation is also called the transpose relation. It has also been called the opposite or dual of the original relation, or the inverse of the original relation, or the reciprocal L ∘ {\displaystyle L^{\circ }} of the relation L . {\displaystyle L.} Other notations for the converse relation include L C , L − 1 , L ˘ , L ∘ , {\displaystyle L^{\operatorname {C} },L^{-1},{\breve {L}},L^{\circ },} or L ∨ . {\displaystyle L^{\vee }.}
https://en.wikipedia.org/wiki/Converse_relation
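Representing a relation as a set of ordered pairs makes the converse a one-line operation, and the algebraic facts stated above — that conversion is an involution and commutes with union and intersection — can be verified directly. A minimal sketch (the example relations are illustrative):

```python
def converse(relation):
    """Converse (transpose) of a binary relation given as a set of pairs:
    (y, x) is in the converse iff (x, y) is in the relation."""
    return {(y, x) for (x, y) in relation}

child_of = {("alice", "bob"), ("carol", "dave")}
parent_of = converse(child_of)   # 'child of' becomes 'parent of'

R = {(1, 2), (2, 3)}
S = {(2, 3), (3, 1)}
```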
In mathematics, the convolution power is the n-fold iteration of the convolution with itself. Thus if x {\displaystyle x} is a function on Euclidean space Rd and n {\displaystyle n} is a natural number, then the convolution power is defined by x ∗ n = x ∗ x ∗ x ∗ ⋯ ∗ x ∗ x ⏟ n , x ∗ 0 = δ 0 {\displaystyle x^{*n}=\underbrace {x*x*x*\cdots *x*x} _{n},\quad x^{*0}=\delta _{0}} where ∗ denotes the convolution operation of functions on Rd and δ0 is the Dirac delta distribution. This definition makes sense if x is an integrable function (in L1), a rapidly decreasing distribution (in particular, a compactly supported distribution) or is a finite Borel measure. If x is the distribution function of a random variable on the real line, then the nth convolution power of x gives the distribution function of the sum of n independent random variables with identical distribution x. The central limit theorem states that if x is in L1 and L2 with mean zero and variance σ2, then P ( x ∗ n σ n < β ) → Φ ( β ) a s n → ∞ {\displaystyle P\left({\frac {x^{*n}}{\sigma {\sqrt {n}}}}<\beta \right)\to \Phi (\beta )\quad {\rm {{as}\ n\to \infty }}} where Φ is the cumulative standard normal distribution on the real line.
https://en.wikipedia.org/wiki/Convolution_power
Equivalently, x ∗ n / σ n {\displaystyle x^{*n}/\sigma {\sqrt {n}}} tends weakly to the standard normal distribution. In some cases, it is possible to define powers x*t for arbitrary real t > 0. If μ is a probability measure, then μ is infinitely divisible provided there exists, for each positive integer n, a probability measure μ1/n such that μ 1 / n ∗ n = μ .
https://en.wikipedia.org/wiki/Convolution_power
{\displaystyle \mu _{1/n}^{*n}=\mu .} That is, a measure is infinitely divisible if it is possible to define all nth roots. Not every probability measure is infinitely divisible, and a characterization of infinitely divisible measures is of central importance in the abstract theory of stochastic processes.
https://en.wikipedia.org/wiki/Convolution_power
Intuitively, a measure should be infinitely divisible provided it has a well-defined "convolution logarithm." The natural candidates for measures having such a logarithm are those of (generalized) Poisson type, given in the form π α , μ = e − α ∑ n = 0 ∞ α n n ! μ ∗ n {\displaystyle \pi _{\alpha ,\mu }=e^{-\alpha }\sum _{n=0}^{\infty }{\frac {\alpha ^{n}}{n!}}\mu ^{*n}.}
https://en.wikipedia.org/wiki/Convolution_power
In fact, the Lévy–Khinchin theorem states that a necessary and sufficient condition for a measure to be infinitely divisible is that it must lie in the closure, with respect to the vague topology, of the class of Poisson measures (Stroock 1993, §3.2). Many applications of the convolution power rely on being able to define the analog of analytic functions as formal power series with powers replaced instead by the convolution power. Thus if F ( z ) = ∑ n = 0 ∞ a n z n {\displaystyle \textstyle {F(z)=\sum _{n=0}^{\infty }a_{n}z^{n}}} is an analytic function, then one would like to be able to define F ∗ ( x ) = a 0 δ 0 + ∑ n = 1 ∞ a n x ∗ n {\displaystyle F^{*}(x)=a_{0}\delta _{0}+\sum _{n=1}^{\infty }a_{n}x^{*n}.}
https://en.wikipedia.org/wiki/Convolution_power
If x ∈ L1(Rd) or more generally is a finite Borel measure on Rd, then the latter series converges absolutely in norm provided that the norm of x is less than the radius of convergence of the original series defining F(z). In particular, it is possible for such measures to define the convolutional exponential exp ∗ ⁡ ( x ) = δ 0 + ∑ n = 1 ∞ x ∗ n n ! {\displaystyle \exp ^{*}(x)=\delta _{0}+\sum _{n=1}^{\infty }{\frac {x^{*n}}{n!}}.} It is not generally possible to extend this definition to arbitrary distributions, although a class of distributions on which this series still converges in an appropriate weak sense is identified by Ben Chrouda, El Oued & Ouerdiane (2002).
https://en.wikipedia.org/wiki/Convolution_power
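For discrete distributions the convolution power is directly computable, and the probabilistic interpretation above — the nth power gives the distribution of a sum of n i.i.d. copies — can be checked on a small example (an illustrative sketch, not from the source; the shifted die is an assumption for convenience):

```python
import numpy as np

def conv_power(p, n):
    """n-fold convolution power of a finite discrete distribution p.

    The empty product x^{*0} is the Dirac delta at 0, i.e. the array [1.0]."""
    out = np.array([1.0])
    for _ in range(n):
        out = np.convolve(out, p)
    return out

# sum of two independent fair dice (faces shifted to 0..5 for array indexing)
die = np.full(6, 1.0 / 6.0)
two_dice = conv_power(die, 2)
```

Entry k of `two_dice` is the probability that the two shifted dice sum to k; e.g. a sum of 5 can occur in six ways, so `two_dice[5]` equals 6/36.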
In mathematics, the convolution theorem states that under suitable conditions the Fourier transform of a convolution of two functions (or signals) is the pointwise product of their Fourier transforms. More generally, convolution in one domain (e.g., time domain) equals point-wise multiplication in the other domain (e.g., frequency domain). Other versions of the convolution theorem are applicable to various Fourier-related transforms.
https://en.wikipedia.org/wiki/Convolution_theorem
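The discrete form of the theorem — the DFT of a circular convolution equals the pointwise product of the DFTs — can be verified numerically (an illustrative sketch; the random test signals are an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
x = rng.standard_normal(N)
y = rng.standard_normal(N)

# circular convolution computed directly from the definition
direct = np.zeros(N)
for k in range(N):
    for n in range(N):
        direct[k] += x[n] * y[(k - n) % N]

# same result via the convolution theorem: multiply in the frequency domain
via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)).real
```

This identity is why FFT-based convolution runs in O(N log N) instead of the O(N²) of the direct double loop.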
In mathematics, the corona or corona set of a topological space X is the complement βX\X of the space in its Stone–Čech compactification βX. A topological space is said to be σ-compact if it is the union of countably many compact subspaces, and locally compact if every point has a neighbourhood with compact closure. The corona of a σ-compact and locally compact Hausdorff space is a sub-Stonean space, i.e., any two open σ-compact disjoint subsets have disjoint compact closures.
https://en.wikipedia.org/wiki/Corona_set
In mathematics, the corona theorem is a result about the spectrum of the bounded holomorphic functions on the open unit disc, conjectured by Kakutani (1941) and proved by Lennart Carleson (1962). The commutative Banach algebra and Hardy space H∞ consists of the bounded holomorphic functions on the open unit disc D. Its spectrum S (the closed maximal ideals) contains D as an open subspace because for each z in D there is a maximal ideal consisting of functions f with f(z) = 0. The subspace D cannot make up the entire spectrum S, essentially because the spectrum is a compact space and D is not. The complement of the closure of D in S was called the corona by Newman (1959), and the corona theorem states that the corona is empty, or in other words the open unit disc D is dense in the spectrum. A more elementary formulation is that elements f1,...,fn generate the unit ideal of H∞ if and only if there is some δ>0 such that | f 1 | + ⋯ + | f n | ≥ δ {\displaystyle |f_{1}|+\cdots +|f_{n}|\geq \delta } everywhere in the unit ball. Newman showed that the corona theorem can be reduced to an interpolation problem, which was then proved by Carleson.
https://en.wikipedia.org/wiki/Corona_theorem
In 1979 Thomas Wolff gave a simplified (but unpublished) proof of the corona theorem, described in (Koosis 1980) and (Gamelin 1980). Cole later showed that this result cannot be extended to all open Riemann surfaces (Gamelin 1978).
https://en.wikipedia.org/wiki/Corona_theorem
As a by-product of Carleson's work, the Carleson measure was invented, which itself is a very useful tool in modern function theory. It remains an open question whether there are versions of the corona theorem for every planar domain or for higher-dimensional domains. Note that if one assumes continuity up to the boundary in the corona theorem, then the conclusion follows easily from the theory of commutative Banach algebras (Rudin 1991).
https://en.wikipedia.org/wiki/Corona_theorem
In mathematics, the correlation immunity of a Boolean function is a measure of the degree to which its outputs are uncorrelated with some subset of its inputs. Specifically, a Boolean function is said to be correlation-immune of order m if every subset of m or fewer variables in x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\ldots ,x_{n}} is statistically independent of the value of f ( x 1 , x 2 , … , x n ) {\displaystyle f(x_{1},x_{2},\ldots ,x_{n})} .
https://en.wikipedia.org/wiki/Correlation_immunity
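For small n the definition can be checked exhaustively: fix every subset of at most m inputs in every possible way, and verify that the conditional distribution of the output never changes. The sketch below (an illustrative brute-force check, not from the source) confirms the standard fact that the 3-variable parity function is correlation-immune of order 2, while AND is not even of order 1.

```python
from itertools import combinations, product

def is_correlation_immune(f, n, m):
    """Brute-force test of order-m correlation immunity for an n-variable
    Boolean function: the output distribution must be unchanged when any
    subset of at most m inputs is fixed to any values."""
    table = [(bits, f(*bits)) for bits in product((0, 1), repeat=n)]
    p_one = sum(out for _, out in table) / len(table)
    for k in range(1, m + 1):
        for subset in combinations(range(n), k):
            for vals in product((0, 1), repeat=k):
                cond = [out for bits, out in table
                        if all(bits[i] == v for i, v in zip(subset, vals))]
                if abs(sum(cond) / len(cond) - p_one) > 1e-12:
                    return False
    return True

# parity stays balanced when any <= 2 of its 3 inputs are fixed
xor3 = lambda a, b, c: a ^ b ^ c
```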
In mathematics, the coset construction (or GKO construction) is a method of constructing unitary highest weight representations of the Virasoro algebra, introduced by Peter Goddard, Adrian Kent and David Olive (1986). The construction produces the complete discrete series of highest weight representations of the Virasoro algebra and demonstrates their unitarity, thus establishing the classification of unitary highest weight representations.
https://en.wikipedia.org/wiki/Coset_construction
In mathematics, the coshc function appears frequently in papers about optical scattering, Heisenberg spacetime and hyperbolic geometry. For z ≠ 0 {\displaystyle z\neq 0} , it is defined as coshc ⁡ ( z ) = cosh ⁡ ( z ) z {\displaystyle \operatorname {coshc} (z)={\frac {\cosh(z)}{z}}} . It is a solution of a differential equation.
https://en.wikipedia.org/wiki/Coshc_function
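Assuming the standard definition coshc(z) = cosh(z)/z (by analogy with sinhc and tanhc), the function is odd — an even numerator over an odd denominator — and undefined at the origin. A minimal sketch:

```python
import math

def coshc(x):
    """coshc(x) = cosh(x) / x, assuming the standard definition; not defined at 0."""
    if x == 0:
        raise ValueError("coshc is not defined at x = 0")
    return math.cosh(x) / x
```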
In mathematics, the cotangent complex is a common generalisation of the cotangent sheaf, normal bundle and virtual tangent bundle of a map of geometric spaces such as manifolds or schemes. If f: X → Y {\displaystyle f:X\to Y} is a morphism of geometric or algebraic objects, the corresponding cotangent complex L X / Y ∙ {\displaystyle \mathbf {L} _{X/Y}^{\bullet }} can be thought of as a universal "linearization" of it, which serves to control the deformation theory of f {\displaystyle f} . It is constructed as an object in a certain derived category of sheaves on X {\displaystyle X} using the methods of homotopical algebra.
https://en.wikipedia.org/wiki/Cotangent_complex
Restricted versions of cotangent complexes were first defined in various cases by a number of authors in the early 1960s. In the late 1960s, Michel André and Daniel Quillen independently came up with the correct definition for a morphism of commutative rings, using simplicial methods to make precise the idea of the cotangent complex as given by taking the (non-abelian) left derived functor of Kähler differentials. Luc Illusie then globalized this definition to the general situation of a morphism of ringed topoi, thereby incorporating morphisms of ringed spaces, schemes, and algebraic spaces into the theory.
https://en.wikipedia.org/wiki/Cotangent_complex
In mathematics, the covariant derivative is a way of specifying a derivative along tangent vectors of a manifold. Alternatively, the covariant derivative is a way of introducing and working with a connection on a manifold by means of a differential operator, to be contrasted with the approach given by a principal connection on the frame bundle – see affine connection. In the special case of a manifold isometrically embedded into a higher-dimensional Euclidean space, the covariant derivative can be viewed as the orthogonal projection of the Euclidean directional derivative onto the manifold's tangent space. In this case the Euclidean derivative is broken into two parts, the extrinsic normal component (dependent on the embedding) and the intrinsic covariant derivative component.
https://en.wikipedia.org/wiki/Covariant_differential
The name is motivated by the importance of changes of coordinates in physics: the covariant derivative transforms covariantly under a general coordinate transformation, that is, linearly via the Jacobian matrix of the transformation. This article presents an introduction to the covariant derivative of a vector field with respect to a vector field, both in a coordinate-free language and using a local coordinate system and the traditional index notation. The covariant derivative of a tensor field is presented as an extension of the same concept. The covariant derivative generalizes straightforwardly to a notion of differentiation associated to a connection on a vector bundle, also known as a Koszul connection.
https://en.wikipedia.org/wiki/Covariant_differential
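The embedded-submanifold description can be checked numerically on the unit sphere, where the outward unit normal at a point p is p itself. Along the equator, the Euclidean derivative of the velocity field points straight toward the center, i.e. is purely normal, so its tangential projection — the covariant derivative — vanishes, which is the statement that the equator is a geodesic. A minimal sketch (the parametrization is an illustrative choice):

```python
import numpy as np

def tangent_projection(v, n):
    """Orthogonal projection of v onto the plane perpendicular to the unit normal n."""
    return v - np.dot(v, n) * n

t = 0.7
c = np.array([np.cos(t), np.sin(t), 0.0])        # point on the equator of the unit sphere
v_dot = np.array([-np.cos(t), -np.sin(t), 0.0])  # Euclidean derivative of the velocity, = -c
cov = tangent_projection(v_dot, c)               # covariant derivative: projection onto tangent plane
```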
In mathematics, the crank conjecture was a conjecture about the existence of the crank of a partition that separates partitions of a number congruent to 6 mod 11 into 11 equal classes. The conjecture was introduced by Dyson (1944) and proved by Andrews and Garvan (1987).
https://en.wikipedia.org/wiki/Crank_conjecture
In mathematics, the crenel function is a periodic discontinuous function P(x) defined as 1 for x belonging to a given interval and 0 outside of it. It can be presented as a difference between two Heaviside step functions of amplitude 1. It is used in crystallography to account for irregularities in the occupation of atomic sites by given atoms in solids, such as periodic domain structures, where some regions are enriched and some are depleted with certain atoms. Mathematically, P ( x ) = { 1 , x ∈ [ − Δ / 2 , Δ / 2 ] , 0 , x ∉ [ − Δ / 2 , Δ / 2 ] , {\displaystyle P(x)={\begin{cases}1,&x\in [-\Delta /2,\Delta /2],\\0,&x\notin [-\Delta /2,\Delta /2],\end{cases}}} The terms of its Fourier series are P k ( Δ , x ) = exp ⁡ ( 2 π i k x ) sin ⁡ ( π k Δ ) π k = Δ ⋅ s i n c ( π k Δ ) ⋅ e 2 π i k x {\displaystyle P_{k}(\Delta ,x)={\frac {\exp(2\pi i\,kx)\sin(\pi k\Delta )}{\pi k}}=\Delta \cdot \mathrm {sinc} (\pi k\Delta )\cdot \mathrm {e} ^{2\pi i\,kx}.} with the sinc function.
https://en.wikipedia.org/wiki/Crenel_function
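The Fourier coefficients Δ·sinc(πkΔ) can be verified numerically against a direct quadrature of the defining integral over one period (an illustrative sketch; the choices Δ = 0.3, period 1, and the midpoint rule are assumptions):

```python
import numpy as np

def crenel(x, delta):
    """Period-1 crenel function: 1 on [-delta/2, delta/2] (mod 1), else 0."""
    x = np.mod(x + 0.5, 1.0) - 0.5
    return np.where(np.abs(x) <= delta / 2.0, 1.0, 0.0)

def fourier_coeff(k, delta, n=100_000):
    # c_k = integral over one period of P(x) * exp(-2 pi i k x) dx, midpoint rule
    x = (np.arange(n) + 0.5) / n
    return np.sum(crenel(x, delta) * np.exp(-2j * np.pi * k * x)) / n

delta = 0.3
predicted = lambda k: delta * np.sinc(k * delta)  # np.sinc(u) = sin(pi u)/(pi u)
```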
In mathematics, the cross product or vector product (occasionally directed area product, to emphasize its geometric significance) is a binary operation on two vectors in a three-dimensional oriented Euclidean vector space (named here E {\displaystyle E} ), and is denoted by the symbol × {\displaystyle \times } . Given two linearly independent vectors a and b, the cross product, a × b (read "a cross b"), is a vector that is perpendicular to both a and b, and thus normal to the plane containing them. The units of the cross-product are the product of the units of each vector. It has many applications in mathematics, physics, engineering, and computer programming.
https://en.wikipedia.org/wiki/Xyzzy_(mnemonic)
It should not be confused with the dot product (projection product). If two vectors have the same direction or have the exact opposite direction from each other (that is, they are not linearly independent), or if either one has zero length, then their cross product is zero. More generally, the magnitude of the product equals the area of a parallelogram with the vectors for sides; in particular, the magnitude of the product of two perpendicular vectors is the product of their lengths.
https://en.wikipedia.org/wiki/Xyzzy_(mnemonic)
The cross product is anticommutative (that is, a × b = − b × a) and is distributive over addition (that is, a × (b + c) = a × b + a × c). The space E {\displaystyle E} together with the cross product is an algebra over the real numbers, which is neither commutative nor associative, but is a Lie algebra with the cross product being the Lie bracket. Like the dot product, it depends on the metric of Euclidean space, but unlike the dot product, it also depends on a choice of orientation (or "handedness") of the space, which is why an oriented space is needed.
https://en.wikipedia.org/wiki/Xyzzy_(mnemonic)
In connection with the cross product, the exterior product of vectors can be used in arbitrary dimensions (with a bivector or 2-form result) and is independent of the orientation of the space. The product can be generalized in various ways, using the orientation and metric structure: just as for the traditional 3-dimensional cross product, one can, in n dimensions, take the product of n − 1 vectors to produce a vector perpendicular to all of them. But if the product is limited to non-trivial binary products with vector results, it exists only in three and seven dimensions. The cross product in seven dimensions has undesirable properties, however (e.g. it fails to satisfy the Jacobi identity), so it is not used in mathematical physics to represent quantities such as multi-dimensional space-time. (See § Generalizations, below, for other dimensions.)
https://en.wikipedia.org/wiki/Xyzzy_(mnemonic)
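The defining properties above — perpendicularity to both factors, magnitude equal to the parallelogram area, and anticommutativity — can all be checked directly with a numerical sketch (the two sample vectors are illustrative):

```python
import numpy as np

a = np.array([2.0, 0.0, 0.0])
b = np.array([1.0, 3.0, 0.0])
c = np.cross(a, b)        # perpendicular to both a and b
area = np.linalg.norm(c)  # area of the parallelogram with sides a and b
```

Here the parallelogram has base |a| = 2 and height 3 (the component of b perpendicular to a), so the area is 6.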
In mathematics, the curvature of a measure defined on the Euclidean plane R2 is a quantification of how much the measure's "distribution of mass" is "curved". It is related to notions of curvature in geometry. In the form presented below, the concept was introduced in 1995 by the mathematician Mark S. Melnikov; accordingly, it may be referred to as the Melnikov curvature or Menger-Melnikov curvature. Melnikov and Verdera (1995) established a powerful connection between the curvature of measures and the Cauchy kernel.
https://en.wikipedia.org/wiki/Curvature_of_a_measure
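The pointwise ingredient behind this notion is the Menger curvature of a triple of points — the reciprocal of the radius of the circle through them — which the curvature of a measure averages (squared) against the measure. A quick numeric sketch of the three-point quantity (the sample points are illustrative):

```python
import numpy as np

def menger_curvature(x, y, z):
    """Menger curvature of three points in the plane: the reciprocal of the
    circumradius, c(x, y, z) = 4 * area / (|x-y| * |y-z| * |z-x|)."""
    area = 0.5 * abs((y[0] - x[0]) * (z[1] - x[1])
                     - (y[1] - x[1]) * (z[0] - x[0]))
    return 4.0 * area / (np.linalg.norm(x - y)
                         * np.linalg.norm(y - z)
                         * np.linalg.norm(z - x))

# three points on a circle of radius 2: the Menger curvature is 1/2,
# and collinear points ("flat mass distribution") give curvature 0
p = lambda t: 2.0 * np.array([np.cos(t), np.sin(t)])
k = menger_curvature(p(0.3), p(1.1), p(2.5))
```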
In mathematics, the curve complex is a simplicial complex C(S) associated to a finite-type surface S, which encodes the combinatorics of simple closed curves on S. The curve complex turned out to be a fundamental tool in the study of the geometry of the Teichmüller space, of mapping class groups and of Kleinian groups. It was introduced by W. J. Harvey in 1978.
https://en.wikipedia.org/wiki/Curve_complex
In mathematics, the curve-shortening flow is a process that modifies a smooth curve in the Euclidean plane by moving its points perpendicularly to the curve at a speed proportional to the curvature. The curve-shortening flow is an example of a geometric flow, and is the one-dimensional case of the mean curvature flow. Other names for the same process include the Euclidean shortening flow, geometric heat flow, and arc length evolution. As the points of any smooth simple closed curve move in this way, the curve remains simple and smooth.
https://en.wikipedia.org/wiki/Curve-shortening_flow
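The simplest exact solution illustrates the rule "speed equals curvature": a circle of radius R has curvature 1/R everywhere, so every point moves inward at speed 1/R, giving dR/dt = −1/R and R(t) = √(R₀² − 2t) until collapse at t = R₀²/2. A short numerical check of this (an illustrative sketch):

```python
import math

def circle_radius(r0, t):
    """Radius at time t of a circle shrinking under curve-shortening flow:
    dR/dt = -1/R, so R(t) = sqrt(r0**2 - 2*t), valid for t < r0**2 / 2."""
    return math.sqrt(r0 * r0 - 2.0 * t)

# finite-difference check that dR/dt = -1/R at an interior time
r0, t, h = 2.0, 0.5, 1e-6
dR_dt = (circle_radius(r0, t + h) - circle_radius(r0, t - h)) / (2 * h)
```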