In mathematics, a cyclotomic unit (or circular unit) is a unit of an algebraic number field which is the product of numbers of the form (ζ_n^a − 1) for ζ_n an nth root of unity and 0 < a < n.
https://en.wikipedia.org/wiki/Cyclotomic_unit
In mathematics, a càdlàg (French: "continue à droite, limite à gauche"), RCLL ("right continuous with left limits"), or corlol ("continuous on (the) right, limit on (the) left") function is a function defined on the real numbers (or a subset of them) that is everywhere right-continuous and has left limits everywhere. Càdlàg functions are important in the study of stochastic processes that admit (or even require) jumps, unlike Brownian motion, which has continuous sample paths. The collection of càdlàg functions on a given domain is known as Skorokhod space. Two related terms are càglàd, standing for "continue à gauche, limite à droite", the left-right reversal of càdlàg, and càllàl for "continue à l'un, limite à l’autre" (continuous on one side, limit on the other side), for a function which at each point of the domain is either càdlàg or càglàd.
https://en.wikipedia.org/wiki/Skorokhod_space
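A concrete example may help: the floor function jumps at each integer but is right-continuous there, with a left limit at every point, so it is càdlàg. The sketch below is plain Python; `right_limit` and `left_limit` are crude numerical stand-ins for one-sided limits, introduced only for illustration.

```python
import math

def f(t):
    # The floor function is a canonical càdlàg function:
    # right-continuous everywhere, with a left limit at every point.
    return math.floor(t)

def right_limit(g, t0, eps=1e-9):
    # crude numerical stand-in for lim_{t -> t0+} g(t)
    return g(t0 + eps)

def left_limit(g, t0, eps=1e-9):
    # crude numerical stand-in for lim_{t -> t0-} g(t)
    return g(t0 - eps)

# At the jump t0 = 1 the value agrees with the right limit, not the left:
assert f(1) == right_limit(f, 1) == 1   # right-continuity at the jump
assert left_limit(f, 1) == 0            # the left limit exists but differs
```

A càglàd function would behave the other way around at the jump: its value at 1 would agree with the left limit.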
In mathematics, a de Rham curve is a certain type of fractal curve named in honor of Georges de Rham. The Cantor function, Cesàro curve, Minkowski's question mark function, the Lévy C curve, the blancmange curve, and Koch curve are all special cases of the general de Rham curve.
https://en.wikipedia.org/wiki/Cesàro_fractal
In mathematics, a decomposable measure (also known as a strictly localizable measure) is a measure that is a disjoint union of finite measures. This is a generalization of σ-finite measures, which are the same as those that are a disjoint union of countably many finite measures. There are several theorems in measure theory such as the Radon–Nikodym theorem that are not true for arbitrary measures but are true for σ-finite measures. Several such theorems remain true for the more general class of decomposable measures. This extra generality is not used much as most decomposable measures that occur in practice are σ-finite.
https://en.wikipedia.org/wiki/Decomposable_measure
In mathematics, a definite quadratic form is a quadratic form over some real vector space V that has the same sign (always positive or always negative) for every non-zero vector of V. According to that sign, the quadratic form is called positive-definite or negative-definite. A semidefinite (or semi-definite) quadratic form is defined in much the same way, except that "always positive" and "always negative" are replaced by "never negative" and "never positive", respectively. In other words, it may take on zero values for some non-zero vectors of V. An indefinite quadratic form takes on both positive and negative values and is called an isotropic quadratic form. More generally, these definitions apply to any vector space over an ordered field.
https://en.wikipedia.org/wiki/Negative-definite_bilinear_form
In mathematics, a deformation ring is a ring that controls liftings of a representation of a Galois group from a finite field to a local field. In particular for any such lifting problem there is often a universal deformation ring that classifies all such liftings, and whose spectrum is the universal deformation space. A key step in Wiles's proof of the modularity theorem was to study the relation between universal deformation rings and Hecke algebras.
https://en.wikipedia.org/wiki/Deformation_ring
In mathematics, a degenerate case is a limiting case of a class of objects which appears to be qualitatively different from (and usually simpler than) the rest of the class, and the term degeneracy is the condition of being a degenerate case. The definitions of many classes of composite or structured objects often implicitly include inequalities. For example, the angles and the side lengths of a triangle are supposed to be positive. The limiting cases, where one or several of these inequalities become equalities, are degeneracies. In the case of triangles, one has a degenerate triangle if at least one side length or angle is zero.
https://en.wikipedia.org/wiki/Degeneracy_(math)
Equivalently, it becomes a "line segment". Often, the degenerate cases are the exceptional cases where changes to the usual dimension or the cardinality of the object (or of some part of it) occur. For example, a triangle is an object of dimension two, and a degenerate triangle is contained in a line, which makes its dimension one. This is similar to the case of a circle, whose dimension shrinks from two to zero as it degenerates into a point.
https://en.wikipedia.org/wiki/Degeneracy_(math)
As another example, the solution set of a system of equations that depends on parameters generally has a fixed cardinality and dimension, but cardinality and/or dimension may be different for some exceptional values, called degenerate cases. In such a degenerate case, the solution set is said to be degenerate. For some classes of composite objects, the degenerate cases depend on the properties that are specifically studied.
https://en.wikipedia.org/wiki/Degeneracy_(math)
In particular, the class of objects may often be defined or characterized by systems of equations. In most scenarios, a given class of objects may be defined by several different systems of equations, and these different systems of equations may lead to different degenerate cases, while characterizing the same non-degenerate cases. This may be the reason for which there is no general definition of degeneracy, despite the fact that the concept is widely used and defined (if needed) in each specific situation.
https://en.wikipedia.org/wiki/Degeneracy_(math)
A degenerate case thus has special features which make it non-generic, or a special case. However, not all non-generic or special cases are degenerate.
https://en.wikipedia.org/wiki/Degeneracy_(math)
For example, right triangles, isosceles triangles and equilateral triangles are non-generic and non-degenerate. In fact, degenerate cases often correspond to singularities, either in the object or in some configuration space. For example, a conic section is degenerate if and only if it has singular points (e.g., point, line, intersecting lines).
https://en.wikipedia.org/wiki/Degeneracy_(math)
In mathematics, a degenerate distribution is, according to some, a probability distribution in a space with support only on a manifold of lower dimension, and according to others a distribution with support only at a single point. By the latter definition, it is a deterministic distribution and takes only a single value. Examples include a two-headed coin and rolling a die whose sides all show the same number. This distribution satisfies the definition of "random variable" even though it does not appear random in the everyday sense of the word; hence it is considered degenerate. In the case of a real-valued random variable, the degenerate distribution is a one-point distribution, localized at a point k0 on the real line. The probability mass function equals 1 at this point and 0 elsewhere. The degenerate univariate distribution can be viewed as the limiting case of a continuous distribution whose variance goes to 0, causing the probability density function to be a delta function at k0, with infinite height there but area equal to 1. The cumulative distribution function of the univariate degenerate distribution is: F_{k0}(x) = 1 if x ≥ k0, and F_{k0}(x) = 0 if x < k0.
https://en.wikipedia.org/wiki/Constant_random_variable
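The cumulative distribution function above translates directly into code. A minimal sketch (the function name `degenerate_cdf` is illustrative, not a standard API):

```python
def degenerate_cdf(x, k0):
    """CDF of the degenerate distribution at k0: a unit step at k0.

    F_{k0}(x) = 1 if x >= k0, else 0.
    """
    return 1.0 if x >= k0 else 0.0

k0 = 2.5
assert degenerate_cdf(2.5, k0) == 1.0   # the step is right-continuous: F(k0) = 1
assert degenerate_cdf(3.0, k0) == 1.0
assert degenerate_cdf(2.0, k0) == 0.0
```

Note that the value at the jump point itself is 1, matching the ≥ in the definition, so the CDF is right-continuous as every CDF must be.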
In mathematics, a delta operator is a shift-equivariant linear operator Q : K[x] → K[x] on the vector space of polynomials in a variable x over a field K that reduces degrees by one. To say that Q is shift-equivariant means that if g(x) = f(x + a), then (Qg)(x) = (Qf)(x + a). In other words, if f is a "shift" of g, then Qf is also a shift of Qg, with the same "shifting vector" a.
https://en.wikipedia.org/wiki/Delta_operator
To say that an operator reduces degree by one means that if f is a polynomial of degree n, then Qf is either a polynomial of degree n − 1, or, in case n = 0, Qf is 0. Sometimes a delta operator is defined to be a shift-equivariant linear transformation on polynomials in x that maps x to a nonzero constant. Seemingly weaker than the definition given above, this latter characterization can be shown to be equivalent to the stated definition when K has characteristic zero, since shift-equivariance is a fairly strong condition.
https://en.wikipedia.org/wiki/Delta_operator
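The forward difference Δf(x) = f(x + 1) − f(x) is the classic example of a delta operator: it is shift-equivariant and drops the degree by exactly one. A sketch, representing polynomials as coefficient lists (lowest degree first); the helper names are illustrative.

```python
from math import comb

def shift(coeffs, a=1):
    """Coefficients of f(x + a), given the coefficients of f (lowest degree first)."""
    out = [0] * len(coeffs)
    for k, c in enumerate(coeffs):          # expand c * (x + a)^k binomially
        for j in range(k + 1):
            out[j] += c * comb(k, j) * a ** (k - j)
    return out

def forward_difference(coeffs):
    """The delta operator Q f = f(x + 1) - f(x), as a coefficient list."""
    diff = [s - c for s, c in zip(shift(coeffs), coeffs)]
    while diff and diff[-1] == 0:           # trim trailing zero coefficients
        diff.pop()
    return diff or [0]

# f(x) = x^3 has degree 3; Qf = (x+1)^3 - x^3 = 3x^2 + 3x + 1 has degree 2.
qf = forward_difference([0, 0, 0, 1])
assert qf == [1, 3, 3]
assert len(qf) - 1 == 2        # degree dropped by exactly one
assert forward_difference([7]) == [0]   # a constant (degree 0) is sent to 0
```

The ordinary derivative d/dx is another delta operator, and the equivalence mentioned above is visible here: Δ maps x to the nonzero constant 1.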
In mathematics, a delta-matroid or Δ-matroid is a family of sets obeying an exchange axiom generalizing an axiom of matroids. A non-empty family of sets is a delta-matroid if, for every two sets E and F in the family, and for every element e in their symmetric difference E △ F, there exists an f ∈ E △ F such that E △ {e, f} is in the family. For the basis sets of a matroid, the corresponding exchange axiom requires in addition that e ∈ E and f ∈ F, ensuring that E and F have the same cardinality. For a delta-matroid, either of the two elements may belong to either of the two sets, and it is also allowed for the two elements to be equal.
https://en.wikipedia.org/wiki/Delta-matroid
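For a small family given explicitly, the exchange axiom can be checked by brute force. A sketch (the function name is illustrative; Python's `^` is symmetric difference on sets):

```python
def is_delta_matroid(family):
    """Brute-force check of the delta-matroid exchange axiom on an explicit family."""
    family = {frozenset(s) for s in family}
    if not family:
        return False                         # the family must be non-empty
    for E in family:
        for F in family:
            for e in E ^ F:                  # e in the symmetric difference
                # some f in E ^ F (possibly f == e) must give a feasible set
                if not any(E ^ {e, f} in family for f in E ^ F):
                    return False
    return True

# The full power set of {1, 2} satisfies the axiom.
assert is_delta_matroid([set(), {1}, {2}, {1, 2}])
# {{}, {1, 2, 3}} fails: from E = {} and F = {1, 2, 3} with e = 1,
# none of {1}, {1, 2}, {1, 3} is in the family.
assert not is_delta_matroid([set(), {1, 2, 3}])
```

Note that the second family has sets of both even and odd size differing by an odd amount; the first is a delta-matroid whose feasible sets have mixed parity, so it is not an even delta-matroid.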
An alternative and equivalent definition is that a family of sets forms a delta-matroid when the convex hull of its indicator vectors (the analogue of a matroid polytope) has the property that every edge length is either one or the square root of two. Delta-matroids were defined by André Bouchet in 1987. Algorithms for matroid intersection and the matroid parity problem can be extended to some cases of delta-matroids. Delta-matroids have also been used to study constraint satisfaction problems.
https://en.wikipedia.org/wiki/Delta-matroid
As a special case, an even delta-matroid is a delta-matroid in which either all sets have an even number of elements, or all sets have an odd number of elements. If a constraint satisfaction problem has a Boolean variable on each edge of a planar graph, and if the variables of the edges incident to each vertex of the graph are constrained to belong to an even delta-matroid (possibly a different even delta-matroid for each vertex), then the problem can be solved in polynomial time. This result plays a key role in a characterization of the planar Boolean constraint satisfaction problems that can be solved in polynomial time.
https://en.wikipedia.org/wiki/Delta-matroid
In mathematics, a dendrite is a certain type of topological space that may be characterized either as a locally connected dendroid or equivalently as a locally connected continuum that contains no simple closed curves.
https://en.wikipedia.org/wiki/Dendrite_(mathematics)
In mathematics, a dendroid is a type of topological space, satisfying the properties that it is hereditarily unicoherent (meaning that every subcontinuum of X is unicoherent), arcwise connected, and forms a continuum. The term dendroid was introduced by Bronisław Knaster lecturing at the University of Wrocław, although these spaces were studied earlier by Karol Borsuk and others. Borsuk (1954) proved that dendroids have the fixed-point property: Every continuous function from a dendroid to itself has a fixed point. Cook (1970) proved that every dendroid is tree-like, meaning that it has arbitrarily fine open covers whose nerve is a tree. The more general question of whether every tree-like continuum has the fixed-point property, posed by Bing (1951), was solved in the negative by David P. Bellamy, who gave an example of a tree-like continuum without the fixed-point property.
https://en.wikipedia.org/wiki/Dendroid_(topology)
In Knaster's original publication on dendroids, in 1961, he posed the problem of characterizing the dendroids which can be embedded into the Euclidean plane. This problem remains open.
https://en.wikipedia.org/wiki/Dendroid_(topology)
Another problem posed in the same year by Knaster, on the existence of an uncountable collection of dendroids with the property that no dendroid in the collection has a continuous surjection onto any other dendroid in the collection, was solved by Minc (2010) and Islas (2007), who gave an example of such a family. A locally connected dendroid is called a dendrite. A cone over the Cantor set (called a Cantor fan) is an example of a dendroid that is not a dendrite.
https://en.wikipedia.org/wiki/Dendroid_(topology)
In mathematics, a dendroidal set is a generalization of simplicial sets introduced by Moerdijk & Weiss (2007). They have the same relation to (colored symmetric) operads, also called symmetric multicategories, that simplicial sets have to categories.
https://en.wikipedia.org/wiki/Dendroidal_set
In mathematics, a dense graph is a graph in which the number of edges is close to the maximal number of edges (where every pair of vertices is connected by one edge). The opposite, a graph with only a few edges, is a sparse graph. The distinction of what constitutes a dense or sparse graph is ill-defined, and is often represented by 'roughly equal to' statements. Due to this, the way that density is defined often depends on the context of the problem.
https://en.wikipedia.org/wiki/Dense_graph
The graph density of simple graphs is defined to be the ratio of the number of edges |E| to the maximum possible number of edges. For undirected simple graphs, the graph density is D = |E| / C(|V|, 2) = 2|E| / (|V|(|V| − 1)). For directed simple graphs, the maximum possible number of edges is twice that of undirected graphs (as there are two directions for an edge), so the density is D = |E| / (2 C(|V|, 2)) = |E| / (|V|(|V| − 1)), where |E| is the number of edges and |V| is the number of vertices in the graph.
https://en.wikipedia.org/wiki/Dense_graph
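The two density formulas are straightforward to compute. A minimal sketch (function names are illustrative):

```python
def undirected_density(num_vertices, num_edges):
    # D = 2|E| / (|V| (|V| - 1)), since C(|V|, 2) edges are possible
    return 2 * num_edges / (num_vertices * (num_vertices - 1))

def directed_density(num_vertices, num_edges):
    # D = |E| / (|V| (|V| - 1)): twice as many possible edges as undirected
    return num_edges / (num_vertices * (num_vertices - 1))

# A complete undirected graph on 4 vertices has C(4, 2) = 6 edges: density 1.
assert undirected_density(4, 6) == 1.0
# A directed 3-cycle uses 3 of the 6 possible directed edges: density 1/2.
assert directed_density(3, 3) == 0.5
```

Both functions assume a simple graph (no self-loops or parallel edges) with at least two vertices, so the denominator is nonzero.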
The maximum number of edges for an undirected graph is C(|V|, 2) = |V|(|V| − 1)/2, so the maximal density is 1 (for complete graphs) and the minimal density is 0 (Coleman & Moré 1983). For families of graphs of increasing size, one often calls them sparse if D → 0 as |V| → ∞. Sometimes, in computer science, a more restrictive definition of sparse is used, such as |E| = O(|V| log |V|) or even |E| = O(|V|).
https://en.wikipedia.org/wiki/Dense_graph
In mathematics, a dependence relation is a binary relation which generalizes the relation of linear dependence. Let X be a set. A (binary) relation ◃ between an element a of X and a subset S of X is called a dependence relation, written a ◃ S, if it satisfies the following properties: if a ∈ S, then a ◃ S; if a ◃ S, then there is a finite subset S0 of S such that a ◃ S0; if T is a subset of X such that b ∈ S implies b ◃ T, then a ◃ S implies a ◃ T; if a ◃ S but a ⋪ S − {b} for some b ∈ S, then b ◃ (S − {b}) ∪ {a}. Given a dependence relation ◃ on X, a subset S of X is said to be independent if a ⋪ S − {a} for all a ∈ S.
https://en.wikipedia.org/wiki/Dependence_relation
If S ⊆ T, then S is said to span T if t ◃ S for every t ∈ T. S is said to be a basis of X if S is independent and S spans X.
https://en.wikipedia.org/wiki/Dependence_relation
Remark.
https://en.wikipedia.org/wiki/Dependence_relation
If X is a non-empty set with a dependence relation ◃, then X always has a basis with respect to ◃. Furthermore, any two bases of X have the same cardinality.
https://en.wikipedia.org/wiki/Dependence_relation
In mathematics, a derivation is a function on an algebra which generalizes certain features of the derivative operator. Specifically, given an algebra A over a ring or a field K, a K-derivation is a K-linear map D : A → A that satisfies Leibniz's law: D(ab) = a D(b) + D(a) b. More generally, if M is an A-bimodule, a K-linear map D : A → M that satisfies the Leibniz law is also called a derivation.
https://en.wikipedia.org/wiki/Homogeneous_derivation
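The formal derivative on the polynomial algebra K[x] is the prototypical K-derivation, and the Leibniz law can be verified directly on coefficient lists. A sketch (polynomials are lists of coefficients, lowest degree first; helper names are illustrative):

```python
def poly_mul(a, b):
    """Product of two polynomials given as coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_add(a, b):
    """Sum of two polynomials given as coefficient lists."""
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def D(p):
    """Formal derivative: the prototypical K-derivation on K[x]."""
    return [i * c for i, c in enumerate(p)][1:] or [0]

a = [1, 2, 3]        # 1 + 2x + 3x^2
b = [0, 1, 1]        # x + x^2
lhs = D(poly_mul(a, b))
rhs = poly_add(poly_mul(a, D(b)), poly_mul(D(a), b))
assert lhs == rhs    # Leibniz law: D(ab) = a D(b) + D(a) b
```

Since K[x] is commutative here, a D(b) and D(a) b commute; the bimodule order in the law only matters in the noncommutative case.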
The collection of all K-derivations of A to itself is denoted by DerK(A). The collection of K-derivations of A into an A-module M is denoted by DerK(A, M). Derivations occur in many different contexts in diverse areas of mathematics.
https://en.wikipedia.org/wiki/Homogeneous_derivation
The partial derivative with respect to a variable is an R-derivation on the algebra of real-valued differentiable functions on Rn. The Lie derivative with respect to a vector field is an R-derivation on the algebra of differentiable functions on a differentiable manifold; more generally it is a derivation on the tensor algebra of a manifold. It follows that the adjoint representation of a Lie algebra is a derivation on that algebra.
https://en.wikipedia.org/wiki/Homogeneous_derivation
The Pincherle derivative is an example of a derivation in abstract algebra. If the algebra A is noncommutative, then the commutator with respect to a fixed element N of A defines a linear endomorphism of A to itself, which is a derivation over K: the map F ↦ [N, F] = NF − FN satisfies [N, FG] = [N, F]G + F[N, G]. An algebra A equipped with a distinguished derivation d forms a differential algebra, and is itself a significant object of study in areas such as differential Galois theory.
https://en.wikipedia.org/wiki/Homogeneous_derivation
In mathematics, a derivation ∂ of a commutative ring A is called a locally nilpotent derivation (LND) if every element of A is annihilated by some power of ∂. One motivation for the study of locally nilpotent derivations comes from the fact that some of the counterexamples to Hilbert's 14th problem are obtained as the kernels of a derivation on a polynomial ring. Over a field k of characteristic zero, to give a locally nilpotent derivation on an integral domain A, finitely generated over the field, is equivalent to giving an action of the additive group (k, +) on the affine variety X = Spec(A). Roughly speaking, an affine variety admitting "plenty" of actions of the additive group is considered similar to an affine space.
https://en.wikipedia.org/wiki/Locally_nilpotent_derivation
In mathematics, a dessin d'enfant is a type of graph embedding used to study Riemann surfaces and to provide combinatorial invariants for the action of the absolute Galois group of the rational numbers. The name of these embeddings is French for a "child's drawing"; its plural is either dessins d'enfant, "child's drawings", or dessins d'enfants, "children's drawings". A dessin d'enfant is a graph, with its vertices colored alternately black and white, embedded in an oriented surface that, in many cases, is simply a plane. For the coloring to exist, the graph must be bipartite.
https://en.wikipedia.org/wiki/Dessin_d'enfant
The faces of the embedding are required to be topological disks. The surface and the embedding may be described combinatorially using a rotation system, a cyclic order of the edges surrounding each vertex of the graph that describes the order in which the edges would be crossed by a path that travels clockwise on the surface in a small loop around the vertex. Any dessin can provide the surface it is embedded in with a structure as a Riemann surface.
https://en.wikipedia.org/wiki/Dessin_d'enfant
It is natural to ask which Riemann surfaces arise in this way. The answer is provided by Belyi's theorem, which states that the Riemann surfaces that can be described by dessins are precisely those that can be defined as algebraic curves over the field of algebraic numbers. The absolute Galois group transforms these particular curves into each other, and thereby also transforms the underlying dessins. For a more detailed treatment of this subject, see Schneps (1994) or Lando & Zvonkin (2004).
https://en.wikipedia.org/wiki/Dessin_d'enfant
In mathematics, a determinantal point process is a stochastic point process, the probability distribution of which is characterized as a determinant of some function. Such processes arise as important tools in random matrix theory, combinatorics, physics, and wireless network modeling.
https://en.wikipedia.org/wiki/Determinantal_point_process
In mathematics, a developable surface (or torse: archaic) is a smooth surface with zero Gaussian curvature. That is, it is a surface that can be flattened onto a plane without distortion (i.e. it can be bent without stretching or compression). Conversely, it is a surface which can be made by transforming a plane (i.e. "folding", "bending", "rolling", "cutting" and/or "gluing"). In three dimensions all developable surfaces are ruled surfaces (but not vice versa). There are developable surfaces in four-dimensional space R^4 which are not ruled. The envelope of a single-parameter family of planes is called a developable surface.
https://en.wikipedia.org/wiki/Developable_surface
In mathematics, a diagonal form is an algebraic form (homogeneous polynomial) without cross-terms involving different indeterminates. That is, it is a_1 x_1^m + a_2 x_2^m + ⋯ + a_n x_n^m for some given degree m. Such forms F, and the hypersurfaces F = 0 they define in projective space, are very special in geometric terms, with many symmetries. They also include famous cases like the Fermat curves, and other examples well known in the theory of Diophantine equations. A great deal has been worked out about their theory: algebraic geometry, local zeta-functions via Jacobi sums, and the Hardy–Littlewood circle method.
https://en.wikipedia.org/wiki/Fermat_hypersurface
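Evaluating a diagonal form is a one-line sum, which makes the absence of cross-terms concrete. A sketch (the function name is illustrative):

```python
def diagonal_form(coeffs, xs, m):
    """Evaluate the diagonal form sum_i a_i * x_i^m at the point xs."""
    assert len(coeffs) == len(xs)
    return sum(a * x ** m for a, x in zip(coeffs, xs))

# x^2 + y^2 - z^2 is a diagonal form of degree 2; it vanishes on
# Pythagorean triples such as (3, 4, 5).
assert diagonal_form([1, 1, -1], [3, 4, 5], 2) == 0
# The Fermat cubic x^3 + y^3 + z^3 is the diagonal form with all a_i = 1, m = 3.
assert diagonal_form([1, 1, 1], [1, 2, 3], 3) == 1 + 8 + 27
```

Diophantine questions about such forms ask for integer (or rational) points where the form takes a prescribed value, as in the first assertion above.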
In mathematics, a dicut is a partition of the vertices of a directed graph into two subsets, so that each edge that has an endpoint in both subsets is directed from the first subset to the second. Each strongly connected component of the graph must be entirely contained in one of the two subsets, so a strongly connected graph has no nontrivial dicuts. The second of the two subsets in a dicut, a subset of vertices with no edges that exit the subset, is called a closure. The closure problem is the algorithmic problem of finding a dicut, in an edge-weighted directed graph, whose total weight is as large as possible. It can be solved in polynomial time. In planar graphs, dicuts and cycles are dual concepts.
https://en.wikipedia.org/wiki/Directed_cut
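The defining condition of a closure (no edge exits the subset) is easy to test directly on an edge list. A sketch (the function name is illustrative):

```python
def is_closure(edges, subset):
    """True if no directed edge leaves `subset` (the second side of a dicut)."""
    subset = set(subset)
    return all(not (u in subset and v not in subset) for u, v in edges)

# Every edge between the two sides points from {a, b} into {c, d},
# so ({a, b}, {c, d}) is a dicut and {c, d} is a closure.
edges = [("a", "c"), ("b", "c"), ("b", "d"), ("a", "b")]
assert is_closure(edges, {"c", "d"})
assert not is_closure(edges, {"a", "b"})   # edges exit {a, b}
```

The closure problem then maximizes the total vertex weight of such a subset; the check above is only the feasibility test, not the optimization.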
The dual graph of a directed graph, embedded in the plane, is a graph with a vertex for each face of the given graph, and a dual edge between two dual vertices when the corresponding two faces are separated by an edge. Each dual edge crosses one of the original graph edges, turned by 90° clockwise. For a dicut in the given graph, the duals of the edges that cross the dicut form a directed cycle in the dual graph, and vice versa. A dijoin can be defined as a set of edges that crosses all dicuts; when the edges of a dijoin are contracted, the result is a strongly connected graph.
https://en.wikipedia.org/wiki/Directed_cut
Woodall's conjecture, an unsolved problem in this area, states that in any directed graph the minimum number of edges in a dicut (the unweighted minimum closure) equals the maximum number of disjoint dijoins that can be found in the graph (a packing of dijoins). A fractional weighted version of the conjecture, posed by Jack Edmonds and Rick Giles, was refuted by Alexander Schrijver. In the other direction, the Lucchesi–Younger theorem states that the minimum size of a dijoin equals the maximum number of disjoint dicuts that can be found in a given graph.
https://en.wikipedia.org/wiki/Directed_cut
In mathematics, a diffeology on a set generalizes the concept of smooth charts in a differentiable manifold, declaring what the "smooth parametrizations" in the set are. The concept was first introduced by Jean-Marie Souriau in the 1980s under the name Espace différentiel and later developed by his students Paul Donato and Patrick Iglesias. A related idea was introduced by Kuo-Tsaï Chen (陳國才, Chen Guocai) in the 1970s, using convex sets instead of open sets for the domains of the plots.
https://en.wikipedia.org/wiki/Diffeology
In mathematics, a diffeomorphism is an isomorphism of smooth manifolds. It is an invertible function that maps one differentiable manifold to another such that both the function and its inverse are differentiable.
https://en.wikipedia.org/wiki/Diffeomorphism_group
In mathematics, a differentiable function of one real variable is a function whose derivative exists at each point in its domain. In other words, the graph of a differentiable function has a non-vertical tangent line at each interior point in its domain. A differentiable function is smooth (the function is locally well approximated as a linear function at each interior point) and does not contain any break, angle, or cusp. If x0 is an interior point in the domain of a function f, then f is said to be differentiable at x0 if the derivative f′(x0) exists.
https://en.wikipedia.org/wiki/Continuously_differentiable
In other words, the graph of f has a non-vertical tangent line at the point (x0, f(x0)). f is said to be differentiable on U if it is differentiable at every point of U. f is said to be continuously differentiable if its derivative is also a continuous function over the domain of f. Generally speaking, f is said to be of class C^k if its first k derivatives f′(x), f″(x), …, f^(k)(x) exist and are continuous over the domain of f.
https://en.wikipedia.org/wiki/Continuously_differentiable
In mathematics, a differentiable manifold (also differential manifold) is a type of manifold that is locally similar enough to a vector space to allow one to apply calculus. Any manifold can be described by a collection of charts (atlas). One may then apply ideas from calculus while working within the individual charts, since each chart lies within a vector space to which the usual rules of calculus apply. If the charts are suitably compatible (namely, the transition from one chart to another is differentiable), then computations done in one chart are valid in any other differentiable chart.
https://en.wikipedia.org/wiki/Smooth_manifold
In formal terms, a differentiable manifold is a topological manifold with a globally defined differential structure. Any topological manifold can be given a differential structure locally by using the homeomorphisms in its atlas and the standard differential structure on a vector space. To induce a global differential structure on the local coordinate systems induced by the homeomorphisms, their compositions on chart intersections in the atlas must be differentiable functions on the corresponding vector space.
https://en.wikipedia.org/wiki/Smooth_manifold
In other words, where the domains of charts overlap, the coordinates defined by each chart are required to be differentiable with respect to the coordinates defined by every chart in the atlas. The maps that relate the coordinates defined by the various charts to one another are called transition maps. The ability to define such a local differential structure on an abstract space allows one to extend the definition of differentiability to spaces without global coordinate systems.
https://en.wikipedia.org/wiki/Smooth_manifold
A locally differential structure allows one to define the globally differentiable tangent space, differentiable functions, and differentiable tensor and vector fields. Differentiable manifolds are very important in physics.
https://en.wikipedia.org/wiki/Smooth_manifold
Special kinds of differentiable manifolds form the basis for physical theories such as classical mechanics, general relativity, and Yang–Mills theory. It is possible to develop a calculus for differentiable manifolds. This leads to such mathematical machinery as the exterior calculus. The study of calculus on differentiable manifolds is known as differential geometry. "Differentiability" of a manifold has been given several meanings, including: continuously differentiable, k-times differentiable, smooth (which itself has many meanings), and analytic.
https://en.wikipedia.org/wiki/Smooth_manifold
In mathematics, a differentiable manifold M of dimension n is called parallelizable if there exist n smooth vector fields V1, …, Vn on the manifold such that at every point p of M the tangent vectors V1(p), …, Vn(p) provide a basis of the tangent space at p. Equivalently, the tangent bundle is a trivial bundle, so that the associated principal bundle of linear frames has a global section on M. A particular choice of such a basis of vector fields on M is called a parallelization (or an absolute parallelism) of M.
https://en.wikipedia.org/wiki/Parallelizable_manifold
In mathematics, a differential algebraic group is a differential algebraic variety with a compatible group structure. Differential algebraic groups were introduced by Cassidy (1972).
https://en.wikipedia.org/wiki/Differential_algebraic_group
In mathematics, a differential equation is an equation that relates one or more unknown functions and their derivatives. In applications, the functions generally represent physical quantities, the derivatives represent their rates of change, and the differential equation defines a relationship between the two. Such relations are common; therefore, differential equations play a prominent role in many disciplines including engineering, physics, economics, and biology.
https://en.wikipedia.org/wiki/Order_(differential_equation)
The study of differential equations consists mainly of the study of their solutions (the set of functions that satisfy each equation), and of the properties of their solutions. Only the simplest differential equations are soluble by explicit formulas; however, many properties of solutions of a given differential equation may be determined without computing them exactly. Often when a closed-form expression for the solutions is not available, solutions may be approximated numerically using computers. The theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy.
https://en.wikipedia.org/wiki/Order_(differential_equation)
In mathematics, a differential field K is differentially closed if every finite system of differential equations with a solution in some differential field extending K already has a solution in K. This concept was introduced by Robinson (1959). Differentially closed fields are the analogues for differential equations of algebraically closed fields for polynomial equations.
https://en.wikipedia.org/wiki/Differentially_closed_field
In mathematics, a differential invariant is an invariant for the action of a Lie group on a space that involves the derivatives of graphs of functions in the space. Differential invariants are fundamental in projective differential geometry, and the curvature is often studied from this point of view. Differential invariants were introduced in special cases by Sophus Lie in the early 1880s and studied by Georges Henri Halphen at the same time. Lie (1884) was the first general work on differential invariants, and established the relationship between differential invariants, invariant differential equations, and invariant differential operators.
https://en.wikipedia.org/wiki/Differential_invariant
Differential invariants are contrasted with geometric invariants. Whereas differential invariants can involve a distinguished choice of independent variables (or a parameterization), geometric invariants do not. Élie Cartan's method of moving frames is a refinement that, while less general than Lie's methods of differential invariants, always yields invariants of the geometrical kind.
https://en.wikipedia.org/wiki/Differential_invariant
In mathematics, a differential operator is an operator defined as a function of the differentiation operator. It is helpful, as a matter of notation first, to consider differentiation as an abstract operation that accepts a function and returns another function (in the style of a higher-order function in computer science). This article considers mainly linear differential operators, which are the most common type. However, non-linear differential operators also exist, such as the Schwarzian derivative.
https://en.wikipedia.org/wiki/Ring_of_differential_operators
In mathematics, a differential poset is a partially ordered set (or poset for short) satisfying certain local properties. (The formal definition is given below.) This family of posets was introduced by Stanley (1988) as a generalization of Young's lattice (the poset of integer partitions ordered by inclusion), many of whose combinatorial properties are shared by all differential posets. In addition to Young's lattice, the other most significant example of a differential poset is the Young–Fibonacci lattice.
https://en.wikipedia.org/wiki/Differential_poset
In mathematics, a differential variational inequality (DVI) is a dynamical system that incorporates ordinary differential equations and variational inequalities or complementarity problems. DVIs are useful for representing models involving both dynamics and inequality constraints. Examples of such problems include mechanical impact problems, electrical circuits with ideal diodes, Coulomb friction problems for contacting bodies, and dynamic economic and related problems such as dynamic traffic networks and networks of queues (where the constraints can either be upper limits on queue length or that the queue length cannot become negative). DVIs are related to a number of other concepts including differential inclusions, projected dynamical systems, evolutionary inequalities, and parabolic variational inequalities.
https://en.wikipedia.org/wiki/Differential_variational_inequality
Differential variational inequalities were first formally introduced by Pang and Stewart, whose definition should not be confused with the differential variational inequality used in Aubin and Cellina (1984). Differential variational inequalities have the form: find u(t) ∈ K such that ⟨v − u(t), F(t, x(t), u(t))⟩ ≥ 0 for every v ∈ K and almost all t, where K is a closed convex set and dx/dt = f(t, x(t), u(t)), x(t₀) = x₀. Closely associated with DVIs are dynamic/differential complementarity problems: if K is a closed convex cone, then the variational inequality is equivalent to the complementarity problem K ∋ u(t) ⊥ F(t, x(t), u(t)) ∈ K*.
https://en.wikipedia.org/wiki/Differential_variational_inequality
In mathematics, a diffiety is a geometrical object which plays the same role in the modern theory of partial differential equations that algebraic varieties play for algebraic equations, that is, to encode the space of solutions in a more conceptual way. The term was coined in 1984 by Alexandre Mikhailovich Vinogradov as a portmanteau of differential and variety.
https://en.wikipedia.org/wiki/Diffiety
In mathematics, a digital manifold is a special kind of combinatorial manifold which is defined in digital space i.e. grid cell space. A combinatorial manifold is a kind of manifold which is a discretization of a manifold. It usually means a piecewise linear manifold made by simplicial complexes.
https://en.wikipedia.org/wiki/Digital_manifold
In mathematics, a dihedral group is the group of symmetries of a regular polygon, which includes rotations and reflections. Dihedral groups are among the simplest examples of finite groups, and they play an important role in group theory, geometry, and chemistry. The notation for the dihedral group differs in geometry and abstract algebra.
https://en.wikipedia.org/wiki/Dihedral_symmetry
In geometry, Dn or Dihn refers to the symmetries of the n-gon, a group of order 2n. In abstract algebra, D2n refers to this same dihedral group. This article uses the geometric convention, Dn.
https://en.wikipedia.org/wiki/Dihedral_symmetry
In mathematics, a dijoin is a subset of the edges of a directed graph, with the property that contracting every edge in the dijoin produces a strongly connected graph. Equivalently, a dijoin is a subset of the edges that, for every dicut, includes at least one edge crossing the dicut. Here, a dicut is a partition of the vertices into two subsets, so that each edge that has an endpoint in both subsets is directed from the first subset to the second. Woodall's conjecture, an unsolved problem in this area, states that in any directed graph the minimum number of edges in a dicut (the unweighted minimum closure) equals the maximum number of disjoint dijoins that can be found in the graph (a packing of dijoins).
https://en.wikipedia.org/wiki/Dijoin
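The defining property of a dijoin (contracting its edges yields a strongly connected graph) can be checked by brute force on small digraphs. The following Python sketch is ours, not from the literature: it contracts the candidate edge set with a union-find structure and then tests strong connectivity of the quotient graph by forward and backward reachability.

```python
def is_strongly_connected(nodes, edges):
    # Strongly connected iff every node is reachable from an arbitrary
    # start node following edges forward, and also following them backward.
    def reach(adj, s):
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for v in adj.get(u, []):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen
    if not nodes:
        return True
    fwd, rev = {}, {}
    for u, v in edges:
        fwd.setdefault(u, []).append(v)
        rev.setdefault(v, []).append(u)
    s = next(iter(nodes))
    return reach(fwd, s) == nodes and reach(rev, s) == nodes

def is_dijoin(nodes, edges, candidate):
    # Contract every edge of the candidate set (union-find),
    # then test strong connectivity of the quotient graph.
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in candidate:
        parent[find(u)] = find(v)
    qnodes = {find(v) for v in nodes}
    qedges = [(find(u), find(v)) for u, v in edges if find(u) != find(v)]
    return is_strongly_connected(qnodes, qedges)

# The directed path a -> b -> c is not strongly connected, but contracting
# both edges collapses it to a point, so the full edge set is a dijoin;
# contracting only a -> b leaves the dicut {ab} | {c} uncrossed.
nodes = {"a", "b", "c"}
edges = [("a", "b"), ("b", "c")]
print(is_dijoin(nodes, edges, edges))          # True
print(is_dijoin(nodes, edges, [("a", "b")]))   # False
```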
A fractional weighted version of the conjecture, posed by Jack Edmonds and Rick Giles, was refuted by Alexander Schrijver. The Lucchesi–Younger theorem states that the minimum size of a dijoin, in any given directed graph, equals the maximum number of disjoint dicuts that can be found in the graph. The minimum weight dijoin in a weighted graph can be found in polynomial time, and is a special case of the submodular flow problem. In planar graphs, dijoins and feedback arc sets are dual concepts. The dual graph of a directed graph, embedded in the plane, is a graph with a vertex for each face of the given graph, and a dual edge between two dual vertices when the corresponding two faces are separated by an edge.
https://en.wikipedia.org/wiki/Dijoin
Each dual edge crosses one of the original graph edges, turned by 90° clockwise. A feedback arc set is a subset of the edges that includes at least one edge from every directed cycle.
https://en.wikipedia.org/wiki/Dijoin
For a dijoin in the given graph, the corresponding set of edges forms a directed cut in the dual graph, and vice versa. This relationship between these two problems allows the feedback arc set problem to be solved efficiently for planar graphs, even though it is NP-hard for other types of graphs.
https://en.wikipedia.org/wiki/Dijoin
In mathematics, a dilation is a function f from a metric space M into itself that satisfies the identity d(f(x), f(y)) = r d(x, y) for all points x, y ∈ M, where d(x, y) is the distance from x to y and r is some positive real number. In Euclidean space, such a dilation is a similarity of the space. Dilations change the size but not the shape of an object or figure. Every dilation of a Euclidean space that is not a congruence has a unique fixed point that is called the center of dilation. Some congruences have fixed points and others do not.
https://en.wikipedia.org/wiki/Dilation_theory
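The defining identity and the fixed-point property can be verified numerically. A minimal sketch in Python (the helper names and the specific ratio and center are our choices) builds a Euclidean dilation about a chosen center and checks that it scales every distance by r and fixes the center:

```python
import math

def make_dilation(r, center):
    # Dilation of the plane with ratio r > 0 about a fixed center:
    # f(x) = center + r * (x - center)
    def f(p):
        return tuple(c + r * (x - c) for x, c in zip(p, center))
    return f

def dist(p, q):
    return math.dist(p, q)

f = make_dilation(3.0, (1.0, 1.0))
p, q = (2.0, 5.0), (-1.0, 0.5)
# defining identity: d(f(p), f(q)) = r * d(p, q)
print(math.isclose(dist(f(p), f(q)), 3.0 * dist(p, q)))  # True
# for r != 1 the center is the unique fixed point
print(f((1.0, 1.0)))  # (1.0, 1.0)
```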
In mathematics, a direct limit is a way to construct a (typically large) object from many (typically smaller) objects that are put together in a specific way. These objects may be groups, rings, vector spaces or in general objects from any category. The way they are put together is specified by a system of homomorphisms (group homomorphisms, ring homomorphisms, or in general morphisms in the category) between those smaller objects. The direct limit of the objects A_i, where i ranges over some directed set I, is denoted by lim→ A_i.
https://en.wikipedia.org/wiki/Direct_limit
(This is a slight abuse of notation as it suppresses the system of homomorphisms that is crucial for the structure of the limit.) Direct limits are a special case of the concept of colimit in category theory. Direct limits are dual to inverse limits, which are also a special case of limits in category theory.
https://en.wikipedia.org/wiki/Direct_limit
In mathematics, a direct limit of groups is the direct limit of a direct system of groups. These are central objects of study in algebraic topology, especially stable homotopy theory and homological algebra. They are sometimes called stable groups, though this term normally means something quite different in model theory. Certain examples of stable groups are easier to study than "unstable" groups, the groups occurring in the limit. This is a priori surprising, given that they are generally infinite-dimensional, constructed as limits of groups with finite-dimensional representations.
https://en.wikipedia.org/wiki/Direct_limit_of_groups
In mathematics, a directed set (or a directed preorder or a filtered set) is a nonempty set A together with a reflexive and transitive binary relation ≤ (that is, a preorder), with the additional property that every pair of elements has an upper bound. In other words, for any a and b in A there must exist c in A with a ≤ c and b ≤ c. A directed set's preorder is called a direction.
https://en.wikipedia.org/wiki/Directed_set
The notion defined above is sometimes called an upward directed set. A downward directed set is defined analogously, meaning that every pair of elements is bounded below. Some authors (and this article) assume that a directed set is directed upward, unless otherwise stated.
https://en.wikipedia.org/wiki/Directed_set
Other authors call a set directed if and only if it is directed both upward and downward. Directed sets are a generalization of nonempty totally ordered sets. That is, all totally ordered sets are directed sets (in contrast to partially ordered sets, which need not be directed). Join-semilattices (which are partially ordered sets) are directed sets as well, but not conversely.
https://en.wikipedia.org/wiki/Directed_set
Likewise, lattices are directed sets both upward and downward. In topology, directed sets are used to define nets, which generalize sequences and unite the various notions of limit used in analysis. Directed sets also give rise to direct limits in abstract algebra and (more generally) category theory.
https://en.wikipedia.org/wiki/Directed_set
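A concrete instance of the defining property: the positive integers ordered by divisibility form a directed set, with lcm(a, b) serving as an upper bound of any pair {a, b}. The following Python sketch (the sample range is an arbitrary choice of ours) checks this on a finite sample:

```python
from math import lcm          # Python 3.9+
from itertools import combinations

# Divisibility is a preorder on the positive integers (it is in fact a
# partial order), and every pair {a, b} is bounded above by lcm(a, b).
def divides(a, b):
    return b % a == 0

sample = range(1, 30)
for a, b in combinations(sample, 2):
    c = lcm(a, b)
    assert divides(a, c) and divides(b, c)
print("every pair in the sample has an upper bound")
```

Note that this directed set is not totally ordered: neither of 4 and 6 divides the other, yet both divide their upper bound 12.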
In mathematics, a discrete series representation is an irreducible unitary representation of a locally compact topological group G that is a subrepresentation of the left regular representation of G on L²(G). In the Plancherel measure, such representations have positive measure. The name comes from the fact that they are exactly the representations that occur discretely in the decomposition of the regular representation.
https://en.wikipedia.org/wiki/Discrete_series_representation
In mathematics, a discrete valuation is an integer valuation on a field K; that is, a function ν : K → Z ∪ {∞} satisfying the conditions ν(x·y) = ν(x) + ν(y), ν(x + y) ≥ min{ν(x), ν(y)}, and ν(x) = ∞ if and only if x = 0, for all x, y ∈ K. Note that often the trivial valuation which takes on only the values 0, ∞ is explicitly excluded. A field with a non-trivial discrete valuation is called a discrete valuation field.
https://en.wikipedia.org/wiki/Discrete_valuation_field
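The standard example of a non-trivial discrete valuation is the p-adic valuation on the rationals: ν(x) is the exponent of p in the factorization of x. A Python sketch (the function name `nu_p` is ours) implements it and spot-checks the three axioms on a sample of rationals:

```python
from fractions import Fraction
from itertools import product
import math

def nu_p(x, p):
    # p-adic valuation on Q: if x = p^k * (a/b) with p dividing neither
    # a nor b, then nu_p(x) = k; by convention nu_p(0) = infinity.
    if x == 0:
        return math.inf
    x = Fraction(x)
    k, n, d = 0, x.numerator, x.denominator
    while n % p == 0:
        n //= p
        k += 1
    while d % p == 0:
        d //= p
        k -= 1
    return k

p = 5
samples = [Fraction(a, b) for a in range(-6, 7) if a for b in range(1, 7)]
for x, y in product(samples, repeat=2):
    assert nu_p(x * y, p) == nu_p(x, p) + nu_p(y, p)
    assert nu_p(x + y, p) >= min(nu_p(x, p), nu_p(y, p))
print(nu_p(Fraction(50, 3), 5))  # 2, since 50/3 = 5^2 * (2/3)
```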
In mathematics, a disjoint union (or discriminated union) of a family of sets (A_i : i ∈ I) is a set A, often denoted by ⨆_{i∈I} A_i, with an injection of each A_i into A, such that the images of these injections form a partition of A (that is, each element of A belongs to exactly one of these images). A disjoint union of a family of pairwise disjoint sets is their union. In category theory, the disjoint union is the coproduct of the category of sets, and thus defined up to a bijection. In this context, the notation ∐_{i∈I} A_i is often used.
https://en.wikipedia.org/wiki/Disjoint_unions
The disjoint union of two sets A and B is written with infix notation as A ⊔ B. Some authors use the alternative notation A ⊎ B or A ∪⋅ B (along with the corresponding ⨄_{i∈I} A_i or ∪⋅_{i∈I} A_i).
https://en.wikipedia.org/wiki/Disjoint_unions
A standard way for building the disjoint union is to define A as the set of ordered pairs (x, i) such that x ∈ A_i, and the injection A_i → A as x ↦ (x, i).
https://en.wikipedia.org/wiki/Disjoint_unions
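The ordered-pair construction is direct to express in code. A short Python sketch (the function name and index labels are ours) tags each element with the index of the set it came from, so that shared elements stay distinct in the union:

```python
def disjoint_union(families):
    # Standard construction: replace each element x of A_i by the
    # ordered pair (x, i), so equal elements of different sets
    # remain distinguishable in the result.
    return {(x, i) for i, A in families.items() for x in A}

families = {"first": {1, 2, 3}, "second": {3, 4}}
U = disjoint_union(families)
print(len(U))  # 5, even though the ordinary union has only 4 elements
print(sorted(U))
```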
In mathematics, a dispersive partial differential equation or dispersive PDE is a partial differential equation that is dispersive. In this context, dispersion means that waves of different wavelength propagate at different phase velocities.
https://en.wikipedia.org/wiki/Dispersive_PDE
In mathematics, a dissipative operator is a linear operator A defined on a linear subspace D(A) of a Banach space X, taking values in X, such that for all λ > 0 and all x ∈ D(A): ‖(λI − A)x‖ ≥ λ‖x‖. A couple of equivalent definitions are given below. A dissipative operator is called maximally dissipative if it is dissipative and for all λ > 0 the operator λI − A is surjective, meaning that the range when applied to the domain D(A) is the whole of the space X. An operator that obeys a similar condition but with a plus sign instead of a minus sign (that is, the negation of a dissipative operator) is called an accretive operator. The main importance of dissipative operators is their appearance in the Lumer–Phillips theorem, which characterizes maximally dissipative operators as the generators of contraction semigroups.
https://en.wikipedia.org/wiki/Dissipative_operator
In mathematics, a distinctive feature of algebraic geometry is that some line bundles on a projective variety can be considered "positive", while others are "negative" (or a mixture of the two). The most important notion of positivity is that of an ample line bundle, although there are several related classes of line bundles. Roughly speaking, positivity properties of a line bundle are related to having many global sections. Understanding the ample line bundles on a given variety X amounts to understanding the different ways of mapping X into projective space.
https://en.wikipedia.org/wiki/Ample_line_bundle
In view of the correspondence between line bundles and divisors (built from codimension-1 subvarieties), there is an equivalent notion of an ample divisor. In more detail, a line bundle is called basepoint-free if it has enough sections to give a morphism to projective space. A line bundle is semi-ample if some positive power of it is basepoint-free; semi-ampleness is a kind of "nonnegativity".
https://en.wikipedia.org/wiki/Ample_line_bundle
More strongly, a line bundle on a complete variety X is very ample if it has enough sections to give a closed immersion (or "embedding") of X into projective space. A line bundle is ample if some positive power is very ample. An ample line bundle on a projective variety X has positive degree on every curve in X. The converse is not quite true, but there are corrected versions of the converse, the Nakai–Moishezon and Kleiman criteria for ampleness.
https://en.wikipedia.org/wiki/Ample_line_bundle
In mathematics, a distinguished limit is an appropriately chosen scale factor used in the method of matched asymptotic expansions.
https://en.wikipedia.org/wiki/Distinguished_limit
In mathematics, a distributive lattice is a lattice in which the operations of join and meet distribute over each other. The prototypical examples of such structures are collections of sets for which the lattice operations can be given by set union and intersection. Indeed, these lattices of sets describe the scenery completely: every distributive lattice is—up to isomorphism—given as such a lattice of sets.
https://en.wikipedia.org/wiki/Free_distributive_lattice
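The prototypical example can be checked exhaustively: in the lattice of subsets of a finite set, with join as union and meet as intersection, each operation distributes over the other. A small Python sketch (the ground set is an arbitrary choice of ours):

```python
from itertools import combinations

# The power set of {0, 1, 2}, ordered by inclusion, with
# join = union (|) and meet = intersection (&).
ground = {0, 1, 2}
subsets = [frozenset(c) for r in range(len(ground) + 1)
           for c in combinations(sorted(ground), r)]

for a in subsets:
    for b in subsets:
        for c in subsets:
            # meet distributes over join, and dually
            assert a & (b | c) == (a & b) | (a & c)
            assert a | (b & c) == (a | b) & (a | c)
print("distributivity holds on all", len(subsets) ** 3, "triples")
```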
In mathematics, a divergent series is an infinite series that is not convergent, meaning that the infinite sequence of the partial sums of the series does not have a finite limit. If a series converges, the individual terms of the series must approach zero. Thus any series in which the individual terms do not approach zero diverges. However, convergence is a stronger condition: not all series whose terms approach zero converge.
https://en.wikipedia.org/wiki/Summation_method
A counterexample is the harmonic series 1 + 1/2 + 1/3 + 1/4 + 1/5 + ⋯ = Σ_{n=1}^{∞} 1/n. The divergence of the harmonic series was proven by the medieval mathematician Nicole Oresme.
https://en.wikipedia.org/wiki/Summation_method
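The divergence is visible numerically: the terms 1/n tend to zero, yet the partial sums H_n keep growing, roughly like ln(n) plus the Euler–Mascheroni constant 0.5772…. A short Python sketch:

```python
# Partial sums of the harmonic series: H_n = 1 + 1/2 + ... + 1/n.
# The terms tend to zero, but H_n is unbounded (H_n ~ ln(n) + 0.5772...).
def harmonic(n):
    return sum(1.0 / k for k in range(1, n + 1))

print(harmonic(10))        # about 2.93
print(harmonic(100_000))   # about 12.09
```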
In specialized mathematical contexts, values can be objectively assigned to certain series whose sequences of partial sums diverge, in order to make meaning of the divergence of the series. A summability method or summation method is a partial function from the set of series to values. For example, Cesàro summation assigns Grandi's divergent series 1 − 1 + 1 − 1 + ⋯ the value 1/2.
https://en.wikipedia.org/wiki/Summation_method
Cesàro summation is an averaging method, in that it relies on the arithmetic mean of the sequence of partial sums. Other methods involve analytic continuations of related series. In physics, there are a wide variety of summability methods; these are discussed in greater detail in the article on regularization.
https://en.wikipedia.org/wiki/Summation_method
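Cesàro summation of Grandi's series can be computed directly: the partial sums oscillate between 1 and 0, but their running averages settle at 1/2. A minimal Python sketch (the helper name is ours):

```python
from itertools import accumulate

def cesaro_means(terms):
    # Cesàro (C, 1) means: averages of the sequence of partial sums.
    partial = list(accumulate(terms))
    return [sum(partial[: k + 1]) / (k + 1) for k in range(len(partial))]

# Grandi's series 1 - 1 + 1 - 1 + ...
grandi = [(-1) ** n for n in range(1000)]
means = cesaro_means(grandi)
print(means[-1])  # 0.5: the partial sums 1, 0, 1, 0, ... average to 1/2
```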
In mathematics, a diversity is a generalization of the concept of metric space. The concept was introduced in 2012 by Bryant and Tupper, who call diversities "a form of multi-way metric". The concept finds application in nonlinear analysis. Given a set X, let ℘_fin(X) be the set of finite subsets of X. A diversity is a pair (X, δ) consisting of a set X and a function δ : ℘_fin(X) → R satisfying (D1) δ(A) ≥ 0, with δ(A) = 0 if and only if |A| ≤ 1, and (D2) if B ≠ ∅ then δ(A ∪ C) ≤ δ(A ∪ B) + δ(B ∪ C).
https://en.wikipedia.org/wiki/Diversity_(mathematics)
Bryant and Tupper observe that these axioms imply monotonicity; that is, if A ⊆ B, then δ(A) ≤ δ(B). They state that the term "diversity" comes from the appearance of a special case of their definition in work on phylogenetic and ecological diversities. They give the following examples:
https://en.wikipedia.org/wiki/Diversity_(mathematics)
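One simple example of a diversity is the "diameter diversity" on the plane, where δ(A) is the largest pairwise distance within the finite set A. A Python sketch (the function name and sample points are ours) computes it and illustrates the monotonicity implied by the axioms:

```python
from itertools import combinations
import math

def delta(A):
    # Diameter diversity on R^2: delta(A) is the maximum pairwise
    # distance in A, with delta(A) = 0 when |A| <= 1 (axiom D1).
    A = list(A)
    if len(A) <= 1:
        return 0.0
    return max(math.dist(p, q) for p, q in combinations(A, 2))

pts = [(0, 0), (3, 4), (6, 0), (1, 1)]
print(delta([]))              # 0.0
print(delta(pts[:2]))         # 5.0, the 3-4-5 triangle hypotenuse
# monotonicity: A subset of B implies delta(A) <= delta(B)
print(delta(pts[:2]) <= delta(pts))  # True
```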
In mathematics, a divisibility sequence is an integer sequence (a_n) indexed by positive integers n such that if m ∣ n then a_m ∣ a_n for all m, n. That is, whenever one index is a multiple of another one, then the corresponding term also is a multiple of the other term. The concept can be generalized to sequences with values in any ring where the concept of divisibility is defined. A strong divisibility sequence is an integer sequence (a_n) such that for all positive integers m, n, gcd(a_m, a_n) = a_gcd(m,n). Every strong divisibility sequence is a divisibility sequence: gcd(m, n) = m if and only if m ∣ n. Therefore, by the strong divisibility property, gcd(a_m, a_n) = a_m and therefore a_m ∣ a_n.
https://en.wikipedia.org/wiki/Divisibility_sequence
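A classical example of a strong divisibility sequence is the Fibonacci sequence, which satisfies gcd(F_m, F_n) = F_gcd(m, n). The following Python sketch verifies the identity exhaustively on a small range of indices:

```python
from math import gcd

def fib(n):
    # Iterative Fibonacci: F_0 = 0, F_1 = 1, F_2 = 1, ...
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Strong divisibility: gcd(F_m, F_n) = F_gcd(m, n),
# which in turn implies the divisibility property m | n => F_m | F_n.
for m in range(1, 30):
    for n in range(1, 30):
        assert gcd(fib(m), fib(n)) == fib(gcd(m, n))
print(gcd(fib(12), fib(18)))  # 8, which is F_6 = F_gcd(12, 18)
```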
In mathematics, a divisor of an integer n, also called a factor of n, is an integer m that may be multiplied by some integer to produce n. In this case, one also says that n is a multiple of m. An integer n is divisible or evenly divisible by another integer m if m is a divisor of n; this implies dividing n by m leaves no remainder.
https://en.wikipedia.org/wiki/Divisor_(number_theory)
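Listing the positive divisors of n is a standard exercise: every divisor d ≤ √n pairs with the divisor n/d, so a scan up to √n suffices. A short Python sketch (the function name is ours):

```python
def divisors(n):
    # All positive divisors of n > 0, collected in pairs (d, n // d)
    # while scanning d up to sqrt(n).
    result = set()
    d = 1
    while d * d <= n:
        if n % d == 0:
            result.add(d)
            result.add(n // d)
        d += 1
    return sorted(result)

print(divisors(60))  # [1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, 60]
```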