The Akivis identity is [[x, y], z] + [[y, z], x] + [[z, x], y] = [x, y, z] + [y, z, x] + [z, x, y] − [y, x, z] − [x, z, y] − [z, y, x], where [x, y] denotes the bilinear bracket and [x, y, z] the trilinear bracket (associator). An Akivis algebra with [x, y, z] = 0 is a Lie algebra, for the Akivis identity reduces to the Jacobi identity. Note that the terms on the right-hand side have positive sign for even permutations and negative sign for odd permutations of x, y, z. Any algebra (even if nonassociative) is an Akivis algebra if we define [x, y] = xy − yx and [x, y, z] = (xy)z − x(yz). It is known that every Akivis algebra may be represented as a subalgebra of a (possibly nonassociative) algebra in this way (for associative algebras, the associator is identically zero, and the Akivis identity reduces to the Jacobi identity).
https://en.wikipedia.org/wiki/Akivis_algebra
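The commutator/associator construction above can be checked numerically. Below is a sketch (not from the article): a random bilinear product on R^3, generally nonassociative, for which the Akivis identity must nevertheless hold.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
# a random (generally nonassociative) bilinear product on R^n given by
# structure constants c[i, j, k]: (x*y)_k = sum_{i,j} x_i y_j c[i, j, k]
c = rng.standard_normal((n, n, n))

def mul(x, y):
    return np.einsum('i,j,ijk->k', x, y, c)

def bracket(x, y):      # bilinear bracket [x, y] = xy - yx
    return mul(x, y) - mul(y, x)

def assoc(x, y, z):     # trilinear bracket (associator) [x, y, z] = (xy)z - x(yz)
    return mul(mul(x, y), z) - mul(x, mul(y, z))

x, y, z = rng.standard_normal((3, n))
lhs = bracket(bracket(x, y), z) + bracket(bracket(y, z), x) + bracket(bracket(z, x), y)
rhs = (assoc(x, y, z) + assoc(y, z, x) + assoc(z, x, y)
       - assoc(y, x, z) - assoc(x, z, y) - assoc(z, y, x))
assert np.allclose(lhs, rhs)   # the Akivis identity holds in any algebra
```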
In mathematics, and in particular the study of dynamical systems, the idea of stable and unstable sets or stable and unstable manifolds give a formal mathematical definition to the general notions embodied in the idea of an attractor or repellor. In the case of hyperbolic dynamics, the corresponding notion is that of the hyperbolic set.
https://en.wikipedia.org/wiki/Stable_manifold
In mathematics, and in particular the study of game theory, a function is graph continuous if it exhibits the following properties. The concept was originally defined by Partha Dasgupta and Eric Maskin in 1986 and is a version of continuity that finds application in the study of continuous games.
https://en.wikipedia.org/wiki/Graph_continuous_function
In mathematics, and in particular the theory of group representations, the regular representation of a group G is the linear representation afforded by the group action of G on itself by translation. One distinguishes the left regular representation λ given by left translation and the right regular representation ρ given by the inverse of right translation.
https://en.wikipedia.org/wiki/Regular_module
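A minimal concrete sketch, assuming the cyclic group Z/3 as the example group: the left regular representation realizes each group element as a permutation matrix on the basis {e_h}, and matrix multiplication then matches the group operation.

```python
import numpy as np

n = 3  # the cyclic group Z/3

def lam(g):
    # left translation: lambda(g) e_h = e_{g+h mod n},
    # so the (i, j) entry is 1 exactly when i = g + j (mod n)
    m = np.zeros((n, n), dtype=int)
    for j in range(n):
        m[(g + j) % n, j] = 1
    return m

# homomorphism property: lambda(a) lambda(b) = lambda(a + b)
for a in range(n):
    for b in range(n):
        assert np.array_equal(lam(a) @ lam(b), lam((a + b) % n))
```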
In mathematics, and in particular universal algebra, the concept of an n-ary group (also called n-group or multiary group) is a generalization of the concept of a group to a set G with an n-ary operation instead of a binary operation. By an n-ary operation is meant any map f: Gn → G from the n-th Cartesian power of G to G. The axioms for an n-ary group are defined in such a way that they reduce to those of a group in the case n = 2. The earliest work on these structures was done in 1904 by Kasner and in 1928 by Dörnte; the first systematic account of (what were then called) polyadic groups was given in 1940 by Emil Leon Post in a famous 143-page paper in the Transactions of the American Mathematical Society.
https://en.wikipedia.org/wiki/N-ary_group
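A hypothetical toy example (not from the article): addition of three elements mod 5 is a ternary (3-ary) group operation, and the 3-ary associativity law can be checked exhaustively.

```python
import itertools

m = 5

def f(a, b, c):
    # a 3-ary operation on Z/5: f(a, b, c) = a + b + c (mod 5)
    return (a + b + c) % m

# 3-ary associativity: bracketing an inner f at any position gives the same result
for a, b, c, d, e in itertools.product(range(m), repeat=5):
    assert f(f(a, b, c), d, e) == f(a, f(b, c, d), e) == f(a, b, f(c, d, e))
```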
In mathematics, and in particular algebra, a generalized inverse (or g-inverse) of an element x is an element y that has some properties of an inverse element but not necessarily all of them. The purpose of constructing a generalized inverse of a matrix is to obtain a matrix that can serve as an inverse in some sense for a wider class of matrices than invertible matrices. Generalized inverses can be defined in any mathematical structure that involves associative multiplication, that is, in a semigroup. This article describes generalized inverses of a matrix A.
https://en.wikipedia.org/wiki/Pseudo_inverse
A matrix A^g ∈ R^(n×m) is a generalized inverse of a matrix A ∈ R^(m×n) if A A^g A = A. A generalized inverse exists for an arbitrary matrix, and when a matrix has a regular inverse, this inverse is its unique generalized inverse.
https://en.wikipedia.org/wiki/Pseudo_inverse
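As a sketch: the Moore–Penrose pseudoinverse is one particular generalized inverse, so it must satisfy the defining identity A A^g A = A even for a singular matrix with no regular inverse.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])   # rank 1, so no regular inverse exists
Ag = np.linalg.pinv(A)            # shape (3, 2): a generalized inverse of A

assert np.allclose(A @ Ag @ A, A)  # the defining identity A A^g A = A
```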
In mathematics, and more particularly in analytic number theory, Perron's formula is a formula due to Oskar Perron to calculate the sum of an arithmetic function, by means of an inverse Mellin transform.
https://en.wikipedia.org/wiki/Perron_formula
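For reference, the standard statement of the formula (added here for illustration; the vertical contour Re s = c must lie to the right of the abscissa of absolute convergence, and x should not be an integer):

```latex
% Perron's formula: for a Dirichlet series
% g(s) = \sum_{n=1}^{\infty} a(n)\, n^{-s}, absolutely convergent for \Re s > \sigma,
% and any c > \sigma, x > 0 not an integer,
A(x) \;=\; \sum_{n \le x} a(n)
      \;=\; \frac{1}{2\pi i} \int_{c - i\infty}^{\,c + i\infty} g(s)\, \frac{x^{s}}{s} \, ds .
```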
In mathematics, and more particularly in number theory, primorial, denoted by "#", is a function from natural numbers to natural numbers similar to the factorial function, but rather than successively multiplying positive integers, the function only multiplies prime numbers. The name "primorial", coined by Harvey Dubner, draws an analogy to primes similar to the way the name "factorial" relates to factors.
https://en.wikipedia.org/wiki/Primorial
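A minimal sketch of the convention in which n# is the product of all primes up to n (the other common convention indexes by the k-th prime):

```python
def is_prime(k):
    # simple trial division, fine for small arguments
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

def primorial(n):
    # n# = product of all primes <= n
    result = 1
    for k in range(2, n + 1):
        if is_prime(k):
            result *= k
    return result
```

For example, 7# = 2 · 3 · 5 · 7 = 210, and the value stays at 210 for n = 8, 9, 10 until the next prime 11 is reached.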
In mathematics, and more particularly in polyhedral combinatorics, Eberhard's theorem partially characterizes the multisets of polygons that can form the faces of simple convex polyhedra. It states that, for given numbers of triangles, quadrilaterals, pentagons, heptagons, and other polygons other than hexagons, there exists a convex polyhedron with those given numbers of faces of each type (and an unspecified number of hexagonal faces) if and only if those numbers of polygons obey a linear equation derived from Euler's polyhedral formula. The theorem is named after Victor Eberhard, a blind German mathematician, who published it in 1888 in his habilitation thesis and in expanded form in an 1891 book on polyhedra.
https://en.wikipedia.org/wiki/Eberhard's_theorem
In mathematics, and more particularly in set theory, a cover (or covering) of a set X is a family of subsets of X whose union is all of X. More formally, if C = {U_α : α ∈ A} is an indexed family of subsets U_α ⊂ X (indexed by the set A), then C is a cover of X if ⋃_{α∈A} U_α = X. Thus the collection {U_α : α ∈ A} is a cover of X if each element of X belongs to at least one of the subsets U_α. A subcover of a cover of a set is a subset of the cover that also covers the set. A cover is called an open cover if each of its elements is an open set.
https://en.wikipedia.org/wiki/Refinement_(topology)
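A small concrete check of the definitions, with hypothetical sets chosen for illustration: a family covers X exactly when the union of its members is X, and a subcover is a subfamily that still covers.

```python
X = set(range(10))
C = [set(range(0, 6)), set(range(4, 10)), {2, 3}]

def is_cover(family, X):
    # C covers X iff the union of its members equals X
    return set().union(*family) == X

assert is_cover(C, X)
# the first two sets alone already cover X, so they form a subcover of C
assert is_cover(C[:2], X)
```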
In mathematics, and more particularly in the analytic theory of regular continued fractions, an infinite regular continued fraction x is said to be restricted, or composed of restricted partial quotients, if the sequence of denominators of its partial quotients is bounded; that is, x = [a_0; a_1, a_2, a_3, …] = a_0 + 1/(a_1 + 1/(a_2 + 1/(a_3 + 1/(a_4 + ⋱)))), and there is some positive integer M such that all the (integral) partial denominators a_i are less than or equal to M.
https://en.wikipedia.org/wiki/Restricted_partial_quotients
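A sketch with the standard example: the golden ratio has continued fraction [1; 1, 1, 1, …], so its partial quotients are bounded by M = 1 and it is restricted. The convergents follow the usual recurrence h_k = a_k h_{k−1} + h_{k−2}, k_k = a_k k_{k−1} + k_{k−2}.

```python
def convergent(partial_quotients):
    # evaluate [a_0; a_1, ..., a_n] as a fraction h/k via the standard recurrence
    h_prev, h = 1, partial_quotients[0]
    k_prev, k = 0, 1
    for a in partial_quotients[1:]:
        h_prev, h = h, a * h + h_prev
        k_prev, k = k, a * k + k_prev
    return h, k

h, k = convergent([1] * 30)         # golden ratio, truncated after 30 quotients
phi = (1 + 5 ** 0.5) / 2
assert abs(h / k - phi) < 1e-10     # convergents are ratios of Fibonacci numbers
```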
In mathematics, and more precisely in analysis, the Wallis integrals constitute a family of integrals introduced by John Wallis.
https://en.wikipedia.org/wiki/Wallis'_integrals
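A sketch of the family W_n = ∫_0^{π/2} sin^n(x) dx via the standard recurrence W_n = (n−1)/n · W_{n−2}, with W_0 = π/2 and W_1 = 1, cross-checked against a direct midpoint-rule quadrature.

```python
import math

def wallis(n):
    # W_n = (n - 1)/n * W_{n-2}, with W_0 = pi/2 and W_1 = 1
    if n == 0:
        return math.pi / 2
    if n == 1:
        return 1.0
    return (n - 1) / n * wallis(n - 2)

# cross-check W_5 = 8/15 against a midpoint-rule quadrature of sin^5 on [0, pi/2]
N = 100_000
h = (math.pi / 2) / N
quad = sum(math.sin((i + 0.5) * h) ** 5 for i in range(N)) * h
assert abs(wallis(5) - quad) < 1e-8
```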
In mathematics, and more precisely in semigroup theory, a nilsemigroup or nilpotent semigroup is a semigroup whose every element is nilpotent.
https://en.wikipedia.org/wiki/Nilpotent_semigroup
In mathematics, and more precisely in semigroup theory, a variety of finite semigroups is a class of semigroups having some nice algebraic properties. Those classes can be defined in two distinct ways, using either algebraic notions or topological notions. Varieties of finite monoids, varieties of finite ordered semigroups and varieties of finite ordered monoids are defined similarly. This notion is very similar to the general notion of variety in universal algebra.
https://en.wikipedia.org/wiki/Variety_of_finite_semigroups
In mathematics, and more precisely in topology, the mapping class group of a surface, sometimes called the modular group or Teichmüller modular group, is the group of homeomorphisms of the surface viewed up to continuous (in the compact-open topology) deformation. It is of fundamental importance for the study of 3-manifolds via their embedded surfaces and is also studied in algebraic geometry in relation to moduli problems for curves. The mapping class group can be defined for arbitrary manifolds (indeed, for arbitrary topological spaces) but the 2-dimensional setting is the most studied in group theory. The mapping class group of surfaces are related to various other groups, in particular braid groups and outer automorphism groups.
https://en.wikipedia.org/wiki/Dehn-Nielsen_theorem
In mathematics, and more specifically in abstract algebra, a *-algebra (or involutive algebra) is a mathematical structure consisting of two involutive rings R and A, where R is commutative and A has the structure of an associative algebra over R. Involutive algebras generalize the idea of a number system equipped with conjugation, for example the complex numbers and complex conjugation, matrices over the complex numbers and conjugate transpose, and linear operators over a Hilbert space and Hermitian adjoints. However, it may happen that an algebra admits no involution.
https://en.wikipedia.org/wiki/Involutive_algebra
In mathematics, and more specifically in abstract algebra, a pseudo-ring is one of the following variants of a ring: (1) a rng, i.e., a structure satisfying all the axioms of a ring except for the existence of a multiplicative identity; (2) a set R with two binary operations + and ⋅ such that (R, +) is an abelian group with identity 0, and a(b + c) + a0 = ab + ac and (b + c)a + 0a = ba + ca for all a, b, c in R; (3) an abelian group (A, +) equipped with a subgroup B and a multiplication B × A → A making B a ring and A a B-module. None of these definitions are equivalent, so it is best to avoid the term "pseudo-ring" or to clarify which meaning is intended.
https://en.wikipedia.org/wiki/Pseudo-ring
In mathematics, and more specifically in abstract algebra, a rng (or non-unital ring or pseudo-ring) is an algebraic structure satisfying the same properties as a ring, but without assuming the existence of a multiplicative identity. The term rng (pronounced rung) is meant to suggest that it is a ring without i, that is, without the requirement for an identity element. There is no consensus in the community as to whether the existence of a multiplicative identity must be one of the ring axioms (see Ring (mathematics) § History). The term rng was coined to alleviate this ambiguity when people want to refer explicitly to a ring without the axiom of multiplicative identity. A number of algebras of functions considered in analysis are not unital, for instance the algebra of functions decreasing to zero at infinity, especially those with compact support on some (non-compact) space.
https://en.wikipedia.org/wiki/Rng_(algebra)
In mathematics, and more specifically in abstract algebra, an element x of a *-algebra is self-adjoint if x* = x. A self-adjoint element is also Hermitian, though the reverse doesn't necessarily hold. A collection C of elements of a star-algebra is self-adjoint if it is closed under the involution operation.
https://en.wikipedia.org/wiki/Self_adjoint
For example, if x* = y then since y* = x** = x in a star-algebra, the set {x, y} is a self-adjoint set even though x and y need not be self-adjoint elements. In functional analysis, a linear operator A : H → H on a Hilbert space is called self-adjoint if it is equal to its own adjoint A*. See self-adjoint operator for a detailed discussion.
https://en.wikipedia.org/wiki/Self_adjoint
If the Hilbert space is finite-dimensional and an orthonormal basis has been chosen, then the operator A is self-adjoint if and only if the matrix describing A with respect to this basis is Hermitian, i.e. if it is equal to its own conjugate transpose. Hermitian matrices are also called self-adjoint. In a dagger category, a morphism f is called self-adjoint if f = f†; this is possible only for an endomorphism f : a → a.
https://en.wikipedia.org/wiki/Self_adjoint
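A finite-dimensional sketch of both points above: with an orthonormal basis fixed, self-adjointness means the matrix equals its conjugate transpose, and a set {x, x*} is self-adjoint even when x itself is not.

```python
import numpy as np

# a Hermitian (self-adjoint) matrix: equal to its own conjugate transpose
A = np.array([[2.0, 1 - 1j],
              [1 + 1j, 3.0]])
assert np.allclose(A, A.conj().T)

# with the involution * = conjugate transpose, {x, y} with y = x* is a
# self-adjoint set even though x and y are not self-adjoint elements
x = np.array([[0.0, 1.0], [0.0, 0.0]])
y = x.conj().T
assert not np.array_equal(x, x.conj().T)           # x is not self-adjoint
assert np.array_equal(x.conj().T, y)               # but x* lies in {x, y}
assert np.array_equal(y.conj().T, x)               # and y* lies in {x, y}
```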
In mathematics, and more specifically in algebraic topology and polyhedral combinatorics, the Euler characteristic (or Euler number, or Euler–Poincaré characteristic) is a topological invariant, a number that describes a topological space's shape or structure regardless of the way it is bent. It is commonly denoted by χ (Greek lower-case letter chi). The Euler characteristic was originally defined for polyhedra and used to prove various theorems about them, including the classification of the Platonic solids.
https://en.wikipedia.org/wiki/Euler–Poincaré_characteristic
It was stated for Platonic solids in 1537 in an unpublished manuscript by Francesco Maurolico. Leonhard Euler, for whom the concept is named, introduced it for convex polyhedra more generally but failed to rigorously prove that it is an invariant. In modern mathematics, the Euler characteristic arises from homology and, more abstractly, homological algebra.
https://en.wikipedia.org/wiki/Euler–Poincaré_characteristic
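For convex polyhedra the invariant is χ = V − E + F = 2, which can be checked directly on the five Platonic solids mentioned above:

```python
# (vertices, edges, faces) for the five Platonic solids
platonic = {
    "tetrahedron": (4, 6, 4),
    "cube": (8, 12, 6),
    "octahedron": (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron": (12, 30, 20),
}

def euler_characteristic(v, e, f):
    return v - e + f

# every convex polyhedron satisfies V - E + F = 2
for name, (v, e, f) in platonic.items():
    assert euler_characteristic(v, e, f) == 2
```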
In mathematics, and more specifically in analysis, a holonomic function is a smooth function of several variables that is a solution of a system of linear homogeneous differential equations with polynomial coefficients and satisfies a suitable dimension condition in terms of D-modules theory. More precisely, a holonomic function is an element of a holonomic module of smooth functions. Holonomic functions can also be described as differentiably finite functions, also known as D-finite functions.
https://en.wikipedia.org/wiki/Holonomic_function
When a power series in the variables is the Taylor expansion of a holonomic function, the sequence of its coefficients, in one or several indices, is also called holonomic. Holonomic sequences are also called P-recursive sequences: they are defined recursively by multivariate recurrences satisfied by the whole sequence and by suitable specializations of it. The situation simplifies in the univariate case: any univariate sequence that satisfies a linear homogeneous recurrence relation with polynomial coefficients, or equivalently a linear homogeneous difference equation with polynomial coefficients, is holonomic.
https://en.wikipedia.org/wiki/Holonomic_function
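A univariate sketch with a classical example: the central binomial coefficients a_n = C(2n, n) are P-recursive (holonomic), since they satisfy the linear recurrence (n + 1) a_{n+1} = (4n + 2) a_n with polynomial coefficients.

```python
from math import comb

# generate a_n = C(2n, n) from the recurrence (n + 1) a_{n+1} = (4n + 2) a_n
a = [1]
for n in range(20):
    a.append((4 * n + 2) * a[n] // (n + 1))   # division is exact

# the recurrence reproduces the closed form C(2n, n)
assert all(a[n] == comb(2 * n, n) for n in range(21))
```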
In mathematics, and more specifically in combinatorial commutative algebra, a zero-divisor graph is an undirected graph representing the zero divisors of a commutative ring. It has elements of the ring as its vertices, and pairs of elements whose product is zero as its edges.
https://en.wikipedia.org/wiki/Zero-divisor_graph
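A small sketch for the ring Z/6, using the common convention that the vertices are the nonzero zero divisors: here the zero divisors are 2, 3, 4, and the edges are the pairs whose product is 0 mod 6.

```python
from itertools import combinations

n = 6  # the ring Z/6
# nonzero zero divisors: a != 0 with a*b = 0 for some nonzero b
zero_divisors = [a for a in range(1, n)
                 if any(a * b % n == 0 for b in range(1, n))]
# an edge joins a pair whose product is zero in the ring
edges = {frozenset((a, b)) for a, b in combinations(zero_divisors, 2)
         if a * b % n == 0}
```

The result is the path 2 – 3 – 4: the products 2·3 and 3·4 are 0 mod 6, but 2·4 = 8 ≡ 2 is not.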
In mathematics, and more specifically in computer algebra and elimination theory, a regular chain is a particular kind of triangular set of multivariate polynomials over a field, where a triangular set is a finite sequence of polynomials such that each one contains at least one more indeterminate than the preceding one. The condition that a triangular set must satisfy to be a regular chain is that, for every k, every common zero (in an algebraically closed field) of the first k polynomials may be extended to a common zero of the (k + 1)th polynomial. In other words, regular chains allow solving systems of polynomial equations by solving successive univariate equations without considering different cases. Regular chains enhance the notion of Wu's characteristic sets in the sense that they provide a better result with a similar method of computation.
https://en.wikipedia.org/wiki/Regular_chain
In mathematics, and more specifically in computer algebra, computational algebraic geometry, and computational commutative algebra, a Gröbner basis is a particular kind of generating set of an ideal in a polynomial ring K[x1, …, xn] over a field K. A Gröbner basis allows many important properties of the ideal and the associated algebraic variety to be deduced easily, such as the dimension and the number of zeros when it is finite. Gröbner basis computation is one of the main practical tools for solving systems of polynomial equations and computing the images of algebraic varieties under projections or rational maps. Gröbner basis computation can be seen as a multivariate, non-linear generalization of both Euclid's algorithm for computing polynomial greatest common divisors, and Gaussian elimination for linear systems. Gröbner bases were introduced by Bruno Buchberger in his 1965 Ph.D. thesis, which also included an algorithm to compute them (Buchberger's algorithm).
https://en.wikipedia.org/wiki/Multivariate_division_algorithm
He named them after his advisor Wolfgang Gröbner. In 2007, Buchberger received the Association for Computing Machinery's Paris Kanellakis Theory and Practice Award for this work. However, the Russian mathematician Nikolai Günther had introduced a similar notion in 1913, published in various Russian mathematical journals.
https://en.wikipedia.org/wiki/Multivariate_division_algorithm
These papers were largely ignored by the mathematical community until their rediscovery in 1987 by Bodo Renschuch et al. An analogous concept for multivariate power series was developed independently by Heisuke Hironaka in 1964, who named them standard bases. This term has been used by some authors to also denote Gröbner bases. The theory of Gröbner bases has been extended by many authors in various directions. It has been generalized to other structures such as polynomials over principal ideal rings or polynomial rings, and also some classes of non-commutative rings and algebras, like Ore algebras.
https://en.wikipedia.org/wiki/Multivariate_division_algorithm
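The source URLs point at the multivariate division algorithm that underlies Buchberger's algorithm. Below is a sketch for two variables under lex order with x > y (dict-based polynomials; a simplified illustration, not the article's presentation), run on a classic textbook example: dividing x²y + xy² + y² by (xy − 1, y² − 1) leaves remainder x + y + 1.

```python
def lead(p):
    # leading monomial: tuple comparison on (deg_x, deg_y) is exactly lex order
    return max(p)

def divide(f, divisors):
    # multivariate division: returns quotients q_i and remainder r with
    # f = sum_i q_i * g_i + r, and no monomial of r divisible by any LT(g_i)
    quotients = [dict() for _ in divisors]
    remainder = {}
    p = dict(f)
    while p:
        lt = lead(p)
        for q, g in zip(quotients, divisors):
            lg = lead(g)
            if lg[0] <= lt[0] and lg[1] <= lt[1]:   # LT(g) divides LT(p)
                m = (lt[0] - lg[0], lt[1] - lg[1])
                c = p[lt] / g[lg]
                q[m] = q.get(m, 0) + c
                for mg, cg in g.items():            # p -= c * x^m * g
                    key = (mg[0] + m[0], mg[1] + m[1])
                    p[key] = p.get(key, 0) - c * cg
                    if p[key] == 0:
                        del p[key]
                break
        else:   # no leading term divides: move LT(p) to the remainder
            remainder[lt] = remainder.get(lt, 0) + p.pop(lt)
    return quotients, remainder

# f = x^2 y + x y^2 + y^2 divided by (x y - 1, y^2 - 1)
f = {(2, 1): 1, (1, 2): 1, (0, 2): 1}
g1 = {(1, 1): 1, (0, 0): -1}
g2 = {(0, 2): 1, (0, 0): -1}
(q1, q2), r = divide(f, [g1, g2])
# q1 = x + y, q2 = 1, r = x + y + 1
```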
In mathematics, and more specifically in differential geometry, a Hermitian manifold is the complex analogue of a Riemannian manifold. More precisely, a Hermitian manifold is a complex manifold with a smoothly varying Hermitian inner product on each (holomorphic) tangent space. One can also define a Hermitian manifold as a real manifold with a Riemannian metric that preserves a complex structure. A complex structure is essentially an almost complex structure with an integrability condition, and this condition yields a unitary structure (U(n) structure) on the manifold.
https://en.wikipedia.org/wiki/Hermitian_manifold
By dropping this condition, we get an almost Hermitian manifold. On any almost Hermitian manifold, we can introduce a fundamental 2-form (or cosymplectic structure) that depends only on the chosen metric and the almost complex structure. This form is always non-degenerate. With the extra integrability condition that it is closed (i.e., it is a symplectic form), we get an almost Kähler structure. If both the almost complex structure and the fundamental form are integrable, then we have a Kähler structure.
https://en.wikipedia.org/wiki/Hermitian_manifold
In mathematics, and more specifically in geometry, parametrization (or parameterization; also parameterisation, parametrisation) is the process of finding parametric equations of a curve, a surface, or, more generally, a manifold or a variety, defined by an implicit equation. The inverse process is called implicitization. "To parameterize" by itself means "to express in terms of parameters". Parametrization is a mathematical process consisting of expressing the state of a system, process or model as a function of some independent quantities called parameters.
https://en.wikipedia.org/wiki/Parametrization_invariance
The state of the system is generally determined by a finite set of coordinates, and the parametrization thus consists of one function of several real variables for each coordinate. The number of parameters is the number of degrees of freedom of the system. For example, the position of a point that moves on a curve in three-dimensional space is determined by the time needed to reach the point when starting from a fixed origin.
https://en.wikipedia.org/wiki/Parametrization_invariance
If x, y, z are the coordinates of the point, the movement is thus described by the parametric equation x = f(t), y = g(t), z = h(t), where t is the parameter and denotes the time. Such a parametric equation completely determines the curve, without the need of any interpretation of t as time, and is thus called a parametric equation of the curve (this is sometimes abbreviated by saying that one has a parametric curve). One similarly gets the parametric equation of a surface by considering functions of two parameters t and u.
https://en.wikipedia.org/wiki/Parametrization_invariance
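A sketch with hypothetical choices f = cos, g = sin, h = id (a helix): one parameter t determines the point, and every point of the curve satisfies the implicit equations x² + y² = 1 and y = sin z.

```python
import math

def point(t):
    # a parametric curve in R^3: x = cos t, y = sin t, z = t (a helix)
    return (math.cos(t), math.sin(t), t)

# the single parameter t pins down the point; the point in turn satisfies
# the implicit equations of the curve
for t in (0.0, 0.5, 1.0, 2.0, math.pi):
    x, y, z = point(t)
    assert abs(x * x + y * y - 1) < 1e-12   # x^2 + y^2 = 1
    assert abs(y - math.sin(z)) < 1e-12     # y = sin z
```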
In mathematics, and more specifically in graph theory, a directed graph (or digraph) is a graph that is made up of a set of vertices connected by directed edges, often called arcs.
https://en.wikipedia.org/wiki/Weighted_digraph
In mathematics, and more specifically in graph theory, a multigraph is a graph which is permitted to have multiple edges (also called parallel edges), that is, edges that have the same end nodes. Thus two vertices may be connected by more than one edge. There are 2 distinct notions of multiple edges: Edges without own identity: The identity of an edge is defined solely by the two nodes it connects. In this case, the term "multiple edges" means that the same edge can occur several times between these two nodes.
https://en.wikipedia.org/wiki/Multigraph
Edges with own identity: Edges are primitive entities just like nodes. When multiple edges connect two nodes, these are different edges. A multigraph is different from a hypergraph, which is a graph in which an edge can connect any number of nodes, not just two. For some authors, the terms pseudograph and multigraph are synonymous. For others, a pseudograph is a multigraph that is permitted to have loops.
https://en.wikipedia.org/wiki/Multigraph
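A sketch of the "edges without own identity" viewpoint: a multigraph is then just a multiset of unordered endpoint pairs, so parallel edges are tracked by multiplicity.

```python
from collections import Counter

# two parallel a-b edges, one b-c edge, and a loop at c (a pseudograph if loops count)
edges = [frozenset(e) for e in [("a", "b"), ("a", "b"), ("b", "c"), ("c", "c")]]
multiplicity = Counter(edges)

assert multiplicity[frozenset(("a", "b"))] == 2   # parallel edges: same end nodes
assert multiplicity[frozenset(("c", "c"))] == 1   # a loop
```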
In mathematics, and more specifically in graph theory, a polytree (also called directed tree, oriented tree or singly connected network) is a directed acyclic graph whose underlying undirected graph is a tree. In other words, if we replace its directed edges with undirected edges, we obtain an undirected graph that is both connected and acyclic. A polyforest (or directed forest or oriented forest) is a directed acyclic graph whose underlying undirected graph is a forest.
https://en.wikipedia.org/wiki/Oriented_tree
In other words, if we replace its directed edges with undirected edges, we obtain an undirected graph that is acyclic. A polytree is an example of an oriented graph. The term polytree was coined in 1987 by Rebane and Pearl.
https://en.wikipedia.org/wiki/Oriented_tree
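A sketch of the definition as a check (helper names are my own): a DAG is a polytree exactly when its underlying undirected graph is a tree, i.e. for a simple graph, connected with exactly |V| − 1 edges.

```python
def is_polytree(vertices, arcs):
    # underlying undirected graph: forget directions, merge antiparallel arcs
    undirected = {frozenset(a) for a in arcs}
    if len(undirected) != len(arcs):       # antiparallel arcs give an undirected cycle
        return False
    if len(undirected) != len(vertices) - 1:
        return False                       # a tree on n vertices has n - 1 edges
    # connectivity check by depth-first search
    adj = {v: [] for v in vertices}
    for e in undirected:
        u, w = tuple(e)
        adj[u].append(w)
        adj[w].append(u)
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v])
    return seen == set(vertices)

assert is_polytree({"a", "b", "c"}, [("b", "a"), ("b", "c")])       # path a - b - c
assert not is_polytree({"a", "b", "c"},
                       [("a", "b"), ("b", "c"), ("c", "a")])        # underlying cycle
```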
In mathematics, and more specifically in homological algebra, a resolution (or left resolution; dually a coresolution or right resolution) is an exact sequence of modules (or, more generally, of objects of an abelian category), which is used to define invariants characterizing the structure of a specific module or object of this category. When, as is usual, arrows are oriented to the right, the sequence is supposed to be infinite to the left for (left) resolutions, and to the right for right resolutions. However, a finite resolution is one where only finitely many of the objects in the sequence are non-zero; it is usually represented by a finite exact sequence in which the leftmost object (for resolutions) or the rightmost object (for coresolutions) is the zero object. Generally, the objects in the sequence are restricted to have some property P (for example to be free).
https://en.wikipedia.org/wiki/Minimal_resolution_(algebra)
Thus one speaks of a P resolution. In particular, every module has free resolutions, projective resolutions and flat resolutions, which are left resolutions consisting, respectively, of free modules, projective modules or flat modules. Similarly every module has injective resolutions, which are right resolutions consisting of injective modules.
https://en.wikipedia.org/wiki/Minimal_resolution_(algebra)
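A concrete instance may help (a standard example, added here for illustration): over the integers, Z/2Z admits a finite free resolution of length one, with multiplication by 2 followed by the quotient map.

```latex
% a free resolution of \mathbb{Z}/2\mathbb{Z} as a \mathbb{Z}-module:
% every object to the left of \mathbb{Z}/2\mathbb{Z} is free, and the sequence is exact
0 \longrightarrow \mathbb{Z} \xrightarrow{\;\cdot 2\;} \mathbb{Z}
  \longrightarrow \mathbb{Z}/2\mathbb{Z} \longrightarrow 0
```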
In mathematics, and more specifically in homological algebra, the splitting lemma states that in any abelian category, the following statements are equivalent for a short exact sequence 0 → A → B → C → 0 with maps q : A → B and r : B → C: (1) left split, there exists a morphism t : B → A such that t ∘ q is the identity on A; (2) right split, there exists a morphism u : C → B such that r ∘ u is the identity on C; (3) direct sum, B is isomorphic to the direct sum A ⊕ C, with q corresponding to the natural injection of A and r to the natural projection onto C. If any of these statements holds, the sequence is called a split exact sequence, and the sequence is said to split. In the above short exact sequence, where the sequence splits, it allows one to refine the first isomorphism theorem, which states that C ≅ B/ker r ≅ B/q(A) (i.e., C is isomorphic to the coimage of r or cokernel of q), to B = q(A) ⊕ u(C) ≅ A ⊕ C, where the first isomorphism theorem is then just the projection onto C. It is a categorical generalization of the rank–nullity theorem (in the form V ≅ ker T ⊕ im T) in linear algebra.
https://en.wikipedia.org/wiki/Splitting_lemma
In mathematics, and more specifically in linear algebra, a linear map (also called a linear mapping, linear transformation, vector space homomorphism, or in some contexts linear function) is a mapping V → W between two vector spaces that preserves the operations of vector addition and scalar multiplication. The same names and the same definition are also used for the more general case of modules over a ring; see Module homomorphism. If a linear map is a bijection then it is called a linear isomorphism.
https://en.wikipedia.org/wiki/Linear_endomorphism
In the case where V = W, a linear map is called a linear endomorphism. Sometimes the term linear operator refers to this case, but the term "linear operator" can have different meanings for different conventions: for example, it can be used to emphasize that V and W are real vector spaces (not necessarily with V = W), or it can be used to emphasize that V is a function space, which is a common convention in functional analysis. Sometimes the term linear function has the same meaning as linear map, while in analysis it does not. A linear map from V to W always maps the origin of V to the origin of W. Moreover, it maps linear subspaces in V onto linear subspaces in W (possibly of a lower dimension); for example, it maps a plane through the origin in V to either a plane through the origin in W, a line through the origin in W, or just the origin in W. Linear maps can often be represented as matrices, and simple examples include rotation and reflection linear transformations. In the language of category theory, linear maps are the morphisms of vector spaces.
https://en.wikipedia.org/wiki/Linear_endomorphism
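A sketch of the rotation example: the matrix for rotation by π/2 gives a linear endomorphism of R², and the two defining properties (additivity and homogeneity) plus the origin-to-origin property can be verified directly.

```python
import numpy as np

R = np.array([[0.0, -1.0],
              [1.0,  0.0]])      # rotation by pi/2 about the origin

def T(v):
    return R @ v

u = np.array([1.0, 2.0])
v = np.array([-3.0, 0.5])
c = 2.5
assert np.allclose(T(u + v), T(u) + T(v))          # preserves vector addition
assert np.allclose(T(c * u), c * T(u))             # preserves scalar multiplication
assert np.allclose(T(np.zeros(2)), np.zeros(2))    # origin maps to origin
```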
In mathematics, and more specifically in linear algebra, a linear subspace or vector subspace is a vector space that is a subset of some larger vector space. A linear subspace is usually simply called a subspace when the context serves to distinguish it from other types of subspaces.
https://en.wikipedia.org/wiki/Vector_subspace
In mathematics, and more specifically in numerical analysis, Householder's methods are a class of root-finding algorithms that are used for functions of one real variable with continuous derivatives up to some order d + 1. Each of these methods is characterized by the number d, which is known as the order of the method. The algorithm is iterative and has a rate of convergence of d + 1. These methods are named after the American mathematician Alston Scott Householder.
https://en.wikipedia.org/wiki/Householder's_method
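A sketch of the d = 2 member of the family, Halley's method, which uses f, f′, f″ and converges with order d + 1 = 3; here it is applied to f(x) = x² − 2 to approximate √2. (The iteration formula is the standard Halley update, stated here for illustration.)

```python
def halley(f, df, d2f, x, steps=6):
    # Householder's method of order d = 2 (Halley's method):
    # x_{n+1} = x_n - 2 f f' / (2 f'^2 - f f'')
    for _ in range(steps):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        x = x - (2 * fx * dfx) / (2 * dfx ** 2 - fx * d2fx)
    return x

root = halley(lambda x: x * x - 2,   # f
              lambda x: 2 * x,       # f'
              lambda x: 2.0,         # f''
              x=1.0)
assert abs(root - 2 ** 0.5) < 1e-12  # cubic convergence reaches machine precision fast
```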
In mathematics, and more specifically in order theory, several different types of ordered set have been studied. They include:
- Cyclic orders, orderings in which triples of elements are either clockwise or counterclockwise
- Lattices, partial orders in which each pair of elements has a greatest lower bound and a least upper bound; many different types of lattice have been studied (see map of lattices for a list)
- Partially ordered sets (or posets), orderings in which some pairs are comparable and others might not be
- Preorders, a generalization of partial orders allowing ties (represented as equivalences and distinct from incomparabilities)
- Semiorders, partial orders determined by comparison of numerical values, in which values that are too close to each other are incomparable; a subfamily of partial orders with certain restrictions
- Total orders, orderings that specify, for every two distinct elements, which one is less than the other
- Weak orders, generalizations of total orders allowing ties (represented either as equivalences or, in strict weak orders, as transitive incomparabilities)
- Well-orders, total orders in which every non-empty subset has a least element
- Well-quasi-orderings, a class of preorders generalizing the well-orders
https://en.wikipedia.org/wiki/List_of_order_structures_in_mathematics
In mathematics, and more specifically in partial differential equations, Duhamel's principle is a general method for obtaining solutions to inhomogeneous linear evolution equations like the heat equation, wave equation, and vibrating plate equation. It is named after Jean-Marie Duhamel who first applied the principle to the inhomogeneous heat equation that models, for instance, the distribution of heat in a thin plate which is heated from beneath. For linear evolution equations without spatial dependency, such as a harmonic oscillator, Duhamel's principle reduces to the method of variation of parameters technique for solving linear inhomogeneous ordinary differential equations. It is also an indispensable tool in the study of nonlinear partial differential equations such as the Navier–Stokes equations and nonlinear Schrödinger equation where one treats the nonlinearity as an inhomogeneity.
https://en.wikipedia.org/wiki/Duhamel's_principle
The philosophy underlying Duhamel's principle is that it is possible to go from solutions of the Cauchy problem (or initial value problem) to solutions of the inhomogeneous problem. Consider, for instance, the example of the heat equation modeling the distribution of heat energy u in Rn. Indicating by ut(x, t) the time derivative of u(x, t), the initial value problem is ut(x, t) − Δu(x, t) = 0 with u(x, 0) = g(x), where g is the initial heat distribution.
https://en.wikipedia.org/wiki/Duhamel's_principle
By contrast, the inhomogeneous problem for the heat equation, ut(x, t) − Δu(x, t) = f(x, t) with u(x, 0) = 0, corresponds to adding an external heat energy f(x, t) dt at each point. Intuitively, one can think of the inhomogeneous problem as a set of homogeneous problems each starting afresh at a different time slice t = t0. By linearity, one can add up (integrate) the resulting solutions through time t0 and obtain the solution for the inhomogeneous problem. This is the essence of Duhamel's principle.
https://en.wikipedia.org/wiki/Duhamel's_principle
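In formula form (a standard way of writing the principle for the heat equation, added for illustration; S(t) denotes the solution operator of the homogeneous problem):

```latex
% Duhamel's principle for u_t - \Delta u = f, \; u(x, 0) = 0:
% with (S(t)g)(x) the solution at time t of the homogeneous problem with data g,
u(x, t) \;=\; \int_0^{t} \bigl( S(t - s)\, f(\cdot, s) \bigr)(x) \, ds .
```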
In mathematics, and more specifically in polyhedral combinatorics, a Goldberg polyhedron is a convex polyhedron made from hexagons and pentagons. They were first described in 1937 by Michael Goldberg (1902–1990). They are defined by three properties: each face is either a pentagon or hexagon, exactly three faces meet at each vertex, and they have rotational icosahedral symmetry. They are not necessarily mirror-symmetric; e.g. GP(5,3) and GP(3,5) are enantiomorphs of each other.
https://en.wikipedia.org/wiki/Goldberg_polyhedron
A Goldberg polyhedron is a dual polyhedron of a geodesic sphere. A consequence of Euler's polyhedron formula is that a Goldberg polyhedron always has exactly twelve pentagonal faces. Icosahedral symmetry ensures that the pentagons are always regular and that there are always 12 of them.
https://en.wikipedia.org/wiki/Goldberg_polyhedron
If the vertices are not constrained to a sphere, the polyhedron can be constructed with planar equilateral (but not in general equiangular) faces. Simple examples of Goldberg polyhedra include the dodecahedron and truncated icosahedron. Other forms can be described by taking a chess knight move from one pentagon to the next: first take m steps in one direction, then turn 60° to the left and take n steps.
https://en.wikipedia.org/wiki/Goldberg_polyhedron
Such a polyhedron is denoted GP(m,n). A dodecahedron is GP(1,0) and a truncated icosahedron is GP(1,1).
https://en.wikipedia.org/wiki/Goldberg_polyhedron
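The combinatorics of GP(m,n) can be sketched numerically. The formulas below use the triangulation number T = m² + mn + n², a standard parameter for these polyhedra that is not stated in the text above, so treat it as an assumption; the function name is mine.

```python
def goldberg_counts(m, n):
    """Face/vertex/edge counts of the Goldberg polyhedron GP(m, n),
    assuming the standard triangulation number T = m^2 + m*n + n^2."""
    T = m * m + m * n + n * n
    vertices = 20 * T
    edges = 30 * T
    faces = 10 * T + 2  # always 12 pentagons plus 10*(T - 1) hexagons
    # Sanity check with Euler's polyhedron formula V - E + F = 2.
    assert vertices - edges + faces == 2
    return {"pentagons": 12, "hexagons": 10 * (T - 1),
            "vertices": vertices, "edges": edges}

dodecahedron = goldberg_counts(1, 0)           # 12 pentagons, 0 hexagons
truncated_icosahedron = goldberg_counts(1, 1)  # 12 pentagons, 20 hexagons
```

Note how the twelve pentagons appear for every (m, n), matching the consequence of Euler's formula stated above.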
A similar technique can be applied to construct polyhedra with tetrahedral symmetry and octahedral symmetry. These polyhedra will have triangles or squares rather than pentagons. These variations are given Roman numeral subscripts denoting the number of sides on the non-hexagon faces: GPIII(n,m), GPIV(n,m), and GPV(n,m).
https://en.wikipedia.org/wiki/Goldberg_polyhedron
In mathematics, and more specifically in projective geometry, a projective frame or projective basis is a tuple of points in a projective space that can be used for defining homogeneous coordinates in this space. More precisely, in a projective space of dimension n, a projective frame is an (n + 2)-tuple of points such that no hyperplane contains n + 1 of them. A projective frame is sometimes called a simplex, although a simplex in a space of dimension n has at most n + 1 vertices. In this article, only projective spaces over a field K are considered, although most results can be generalized to projective spaces over a division ring.
https://en.wikipedia.org/wiki/Projective_frame
Let P(V) be a projective space of dimension n, where V is a K-vector space of dimension n + 1. Let p: V ∖ { 0 } → P ( V ) {\displaystyle p:V\setminus \{0\}\to \mathbf {P} (V)} be the canonical projection that maps a nonzero vector v to the corresponding point of P(V), namely the vector line that contains v. Every frame of P(V) can be written as ( p ( e 0 ) , … , p ( e n + 1 ) ) , {\displaystyle \left(p(e_{0}),\ldots ,p(e_{n+1})\right),} for some vectors e 0 , … , e n + 1 {\displaystyle e_{0},\dots ,e_{n+1}} of V. The definition implies the existence of nonzero elements λ 0 , … , λ n + 1 {\displaystyle \lambda _{0},\dots ,\lambda _{n+1}} of K such that λ 0 e 0 + ⋯ + λ n + 1 e n + 1 = 0 {\displaystyle \lambda _{0}e_{0}+\cdots +\lambda _{n+1}e_{n+1}=0} . Replacing e i {\displaystyle e_{i}} by λ i e i {\displaystyle \lambda _{i}e_{i}} for i ≤ n {\displaystyle i\leq n} and e n + 1 {\displaystyle e_{n+1}} by − λ n + 1 e n + 1 {\displaystyle -\lambda _{n+1}e_{n+1}} , one gets the following characterization of a frame: n + 2 points of P(V) form a frame if and only if they are the images under p of the elements of a basis of V together with the sum of these elements. Moreover, two bases define the same frame in this way if and only if the elements of the second one are the products of the elements of the first one by a fixed nonzero element of K. As homographies of P(V) are induced by linear endomorphisms of V, it follows that, given two frames, there is exactly one homography mapping the first one onto the second one.
https://en.wikipedia.org/wiki/Projective_frame
In particular, the only homography fixing the points of a frame is the identity map. This result is much more difficult in synthetic geometry (where projective spaces are defined through axioms).
https://en.wikipedia.org/wiki/Projective_frame
It is sometimes called the first fundamental theorem of projective geometry. Every frame can be written as ( p ( e 0 ) , … , p ( e n ) , p ( e 0 + ⋯ + e n ) ) , {\displaystyle (p(e_{0}),\ldots ,p(e_{n}),p(e_{0}+\cdots +e_{n})),} where ( e 0 , … , e n ) {\displaystyle (e_{0},\dots ,e_{n})} is a basis of V. The projective coordinates or homogeneous coordinates of a point p(v) over this frame are the coordinates of the vector v on the basis ( e 0 , … , e n ) . {\displaystyle (e_{0},\dots ,e_{n}).} If one changes the vectors representing the point p(v) and the frame elements, the coordinates are multiplied by a fixed nonzero scalar.
https://en.wikipedia.org/wiki/Projective_frame
Commonly, the projective space Pn(K) = P(Kn+1) is considered. It has a canonical frame consisting of the image by p of the canonical basis of Kn+1 (consisting of the elements having only one nonzero entry, which is equal to 1), and (1, 1, ..., 1). On this basis, the homogeneous coordinates of p(v) are simply the entries (coefficients) of v. Given another projective space P(V) of the same dimension n, and a frame F of it, there is exactly one homography h mapping F onto the canonical frame of P(Kn+1).
https://en.wikipedia.org/wiki/Projective_frame
The projective coordinates of a point a on the frame F are the homogeneous coordinates of h(a) on the canonical frame of Pn(K). In the case of a projective line, a frame consists of three distinct points. If P1(K) is identified with K with a point at infinity ∞ added, then its canonical frame is (∞, 0, 1). Given any frame (a0, a1, a2), the projective coordinates of a point a ≠ a0 are (r, 1), where r is the cross-ratio (a, a2; a1, a0). If a = a0, the cross-ratio is infinite, and the projective coordinates are (1, 0).
https://en.wikipedia.org/wiki/Projective_frame
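The cross-ratio computation on the projective line can be sketched in homogeneous coordinates, where the point at infinity is simply the pair (1, 0) and the cross-ratio becomes a ratio of 2×2 determinants. This determinant formulation is standard, but the helper names below are mine.

```python
def det2(u, v):
    """Determinant of the 2x2 matrix with columns u and v,
    each a homogeneous pair (x, y) representing a point of P1."""
    return u[0] * v[1] - u[1] * v[0]

def cross_ratio(a, a2, a1, a0):
    """Cross-ratio (a, a2; a1, a0) of four points on the projective line,
    given as homogeneous pairs; the point at infinity is (1, 0)."""
    return (det2(a, a1) * det2(a2, a0)) / (det2(a, a0) * det2(a2, a1))

# Canonical frame (inf, 0, 1) as homogeneous pairs.
inf_, zero, one = (1, 0), (0, 1), (1, 1)
# For a finite point a = (x, 1), the coordinate r recovers x itself.
r = cross_ratio((2, 1), one, zero, inf_)  # r == 2
```

The homogeneous representation avoids any special-casing of the point at infinity, which would otherwise produce indeterminate arithmetic with floating-point infinities.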
In mathematics, and more specifically in ring theory, Krull's theorem, named after Wolfgang Krull, asserts that a nonzero ring has at least one maximal ideal. The theorem was proved in 1929 by Krull, who used transfinite induction. The theorem admits a simple proof using Zorn's lemma, and in fact is equivalent to Zorn's lemma, which in turn is equivalent to the axiom of choice.
https://en.wikipedia.org/wiki/Krull's_theorem
In mathematics, and more specifically in ring theory, an ideal of a ring is a special subset of its elements. Ideals generalize certain subsets of the integers, such as the even numbers or the multiples of 3. Addition and subtraction of even numbers preserves evenness, and multiplying an even number by any integer (even or odd) results in an even number; these closure and absorption properties are the defining properties of an ideal. An ideal can be used to construct a quotient ring in a way similar to how, in group theory, a normal subgroup can be used to construct a quotient group.
https://en.wikipedia.org/wiki/Algebraic_ideal
Among the integers, the ideals correspond one-for-one with the non-negative integers: in this ring, every ideal is a principal ideal consisting of the multiples of a single non-negative number. However, in other rings, the ideals may not correspond directly to the ring elements, and certain properties of integers, when generalized to rings, attach more naturally to the ideals than to the elements of the ring. For instance, the prime ideals of a ring are analogous to prime numbers, and the Chinese remainder theorem can be generalized to ideals.
https://en.wikipedia.org/wiki/Algebraic_ideal
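The one-for-one correspondence between ideals of the integers and non-negative integers can be illustrated directly: the ideal generated by any finite set of integers is the principal ideal of their greatest common divisor. A minimal sketch (the function name is mine):

```python
from functools import reduce
from math import gcd

def ideal_generator(*gens):
    """In the ring of integers every ideal is principal: the ideal
    generated by gens is the set of multiples of gcd(gens)."""
    return reduce(gcd, gens, 0)

# The ideal (4, 6) = {4a + 6b : a, b integers} is exactly the even numbers.
g = ideal_generator(4, 6)
combos = {4 * a + 6 * b for a in range(-5, 6) for b in range(-5, 6)}
```

Every integer combination 4a + 6b lands in the multiples of 2, and 2 itself is such a combination (4·2 + 6·(−1)), illustrating why the ideal is principal.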
There is a version of unique prime factorization for the ideals of a Dedekind domain (a type of ring important in number theory). The related, but distinct, concept of an ideal in order theory is derived from the notion of ideal in ring theory. A fractional ideal is a generalization of an ideal, and the usual ideals are sometimes called integral ideals for clarity.
https://en.wikipedia.org/wiki/Algebraic_ideal
In mathematics, and more specifically in the theory of C*-algebras, the noncommutative tori Aθ, also known as irrational rotation algebras for irrational values of θ, form a family of noncommutative C*-algebras which generalize the algebra of continuous functions on the 2-torus. Many topological and geometric properties of the classical 2-torus have algebraic analogues for the noncommutative tori, and as such they are fundamental examples of a noncommutative space in the sense of Alain Connes.
https://en.wikipedia.org/wiki/Noncommutative_torus
In mathematics, and more specifically in the theory of von Neumann algebras, a crossed product is a basic method of constructing a new von Neumann algebra from a von Neumann algebra acted on by a group. It is related to the semidirect product construction for groups. (Roughly speaking, crossed product is the expected structure for a group ring of a semidirect product group. Therefore crossed products have a ring theory aspect also. This article concentrates on an important case, where they appear in functional analysis.)
https://en.wikipedia.org/wiki/Crossed_product
In mathematics, and more specifically matrix theory, the spread of a matrix is the largest distance in the complex plane between any two eigenvalues of the matrix.
https://en.wikipedia.org/wiki/Spread_of_a_matrix
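For a 2×2 matrix the eigenvalues, and hence the spread, follow from the quadratic formula applied to the characteristic polynomial. A minimal sketch (larger matrices would need a numerical eigensolver; the function name is mine):

```python
import cmath

def spread_2x2(a, b, c, d):
    """Spread of the 2x2 matrix [[a, b], [c, d]]: the distance in the
    complex plane between its two eigenvalues."""
    tr = a + d                      # trace
    det = a * d - b * c             # determinant
    disc = cmath.sqrt(tr * tr - 4 * det)
    lam1 = (tr + disc) / 2
    lam2 = (tr - disc) / 2
    return abs(lam1 - lam2)

# [[0, 1], [1, 0]] has eigenvalues +1 and -1, so its spread is 2;
# the rotation [[0, -1], [1, 0]] has eigenvalues +i and -i, spread 2 as well.
```

The second example shows why the definition refers to distance in the complex plane: a real matrix can have non-real eigenvalues.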
In mathematics, and more specifically number theory, the hyperfactorial of a positive integer n {\displaystyle n} is the product of the numbers of the form x x {\displaystyle x^{x}} from 1 1 {\displaystyle 1^{1}} to n n {\displaystyle n^{n}} .
https://en.wikipedia.org/wiki/Hyperfactorial
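The definition translates directly into a short loop; a sketch (the function name is mine):

```python
def hyperfactorial(n):
    """H(n) = 1^1 * 2^2 * ... * n^n for a positive integer n."""
    result = 1
    for x in range(1, n + 1):
        result *= x ** x
    return result

# H(1) = 1, H(2) = 4, H(3) = 108, H(4) = 27648
```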
In mathematics, and more specifically number theory, the superfactorial of a positive integer n {\displaystyle n} is the product of the first n {\displaystyle n} factorials. They are a special case of the Jordan–Pólya numbers, which are products of arbitrary collections of factorials.
https://en.wikipedia.org/wiki/Superfactorial
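As with the hyperfactorial, the definition is a one-loop computation; by construction every value is a product of factorials, i.e. a Jordan–Pólya number. A sketch (the function name is mine):

```python
from math import factorial

def superfactorial(n):
    """sf(n) = 1! * 2! * ... * n! for a positive integer n."""
    result = 1
    for k in range(1, n + 1):
        result *= factorial(k)
    return result

# sf(1) = 1, sf(2) = 2, sf(3) = 12, sf(4) = 288
```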
In mathematics, and more specifically, in the theory of fractal dimensions, Frostman's lemma provides a convenient tool for estimating the Hausdorff dimension of sets. Lemma: Let A be a Borel subset of Rn, and let s > 0. Then the following are equivalent: (1) Hs(A) > 0, where Hs denotes the s-dimensional Hausdorff measure; (2) there is an (unsigned) Borel measure μ on Rn satisfying μ(A) > 0, and such that μ ( B ( x , r ) ) ≤ r s {\displaystyle \mu (B(x,r))\leq r^{s}} holds for all x ∈ Rn and r > 0. Otto Frostman proved this lemma for closed sets A as part of his PhD dissertation at Lund University in 1935.
https://en.wikipedia.org/wiki/Frostman_lemma
The generalization to Borel sets is more involved, and requires the theory of Suslin sets. A useful corollary of Frostman's lemma requires the notion of the s-capacity of a Borel set A ⊂ Rn, which is defined by C s ( A ) := sup { ( ∫ A × A d μ ( x ) d μ ( y ) | x − y | s ) − 1: μ is a Borel measure and μ ( A ) = 1 } . {\displaystyle C_{s}(A):=\sup {\Bigl \{}{\Bigl (}\int _{A\times A}{\frac {d\mu (x)\,d\mu (y)}{|x-y|^{s}}}{\Bigr )}^{-1}:\mu {\text{ is a Borel measure and }}\mu (A)=1{\Bigr \}}.}
https://en.wikipedia.org/wiki/Frostman_lemma
(Here, we take inf ∅ = ∞ and 1⁄∞ = 0. As before, the measure μ {\displaystyle \mu } is unsigned.) It follows from Frostman's lemma that for Borel A ⊂ Rn d i m H ( A ) = sup { s ≥ 0: C s ( A ) > 0 } . {\displaystyle \mathrm {dim} _{H}(A)=\sup\{s\geq 0:C_{s}(A)>0\}.}
https://en.wikipedia.org/wiki/Frostman_lemma
In mathematics, and particularly category theory, a coherence condition is a collection of conditions requiring that various compositions of elementary morphisms are equal. Typically the elementary morphisms are part of the data of the category. A coherence theorem states that, in order to be assured that all these equalities hold, it suffices to check a small number of identities.
https://en.wikipedia.org/wiki/Coherence_axiom
In mathematics, and particularly complex dynamics, the escaping set of an entire function ƒ consists of all points that tend to infinity under the repeated application of ƒ. That is, a complex number z 0 ∈ C {\displaystyle z_{0}\in \mathbb {C} } belongs to the escaping set if and only if the sequence defined by z n + 1 := f ( z n ) {\displaystyle z_{n+1}:=f(z_{n})} converges to infinity as n {\displaystyle n} gets large. The escaping set of f {\displaystyle f} is denoted by I ( f ) {\displaystyle I(f)} .For example, for f ( z ) = e z {\displaystyle f(z)=e^{z}} , the origin belongs to the escaping set, since the sequence 0 , 1 , e , e e , e e e , … {\displaystyle 0,1,e,e^{e},e^{e^{e}},\dots } tends to infinity.
https://en.wikipedia.org/wiki/Escaping_set
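The example of the exponential can be checked by iterating exp from the origin; a sketch (the function name is mine, and only a few iterates are taken because the values overflow very quickly):

```python
import cmath

def orbit(f, z0, steps):
    """Return the iterates z0, f(z0), f(f(z0)), ..., up to the given count."""
    zs = [z0]
    for _ in range(steps):
        zs.append(f(zs[-1]))
    return zs

# For f(z) = e^z the orbit of 0 is 0, 1, e, e^e, e^(e^e), ...
zs = orbit(cmath.exp, 0, 4)
# The moduli grow without bound, so 0 belongs to the escaping set I(exp).
```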
In mathematics, and particularly functional analysis, the Helly space, named after Eduard Helly, consists of all monotonically increasing functions ƒ: [0, 1] → [0, 1], where [0, 1] denotes the closed interval given by the set of all x such that 0 ≤ x ≤ 1. In other words, for all 0 ≤ x ≤ 1 we have 0 ≤ ƒ(x) ≤ 1 and also if x ≤ y then ƒ(x) ≤ ƒ(y). Let the closed interval [0, 1] be denoted simply by I. We can form the space II by taking the uncountable Cartesian product of closed intervals: I I = ∏ i ∈ I I i {\displaystyle I^{I}=\prod _{i\in I}I_{i}} The space II is exactly the space of functions ƒ: [0, 1] → [0, 1]. For each point x in [0, 1] we assign the point ƒ(x) in Ix = [0, 1].
https://en.wikipedia.org/wiki/Helly_space
In mathematics, and particularly general topology, the half-disk topology is an example of a topology given to the set X {\displaystyle X} of all points ( x , y ) {\displaystyle (x,y)} in the plane such that y ≥ 0 {\displaystyle y\geq 0} . The set X {\displaystyle X} can be termed the closed upper half plane. To give the set X {\displaystyle X} a topology means to say which subsets of X {\displaystyle X} are "open", and to do so in a way that the following axioms are met: The union of open sets is an open set. The finite intersection of open sets is an open set. The set X {\displaystyle X} and the empty set ∅ {\displaystyle \emptyset } are open sets.
https://en.wikipedia.org/wiki/Half-disk_topology
In mathematics, and particularly homology theory, Steenrod's Problem (named after mathematician Norman Steenrod) is a problem concerning the realisation of homology classes by singular manifolds.
https://en.wikipedia.org/wiki/Steenrod_problem
In mathematics, and particularly in axiomatic set theory, the diamond principle ◊ is a combinatorial principle introduced by Ronald Jensen in Jensen (1972) that holds in the constructible universe (L) and that implies the continuum hypothesis. Jensen extracted the diamond principle from his proof that the axiom of constructibility (V = L) implies the existence of a Suslin tree.
https://en.wikipedia.org/wiki/Diamond_principle
In mathematics, and particularly in axiomatic set theory, ♣S (clubsuit) is a family of combinatorial principles that are a weaker version of the corresponding ◊S; it was introduced in 1975 by Adam Ostaszewski.
https://en.wikipedia.org/wiki/Clubsuit
In mathematics, and particularly in category theory, a polygraph is a generalisation of a directed graph. It is also known as a computad. They were introduced as "polygraphs" by Albert Burroni and as "computads" by Ross Street. In the same way that a directed multigraph can freely generate a category, an n-computad is the "most general" structure which can generate a free n-category.
https://en.wikipedia.org/wiki/Polygraph_(mathematics)
In mathematics, and particularly in functional analysis, Fichera's existence principle is an existence and uniqueness theorem for solution of functional equations, proved by Gaetano Fichera in 1954. More precisely, given a general vector space V and two linear maps from it onto two Banach spaces, the principle states necessary and sufficient conditions for a linear transformation between the two dual Banach spaces to be invertible for every vector in V.
https://en.wikipedia.org/wiki/Fichera's_existence_principle
In mathematics, and particularly in graph theory, the dimension of a graph is the least integer n such that there exists a "classical representation" of the graph in the Euclidean space of dimension n with all the edges having unit length. In a classical representation, the vertices must be distinct points, but the edges may cross one another. The dimension of a graph G is written dim ⁡ G {\displaystyle \dim G} . For example, the Petersen graph can be drawn with unit edges in E 2 {\displaystyle E^{2}} , but not in E 1 {\displaystyle E^{1}}: its dimension is therefore 2 (see the figure to the right). This concept was introduced in 1965 by Paul Erdős, Frank Harary and William Tutte. It generalises the concept of unit distance graph to more than 2 dimensions.
https://en.wikipedia.org/wiki/Dimension_(graph_theory)
In mathematics, and particularly in number theory, N is a primary pseudoperfect number if it satisfies the Egyptian fraction equation 1 N + ∑ p | N 1 p = 1 , {\displaystyle {\frac {1}{N}}+\sum _{p\,|\;\!N}{\frac {1}{p}}=1,} where the sum is over only the prime divisors of N.
https://en.wikipedia.org/wiki/Primary_pseudoperfect_number
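The defining Egyptian fraction equation can be verified exactly with rational arithmetic; a sketch (function names are mine):

```python
from fractions import Fraction

def prime_divisors(n):
    """Distinct prime divisors of n by trial division."""
    ps, d = [], 2
    while d * d <= n:
        if n % d == 0:
            ps.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.append(n)
    return ps

def is_primary_pseudoperfect(n):
    """Check 1/N + sum of 1/p over the prime divisors p of N equals 1."""
    total = Fraction(1, n) + sum((Fraction(1, p) for p in prime_divisors(n)),
                                 Fraction(0))
    return total == 1

# e.g. N = 42: 1/42 + 1/2 + 1/3 + 1/7 = (1 + 21 + 14 + 6)/42 = 1.
```

Using Fraction avoids the rounding error that would make a floating-point comparison to 1 unreliable.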
In mathematics, and particularly in potential theory, Dirichlet's principle is the assumption that the minimizer of a certain energy functional is a solution to Poisson's equation.
https://en.wikipedia.org/wiki/Dirichlet_principle
In mathematics, and particularly in set theory, category theory, type theory, and the foundations of mathematics, a universe is a collection that contains all the entities one wishes to consider in a given situation. In set theory, universes are often classes that contain (as elements) all sets for which one hopes to prove a particular theorem. These classes can serve as inner models for various axiomatic systems such as ZFC or Morse–Kelley set theory.
https://en.wikipedia.org/wiki/Universe_(mathematics)
Universes are of critical importance to formalizing concepts in category theory inside set-theoretical foundations. For instance, the canonical motivating example of a category is Set, the category of all sets, which cannot be formalized in a set theory without some notion of a universe. In type theory, a universe is a type whose elements are types.
https://en.wikipedia.org/wiki/Universe_(mathematics)
In mathematics, and particularly in the field of complex analysis, the Hadamard factorization theorem asserts that every entire function with finite order can be represented as a product involving its zeroes and an exponential of a polynomial. It is named for Jacques Hadamard. The theorem may be viewed as an extension of the fundamental theorem of algebra, which asserts that every polynomial may be factored into linear factors, one for each root. It is closely related to the Weierstrass factorization theorem, which does not restrict to entire functions of finite order.
https://en.wikipedia.org/wiki/Hadamard_factorization_theorem
In mathematics, and particularly in the field of complex analysis, the Weierstrass factorization theorem asserts that every entire function can be represented as a (possibly infinite) product involving its zeroes. The theorem may be viewed as an extension of the fundamental theorem of algebra, which asserts that every polynomial may be factored into linear factors, one for each root. The theorem, which is named for Karl Weierstrass, is closely related to a second result that every sequence tending to infinity has an associated entire function with zeroes at precisely the points of that sequence. A generalization of the theorem extends it to meromorphic functions and allows one to consider a given meromorphic function as a product of three factors: terms depending on the function's zeros and poles, and an associated non-zero holomorphic function.
https://en.wikipedia.org/wiki/Weierstrass_factorization_theorem
In mathematics, and particularly in the theory of formal languages, shortlex is a total ordering for finite sequences of objects that can themselves be totally ordered. In the shortlex ordering, sequences are primarily sorted by cardinality (length) with the shortest sequences first, and sequences of the same length are sorted into lexicographical order. Shortlex ordering is also called radix, length-lexicographic, military, or genealogical ordering. In the context of strings on a totally ordered alphabet, the shortlex order is identical to the lexicographical order, except that shorter strings precede longer strings. For example, the shortlex order of the set of strings on the English alphabet (in its usual order) is (ε, a, b, ..., z, aa, ab, ..., az, ba, ..., zz, aaa, ...), where ε denotes the empty string. The strings in this ordering over a fixed finite alphabet can be placed into one-to-one order-preserving correspondence with the non-negative integers, giving the bijective numeration system for representing numbers. The shortlex ordering is also important in the theory of automatic groups.
https://en.wikipedia.org/wiki/Shortlex_order
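The ordering is easy to realize as a sort key: compare lengths first, then fall back to lexicographic order (a sketch; the key-function name is mine):

```python
def shortlex_key(s):
    """Sort key for shortlex order: primarily by length, then lexicographic."""
    return (len(s), s)

words = ["b", "aa", "", "ab", "a", "ba"]
ordered = sorted(words, key=shortlex_key)
# ordered == ["", "a", "b", "aa", "ab", "ba"]
```

Tuples compare componentwise in Python, so the key (length, string) implements exactly the "shorter strings first, ties broken lexicographically" rule.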
In mathematics, and particularly ordinary differential equations (ODEs), a monodromy matrix is the fundamental matrix of a system of ODEs evaluated at the period of the coefficients of the system. It is used for the analysis of periodic solutions of ODEs in Floquet theory.
https://en.wikipedia.org/wiki/Monodromy_operator
In mathematics, and particularly ordinary differential equations, a characteristic multiplier is an eigenvalue of a monodromy matrix. The logarithm of a characteristic multiplier is also known as characteristic exponent. They appear in Floquet theory of periodic differential operators and in the Frobenius method.
https://en.wikipedia.org/wiki/Characteristic_multiplier
In mathematics, and particularly singularity theory, the Milnor number, named after John Milnor, is an invariant of a function germ. If f is a complex-valued holomorphic function germ then the Milnor number of f, denoted μ(f), is either a nonnegative integer, or is infinite. It can be considered both a geometric invariant and an algebraic invariant. This is why it plays an important role in algebraic geometry and singularity theory.
https://en.wikipedia.org/wiki/Milnor_number
In mathematics, and particularly topology, a fiber bundle (or, in Commonwealth English: fibre bundle) is a space that is locally a product space, but globally may have a different topological structure. Specifically, the similarity between a space E {\displaystyle E} and a product space B × F {\displaystyle B\times F} is defined using a continuous surjective map, π: E → B , {\displaystyle \pi :E\to B,} that in small regions of E {\displaystyle E} behaves just like a projection from corresponding regions of B × F {\displaystyle B\times F} to B . {\displaystyle B.} The map π , {\displaystyle \pi ,} called the projection or submersion of the bundle, is regarded as part of the structure of the bundle.
https://en.wikipedia.org/wiki/Trivialization_(mathematics)
The space E {\displaystyle E} is known as the total space of the fiber bundle, B {\displaystyle B} as the base space, and F {\displaystyle F} the fiber. In the trivial case, E {\displaystyle E} is just B × F , {\displaystyle B\times F,} and the map π {\displaystyle \pi } is just the projection from the product space to the first factor. This is called a trivial bundle.
https://en.wikipedia.org/wiki/Trivialization_(mathematics)
Examples of non-trivial fiber bundles include the Möbius strip and Klein bottle, as well as nontrivial covering spaces. Fiber bundles, such as the tangent bundle of a manifold and other more general vector bundles, play an important role in differential geometry and differential topology, as do principal bundles.
https://en.wikipedia.org/wiki/Trivialization_(mathematics)
Mappings between total spaces of fiber bundles that "commute" with the projection maps are known as bundle maps, and the class of fiber bundles forms a category with respect to such mappings. A bundle map from the base space itself (with the identity mapping as projection) to E {\displaystyle E} is called a section of E . {\displaystyle E.} Fiber bundles can be specialized in a number of ways, the most common of which is requiring that the transition maps between the local trivial patches lie in a certain topological group, known as the structure group, acting on the fiber F {\displaystyle F} .
https://en.wikipedia.org/wiki/Trivialization_(mathematics)
In mathematics, and specifically differential geometry, a connection form is a manner of organizing the data of a connection using the language of moving frames and differential forms. Historically, connection forms were introduced by Élie Cartan in the first half of the 20th century as part of, and one of the principal motivations for, his method of moving frames. The connection form generally depends on a choice of a coordinate frame, and so is not a tensorial object. Various generalizations and reinterpretations of the connection form were formulated subsequent to Cartan's initial work.
https://en.wikipedia.org/wiki/Connection_one-form