In mathematics, a translation of axes in two dimensions is a mapping from an xy-Cartesian coordinate system to an x'y'-Cartesian coordinate system in which the x' axis is parallel to the x axis and k units away, and the y' axis is parallel to the y axis and h units away. This means that the origin O' of the new coordinate system has coordinates (h, k) in the original system. The positive x' and y' directions are taken to be the same as the positive x and y directions. A point P has coordinates (x, y) with respect to the original system and coordinates (x', y') with respect to the new system, where x' = x − h and y' = y − k, or equivalently x = x' + h and y = y' + k. In the new coordinate system, the point P will appear to have been translated in the opposite direction.
https://en.wikipedia.org/wiki/Translation_of_axes
For example, if the xy-system is translated a distance h to the right and a distance k upward, then P will appear to have been translated a distance h to the left and a distance k downward in the x'y'-system. A translation of axes in more than two dimensions is defined similarly. A translation of axes is a rigid transformation, but not a linear map. (See Affine transformation.)
https://en.wikipedia.org/wiki/Translation_of_axes
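The coordinate relations above can be sketched in a few lines of Python; the function names to_new and to_old are illustrative, not standard terminology:

```python
# Translation of axes: the new origin O' sits at (h, k) in the original
# xy-system, so coordinates transform by simple subtraction/addition.

def to_new(x, y, h, k):
    """Coordinates (x', y') of a point in the translated x'y'-system."""
    return (x - h, y - k)

def to_old(xp, yp, h, k):
    """Back from (x', y') to the original xy-system."""
    return (xp + h, yp + k)

# A point at (5, 7), viewed from a new origin at (2, 3):
print(to_new(5, 7, 2, 3))  # (3, 4)
print(to_old(3, 4, 2, 3))  # (5, 7)
```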
In mathematics, a translation plane is a projective plane which admits a certain group of symmetries (described below). Along with the Hughes planes and the Figueroa planes, translation planes are among the most well-studied of the known non-Desarguesian planes, and the vast majority of known non-Desarguesian planes are either translation planes or can be obtained from a translation plane via successive iterations of dualization and/or derivation. In a projective plane, let P represent a point, and l represent a line. A central collineation with center P and axis l is a collineation fixing every point on l and every line through P. It is called an elation if P is on l; otherwise it is called a homology. The central collineations with center P and axis l form a group.
https://en.wikipedia.org/wiki/Translation_plane
A line l in a projective plane Π is a translation line if the group of all elations with axis l acts transitively on the points of the affine plane obtained by removing l from the plane Π, Πl (the affine derivative of Π). A projective plane with a translation line is called a translation plane. The affine plane obtained by removing the translation line is called an affine translation plane. While it is often easier to work with projective planes, in this context several authors use the term translation plane to mean affine translation plane.
https://en.wikipedia.org/wiki/Translation_plane
In mathematics, a transverse knot is a smooth embedding of a circle into a three-dimensional contact manifold such that the tangent vector at every point of the knot is transverse to the contact plane at that point. Any Legendrian knot can be C0-perturbed in a direction transverse to the contact planes to obtain a transverse knot. This yields a bijection between the set of isomorphism classes of transverse knots and the set of isomorphism classes of Legendrian knots modulo negative Legendrian stabilization.
https://en.wikipedia.org/wiki/Transverse_knot
In mathematics, a tree is an undirected graph in which any two vertices are connected by exactly one simple path. Any connected graph without simple cycles is a tree. A tree data structure simulates a hierarchical tree structure with a set of linked nodes.
https://en.wikipedia.org/wiki/XML_tree
A hierarchy consists of an order defined on a set. The term hierarchy is used to stress a hierarchical relation among the elements. The XML specification defines an XML document as a well-formed text if it satisfies a list of syntax rules defined in the specification.
https://en.wikipedia.org/wiki/XML_tree
This specification is long, but two key points relating to the tree structure of an XML document are: (1) the begin, end, and empty-element tags that delimit the elements are correctly nested, with none missing and none overlapping; and (2) a single "root" element contains all the other elements. These features resemble those of trees, in that there is a single root node and an order to the elements. XML has appeared as a first-class data type in other languages.
https://en.wikipedia.org/wiki/XML_tree
The JavaScript (E4X) extension explicitly defines two specific objects (XML and XMLList), which support XML document nodes and XML node lists as distinct objects and use a dot-notation specifying parent-child relationships. These data structures represent XML documents as a tree structure. An XML Tree represented graphically can be as simple as an ASCII chart or a more graphically complex hierarchy.
https://en.wikipedia.org/wiki/XML_tree
For instance, the XML document and the ASCII tree have the same structure. XML trees do not show the content in an instance document, only the structure of the document. In this example Product is the root element of the tree, and the two child nodes of Product are Name and Details. Details contains two child nodes, Description and Price. The tree command in Windows and *nix also produces a similar tree structure and path.
https://en.wikipedia.org/wiki/XML_tree
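As a hedged illustration of the structure described above (the Product document itself is reconstructed from the element names mentioned in the text, not taken verbatim from the article), this Python sketch parses an XML document and renders its element hierarchy as an ASCII tree:

```python
# Parse a small XML document and render its element structure
# (not its content) as an indented ASCII tree.
import xml.etree.ElementTree as ET

doc = """
<Product>
  <Name/>
  <Details>
    <Description/>
    <Price/>
  </Details>
</Product>
"""

def tree_lines(elem, depth=0):
    """Return one indented line per element, in document order."""
    lines = ["    " * depth + elem.tag]
    for child in elem:
        lines.extend(tree_lines(child, depth + 1))
    return lines

print("\n".join(tree_lines(ET.fromstring(doc))))
```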
In mathematics, a tree of primitive Pythagorean triples is a data tree in which each node branches to three subsequent nodes, with the infinite set of all nodes giving all (and only) primitive Pythagorean triples without duplication. A Pythagorean triple is a set of three positive integers a, b, and c having the property that they can be respectively the two legs and the hypotenuse of a right triangle, thus satisfying the equation a² + b² = c²; the triple is said to be primitive if and only if the greatest common divisor of a, b, and c is one. The members of a primitive Pythagorean triple are also pairwise coprime. The set of all primitive Pythagorean triples has the structure of a rooted tree, specifically a ternary tree, in a natural way.
https://en.wikipedia.org/wiki/Tree_of_primitive_Pythagorean_triples
This was first discovered by B. Berggren in 1934. F. J. M. Barning showed that when any of the three matrices

A = \begin{bmatrix} 1 & -2 & 2 \\ 2 & -1 & 2 \\ 2 & -2 & 3 \end{bmatrix}, \quad B = \begin{bmatrix} 1 & 2 & 2 \\ 2 & 1 & 2 \\ 2 & 2 & 3 \end{bmatrix}, \quad C = \begin{bmatrix} -1 & 2 & 2 \\ -2 & 1 & 2 \\ -2 & 2 & 3 \end{bmatrix}

is multiplied on the right by a column vector whose components form a Pythagorean triple, then the result is another column vector whose components are a different Pythagorean triple. If the initial triple is primitive, then so is the one that results.
https://en.wikipedia.org/wiki/Tree_of_primitive_Pythagorean_triples
Thus each primitive Pythagorean triple has three "children". All primitive Pythagorean triples are descended in this way from the triple (3, 4, 5), and no primitive triple appears more than once. The result may be graphically represented as an infinite ternary tree with (3, 4, 5) at the root node (see classic tree at right). This tree also appeared in papers of A. Hall in 1970 and A. R. Kanga in 1990. In 2008 V. E. Firstov showed generally that only three such trichotomy trees exist and gave explicitly a tree similar to Berggren's but starting with initial node (4, 3, 5).
https://en.wikipedia.org/wiki/Tree_of_primitive_Pythagorean_triples
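The branching rule is easy to verify numerically; here is a minimal Python sketch of one level of Berggren's tree, using the matrices A, B, C given above:

```python
# One branching step of Berggren's ternary tree of primitive
# Pythagorean triples: multiply (a, b, c) by each of A, B, C.
from math import gcd

A = [[1, -2, 2], [2, -1, 2], [2, -2, 3]]
B = [[1,  2, 2], [2,  1, 2], [2,  2, 3]]
C = [[-1, 2, 2], [-2, 1, 2], [-2, 2, 3]]

def mul(M, t):
    """Matrix-vector product over the integers."""
    return tuple(sum(M[i][j] * t[j] for j in range(3)) for i in range(3))

def children(triple):
    return [mul(M, triple) for M in (A, B, C)]

root = (3, 4, 5)
for a, b, c in children(root):
    assert a * a + b * b == c * c      # still a Pythagorean triple
    assert gcd(gcd(a, b), c) == 1      # still primitive

print(children(root))  # [(5, 12, 13), (21, 20, 29), (15, 8, 17)]
```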
In mathematics, a triangle group is a group that can be realized geometrically by sequences of reflections across the sides of a triangle. The triangle can be an ordinary Euclidean triangle, a triangle on the sphere, or a hyperbolic triangle. Each triangle group is the symmetry group of a tiling of the Euclidean plane, the sphere, or the hyperbolic plane by congruent triangles called Möbius triangles, each one a fundamental domain for the action.
https://en.wikipedia.org/wiki/Triangle_group
In mathematics, a triangular matrix is a special kind of square matrix. A square matrix is called lower triangular if all the entries above the main diagonal are zero. Similarly, a square matrix is called upper triangular if all the entries below the main diagonal are zero. Because matrix equations with triangular matrices are easier to solve, they are very important in numerical analysis. By the LU decomposition algorithm, an invertible matrix may be written as the product of a lower triangular matrix L and an upper triangular matrix U if and only if all its leading principal minors are non-zero.
https://en.wikipedia.org/wiki/Triangular_form
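A minimal sketch of why triangular systems are easy to solve: forward substitution for a lower triangular matrix L (the example matrix and right-hand side are illustrative, not from the article):

```python
# Solve L x = b for lower triangular L by forward substitution:
# each x[i] depends only on the already-computed x[0..i-1].

def forward_substitute(L, b):
    n = len(b)
    x = [0.0] * n
    for i in range(n):
        s = sum(L[i][j] * x[j] for j in range(i))
        x[i] = (b[i] - s) / L[i][i]
    return x

L = [[2.0, 0.0, 0.0],
     [1.0, 3.0, 0.0],
     [4.0, 1.0, 5.0]]
b = [2.0, 7.0, 21.0]
print(forward_substitute(L, b))  # [1.0, 2.0, 3.0]
```

Solving an upper triangular system works the same way in reverse (back substitution), which is why an LU factorization reduces a general linear system to two cheap triangular solves.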
In mathematics, a triangulated category is a category with the additional structure of a "translation functor" and a class of "exact triangles". Prominent examples are the derived category of an abelian category, as well as the stable homotopy category. The exact triangles generalize the short exact sequences in an abelian category, as well as fiber sequences and cofiber sequences in topology. Much of homological algebra is clarified and extended by the language of triangulated categories, an important example being the theory of sheaf cohomology.
https://en.wikipedia.org/wiki/Exact_triangle
In the 1960s, a typical use of triangulated categories was to extend properties of sheaves on a space X to complexes of sheaves, viewed as objects of the derived category of sheaves on X. More recently, triangulated categories have become objects of interest in their own right. Many equivalences between triangulated categories of different origins have been proved or conjectured. For example, the homological mirror symmetry conjecture predicts that the derived category of a Calabi–Yau manifold is equivalent to the Fukaya category of its "mirror" symplectic manifold. The shift operator is a decategorified analogue of a triangulated category.
https://en.wikipedia.org/wiki/Exact_triangle
In mathematics, a tricategory is a kind of structure of category theory studied in higher-dimensional category theory. Whereas a weak 2-category is said to be a bicategory, a weak 3-category is said to be a tricategory (Gordon, Power & Street 1995; Baez & Dolan 1996; Leinster 1998). Tetracategories are the corresponding notion in dimension four. Dimensions beyond three are seen as increasingly significant to the relationship between knot theory and physics. John Baez, R. Gordon, A. J. Power and Ross Street have done much of the significant work with categories beyond bicategories thus far.
https://en.wikipedia.org/wiki/Tricategory
In mathematics, a trident curve (also trident of Newton or parabola of Descartes) is any member of the family of curves given by the equation

xy + ax³ + bx² + cx = d.

Trident curves are cubic plane curves with an ordinary double point in the real projective plane at x = 0, y = 1, z = 0; if we substitute x = x/z and y = 1/z into the equation of the trident curve, we get

ax³ + bx²z + cxz² + xz = dz³,

which has an ordinary double point at the origin. Trident curves are therefore rational plane algebraic curves of genus zero.
https://en.wikipedia.org/wiki/Trident_curve
In mathematics, a trinomial expansion is the expansion of a power of a sum of three terms into monomials. The expansion is given by

(a + b + c)^n = \sum_{i+j+k=n} \binom{n}{i,j,k} a^i b^j c^k,

where n is a nonnegative integer and the sum is taken over all combinations of nonnegative indices i, j, and k such that i + j + k = n. The trinomial coefficients are given by

\binom{n}{i,j,k} = \frac{n!}{i!\, j!\, k!}.

This formula is a special case of the multinomial formula for m = 3. The coefficients can be defined with a generalization of Pascal's triangle to three dimensions, called Pascal's pyramid or Pascal's tetrahedron.
https://en.wikipedia.org/wiki/Trinomial_expansion
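The coefficient formula can be checked numerically; a short Python sketch (the values of a, b, c, n are arbitrary illustrations):

```python
# Trinomial coefficients n! / (i! j! k!) and a numeric check that
# they reproduce (a + b + c)^n.
from math import factorial

def trinomial(n, i, j, k):
    assert i + j + k == n
    return factorial(n) // (factorial(i) * factorial(j) * factorial(k))

a, b, c, n = 2, 3, 5, 4
expansion = sum(
    trinomial(n, i, j, n - i - j) * a**i * b**j * c**(n - i - j)
    for i in range(n + 1)
    for j in range(n + 1 - i)
)
assert expansion == (a + b + c) ** n

print(trinomial(4, 2, 1, 1))  # 12
```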
In mathematics, a trivial group or zero group is a group consisting of a single element. All such groups are isomorphic, so one often speaks of the trivial group. The single element of the trivial group is the identity element, and so it is usually denoted as such: 0, 1, or e depending on the context. If the group operation is denoted ⋅ then it is defined by e ⋅ e = e.
https://en.wikipedia.org/wiki/Trivial_group
The similarly defined trivial monoid is also a group, since its only element is its own inverse, and is hence the same as the trivial group. The trivial group is distinct from the empty set, which has no elements, hence lacks an identity element, and so cannot be a group.
https://en.wikipedia.org/wiki/Trivial_group
In mathematics, a trivial semigroup (a semigroup with one element) is a semigroup for which the cardinality of the underlying set is one. The number of distinct nonisomorphic semigroups with one element is one. If S = { a } is a semigroup with one element, then its Cayley table has the single entry a ⋅ a = a. The only element in S is the zero element 0 of S and is also the identity element 1 of S. However, not all semigroup theorists consider the unique element in a semigroup with one element to be the zero element of the semigroup; they define zero elements only in semigroups having at least two elements. In spite of its extreme triviality, the semigroup with one element is important in many situations.
https://en.wikipedia.org/wiki/Trivial_semigroup
It is the starting point for understanding the structure of semigroups. It serves as a counterexample in illuminating many situations. For example, the semigroup with one element is the only semigroup in which 0 = 1, that is, the zero element and the identity element are equal. Further, if S is a semigroup with one element, the semigroup obtained by adjoining an identity element to S is isomorphic to the semigroup obtained by adjoining a zero element to S. The semigroup with one element is also a group. In the language of category theory, any semigroup with one element is a terminal object in the category of semigroups.
https://en.wikipedia.org/wiki/Trivial_semigroup
In mathematics, a tube domain is a generalization of the notion of a vertical strip (or half-plane) in the complex plane to several complex variables. A strip can be thought of as the collection of complex numbers whose real part lie in a given subset of the real line and whose imaginary part is unconstrained; likewise, a tube is the set of complex vectors whose real part is in some given collection of real vectors, and whose imaginary part is unconstrained. Tube domains are domains of the Laplace transform of a function of several real variables (see multidimensional Laplace transform). Hardy spaces on tubes can be defined in a manner in which a version of the Paley–Wiener theorem from one variable continues to hold, and characterizes the elements of Hardy spaces as the Laplace transforms of functions with appropriate integrability properties.
https://en.wikipedia.org/wiki/Tube_domain
Tubes over convex sets are domains of holomorphy. The Hardy spaces on tubes over convex cones have an especially rich structure, so that precise results are known concerning the boundary values of Hp functions.
https://en.wikipedia.org/wiki/Tube_domain
In mathematical physics, the future tube is the tube domain associated to the interior of the past null cone in Minkowski space, and has applications in relativity theory and quantum gravity. Certain tubes over cones support a Bergman metric in terms of which they become bounded symmetric domains. One of these is the Siegel half-space which is fundamental in arithmetic.
https://en.wikipedia.org/wiki/Tube_domain
In mathematics, a tubular neighborhood of a submanifold of a smooth manifold is an open set around it resembling the normal bundle. The idea behind a tubular neighborhood can be explained in a simple example. Consider a smooth curve in the plane without self-intersections. On each point on the curve draw a line perpendicular to the curve.
https://en.wikipedia.org/wiki/Tubular_neighborhood
Unless the curve is straight, these lines will intersect among themselves in a rather complicated fashion. However, if one looks only in a narrow band around the curve, the portions of the lines in that band will not intersect, and will cover the entire band without gaps. This band is a tubular neighborhood.
https://en.wikipedia.org/wiki/Tubular_neighborhood
In general, let S be a submanifold of a manifold M, and let N be the normal bundle of S in M. Here S plays the role of the curve and M the role of the plane containing the curve. Consider the natural map i : N₀ → S which establishes a bijective correspondence between the zero section N₀ of N and the submanifold S of M. An extension j of this map to the entire normal bundle N with values in M, such that j(N) is an open set in M and j is a homeomorphism between N and j(N), is called a tubular neighbourhood. Often one calls the open set T = j(N), rather than j itself, a tubular neighbourhood of S; it is then implicitly assumed that the homeomorphism j mapping N to T exists.
https://en.wikipedia.org/wiki/Tubular_neighborhood
In mathematics, a tuple is a finite sequence or ordered list of numbers or, more generally, mathematical objects, which are called the elements of the tuple. An n-tuple is a tuple of n elements, where n is a non-negative integer. There is only one 0-tuple, called the empty tuple. A 1-tuple and a 2-tuple are commonly called respectively a singleton and an ordered pair.
https://en.wikipedia.org/wiki/Empty_tuple
Tuples may be formally defined from ordered pairs by recursion: an n-tuple can be identified with the ordered pair of its first (n − 1) elements and its nth element. Tuples are usually written by listing the elements within parentheses "( )", separated by a comma and a space; for example, (2, 7, 4, 1, 7) denotes a 5-tuple. Sometimes other symbols are used to surround the elements, such as square brackets "[ ]" or angle brackets "⟨ ⟩".
https://en.wikipedia.org/wiki/Empty_tuple
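The recursive identification of tuples with ordered pairs can be sketched directly; the helper name to_pairs is ours, for illustration:

```python
# Identify an n-tuple with the ordered pair of its first (n - 1)
# elements and its nth element, recursively; a 1-tuple is identified
# here with its single element.

def to_pairs(t):
    if len(t) == 1:
        return t[0]
    return (to_pairs(t[:-1]), t[-1])

print(to_pairs((2, 7, 4, 1, 7)))  # ((((2, 7), 4), 1), 7)
```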
Braces "{ }" are used to specify arrays in some programming languages but not in mathematical expressions, as they are the standard notation for sets. The term tuple can often occur when discussing other mathematical objects, such as vectors. In computer science, tuples come in many forms.
https://en.wikipedia.org/wiki/Empty_tuple
Most typed functional programming languages implement tuples directly as product types, tightly associated with algebraic data types, pattern matching, and destructuring assignment. Many programming languages offer an alternative to tuples, known as record types, featuring unordered elements accessed by label. A few programming languages combine ordered tuple product types and unordered record types into a single construct, as in C structs and Haskell records. Relational databases may formally identify their rows (records) as tuples. Tuples also occur in relational algebra; when programming the semantic web with the Resource Description Framework (RDF); in linguistics; and in philosophy.
https://en.wikipedia.org/wiki/Empty_tuple
In mathematics, a twisted cubic is a smooth, rational curve C of degree three in projective 3-space P3. It is a fundamental example of a skew curve. It is essentially unique up to projective transformation, and is therefore often called the twisted cubic. In algebraic geometry, the twisted cubic is a simple example of a projective variety that is not linear or a hypersurface, in fact not a complete intersection. It is the three-dimensional case of the rational normal curve, and is the image of a Veronese map of degree three on the projective line.
https://en.wikipedia.org/wiki/Twisted_cubic
In mathematics, a twisted polynomial is a polynomial over a field of characteristic p in the variable τ representing the Frobenius map x ↦ x^p. In contrast to normal polynomials, multiplication of these polynomials is not commutative, but satisfies the commutation rule τx = x^p τ for all x in the base field. Over an infinite field, the twisted polynomial ring is isomorphic to the ring of additive polynomials, but where multiplication on the latter is given by composition rather than usual multiplication. However, it is often easier to compute in the twisted polynomial ring; this can be applied especially in the theory of Drinfeld modules.
https://en.wikipedia.org/wiki/Noncommutative_polynomials
In mathematics, a twisted sheaf is a variant of a coherent sheaf. Precisely, it is specified by: an open covering U_i in the étale topology, coherent sheaves F_i over U_i, and a Čech 2-cocycle θ on the covering U_i, as well as isomorphisms

g_{ij} : F_j|_{U_{ij}} → F_i|_{U_{ij}}

satisfying

g_{ii} = id_{F_i},  g_{ij} = g_{ji}^{−1},  g_{ij} ∘ g_{jk} ∘ g_{ki} = θ_{ijk} id_{F_i}.

The notion of twisted sheaves was introduced by Jean Giraud. The above definition, due to Căldăraru, is down-to-earth but equivalent to a more sophisticated definition in terms of gerbes; see § 2.1.3 of (Lieblich 2007).
https://en.wikipedia.org/wiki/Twisted_sheaf
In mathematics, a two-graph is a set of (unordered) triples chosen from a finite vertex set X, such that every (unordered) quadruple from X contains an even number of triples of the two-graph. A regular two-graph has the property that every pair of vertices lies in the same number of triples of the two-graph. Two-graphs have been studied because of their connection with equiangular lines and, for regular two-graphs, strongly regular graphs, and also finite groups because many regular two-graphs have interesting automorphism groups. A two-graph is not a graph and should not be confused with other objects called 2-graphs in graph theory, such as 2-regular graphs.
https://en.wikipedia.org/wiki/Two-graph
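The defining parity condition, that every quadruple of vertices contains an even number of triples, can be checked by brute force; the example triples below are ours, for illustration:

```python
# Check the two-graph axiom: every 4-subset of the vertex set must
# contain an even number (0, 2, or 4) of the chosen triples.
from itertools import combinations

def is_two_graph(vertices, triples):
    tset = {frozenset(t) for t in triples}
    for quad in combinations(vertices, 4):
        count = sum(frozenset(t) in tset for t in combinations(quad, 3))
        if count % 2 != 0:
            return False
    return True

# The empty triple set and the set of all triples on 4 vertices are
# two-graphs; a single triple is not (its quadruple contains 1 triple).
assert is_two_graph(range(4), [])
assert is_two_graph(range(4), list(combinations(range(4), 3)))
assert not is_two_graph(range(4), [(0, 1, 2)])
```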
In mathematics, a unary function is a function that takes one argument. A unary operator is a unary function whose range coincides with its domain; in general, a unary function's domain may or may not coincide with its range.
https://en.wikipedia.org/wiki/Unary_function
In mathematics, a unary operation is an operation with only one operand, i.e. a single input. This is in contrast to binary operations, which use two operands. An example is any function f: A → A, where A is a set.
https://en.wikipedia.org/wiki/Unary_functional_symbol
The function f is a unary operation on A. Common notations are prefix notation (e.g. ¬, −), postfix notation (e.g. the factorial n!), functional notation (e.g. sin x or sin(x)), and superscripts (e.g. the transpose A^T). Other notations exist as well; for example, in the case of the square root, a horizontal bar extending the square root sign over the argument can indicate the extent of the argument.
https://en.wikipedia.org/wiki/Unary_functional_symbol
In mathematics, a unicoherent space is a topological space X that is connected and in which the following property holds: for any closed, connected A, B ⊂ X with X = A ∪ B, the intersection A ∩ B is connected. For example, any closed interval on the real line is unicoherent, but a circle is not. If a unicoherent space is more strongly hereditarily unicoherent (meaning that every subcontinuum is unicoherent) and arcwise connected, then it is called a dendroid. If in addition it is locally connected, then it is called a dendrite. The Phragmen–Brouwer theorem states that, for locally connected spaces, unicoherence is equivalent to a separation property of the closed sets of the space.
https://en.wikipedia.org/wiki/Unicoherent_space
In mathematics, a uniform matroid is a matroid in which the independent sets are exactly the sets containing at most r elements, for some fixed integer r. An alternative definition is that every permutation of the elements is a symmetry.
https://en.wikipedia.org/wiki/Uniform_matroid
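The independence condition is a one-liner; this minimal sketch (the notation U(r, n) for the uniform matroid of rank r on n elements is standard, the example values are ours) makes it concrete:

```python
# Uniform matroid U(r, n): a subset of an n-element ground set is
# independent exactly when it has at most r elements.

def is_independent(subset, r):
    return len(set(subset)) <= r

ground = range(6)  # U(3, 6): rank 3 on six elements
assert is_independent({0, 1, 2}, 3)       # any 3-set is independent
assert not is_independent({0, 1, 2, 3}, 3)  # any 4-set is dependent
```

Because independence depends only on cardinality, any permutation of the ground set preserves independent sets, matching the alternative definition above.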
In mathematics, a uniform tree is a locally finite tree which is the universal cover of a finite graph. Equivalently, the full automorphism group G=Aut(X) of the tree, which is a locally compact topological group, is unimodular and G\X is finite. Also equivalent is the existence of a uniform X-lattice in G.
https://en.wikipedia.org/wiki/Uniform_tree
In mathematics, a uniformly bounded family of functions is a family of bounded functions that can all be bounded by the same constant; that is, there is a single constant larger than or equal to the absolute value of every value of every function in the family.
https://en.wikipedia.org/wiki/Uniformly_bounded
In mathematics, a uniformly bounded representation T of a locally compact group G on a Hilbert space H is a homomorphism into the bounded invertible operators which is continuous for the strong operator topology, and such that sup_{g ∈ G} ‖T_g‖_{B(H)} is finite. In 1947 Béla Szőkefalvi-Nagy established that any uniformly bounded representation of the integers or the real numbers is unitarizable, i.e. conjugate by an invertible operator to a unitary representation. For the integers this gives a criterion for an invertible operator to be similar to a unitary operator: the operator norms of all the positive and negative powers must be uniformly bounded.
https://en.wikipedia.org/wiki/Uniformly_bounded_representation
The result on unitarizability of uniformly bounded representations was extended in 1950 by Dixmier, Day and Nakamura-Takeda to all locally compact amenable groups, following essentially the method of proof of Sz-Nagy. The result is known to fail for non-amenable groups such as SL(2,R) and the free group on two generators. Dixmier (1950) conjectured that a locally compact group is amenable if and only if every uniformly bounded representation is unitarizable.
https://en.wikipedia.org/wiki/Uniformly_bounded_representation
In mathematics, a uniformly disconnected space is a metric space (X, d) for which there exists λ > 0 such that no pair of distinct points x, y ∈ X can be connected by a λ-chain. A λ-chain between x and y is a sequence of points x = x₀, x₁, …, x_n = y in X such that d(x_i, x_{i+1}) ≤ λ d(x, y) for all i ∈ {0, …, n − 1}.
https://en.wikipedia.org/wiki/Uniformly_disconnected_space
In mathematics, a uniformly smooth space is a normed vector space X satisfying the property that for every ε > 0 there exists δ > 0 such that if x, y ∈ X with ‖x‖ = 1 and ‖y‖ ≤ δ, then

‖x + y‖ + ‖x − y‖ ≤ 2 + ε‖y‖.

The modulus of smoothness of a normed space X is the function ρ_X defined for every t > 0 by the formula

ρ_X(t) = sup { (‖x + y‖ + ‖x − y‖)/2 − 1 : ‖x‖ = 1, ‖y‖ = t }.

The triangle inequality yields that ρ_X(t) ≤ t. The normed space X is uniformly smooth if and only if ρ_X(t)/t tends to 0 as t tends to 0.
https://en.wikipedia.org/wiki/Uniformly_smooth_space
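As a numerical sanity check (our setup, not from the article), one can estimate the modulus of smoothness of the Euclidean plane, a Hilbert space, where the exact value is ρ(t) = √(1 + t²) − 1:

```python
# Estimate rho_X(t) for X the Euclidean plane. By rotation invariance
# we may fix x = (1, 0) and scan directions of y with ||y|| = t;
# the supremum is attained when y is perpendicular to x.
import math

def modulus_estimate(t, steps=2000):
    best = 0.0
    for b in range(steps):
        ang = 2 * math.pi * b / steps
        y = (t * math.cos(ang), t * math.sin(ang))
        plus = math.hypot(1 + y[0], y[1])
        minus = math.hypot(1 - y[0], y[1])
        best = max(best, (plus + minus) / 2 - 1)
    return best

t = 0.5
print(modulus_estimate(t))          # close to sqrt(1 + t^2) - 1
print(math.sqrt(1 + t * t) - 1)
```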
In mathematics, a unimodular matrix M is a square integer matrix having determinant +1 or −1. Equivalently, it is an integer matrix that is invertible over the integers: there is an integer matrix N that is its inverse (these are equivalent under Cramer's rule). Thus every equation Mx = b, where M and b both have integer components and M is unimodular, has an integer solution. The n × n unimodular matrices form a group called the n × n general linear group over ℤ, which is denoted GL_n(ℤ).
https://en.wikipedia.org/wiki/Totally_unimodular
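The equivalence of "determinant ±1" and "integer inverse" is easy to verify for a small example; this sketch (our example matrix, 2 × 2 for brevity) uses exact rational arithmetic:

```python
# A unimodular integer matrix has determinant +1 or -1, and its
# inverse (adjugate divided by the determinant) is again integral.
from fractions import Fraction

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def inv2(M):
    d = Fraction(det2(M))
    return [[ M[1][1] / d, -M[0][1] / d],
            [-M[1][0] / d,  M[0][0] / d]]

M = [[2, 3], [1, 2]]  # det = 1, so M is unimodular
assert det2(M) in (1, -1)

N = inv2(M)
assert all(x.denominator == 1 for row in N for x in row)  # integer inverse
print(N)  # the integer matrix [[2, -3], [-1, 2]] as Fractions
```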
In mathematics, a unimodular polynomial matrix is a square polynomial matrix whose inverse exists and is itself a polynomial matrix. Equivalently, a polynomial matrix A is unimodular if its determinant det(A) is a nonzero constant.
https://en.wikipedia.org/wiki/Unimodular_polynomial_matrix
In mathematics, a unipotent element r of a ring R is one such that r − 1 is a nilpotent element; in other words, (r − 1)^n is zero for some n. In particular, a square matrix M is a unipotent matrix if and only if its characteristic polynomial P(t) is a power of t − 1. Thus all the eigenvalues of a unipotent matrix are 1. The term quasi-unipotent means that some power is unipotent, for example for a diagonalizable matrix with eigenvalues that are all roots of unity. In the theory of algebraic groups, a group element is unipotent if it acts unipotently in a certain natural group representation. A unipotent affine algebraic group is then a group with all elements unipotent.
https://en.wikipedia.org/wiki/Unipotent_group
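For an n × n matrix M, unipotence amounts to (M − I)^n = 0, which can be checked directly; a minimal sketch with an illustrative example matrix:

```python
# Check that a matrix M is unipotent by verifying that N = M - I
# is nilpotent, i.e. N^n = 0 for an n x n matrix.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_unipotent(M):
    n = len(M)
    N = [[M[i][j] - (1 if i == j else 0) for j in range(n)] for i in range(n)]
    P = N
    for _ in range(n - 1):
        P = mat_mul(P, N)  # after the loop, P = N^n
    return all(x == 0 for row in P for x in row)

M = [[1, 2, 3],
     [0, 1, 4],
     [0, 0, 1]]  # upper triangular with 1s on the diagonal
assert is_unipotent(M)
assert not is_unipotent([[2, 0], [0, 1]])  # eigenvalue 2, not unipotent
```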
In mathematics, a unipotent representation of a reductive group is a representation that has some similarities with unipotent conjugacy classes of groups. Informally, Langlands philosophy suggests that there should be a correspondence between representations of a reductive group and conjugacy classes of a Langlands dual group, and the unipotent representations should be roughly the ones corresponding to unipotent classes in the dual group. Unipotent representations are supposed to be the basic "building blocks" out of which one can construct all other representations in the following sense. Unipotent representations should form a small (preferably finite) set of irreducible representations for each reductive group, such that all irreducible representations can be obtained from unipotent representations of possibly smaller groups by some sort of systematic process, such as (cohomological or parabolic) induction.
https://en.wikipedia.org/wiki/Unipotent_representation
In mathematics, a unique factorization domain (UFD) (also sometimes called a factorial ring following the terminology of Bourbaki) is a ring in which a statement analogous to the fundamental theorem of arithmetic holds. Specifically, a UFD is an integral domain (a nontrivial commutative ring in which the product of any two non-zero elements is non-zero) in which every non-zero non-unit element can be written as a product of prime elements (or irreducible elements), uniquely up to order and units. Important examples of UFDs are the integers and polynomial rings in one or more variables with coefficients coming from the integers or from a field. Unique factorization domains appear in the following chain of class inclusions: rngs ⊃ rings ⊃ commutative rings ⊃ integral domains ⊃ integrally closed domains ⊃ GCD domains ⊃ unique factorization domains ⊃ principal ideal domains ⊃ Euclidean domains ⊃ fields ⊃ algebraically closed fields
https://en.wikipedia.org/wiki/Unique_factorisation
In mathematics, a unique sink orientation is an orientation of the edges of a polytope such that, in every face of the polytope (including the whole polytope as one of the faces), there is exactly one vertex for which all adjoining edges are oriented inward (i.e. towards that vertex). If a polytope is given together with a linear objective function, and edges are oriented from vertices with smaller objective function values to vertices with larger objective values, the result is a unique sink orientation. Thus, unique sink orientations can be used to model linear programs as well as certain nonlinear programs such as the smallest circle problem.
https://en.wikipedia.org/wiki/Unique_sink_orientation
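As a small illustrative sketch (not from the source), one can orient the edges of the 3-dimensional cube by a generic linear objective, as described above, and verify that every face, including the whole cube, has exactly one sink:

```python
from itertools import product, combinations

def unique_sink_orientation_check(c):
    """Orient the edges of the 3-cube by increasing objective c.x and
    verify that every face (subcube) has exactly one sink, i.e. one
    vertex whose face-neighbours all have smaller objective value."""
    n = len(c)
    obj = lambda v: sum(ci * vi for ci, vi in zip(c, v))
    # faces of the cube: fix a subset of coordinates to given values
    for k in range(n + 1):
        for fixed in combinations(range(n), k):
            free = [i for i in range(n) if i not in fixed]
            for vals in product((0, 1), repeat=k):
                face = []
                for bits in product((0, 1), repeat=len(free)):
                    v = [0] * n
                    for i, x in zip(fixed, vals):
                        v[i] = x
                    for i, x in zip(free, bits):
                        v[i] = x
                    face.append(tuple(v))
                # a sink has all its face-neighbours pointing inward
                sinks = [v for v in face
                         if all(obj(w) < obj(v) for w in face
                                if sum(a != b for a, b in zip(v, w)) == 1)]
                if len(sinks) != 1:
                    return False
    return True
```

With a generic objective such as c = (1, 2, 4) every vertex gets a distinct value, so each face has a unique maximizer and hence a unique sink; a degenerate objective like (0, 0, 0) fails the check.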
In mathematics, a uniqueness theorem, also called a unicity theorem, is a theorem asserting the uniqueness of an object satisfying certain conditions, or the equivalence of all objects satisfying the said conditions. Examples of uniqueness theorems include: Alexandrov's uniqueness theorem for three-dimensional polyhedra; the black hole uniqueness theorem; the Cauchy–Kowalevski theorem, the main local existence and uniqueness theorem for analytic partial differential equations associated with Cauchy initial value problems; the Cauchy–Kowalevski–Kashiwara theorem, a wide generalization of the Cauchy–Kowalevski theorem for systems of linear partial differential equations with analytic coefficients; and the division theorem, the uniqueness of quotient and remainder under Euclidean division.
https://en.wikipedia.org/wiki/Uniqueness_theorem
Further examples: the fundamental theorem of arithmetic, the uniqueness of prime factorization; Holmgren's uniqueness theorem for linear partial differential equations with real analytic coefficients; the Picard–Lindelöf theorem, the uniqueness of solutions to first-order differential equations; the Thompson uniqueness theorem in finite group theory; the uniqueness theorem for Poisson's equation; the electromagnetism uniqueness theorem for the solution of Maxwell's equations; and the uniqueness case in finite group theory. The word unique is sometimes replaced by essentially unique, whenever one wants to stress that the uniqueness refers only to the underlying structure, whereas the form may vary in all ways that do not affect the mathematical content. A uniqueness theorem (or its proof) is, at least within the mathematics of differential equations, often combined with an existence theorem (or its proof) into a combined existence and uniqueness theorem (e.g., existence and uniqueness of the solution to first-order differential equations with a boundary condition).
https://en.wikipedia.org/wiki/Uniqueness_theorem
In mathematics, a unistochastic matrix (also called unitary-stochastic) is a doubly stochastic matrix whose entries are the squares of the absolute values of the entries of some unitary matrix. A square matrix B of size n is doubly stochastic (or bistochastic) if all its entries are non-negative real numbers and each of its rows and columns sums to 1. It is unistochastic if there exists a unitary matrix U such that B_{ij} = |U_{ij}|^2 for i, j = 1, …, n. This definition is analogous to that for an orthostochastic matrix, which is a doubly stochastic matrix whose entries are the squares of the entries in some orthogonal matrix.
https://en.wikipedia.org/wiki/Unistochastic_matrix
Since all orthogonal matrices are necessarily unitary matrices, all orthostochastic matrices are also unistochastic. The converse, however, is not true. First, all 2-by-2 doubly stochastic matrices are both unistochastic and orthostochastic, but for larger n this is not the case.
https://en.wikipedia.org/wiki/Unistochastic_matrix
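The 2-by-2 case can be made concrete with a short sketch (an illustration, not from the source): every 2-by-2 doubly stochastic matrix has the form [[t, 1−t], [1−t, t]], and squaring the entries of an explicit orthogonal matrix recovers it, so it is orthostochastic (and hence unistochastic).

```python
import math

def unitary_for_2x2_bistochastic(t):
    """Every 2x2 doubly stochastic matrix [[t, 1-t], [1-t, t]]
    (0 <= t <= 1) is orthostochastic, hence unistochastic:
    the orthogonal matrix U below satisfies B_ij = U_ij**2."""
    s, c = math.sqrt(t), math.sqrt(1 - t)
    U = [[s, c], [c, -s]]
    B = [[U[i][j] ** 2 for j in range(2)] for i in range(2)]
    return U, B
```

The returned U has orthonormal rows, and B reproduces the original doubly stochastic matrix exactly.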
For example, take n = 3 and consider the following doubly stochastic matrix: B = \frac{1}{2}\begin{bmatrix}1&1&0\\0&1&1\\1&0&1\end{bmatrix}. This matrix is not unistochastic, since any two vectors with moduli equal to the square root of the entries of two columns (or rows) of B cannot be made orthogonal by a suitable choice of phases.
https://en.wikipedia.org/wiki/Unistochastic_matrix
In mathematics, a unit circle is a circle of unit radius, that is, a radius of 1. Frequently, especially in trigonometry, the unit circle is the circle of radius 1 centered at the origin (0, 0) in the Cartesian coordinate system in the Euclidean plane. In topology, it is often denoted as S¹ because it is a one-dimensional unit n-sphere. If (x, y) is a point on the unit circle's circumference, then |x| and |y| are the lengths of the legs of a right triangle whose hypotenuse has length 1.
https://en.wikipedia.org/wiki/Unit_circle
Thus, by the Pythagorean theorem, x and y satisfy the equation x² + y² = 1. Since x² = (−x)² for all x, and since the reflection of any point on the unit circle about the x- or y-axis is also on the unit circle, the above equation holds for all points (x, y) on the unit circle, not only those in the first quadrant. The interior of the unit circle is called the open unit disk, while the interior of the unit circle combined with the unit circle itself is called the closed unit disk. One may also use other notions of "distance" to define other "unit circles", such as the Riemannian circle; see the article on mathematical norms for additional examples.
https://en.wikipedia.org/wiki/Unit_circle
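A one-line numerical check (an illustrative sketch using the standard parametrization): the points (cos t, sin t) satisfy the circle equation for every t, by the Pythagorean identity.

```python
import math

def on_unit_circle(t, tol=1e-12):
    """The point (cos t, sin t) lies on the unit circle:
    x**2 + y**2 == 1 up to floating-point rounding."""
    x, y = math.cos(t), math.sin(t)
    return abs(x * x + y * y - 1.0) < tol
```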
In mathematics, a unit sphere is simply a sphere of radius one around a given center. More generally, it is the set of points of distance 1 from a fixed central point, where different norms can be used as general notions of "distance". A unit ball is the closed set of points of distance less than or equal to 1 from a fixed central point. Usually the center is at the origin of the space, so one speaks of "the unit ball" or "the unit sphere".
https://en.wikipedia.org/wiki/Unit_ball
Special cases are the unit circle and the unit disk. The importance of the unit sphere is that any sphere can be transformed to a unit sphere by a combination of translation and scaling. In this way the properties of spheres in general can be reduced to the study of the unit sphere.
https://en.wikipedia.org/wiki/Unit_ball
In mathematics, a unit square is a square whose sides have length 1. Often, the unit square refers specifically to the square in the Cartesian plane with corners at the four points (0, 0), (1, 0), (0, 1), and (1, 1).
https://en.wikipedia.org/wiki/Unit_square
In mathematics, a unit vector in a normed vector space is a vector (often a spatial vector) of length 1. A unit vector is often denoted by a lowercase letter with a circumflex, or "hat", as in v̂ (pronounced "v-hat"). The term direction vector, commonly denoted as d, is used to describe a unit vector being used to represent spatial direction and relative direction.
https://en.wikipedia.org/wiki/Unit_vector
2D spatial directions are numerically equivalent to points on the unit circle, and spatial directions in 3D are equivalent to points on the unit sphere. The normalized vector û of a non-zero vector u is the unit vector in the direction of u, i.e., û = u / ‖u‖, where ‖u‖ is the norm (or length) of u. The term normalized vector is sometimes used as a synonym for unit vector. Unit vectors are often chosen to form the basis of a vector space, and every vector in the space may be written as a linear combination of unit vectors.
https://en.wikipedia.org/wiki/Unit_vector
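The normalization formula translates directly into code; this short sketch (illustrative, not from the source) computes û = u / ‖u‖ using the Euclidean norm:

```python
import math

def normalize(u):
    """Return the unit vector u / ||u|| in the direction of u."""
    norm = math.sqrt(sum(x * x for x in u))
    if norm == 0:
        raise ValueError("the zero vector has no direction")
    return [x / norm for x in u]
```

For example, normalize([3, 4]) gives [0.6, 0.8], a vector of length 1 pointing in the same direction as (3, 4).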
In mathematics, a unitary representation of a group G is a linear representation π of G on a complex Hilbert space V such that π(g) is a unitary operator for every g ∈ G. The general theory is well-developed in the case that G is a locally compact (Hausdorff) topological group and the representations are strongly continuous. The theory has been widely applied in quantum mechanics since the 1920s, particularly influenced by Hermann Weyl's 1928 book Gruppentheorie und Quantenmechanik. One of the pioneers in constructing a general theory of unitary representations, for any group G rather than just for particular groups useful in applications, was George Mackey.
https://en.wikipedia.org/wiki/Unitary_representation
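As a minimal sketch (one-dimensional, so the Hilbert space is just ℂ and unitary operators are modulus-1 complex numbers): the map k ↦ exp(2πik/n) is a unitary representation of the cyclic group ℤ/nℤ, since it is a homomorphism and every value has modulus 1.

```python
import cmath, math

def rep(n, k):
    """One-dimensional unitary representation of Z/nZ on C:
    pi(k) = exp(2*pi*i*k/n), a modulus-1 complex number."""
    return cmath.exp(2j * math.pi * k / n)
```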
In mathematics, a unitary spider diagram adds existential points to an Euler or a Venn diagram. The points indicate the existence of an attribute described by the intersection of contours in the Euler diagram. These points may be joined, forming a shape like a spider. Joined points represent an "or" condition, also known as a logical disjunction. A spider diagram is a boolean expression involving unitary spider diagrams and the logical symbols ∧, ∨, ¬. For example, it may consist of the conjunction of two spider diagrams, the disjunction of two spider diagrams, or the negation of a spider diagram.
https://en.wikipedia.org/wiki/Spider_diagram
In mathematics, a unitary transformation is a transformation that preserves the inner product: the inner product of two vectors before the transformation is equal to their inner product after the transformation.
https://en.wikipedia.org/wiki/Unitary_transformation
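For instance (an illustrative sketch), a rotation of the Euclidean plane is an orthogonal, hence unitary, transformation, and one can check numerically that it preserves the dot product:

```python
import math

def rotate(theta, v):
    """Rotate the 2D vector v by angle theta (an orthogonal,
    hence unitary, transformation of the plane)."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

def dot(u, v):
    """Standard inner product on R^2."""
    return u[0] * v[0] + u[1] * v[1]
```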
In mathematics, a univariate object is an expression, equation, function or polynomial involving only one variable. Objects involving more than one variable are multivariate. In some cases the distinction between the univariate and multivariate cases is fundamental; for example, the fundamental theorem of algebra and Euclid's algorithm for polynomials are fundamental properties of univariate polynomials that cannot be generalized to multivariate polynomials.
https://en.wikipedia.org/wiki/Univariate
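Euclid's algorithm for univariate polynomials, mentioned above, can be sketched directly (coefficients listed from highest to lowest degree; the helper names and the zero tolerance are illustrative choices, not from the source):

```python
def poly_divmod(num, den):
    """Polynomial long division; coefficients from highest degree down."""
    num = num[:]
    q = [0.0] * (len(num) - len(den) + 1)
    for i in range(len(q)):
        coef = num[i] / den[0]
        q[i] = coef
        for j, d in enumerate(den):
            num[i + j] -= coef * d
    rem = num[len(q):]
    while rem and abs(rem[0]) < 1e-9:   # strip (near-)zero leading coeffs
        rem.pop(0)
    return q, rem

def poly_gcd(a, b):
    """Euclid's algorithm for univariate polynomials over the reals."""
    while b:
        _, r = poly_divmod(a, b)
        a, b = b, r
    return [c / a[0] for c in a]        # normalize to a monic polynomial
```

For example, poly_gcd([1, 0, -1], [1, -2, 1]) returns [1.0, -1.0], the monic gcd x − 1 of x² − 1 and (x − 1)². No such division-based algorithm exists for multivariate polynomials, which is exactly the distinction the paragraph above draws.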
In statistics, a univariate distribution characterizes one variable, although it can be applied in other ways as well. For example, univariate data are composed of a single scalar component. In time series analysis, the whole time series is the "variable": a univariate time series is the series of values over time of a single quantity.
https://en.wikipedia.org/wiki/Univariate
Correspondingly, a "multivariate time series" characterizes the changing values over time of several quantities. In some cases, the terminology is ambiguous, since the values within a univariate time series may be treated using certain types of multivariate statistical analyses and may be represented using multivariate distributions. In addition to the question of scaling, a criterion (variable) in univariate statistics can be described by two important kinds of measures (also called key figures or parameters): location and variation. Measures of location (e.g. mode, median, arithmetic mean) describe where the data are centered, while measures of variation (e.g. range, interquartile range, standard deviation) describe how widely the data are scattered.
https://en.wikipedia.org/wiki/Univariate
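These measures are all available in Python's standard statistics module; a small sketch on made-up data:

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]

# measures of location: where the data are centered
location = {
    "mode": statistics.mode(data),      # 4
    "median": statistics.median(data),  # 4.5
    "mean": statistics.mean(data),      # 5
}

# measures of variation: how widely the data are scattered
variation = {
    "range": max(data) - min(data),     # 7
    "stdev": statistics.pstdev(data),   # 2.0 (population standard deviation)
}
```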
In mathematics, a univariate polynomial of degree n with real or complex coefficients has n complex roots, if counted with their multiplicities. They form a multiset of n points in the complex plane. This article concerns the geometry of these points, that is the information about their localization in the complex plane that can be deduced from the degree and the coefficients of the polynomial. Some of these geometrical properties are related to a single polynomial, such as upper bounds on the absolute values of the roots, which define a disk containing all roots, or lower bounds on the distance between two roots.
https://en.wikipedia.org/wiki/Properties_of_polynomial_roots
Such bounds are widely used for root-finding algorithms for polynomials, either for tuning them or for analyzing their computational complexity. Some other properties are probabilistic, such as the expected number of real roots of a random polynomial of degree n with real coefficients, which is less than 1 + (2/π) ln(n) for n sufficiently large. In this article, the polynomial under consideration is always denoted p(x) = a_0 + a_1 x + ⋯ + a_n x^n, where a_0, …, a_n are real or complex numbers and a_n ≠ 0; thus n is the degree of the polynomial.
https://en.wikipedia.org/wiki/Properties_of_polynomial_roots
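One classical bound of the kind described, Cauchy's bound, is simple enough to state in code (a sketch; the coefficient ordering a_0, …, a_n follows the article's notation):

```python
def cauchy_root_bound(coeffs):
    """Cauchy's bound for p(x) = a_0 + a_1 x + ... + a_n x^n
    (coeffs = [a_0, ..., a_n], a_n != 0): every root z satisfies
    |z| <= 1 + max(|a_i / a_n| for i < n)."""
    *rest, an = coeffs
    return 1 + max(abs(a / an) for a in rest)
```

For p(x) = x² + x − 6 = (x − 2)(x + 3) the bound is 1 + 6 = 7, and indeed both roots have absolute value at most 7.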
In mathematics, a universal C*-algebra is a C*-algebra described in terms of generators and relations. In contrast to rings or algebras, where one can consider quotients by free rings to construct universal objects, C*-algebras must be realizable as algebras of bounded operators on a Hilbert space by the Gelfand-Naimark-Segal construction and the relations must prescribe a uniform bound on the norm of each generator. This means that depending on the generators and relations, a universal C*-algebra may not exist. In particular, free C*-algebras do not exist.
https://en.wikipedia.org/wiki/Universal_C*-algebra
In mathematics, a universal graph is an infinite graph that contains every finite (or at-most-countable) graph as an induced subgraph. A universal graph of this type was first constructed by Richard Rado and is now called the Rado graph or random graph. More recent work has focused on universal graphs for a graph family F: that is, an infinite graph belonging to F that contains all finite graphs in F. For instance, the Henson graphs are universal in this sense for the i-clique-free graphs. A universal graph for a family F of graphs can also refer to a member of a sequence of finite graphs that contains all graphs in F; for instance, every finite tree is a subgraph of a sufficiently large hypercube graph so a hypercube can be said to be a universal graph for trees.
https://en.wikipedia.org/wiki/Universal_graph
However it is not the smallest such graph: it is known that there is a universal graph for n-vertex trees with only n vertices and O(n log n) edges, and that this is optimal. A construction based on the planar separator theorem can be used to show that n-vertex planar graphs have universal graphs with O(n^{3/2}) edges, and that bounded-degree planar graphs have universal graphs with O(n log n) edges.
https://en.wikipedia.org/wiki/Universal_graph
It is also possible to construct universal graphs for planar graphs that have n^{1+o(1)} vertices. Sumner's conjecture states that tournaments are universal for polytrees, in the sense that every tournament with 2n − 2 vertices contains every polytree with n vertices as a subgraph. A family F of graphs has a universal graph of polynomial size, containing every n-vertex graph as an induced subgraph, if and only if it has an adjacency labelling scheme in which vertices may be labeled by O(log n)-bit bitstrings such that an algorithm can determine whether two vertices are adjacent by examining their labels. For, if a universal graph of this type exists, the vertices of any graph in F may be labeled by the identities of the corresponding vertices in the universal graph; conversely, if a labeling scheme exists, then a universal graph may be constructed having a vertex for every possible label. In older mathematical terminology, the phrase "universal graph" was sometimes used to denote a complete graph. The notion of universal graph has been adapted and used for solving mean payoff games.
https://en.wikipedia.org/wiki/Universal_graph
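The Rado graph has a well-known explicit presentation via the BIT predicate: the vertices are the natural numbers, with i adjacent to j (for i < j) exactly when the i-th bit of j is 1. The brute-force check below (an illustrative sketch, not from the source) confirms that every 3-vertex graph already occurs as an induced subgraph among the first 16 vertices:

```python
from itertools import combinations, permutations

def rado_adjacent(i, j):
    """BIT-predicate adjacency of the Rado graph on the naturals:
    i ~ j (for i < j) iff the i-th bit of j is set."""
    if i > j:
        i, j = j, i
    return i != j and (j >> i) & 1 == 1

def embeds_all_graphs_on(k, limit):
    """Check that every k-vertex graph occurs as an induced subgraph
    among the first `limit` vertices of the Rado graph."""
    pairs = list(combinations(range(k), 2))
    for mask in range(1 << len(pairs)):           # every k-vertex graph
        target = {p for b, p in enumerate(pairs) if mask >> b & 1}
        found = False
        for sub in permutations(range(limit), k):  # candidate embeddings
            if all(rado_adjacent(sub[a], sub[b]) == ((a, b) in target)
                   for a, b in pairs):
                found = True
                break
        if not found:
            return False
    return True
```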
In mathematics, a universal quadratic form is a quadratic form over a ring that represents every element of the ring. A non-singular form over a field which represents zero non-trivially is universal.
https://en.wikipedia.org/wiki/Universal_quadratic_form
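The classic example of a universal quadratic form over the non-negative integers is x² + y² + z² + w²: by Lagrange's four-square theorem it represents every non-negative integer. A brute-force search sketch (illustrative helper name):

```python
import math

def four_squares(n):
    """Return (x, y, z, w) with x**2 + y**2 + z**2 + w**2 == n;
    Lagrange's four-square theorem guarantees a solution exists
    for every n >= 0, i.e. the form is universal over the
    non-negative integers."""
    m = math.isqrt(n)
    squares = {i * i for i in range(m + 1)}
    for x in range(m + 1):
        for y in range(x, m + 1):
            for z in range(y, m + 1):
                r = n - x * x - y * y - z * z
                if r < 0:
                    break                      # r only shrinks as z grows
                if r in squares and r >= z * z:  # keep x <= y <= z <= w
                    return (x, y, z, math.isqrt(r))
    return None
```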
In mathematics, a universal space is a certain metric space that contains all metric spaces whose dimension is bounded by some fixed constant. A similar definition exists in topological dynamics.
https://en.wikipedia.org/wiki/Universal_space_(topology)
In mathematics, a vampire number or true vampire number is a composite natural number v, with an even number of digits n, that can be factored into two integers x and y each with n/2 digits and not both with trailing zeroes, where v contains all the digits from x and from y, in any order. x and y are called the fangs. As an example, 1260 is a vampire number because it can be expressed as 21 × 60 = 1260. Note that the digits of the factors 21 and 60 can be found, in some scrambled order, in 1260.
https://en.wikipedia.org/wiki/Wonders_of_Numbers
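The definition translates into a short search (a sketch, not from the source; fang candidates need only be checked up to √v, since fangs come in pairs):

```python
import math

def is_vampire(v):
    """True if v is a vampire number: v = x * y where x and y each
    have half as many digits as v, x and y do not both end in 0,
    and the digits of x and y together are a permutation of the
    digits of v."""
    s = str(v)
    if len(s) % 2:
        return False                      # needs an even digit count
    half = len(s) // 2
    digits = sorted(s)
    for x in range(10 ** (half - 1), math.isqrt(v) + 1):
        if v % x == 0:
            y = v // x
            if (len(str(y)) == half
                    and not (x % 10 == 0 and y % 10 == 0)
                    and sorted(str(x) + str(y)) == digits):
                return True
    return False
```

is_vampire(1260) and is_vampire(136948) are True, matching the examples in the text; 1023 has no such factorization.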
Similarly, 136,948 is a vampire number because 136,948 = 146 × 938. Vampire numbers first appeared in a 1994 post by Clifford A. Pickover to the Usenet group sci.math, and the article he later wrote was published in chapter 30 of his book Keys to Infinity. In addition to "vampire numbers", a term Pickover coined, he has coined the following terms in the area of mathematics: Leviathan number, factorion, Carotid–Kundalini function and fractal, batrachion, Juggler sequence, and Legion's number, among others. For characterizing noisy data, he has used Truchet tiles and noise spheres, the latter of which is a term he coined for a particular mapping, and visualization, of noisy data to spherical coordinates.
https://en.wikipedia.org/wiki/Wonders_of_Numbers
In 1990, he asked "Is There a Double Smoothly Undulating Integer?", and he computed "All Known Replicating Fibonacci Digits Less than One Billion". With his colleague John R. Hendricks, he was the first to compute the smallest perfect (nasik) magic tesseract. The "Pickover sequence" dealing with e and pi was named after him, as were the "Cliff random number generator" and the Pickover attractor, sometimes also referred to as the Clifford attractor.
https://en.wikipedia.org/wiki/Wonders_of_Numbers
In mathematics, a variable (from Latin variabilis, "changeable") is a symbol that represents a mathematical object. A variable may represent a number, a vector, a matrix, a function, the argument of a function, a set, or an element of a set. Algebraic computations with variables as if they were explicit numbers solve a range of problems in a single computation. For example, the quadratic formula solves any quadratic equation by substituting the numeric values of the coefficients of that equation for the variables that represent them in the quadratic formula. In mathematical logic, a variable is either a symbol representing an unspecified term of the theory (a meta-variable), or a basic object of the theory that is manipulated without referring to its possible intuitive interpretation.
https://en.wikipedia.org/wiki/Variable_(logics)
In mathematics, a variational inequality is an inequality involving a functional, which has to be solved for all possible values of a given variable, belonging usually to a convex set. The mathematical theory of variational inequalities was initially developed to deal with equilibrium problems, precisely the Signorini problem: in that model problem, the functional involved was obtained as the first variation of the involved potential energy. Therefore, it has a variational origin, recalled by the name of the general abstract problem. The applicability of the theory has since been expanded to include problems from economics, finance, optimization and game theory.
https://en.wikipedia.org/wiki/Variational_inequalities
In mathematics, a vector bundle is a topological construction that makes precise the idea of a family of vector spaces parameterized by another space X (for example X could be a topological space, a manifold, or an algebraic variety): to every point x of the space X we associate (or "attach") a vector space V(x) in such a way that these vector spaces fit together to form another space of the same kind as X (e.g. a topological space, manifold, or algebraic variety), which is then called a vector bundle over X. The simplest example is the case that the family of vector spaces is constant, i.e., there is a fixed vector space V such that V(x) = V for all x in X: in this case there is a copy of V for each x in X and these copies fit together to form the vector bundle X × V over X. Such vector bundles are said to be trivial. A more complicated (and prototypical) class of examples are the tangent bundles of smooth (or differentiable) manifolds: to every point of such a manifold we attach the tangent space to the manifold at that point.
https://en.wikipedia.org/wiki/Vector_bundle_morphism
Tangent bundles are not, in general, trivial bundles. For example, the tangent bundle of the sphere is non-trivial by the hairy ball theorem. In general, a manifold is said to be parallelizable if, and only if, its tangent bundle is trivial.
https://en.wikipedia.org/wiki/Vector_bundle_morphism
Vector bundles are almost always required to be locally trivial, which means they are examples of fiber bundles. Also, the vector spaces are usually required to be over the real or complex numbers, in which case the vector bundle is said to be a real or complex vector bundle (respectively). Complex vector bundles can be viewed as real vector bundles with additional structure. In the following, we focus on real vector bundles in the category of topological spaces.
https://en.wikipedia.org/wiki/Vector_bundle_morphism
In mathematics, a vector bundle is said to be flat if it is endowed with a linear connection with vanishing curvature, i.e. a flat connection.
https://en.wikipedia.org/wiki/Flat_vector_bundle
In mathematics, a vector measure is a function defined on a family of sets and taking vector values satisfying certain properties. It is a generalization of the concept of finite measure, which takes nonnegative real values only.
https://en.wikipedia.org/wiki/Lyapunov's_theorem
In mathematics, a vector-valued differential form on a manifold M is a differential form on M with values in a vector space V. More generally, it is a differential form with values in some vector bundle E over M. Ordinary differential forms can be viewed as R-valued differential forms. An important case of vector-valued differential forms are Lie algebra-valued forms. (A connection form is an example of such a form.)
https://en.wikipedia.org/wiki/Vector-valued_differential_form
In mathematics, a versor is a quaternion of norm one (a unit quaternion). Each versor has the form q = exp(a r) = cos a + r sin a, where r² = −1 and a ∈ [0, π]; the condition r² = −1 means that r is a unit-length vector quaternion (or that the first component of r is zero, and the last three components of r form a unit vector in 3 dimensions). The corresponding 3-dimensional rotation has the angle 2a about the axis r in axis–angle representation. In case a = π/2 (a right angle), then q = r, and the resulting unit vector is termed a right versor. The collection of versors with quaternion multiplication forms a group, and the set of versors is a 3-sphere in the 4-dimensional quaternion algebra.
https://en.wikipedia.org/wiki/Unit_quaternion
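A sketch of the axis-angle correspondence in code (illustrative helper names, not from the source): a versor built with half-angle a = θ/2 rotates 3-vectors by θ via conjugation q v q*.

```python
import math

def versor(axis, angle):
    """Unit quaternion cos(a) + r sin(a) with a = angle/2, representing
    a rotation by `angle` about the unit vector `axis` (= r)."""
    a = angle / 2.0
    s = math.sin(a)
    return (math.cos(a), axis[0] * s, axis[1] * s, axis[2] * s)

def qmul(p, q):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
            w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
            w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
            w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2)

def rotate_by_versor(q, v):
    """Rotate the 3-vector v by the versor q via q v q*."""
    w, x, y, z = q
    conj = (w, -x, -y, -z)
    return qmul(qmul(q, (0.0,) + tuple(v)), conj)[1:]
```

For example, the versor for a quarter turn about the z-axis sends (1, 0, 0) to (0, 1, 0), up to rounding.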
In mathematics, a vertex cycle cover (commonly called simply cycle cover) of a graph G is a set of cycles which are subgraphs of G and contain all vertices of G. If the cycles of the cover have no vertices in common, the cover is called vertex-disjoint or sometimes simply disjoint cycle cover. This is sometimes known as exact vertex cycle cover. In this case the set of the cycles constitutes a spanning subgraph of G. A disjoint cycle cover of an undirected graph (if it exists) can be found in polynomial time by transforming the problem into a problem of finding a perfect matching in a larger graph. If the cycles of the cover have no edges in common, the cover is called edge-disjoint or simply disjoint cycle cover.
https://en.wikipedia.org/wiki/Vertex_cycle_cover
Similar definitions exist for digraphs, in terms of directed cycles. Finding a vertex-disjoint cycle cover of a directed graph can also be performed in polynomial time by a similar reduction to perfect matching. However, adding the condition that each cycle should have length at least 3 makes the problem NP-hard.
https://en.wikipedia.org/wiki/Vertex_cycle_cover
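The reduction for digraphs can be sketched as follows (an illustrative implementation; Kuhn's augmenting-path algorithm plays the role of the perfect-matching subroutine): split every vertex into an "out" copy and an "in" copy, add a bipartite edge for every arc, and a perfect matching then selects exactly one outgoing and one incoming arc per vertex, which is precisely a vertex-disjoint cycle cover.

```python
def disjoint_cycle_cover(n, arcs):
    """Vertex-disjoint cycle cover of a digraph on vertices 0..n-1 via
    bipartite perfect matching: out-copy of u is matched to in-copy of
    v iff arc u->v is used. Returns the successor list, or None if no
    cycle cover exists."""
    adj = [[] for _ in range(n)]
    for u, v in arcs:
        adj[u].append(v)
    match = [-1] * n            # match[v] = u such that arc u->v is chosen

    def augment(u, seen):
        """Kuhn's augmenting-path step from out-copy u."""
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if match[v] == -1 or augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    for u in range(n):
        if not augment(u, set()):
            return None         # some vertex cannot be covered
    succ = [None] * n           # successor of u on its cycle
    for v, u in enumerate(match):
        succ[u] = v
    return succ
```

On the directed triangle 0→1→2→0 this returns the successor list [1, 2, 0]; a digraph with a vertex of out-degree 0 has no cover.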
In mathematics, a vertex operator algebra (VOA) is an algebraic structure that plays an important role in two-dimensional conformal field theory and string theory. In addition to physical applications, vertex operator algebras have proven useful in purely mathematical contexts such as monstrous moonshine and the geometric Langlands correspondence. The related notion of vertex algebra was introduced by Richard Borcherds in 1986, motivated by a construction of an infinite-dimensional Lie algebra due to Igor Frenkel. In the course of this construction, one employs a Fock space that admits an action of vertex operators attached to elements of a lattice.
https://en.wikipedia.org/wiki/Vertex_algebra
Borcherds formulated the notion of vertex algebra by axiomatizing the relations between the lattice vertex operators, producing an algebraic structure that allows one to construct new Lie algebras by following Frenkel's method. The notion of vertex operator algebra was introduced as a modification of the notion of vertex algebra, by Frenkel, James Lepowsky, and Arne Meurman in 1988, as part of their project to construct the moonshine module. They observed that many vertex algebras that appear 'in nature' carry an action of the Virasoro algebra, and satisfy a bounded-below property with respect to an energy operator.
https://en.wikipedia.org/wiki/Vertex_algebra
Motivated by this observation, they added the Virasoro action and bounded-below property as axioms. We now have post-hoc motivation for these notions from physics, together with several interpretations of the axioms that were not initially known.
https://en.wikipedia.org/wiki/Vertex_algebra