In mathematics, an algebra homomorphism is a homomorphism between two algebras. More precisely, if A and B are algebras over a field (or a ring) K, it is a function F: A → B such that, for all k in K and x, y in A, one has F(kx) = kF(x), F(x + y) = F(x) + F(y), and F(xy) = F(x)F(y). The first two conditions say that F is a K-linear map, and the last condition says that F preserves the algebra multiplication. So, if the algebras are associative, F is a rng homomorphism, and, if the algebras are rings and F preserves the identity, it is a ring homomorphism. If F admits an inverse homomorphism, or equivalently if it is bijective, F is said to be an isomorphism between A and B.
https://en.wikipedia.org/wiki/Algebra_homomorphism
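The three conditions above can be checked on a concrete example: the evaluation map p ↦ p(c) from the polynomial algebra ℝ[x] to ℝ is an algebra homomorphism. A minimal sketch; the coefficient-list encoding and helper names are choices made for this illustration:

```python
def poly_eval(p, c):
    """Evaluate the polynomial with coefficients [a0, a1, ...] at c (Horner)."""
    result = 0.0
    for coeff in reversed(p):
        result = result * c + coeff
    return result

def poly_add(p, q):
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_scale(k, p):
    return [k * a for a in p]

c = 2.0                        # evaluation point
F = lambda p: poly_eval(p, c)  # the homomorphism F = ev_c
p, q, k = [1, 0, 3], [2, -1], 5  # p = 1 + 3x^2, q = 2 - x

assert F(poly_scale(k, p)) == k * F(p)   # F(kp) = k F(p)
assert F(poly_add(p, q)) == F(p) + F(q)  # F(p+q) = F(p) + F(q)
assert F(poly_mul(p, q)) == F(p) * F(q)  # F(pq) = F(p) F(q)
```

Evaluation at a point is also bijectivity-free: it is a homomorphism but not an isomorphism, since many polynomials share the same value at c.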
In mathematics, an algebra over a field (often simply called an algebra) is a vector space equipped with a bilinear product. Thus, an algebra is an algebraic structure consisting of a set together with operations of multiplication and addition and scalar multiplication by elements of a field and satisfying the axioms implied by "vector space" and "bilinear". The multiplication operation in an algebra may or may not be associative, leading to the notions of associative algebras and non-associative algebras. Given an integer n, the ring of real square matrices of order n is an example of an associative algebra over the field of real numbers under matrix addition and matrix multiplication, since matrix multiplication is associative. Three-dimensional Euclidean space with multiplication given by the vector cross product is an example of a nonassociative algebra over the field of real numbers, since the vector cross product is nonassociative, satisfying the Jacobi identity instead.
https://en.wikipedia.org/wiki/Algebra_(ring_theory)
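The cross product example can be verified numerically. A minimal sketch, with vectors chosen arbitrarily for the demonstration: associativity fails, but the Jacobi identity holds.

```python
def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

a, b, c = (1, 0, 0), (0, 1, 0), (1, 2, 3)

# Associativity fails: a x (b x c) != (a x b) x c in general.
assert cross(a, cross(b, c)) != cross(cross(a, b), c)

# Jacobi identity: a x (b x c) + b x (c x a) + c x (a x b) = 0.
jacobi = add(add(cross(a, cross(b, c)),
                 cross(b, cross(c, a))),
             cross(c, cross(a, b)))
assert jacobi == (0, 0, 0)
```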
An algebra is unital or unitary if it has an identity element with respect to the multiplication. The ring of real square matrices of order n forms a unital algebra since the identity matrix of order n is the identity element with respect to matrix multiplication. It is an example of a unital associative algebra, a (unital) ring that is also a vector space.
https://en.wikipedia.org/wiki/Algebra_(ring_theory)
Many authors use the term algebra to mean associative algebra, or unital associative algebra, or in some subjects such as algebraic geometry, unital associative commutative algebra. Replacing the field of scalars by a commutative ring leads to the more general notion of an algebra over a ring. Algebras are not to be confused with vector spaces equipped with a bilinear form, like inner product spaces, as, for such a space, the result of a product is not in the space, but rather in the field of coefficients.
https://en.wikipedia.org/wiki/Algebra_(ring_theory)
In mathematics, an algebra such as (ℝ, +, ⋅) has a multiplication ⋅ whose associativity holds on the nose: for any real numbers a, b, c ∈ ℝ we have a ⋅ (b ⋅ c) − (a ⋅ b) ⋅ c = 0. But there are algebras R which are not necessarily associative, meaning that for a, b, c ∈ R, a ⋅ (b ⋅ c) − (a ⋅ b) ⋅ c ≠ 0 in general. There is a notion of algebras, called A∞-algebras, whose multiplication still behaves like the first relation, meaning associativity holds, but only up to a homotopy: after an operation "compressing" the information in the algebra, the multiplication is associative. This means that although we get something which looks like the second relation, the inequality, we actually get equality after "compressing" the information in the algebra.
https://en.wikipedia.org/wiki/Homotopy_associative_algebra
The study of A∞-algebras is a subset of homotopical algebra, where there is a homotopical notion of associative algebras through a differential graded algebra with a multiplication operation and a series of higher homotopies giving the failure of the multiplication to be associative. Loosely, an A∞-algebra (A•, mᵢ) is a ℤ-graded vector space over a field k with a series of operations mᵢ on the i-th tensor powers of A•. The operation m₁ corresponds to a chain complex differential, m₂ is the multiplication map, and the higher mᵢ are a measure of the failure of associativity of m₂.
https://en.wikipedia.org/wiki/Homotopy_associative_algebra
When looking at the underlying cohomology algebra H(A•, m₁), the map m₂ should be an associative map. Then the higher maps m₃, m₄, … should be interpreted as higher homotopies, where m₃ is the failure of m₂ to be associative, m₄ is the failure of m₃ to be higher associative, and so forth.
https://en.wikipedia.org/wiki/Homotopy_associative_algebra
Their structure was originally discovered by Jim Stasheff while studying A∞-spaces, and it was interpreted as a purely algebraic structure later on. These are spaces equipped with maps that are associative only up to homotopy, and the A∞ structure keeps track of these homotopies, homotopies of homotopies, and so forth. They are ubiquitous in homological mirror symmetry because of their necessity in defining the structure of the Fukaya category of D-branes on a Calabi–Yau manifold, which have only a homotopy associative structure.
https://en.wikipedia.org/wiki/Homotopy_associative_algebra
In mathematics, an algebraic cycle on an algebraic variety V is a formal linear combination of subvarieties of V. These are the part of the algebraic topology of V that is directly accessible by algebraic methods. Understanding the algebraic cycles on a variety can give profound insights into the structure of the variety. The most trivial case is codimension zero cycles, which are linear combinations of the irreducible components of the variety. The first non-trivial case is of codimension one subvarieties, called divisors.
https://en.wikipedia.org/wiki/Algebraic_cycle
The earliest work on algebraic cycles focused on the case of divisors, particularly divisors on algebraic curves. Divisors on algebraic curves are formal linear combinations of points on the curve. Classical work on algebraic curves related these to intrinsic data, such as the regular differentials on a compact Riemann surface, and to extrinsic properties, such as embeddings of the curve into projective space.
https://en.wikipedia.org/wiki/Algebraic_cycle
While divisors on higher-dimensional varieties continue to play an important role in determining the structure of the variety, on varieties of dimension two or more there are also higher codimension cycles to consider. The behavior of these cycles is strikingly different from that of divisors. For example, every curve has a constant N such that every divisor of degree zero is linearly equivalent to a difference of two effective divisors of degree at most N. David Mumford proved that, on a smooth complete complex algebraic surface S with positive geometric genus, the analogous statement for the group CH²(S) of rational equivalence classes of codimension two cycles in S is false.
https://en.wikipedia.org/wiki/Algebraic_cycle
The hypothesis that the geometric genus is positive essentially means (by the Lefschetz theorem on (1,1)-classes) that the cohomology group H²(S) contains transcendental information, and in effect Mumford's theorem implies that, despite CH²(S) having a purely algebraic definition, it shares transcendental information with H²(S). Mumford's theorem has since been greatly generalized. The behavior of algebraic cycles ranks among the most important open questions in modern mathematics.
https://en.wikipedia.org/wiki/Algebraic_cycle
The Hodge conjecture, one of the Clay Mathematics Institute's Millennium Prize Problems, predicts that the topology of a complex algebraic variety forces the existence of certain algebraic cycles. The Tate conjecture makes a similar prediction for étale cohomology. Alexander Grothendieck's standard conjectures on algebraic cycles yield enough cycles to construct his category of motives and would imply that algebraic cycles play a vital role in any cohomology theory of algebraic varieties. Conversely, Alexander Beilinson proved that the existence of a category of motives implies the standard conjectures. Additionally, cycles are connected to algebraic K-theory by Bloch's formula, which expresses groups of cycles modulo rational equivalence as the cohomology of K-theory sheaves.
https://en.wikipedia.org/wiki/Algebraic_cycle
In mathematics, an algebraic differential equation is a differential equation that can be expressed by means of differential algebra. There are several such notions, according to the concept of differential algebra used. The intention is to include equations formed by means of differential operators, in which the coefficients are rational functions of the variables (e.g. the hypergeometric equation).
https://en.wikipedia.org/wiki/Polynomial_vector_field
Algebraic differential equations are widely used in computer algebra and number theory. A simple concept is that of a polynomial vector field, in other words a vector field expressed with respect to a standard co-ordinate basis as the first partial derivatives with polynomial coefficients. This is a type of first-order algebraic differential operator.
https://en.wikipedia.org/wiki/Polynomial_vector_field
In mathematics, an algebraic equation or polynomial equation is an equation of the form P = 0, where P is a polynomial with coefficients in some field, often the field of the rational numbers. For many authors, the term algebraic equation refers only to univariate equations, that is, polynomial equations that involve only one variable. On the other hand, a polynomial equation may involve several variables. In the case of several variables (the multivariate case), the term polynomial equation is usually preferred to algebraic equation.
https://en.wikipedia.org/wiki/Polynomial_equation
For example, x^5 − 3x + 1 = 0 is an algebraic equation with integer coefficients, and y^4 + xy/2 − x^3/3 + xy^2 + y^2 + 1/7 = 0 is a multivariate polynomial equation over the rationals. Some but not all polynomial equations with rational coefficients have a solution that is an algebraic expression that can be found using a finite number of operations that involve only those same types of coefficients (that is, can be solved algebraically). This can be done for all such equations of degree one, two, three, or four; but for degree five or more it can only be done for some equations, not all. A large amount of research has been devoted to efficiently computing accurate approximations of the real or complex solutions of a univariate algebraic equation (see Root-finding algorithm) and of the common solutions of several multivariate polynomial equations (see System of polynomial equations).
https://en.wikipedia.org/wiki/Polynomial_equation
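The quintic x^5 − 3x + 1 = 0 mentioned above cannot be solved in radicals, but its real solutions are easy to approximate numerically. A minimal root-finding sketch using bisection; the bracketing intervals below were found by inspecting sign changes of the polynomial:

```python
def bisect(f, lo, hi, tol=1e-12):
    """Find a root of f in [lo, hi], assuming f(lo) and f(hi) differ in sign."""
    assert f(lo) * f(hi) < 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

f = lambda x: x**5 - 3*x + 1

# f changes sign on each of these intervals, so each contains a real root.
roots = [bisect(f, a, b) for a, b in [(-2, -1), (0, 1), (1, 2)]]
for r in roots:
    assert abs(f(r)) < 1e-9
```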
In mathematics, an algebraic expression is an expression built up from constant algebraic numbers, variables, and the algebraic operations (addition, subtraction, multiplication, division and exponentiation by an exponent that is a rational number). For example, 3x^2 − 2xy + c is an algebraic expression. Since taking the square root is the same as raising to the power 1/2, the following is also an algebraic expression: √((1 − x^2)/(1 + x^2)). An algebraic equation is an equation involving only algebraic expressions. By contrast, transcendental numbers like π and e are not algebraic, since they are not derived from integer constants and algebraic operations.
https://en.wikipedia.org/wiki/Algebraic_expression
Usually, π is constructed as a geometric relationship, and the definition of e requires an infinite number of algebraic operations. A rational expression is an expression that may be rewritten to a rational fraction by using the properties of the arithmetic operations (commutative properties and associative properties of addition and multiplication, distributive property and rules for the operations on the fractions). In other words, a rational expression is an expression which may be constructed from the variables and the constants by using only the four operations of arithmetic.
https://en.wikipedia.org/wiki/Algebraic_expression
Thus, (3x − 2xy + c)/(y − 1) is a rational expression, whereas √((1 − x^2)/(1 + x^2)) is not. A rational equation is an equation in which two rational fractions (or rational expressions) of the form P(x)/Q(x) are set equal to each other.
https://en.wikipedia.org/wiki/Algebraic_expression
These expressions obey the same rules as fractions. The equations can be solved by cross-multiplying. Division by zero is undefined, so that a solution causing formal division by zero is rejected.
https://en.wikipedia.org/wiki/Algebraic_expression
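The cross-multiplication procedure can be illustrated on a small hypothetical example, 1/(x − 1) = 2/(x + 1); the helper name is invented for this sketch. Cross-multiplying gives x + 1 = 2(x − 1), hence x = 3, and any candidate making a denominator zero would be rejected.

```python
from fractions import Fraction

def solve_example():
    # 1/(x-1) = 2/(x+1)  =>  x + 1 = 2(x - 1)  =>  x = 3
    candidates = [Fraction(3)]
    # Reject solutions causing formal division by zero in the original equation.
    return [x for x in candidates if x != 1 and x != -1]

sols = solve_example()
assert sols == [Fraction(3)]

# Verify the solution in the original rational equation.
x = sols[0]
assert Fraction(1) / (x - 1) == Fraction(2) / (x + 1)
```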
In mathematics, an algebraic extension is a field extension L/K such that every element of the larger field L is algebraic over the smaller field K; that is, every element of L is a root of a non-zero polynomial with coefficients in K. A field extension that is not algebraic is said to be transcendental, and must contain transcendental elements, that is, elements that are not algebraic. The algebraic extensions of the field ℚ of the rational numbers are called algebraic number fields and are the main objects of study of algebraic number theory. Another example of a common algebraic extension is the extension ℂ/ℝ of the real numbers by the complex numbers.
https://en.wikipedia.org/wiki/Algebraic_extension
In mathematics, an algebraic function field (often abbreviated as function field) of n variables over a field k is a finitely generated field extension K/k which has transcendence degree n over k. Equivalently, an algebraic function field of n variables over k may be defined as a finite field extension of the field K = k(x1,...,xn) of rational functions in n variables over k.
https://en.wikipedia.org/wiki/Algebraic_function_field
In mathematics, an algebraic function is a function that can be defined as the root of a polynomial equation. Quite often algebraic functions are algebraic expressions using a finite number of terms, involving only the algebraic operations addition, subtraction, multiplication, division, and raising to a fractional power. Examples of such functions are f(x) = 1/x, f(x) = √x, and f(x) = √(1 + x^3)/(x^{3/7} − √7 x^{1/3}). Some algebraic functions, however, cannot be expressed by such finite expressions (this is the Abel–Ruffini theorem). This is the case, for example, for the Bring radical, which is the function implicitly defined by f(x)^5 + f(x) + x = 0. In more precise terms, an algebraic function of degree n in one variable x is a function y = f(x) that is continuous in its domain and satisfies a polynomial equation a_n(x)y^n + a_{n−1}(x)y^{n−1} + ⋯ + a_0(x) = 0, where the coefficients a_i(x) are polynomial functions of x, with integer coefficients.
https://en.wikipedia.org/wiki/Algebraic_function
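The Bring radical has no expression in radicals, but since f^5 + f + x is strictly increasing in f, each real x determines a unique real value, which can be approximated numerically. A minimal bisection sketch; the function name is hypothetical:

```python
def bring_radical(x, tol=1e-12):
    """The unique real f with f^5 + f + x = 0, found by bisection."""
    g = lambda f: f**5 + f + x
    # Bracket the root: g is increasing, negative far left, positive far right.
    lo = -max(1.0, abs(x)) - 1
    hi = max(1.0, abs(x)) + 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

f2 = bring_radical(2.0)
assert abs(f2**5 + f2 + 2.0) < 1e-9  # f(x)^5 + f(x) + x = 0
```

For x = 2 the defining equation happens to have the exact solution f = −1, which the numeric result approaches.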
It can be shown that the same class of functions is obtained if algebraic numbers are accepted for the coefficients of the ai(x)'s. If transcendental numbers occur in the coefficients the function is, in general, not algebraic, but it is algebraic over the field generated by these coefficients. The value of an algebraic function at a rational number, and more generally, at an algebraic number is always an algebraic number.
https://en.wikipedia.org/wiki/Algebraic_function
Sometimes, coefficients a_i(x) that are polynomial over a ring R are considered, and one then talks about "functions algebraic over R". A function which is not algebraic is called a transcendental function, as is the case, for example, of exp x, tan x, ln x, Γ(x).
https://en.wikipedia.org/wiki/Algebraic_function
A composition of transcendental functions can give an algebraic function: f(x) = cos(arcsin x) = √(1 − x^2). As a polynomial equation of degree n has up to n roots (and exactly n roots over an algebraically closed field, such as the complex numbers), a polynomial equation does not implicitly define a single function, but up to n functions, sometimes also called branches. Consider for example the equation of the unit circle: y^2 + x^2 = 1.
https://en.wikipedia.org/wiki/Algebraic_function
This determines y only up to an overall sign; accordingly, it has two branches: y = ±√(1 − x^2). An algebraic function in m variables is similarly defined as a function y = f(x_1, …, x_m) which solves a polynomial equation in m + 1 variables: p(y, x_1, x_2, …, x_m) = 0.
https://en.wikipedia.org/wiki/Algebraic_function
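The two branches of the unit circle example can be written out directly; a minimal sketch:

```python
import math

def upper_branch(x):
    """The branch y = +sqrt(1 - x^2) of y^2 + x^2 = 1."""
    return math.sqrt(1 - x**2)

def lower_branch(x):
    """The branch y = -sqrt(1 - x^2) of y^2 + x^2 = 1."""
    return -math.sqrt(1 - x**2)

x = 0.6
for y in (upper_branch(x), lower_branch(x)):
    assert abs(y**2 + x**2 - 1) < 1e-12  # both branches solve the equation

assert abs(upper_branch(0.6) - 0.8) < 1e-12  # the 3-4-5 right triangle
```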
It is normally assumed that p should be an irreducible polynomial. The existence of an algebraic function is then guaranteed by the implicit function theorem. Formally, an algebraic function in m variables over the field K is an element of the algebraic closure of the field of rational functions K(x1, ..., xm).
https://en.wikipedia.org/wiki/Algebraic_function
In mathematics, an algebraic group is an algebraic variety endowed with a group structure that is compatible with its structure as an algebraic variety. Thus the study of algebraic groups belongs both to algebraic geometry and group theory. Many groups of geometric transformations are algebraic groups; for example, orthogonal groups, general linear groups, projective groups, Euclidean groups, etc. Many matrix groups are also algebraic. Other algebraic groups occur naturally in algebraic geometry, such as elliptic curves and Jacobian varieties.
https://en.wikipedia.org/wiki/Group_variety
An important class of algebraic groups is given by the affine algebraic groups, those whose underlying algebraic variety is an affine variety; they are exactly the algebraic subgroups of the general linear group, and are therefore also called linear algebraic groups. Another class is formed by the abelian varieties, which are the algebraic groups whose underlying variety is a projective variety. Chevalley's structure theorem states that every algebraic group can be constructed from groups in those two families.
https://en.wikipedia.org/wiki/Group_variety
In mathematics, an algebraic manifold is an algebraic variety which is also a manifold. As such, algebraic manifolds are a generalisation of the concept of smooth curves and surfaces defined by polynomials. An example is the sphere, which can be defined as the zero set of the polynomial x2 + y2 + z2 – 1, and hence is an algebraic variety.
https://en.wikipedia.org/wiki/Complex_projective_manifold
For an algebraic manifold, the ground field will be the real numbers or complex numbers; in the case of the real numbers, the manifold of real points is sometimes called a Nash manifold. Every sufficiently small local patch of an algebraic manifold is isomorphic to km where k is the ground field. Equivalently the variety is smooth (free from singular points). The Riemann sphere is one example of a complex algebraic manifold, since it is the complex projective line.
https://en.wikipedia.org/wiki/Complex_projective_manifold
In mathematics, an algebraic matroid is a matroid, a combinatorial structure, that expresses an abstraction of the relation of algebraic independence.
https://en.wikipedia.org/wiki/Algebraic_matroid
In mathematics, an algebraic number field (or simply number field) is an extension field K of the field of rational numbers ℚ such that the field extension K/ℚ has finite degree (and hence is an algebraic field extension). Thus K is a field that contains ℚ and has finite dimension when considered as a vector space over ℚ. The study of algebraic number fields, and, more generally, of algebraic extensions of the field of rational numbers, is the central topic of algebraic number theory. This study reveals hidden structures behind the usual rational numbers, by using algebraic methods.
https://en.wikipedia.org/wiki/Power_basis
In mathematics, an algebraic representation of a group G on a k-algebra A is a linear representation π: G → GL(A) such that, for each g in G, π(g) is an algebra automorphism. Equipped with such a representation, the algebra A is then called a G-algebra. For example, if V is a linear representation of a group G, then the representation put on the tensor algebra T(V) is an algebraic representation of G. If A is a commutative G-algebra, then Spec(A) is an affine G-scheme.
https://en.wikipedia.org/wiki/Algebraic_representation
In mathematics, an algebraic stack is a vast generalization of algebraic spaces, or schemes, which are foundational for studying moduli theory. Many moduli spaces are constructed using techniques specific to algebraic stacks, such as Artin's representability theorem, which is used to construct the moduli space of pointed algebraic curves ℳ_{g,n} and the moduli stack of elliptic curves. Originally, they were introduced by Grothendieck to keep track of automorphisms on moduli spaces, a technique which allows for treating these moduli spaces as if their underlying schemes or algebraic spaces are smooth. Through many generalizations, the notion of algebraic stacks was finally formulated by Michael Artin.
https://en.wikipedia.org/wiki/Algebraic_stack
In mathematics, an algebraic structure (R, T) consisting of a non-empty set R and a ternary mapping T: R³ → R may be called a ternary system. A planar ternary ring (PTR) or ternary field is a special type of ternary system used by Marshall Hall to construct projective planes by means of coordinates. A planar ternary ring is not a ring in the traditional sense, but any field gives a planar ternary ring where the operation T is defined by T(a, b, c) = ab + c. Thus, we can think of a planar ternary ring as a generalization of a field where the ternary operation takes the place of both addition and multiplication.
https://en.wikipedia.org/wiki/Ternary_ring
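The construction T(a, b, c) = ab + c can be illustrated over a finite field. A minimal sketch over GF(7) (an arbitrary choice for the demonstration), checking a few of the standard planar-ternary-ring axioms:

```python
p = 7  # work in the finite field GF(7)

def T(a, b, c):
    """The ternary operation T(a, b, c) = ab + c induced by the field GF(p)."""
    return (a * b + c) % p

for b in range(p):
    assert T(1, b, 0) == b          # axiom: T(1, b, 0) = b
    assert T(b, 1, 0) == b          # axiom: T(a, 1, 0) = a
    for a in range(p):
        for c in range(p):
            assert T(a, 0, c) == c  # axiom: T(a, 0, c) = c
            assert T(0, b, c) == c  # axiom: T(0, b, c) = c
```

In a PTR coming from a field, a + c is recovered as T(a, 1, c) and ab as T(a, b, 0).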
There is wide variation in the terminology. Planar ternary rings or ternary fields as defined here have been called by other names in the literature, and the term "planar ternary ring" can mean a variant of the system defined here. The term "ternary ring" often means a planar ternary ring, but it can also simply mean a ternary system.
https://en.wikipedia.org/wiki/Ternary_ring
In mathematics, an algebraic structure consists of a nonempty set A (called the underlying set, carrier set or domain), a collection of operations on A (typically binary operations such as addition and multiplication), and a finite set of identities, known as axioms, that these operations must satisfy. An algebraic structure may be based on other algebraic structures with operations and axioms involving several structures. For instance, a vector space involves a second structure called a field, and an operation called scalar multiplication between elements of the field (called scalars), and elements of the vector space (called vectors). Abstract algebra is the name that is commonly given to the study of algebraic structures.
https://en.wikipedia.org/wiki/Structure_(algebraic)
The general theory of algebraic structures has been formalized in universal algebra. Category theory is another formalization that includes also other mathematical structures and functions between structures of the same type (homomorphisms). In universal algebra, an algebraic structure is called an algebra; this term may be ambiguous, since, in other contexts, an algebra is an algebraic structure that is a vector space over a field or a module over a commutative ring. The collection of all structures of a given type (same operations and same laws) is called a variety in universal algebra; this term is also used with a completely different meaning in algebraic geometry, as an abbreviation of algebraic variety. In category theory, the collection of all structures of a given type and homomorphisms between them form a concrete category.
https://en.wikipedia.org/wiki/Structure_(algebraic)
In mathematics, an algebraic surface is an algebraic variety of dimension two. In the case of geometry over the field of complex numbers, an algebraic surface has complex dimension two (as a complex manifold, when it is non-singular) and so of dimension four as a smooth manifold. The theory of algebraic surfaces is much more complicated than that of algebraic curves (including the compact Riemann surfaces, which are genuine surfaces of (real) dimension two). Many results were obtained, however, in the Italian school of algebraic geometry, and are up to 100 years old.
https://en.wikipedia.org/wiki/Algebraic_surface
In mathematics, an algebraic torus, where a one-dimensional torus is typically denoted by G_m, 𝔾_m, or 𝕋, is a type of commutative affine algebraic group commonly found in projective algebraic geometry and toric geometry. Higher-dimensional algebraic tori can be modelled as a product of algebraic groups G_m. These groups were named by analogy with the theory of tori in Lie group theory (see Cartan subgroup).
https://en.wikipedia.org/wiki/Algebraic_torus
For example, over the complex numbers ℂ the algebraic torus G_m is isomorphic to the group scheme ℂ* = Spec(ℂ[t, t⁻¹]), which is the scheme-theoretic analogue of the Lie group U(1) ⊂ ℂ. In fact, any G_m-action on a complex vector space can be pulled back to a U(1)-action from the inclusion U(1) ⊂ ℂ* as real manifolds. Tori are of fundamental importance in the theory of algebraic groups and Lie groups and in the study of the geometric objects associated to them such as symmetric spaces and buildings.
https://en.wikipedia.org/wiki/Algebraic_torus
In mathematics, an algebraic variety V in projective space is a complete intersection if the ideal of V is generated by exactly codim V elements. That is, if V has dimension m and lies in projective space Pn, there should exist n − m homogeneous polynomials F_i(X_0, ⋯, X_n), 1 ≤ i ≤ n − m, in the homogeneous coordinates X_j, which generate all other homogeneous polynomials that vanish on V. Geometrically, each F_i defines a hypersurface; the intersection of these hypersurfaces should be V. The intersection of n − m hypersurfaces will always have dimension at least m, assuming that the field of scalars is an algebraically closed field such as the complex numbers. The question is essentially: can we get the dimension down to m, with no extra points in the intersection? This condition is fairly hard to check as soon as the codimension n − m ≥ 2. When n − m = 1, V is automatically a hypersurface and there is nothing to prove.
https://en.wikipedia.org/wiki/Complete_intersection
In mathematics, an algebroid function is a solution of an algebraic equation whose coefficients are analytic functions. So y(z) is an algebroid function if it satisfies a_d(z)y^d + … + a_0(z) = 0, where the a_k(z) are analytic. If this equation is irreducible then the function is d-valued, and can be defined on a Riemann surface having d sheets.
https://en.wikipedia.org/wiki/Algebroid_function
In mathematics, an aliquot sequence is a sequence of positive integers in which each term is the sum of the proper divisors of the previous term. If the sequence reaches the number 1, it ends, since the sum of the proper divisors of 1 is 0.
https://en.wikipedia.org/wiki/Aliquot_sequence
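A minimal brute-force sketch computing aliquot sequences (the function names are invented for this illustration):

```python
def proper_divisor_sum(n):
    """Sum of the proper divisors of n, i.e. s(n)."""
    return sum(d for d in range(1, n) if n % d == 0)

def aliquot_sequence(n, max_terms=20):
    """The aliquot sequence starting at n, stopping at 1 or after max_terms."""
    seq = [n]
    while seq[-1] > 1 and len(seq) < max_terms:
        seq.append(proper_divisor_sum(seq[-1]))
    return seq

# 10 -> 1+2+5 = 8 -> 1+2+4 = 7 -> 1, and the sequence ends (s(1) = 0).
assert aliquot_sequence(10) == [10, 8, 7, 1]

# A perfect number is a fixed point: s(6) = 6, so max_terms caps the loop.
assert proper_divisor_sum(6) == 6
```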
In mathematics, an all one polynomial (AOP) is a polynomial in which all coefficients are one. Over the finite field of order two, conditions for the AOP to be irreducible are known, which allow this polynomial to be used to define efficient algorithms and circuits for multiplication in finite fields of characteristic two. The AOP is a 1-equally spaced polynomial.
https://en.wikipedia.org/wiki/All_one_polynomial
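Irreducibility of the AOP over GF(2) can be tested by brute-force trial division for small degrees. A minimal sketch, encoding polynomials over GF(2) as integer bitmasks (bit i is the coefficient of x^i); the encoding and helper names are choices made for this illustration:

```python
def gf2_mod(a, b):
    """Remainder of polynomial a modulo polynomial b over GF(2)."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def is_irreducible(p):
    """Trial-divide p by every polynomial of degree 1 .. deg(p) // 2."""
    deg = p.bit_length() - 1
    return all(gf2_mod(p, d) != 0 for d in range(2, 1 << (deg // 2 + 1)))

def aop(n):
    """The all one polynomial of degree n: x^n + x^(n-1) + ... + x + 1."""
    return (1 << (n + 1)) - 1

# Degrees n <= 10 for which the AOP is irreducible over GF(2).
assert [n for n in range(1, 11) if is_irreducible(aop(n))] == [1, 2, 4, 10]
```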
In mathematics, an almost complex manifold is a smooth manifold equipped with a smooth linear complex structure on each tangent space. Every complex manifold is an almost complex manifold, but there are almost complex manifolds that are not complex manifolds. Almost complex structures have important applications in symplectic geometry. The concept is due to Charles Ehresmann and Heinz Hopf in the 1940s.
https://en.wikipedia.org/wiki/Almost_complex_manifold
In mathematics, an almost perfect number (sometimes also called slightly defective or least deficient number) is a natural number n such that the sum of all divisors of n (the sum-of-divisors function σ(n)) is equal to 2n − 1, the sum of all proper divisors of n, s(n) = σ(n) − n, then being equal to n − 1. The only known almost perfect numbers are powers of 2 with non-negative exponents (sequence A000079 in the OEIS). Therefore the only known odd almost perfect number is 2^0 = 1, and the only known even almost perfect numbers are those of the form 2^k for some positive integer k; however, it has not been shown that all almost perfect numbers are of this form. It is known that an odd almost perfect number greater than 1 would have at least six prime factors. If m is an odd almost perfect number then m(2m − 1) is a Descartes number.
https://en.wikipedia.org/wiki/Almost_perfect_number
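The defining condition σ(n) = 2n − 1 is easy to verify computationally. This illustrative sketch (naive divisor sum, fine for small n; function names are ours) confirms that, below 64, only the powers of 2 qualify:

```python
def sigma(n):
    # Sum of all positive divisors of n (naive; adequate for small n).
    return sum(d for d in range(1, n + 1) if n % d == 0)

def is_almost_perfect(n):
    # n is almost perfect when sigma(n) = 2n - 1, i.e. s(n) = n - 1.
    return sigma(n) == 2 * n - 1

# sigma(2^k) = 2^(k+1) - 1 = 2 * 2^k - 1, so every power of 2 qualifies.
small_almost_perfect = [n for n in range(1, 64) if is_almost_perfect(n)]
```

Running this yields exactly [1, 2, 4, 8, 16, 32], consistent with the sequence of powers of two described above.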
In mathematics, an almost periodic function is, loosely speaking, a function of a real number that is periodic to within any desired level of accuracy, given suitably long, well-distributed "almost-periods". The concept was first studied by Harald Bohr and later generalized by Vyacheslav Stepanov, Hermann Weyl and Abram Samoilovitch Besicovitch, amongst others. There is also a notion of almost periodic functions on locally compact abelian groups, first studied by John von Neumann.
https://en.wikipedia.org/wiki/Almost_periodic_functions
Almost periodicity is a property of dynamical systems that appear to retrace their paths through phase space, but not exactly. An example would be a planetary system, with planets in orbits moving with periods that are not commensurable (i.e., with a period vector that is not proportional to a vector of integers). A theorem of Kronecker from diophantine approximation can be used to show that any particular configuration that occurs once, will recur to within any specified accuracy: if we wait long enough we can observe the planets all return to within a second of arc to the positions they once were in.
https://en.wikipedia.org/wiki/Almost_periodic_functions
In mathematics, an alternating algebra is a Z-graded algebra for which xy = (−1)^(deg(x)·deg(y)) yx for all nonzero homogeneous elements x and y (i.e. it is an anticommutative algebra) and has the further property that x^2 = 0 for every homogeneous element x of odd degree.
https://en.wikipedia.org/wiki/Alternating_algebra
In mathematics, an alternating factorial is the absolute value of the alternating sum of the first n factorials of positive integers. This is the same as their sum, with the odd-indexed factorials multiplied by −1 if n is even, and the even-indexed factorials multiplied by −1 if n is odd, resulting in an alternation of signs of the summands (or alternation of addition and subtraction operators, if preferred). To put it algebraically, af ⁡ ( n ) = ∑ i = 1 n ( − 1 ) n − i i ! {\displaystyle \operatorname {af} (n)=\sum _{i=1}^{n}(-1)^{n-i}i!}
https://en.wikipedia.org/wiki/Alternating_factorial
or with the recurrence relation af ⁡ ( n ) = n ! − af ⁡ ( n − 1 ) {\displaystyle \operatorname {af} (n)=n!-\operatorname {af} (n-1)} in which af(1) = 1. The first few alternating factorials are 1, 1, 5, 19, 101, 619, 4421, 35899, 326981, 3301819, 36614981, 442386619, 5784634181, 81393657019 (sequence A005165 in the OEIS). For example, the third alternating factorial is 1! − 2! + 3! = 5, and the fourth alternating factorial is −1! + 2! − 3! + 4! = 19. Regardless of the parity of n, the last (nth) summand, n!, is given a positive sign, the (n − 1)th summand is given a negative sign, and the signs of the lower-indexed summands are alternated accordingly.
https://en.wikipedia.org/wiki/Alternating_factorial
This pattern of alternation ensures the resulting sums are all positive integers. Changing the rule so that either the odd- or even-indexed summands are given negative signs (regardless of the parity of n) changes the signs of the resulting sums but not their absolute values. Miodrag Zivković proved in 1999 that there are only a finite number of alternating factorials that are also prime numbers, since 3612703 divides af(3612702) and therefore divides af(n) for all n ≥ 3612702. As of 2006, the known primes and probable primes are af(n) for (sequence A001272 in the OEIS) n = 3, 4, 5, 6, 7, 8, 10, 15, 19, 41, 59, 61, 105, 160, 661, 2653, 3069, 3943, 4053, 4998, 8275, 9158, 11164. Only the values up to n = 661 had been proved prime as of 2006. af(661) is approximately 7.818097272875 × 10^1578.
https://en.wikipedia.org/wiki/Alternating_factorial
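The recurrence af(n) = n! − af(n − 1) gives a direct iterative computation, sketched here (the function name is ours):

```python
import math

def alternating_factorial(n):
    # af(n) = n! - af(n-1) with af(1) = 1, unrolled into a loop.
    af = 1
    for k in range(2, n + 1):
        af = math.factorial(k) - af
    return af
```

The first eight values it produces are 1, 1, 5, 19, 101, 619, 4421, 35899, matching the list above.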
In mathematics, an alternating group is the group of even permutations of a finite set. The alternating group on a set of n elements is called the alternating group of degree n, or the alternating group on n letters and denoted by An or Alt(n).
https://en.wikipedia.org/wiki/Alternating_groups
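For small n the alternating group can be enumerated explicitly: a permutation is even exactly when its inversion count is even, and A_n contains half of the n! permutations for n ≥ 2. A minimal sketch (function names are ours):

```python
from itertools import permutations

def is_even(perm):
    # A permutation is even iff its number of inversions is even.
    n = len(perm)
    inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                     if perm[i] > perm[j])
    return inversions % 2 == 0

def alternating_group(n):
    # A_n as the set of even permutations of {0, ..., n-1}; |A_n| = n!/2 for n >= 2.
    return [p for p in permutations(range(n)) if is_even(p)]
```

For example, A_3 has 3 elements and A_4 has 12, and the identity permutation always belongs to the group.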
In mathematics, an alternating series is an infinite series of the form ∑ n = 0 ∞ ( − 1 ) n a n {\displaystyle \sum _{n=0}^{\infty }(-1)^{n}a_{n}} or ∑ n = 0 ∞ ( − 1 ) n + 1 a n {\displaystyle \sum _{n=0}^{\infty }(-1)^{n+1}a_{n}} with a_n > 0 for all n. The signs of the general terms alternate between positive and negative. Like any series, an alternating series converges if and only if the associated sequence of partial sums converges.
https://en.wikipedia.org/wiki/Alternating_series
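A standard concrete case is the alternating harmonic series, which converges to ln 2; by the alternating series test, the error of a partial sum is bounded by the first omitted term. A small sketch (the function name is ours):

```python
import math

def alternating_partial_sum(n_terms):
    # Partial sums of the alternating harmonic series
    # sum_{n=1}^infty (-1)^(n+1) / n, which converges to ln 2.
    return sum((-1) ** (n + 1) / n for n in range(1, n_terms + 1))
```

After 1000 terms the partial sum is within 1/1001 of ln 2, as the error bound predicts.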
In mathematics, an alternating sign matrix is a square matrix of 0s, 1s, and −1s such that the sum of each row and column is 1 and the nonzero entries in each row and column alternate in sign. These matrices generalize permutation matrices and arise naturally when using Dodgson condensation to compute a determinant. They are also closely related to the six-vertex model with domain wall boundary conditions from statistical mechanics. They were first defined by William Mills, David Robbins, and Howard Rumsey in the former context.
https://en.wikipedia.org/wiki/Alternating_sign_matrix
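The defining conditions are mechanical to check: entries lie in {−1, 0, 1}, every row and column sums to 1, and nonzero entries along each row and column alternate in sign. A minimal validator (the function name is ours; squareness is assumed):

```python
def is_alternating_sign_matrix(m):
    # Entries must be in {-1, 0, 1}; every row and column sums to 1,
    # and the nonzero entries of each row and column alternate in sign
    # (together with the sum condition this forces them to start and
    # end with +1).
    if any(x not in (-1, 0, 1) for row in m for x in row):
        return False
    for line in list(m) + [list(col) for col in zip(*m)]:
        if sum(line) != 1:
            return False
        nonzero = [x for x in line if x != 0]
        if any(nonzero[i] == nonzero[i + 1] for i in range(len(nonzero) - 1)):
            return False
    return True
```

Every permutation matrix passes, and the smallest alternating sign matrix that is not a permutation matrix is the 3×3 matrix with a −1 at its center.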
In mathematics, an amenable group is a locally compact topological group G carrying a kind of averaging operation on bounded functions that is invariant under translation by group elements. The original definition, in terms of a finitely additive measure (or mean) on subsets of G, was introduced by John von Neumann in 1929 under the German name "messbar" ("measurable" in English) in response to the Banach–Tarski paradox. In 1949 Mahlon M. Day introduced the English translation "amenable", apparently as a pun on "mean". The amenability property has a large number of equivalent formulations. In the field of analysis, the definition is in terms of linear functionals.
https://en.wikipedia.org/wiki/Amenable_group
An intuitive way to understand this version is that the support of the regular representation is the whole space of irreducible representations. In discrete group theory, where G has the discrete topology, a simpler definition is used. In this setting, a group is amenable if one can say what proportion of G any given subset takes up. If a group has a Følner sequence then it is automatically amenable.
https://en.wikipedia.org/wiki/Amenable_group
In mathematics, an amicable triple is a set of three different numbers so related that the restricted sum of the divisors of each is equal to the sum of the other two numbers. In an equivalent characterization, an amicable triple is a set of three different numbers so related that the sum of the divisors of each is equal to the sum of the three numbers. So a triple (a, b, c) of natural numbers is called amicable if s(a) = b + c, s(b) = a + c and s(c) = a + b, or equivalently if σ(a) = σ(b) = σ(c) = a + b + c. Here σ(n) is the sum of all positive divisors, and s(n) = σ(n) − n is the aliquot sum.
https://en.wikipedia.org/wiki/Amicable_triple
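The condition σ(a) = σ(b) = σ(c) = a + b + c can be verified directly. A minimal sketch (naive divisor sum; function names are ours):

```python
def sigma(n):
    # Sum of all positive divisors of n (naive; adequate for small n).
    return sum(d for d in range(1, n + 1) if n % d == 0)

def is_amicable_triple(a, b, c):
    # sigma(a) = sigma(b) = sigma(c) = a + b + c, which is equivalent to
    # s(a) = b + c, s(b) = a + c, s(c) = a + b.
    return sigma(a) == sigma(b) == sigma(c) == a + b + c
```

For example, (1980, 2016, 2556) is an amicable triple: all three have divisor sum 6552, which equals 1980 + 2016 + 2556.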
In mathematics, an analytic function is a function that is locally given by a convergent power series. There exist both real analytic functions and complex analytic functions. Functions of each type are infinitely differentiable, but complex analytic functions exhibit properties that do not generally hold for real analytic functions. A function is analytic if and only if its Taylor series about x 0 {\displaystyle x_{0}} converges to the function in some neighborhood for every x 0 {\displaystyle x_{0}} in its domain.
https://en.wikipedia.org/wiki/Analytical_function
Note that the convergence must hold on a neighborhood, and not just at the single point x 0 {\displaystyle x_{0}} , since every differentiable function has at least a tangent line at every point, which is its Taylor series of order 1. So merely having a polynomial expansion at a point is not enough, and the Taylor series must also converge to the function at points adjacent to x 0 {\displaystyle x_{0}} for the function to be considered analytic. As a counterexample, see the Fabius function.
https://en.wikipedia.org/wiki/Analytical_function
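Besides the Fabius function cited above, a classic concrete counterexample is f(x) = exp(−1/x²) (with f(0) = 0): it is infinitely differentiable, every derivative at 0 vanishes, so its Taylor series at 0 is the zero series, which converges everywhere but not to f. A small numerical illustration (the function name is ours):

```python
import math

def bump(x):
    # Smooth but not analytic at 0: f(x) = exp(-1/x^2) for x != 0, f(0) = 0.
    # Every derivative of f at 0 is 0, so the Taylor series of f at 0 is
    # identically zero -- it converges everywhere, but not to f.
    return 0.0 if x == 0 else math.exp(-1.0 / (x * x))
```

Every order-n Taylor polynomial of this function at 0 is the zero function, yet bump(0.5) = exp(−4) > 0, so the series does not represent the function on any neighborhood of 0.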
In mathematics, an analytic manifold, also known as a C ω {\displaystyle C^{\omega }} manifold, is a differentiable manifold with analytic transition maps. The term usually refers to real analytic manifolds, although complex manifolds are also analytic. In algebraic geometry, analytic spaces are a generalization of analytic manifolds such that singularities are permitted. For U ⊆ R n {\displaystyle U\subseteq \mathbb {R} ^{n}} , the space of analytic functions, C ω ( U ) {\displaystyle C^{\omega }(U)} , consists of infinitely differentiable functions f : U → R {\displaystyle f:U\to \mathbb {R} } such that the Taylor series T f ( x ) = ∑ | α | ≥ 0 D α f ( x 0 ) α ! ( x − x 0 ) α {\displaystyle T_{f}(\mathbf {x} )=\sum _{|\alpha |\geq 0}{\frac {D^{\alpha }f(\mathbf {x_{0}} )}{\alpha !}}(\mathbf {x} -\mathbf {x_{0}} )^{\alpha }} converges to f ( x ) {\displaystyle f(\mathbf {x} )} in a neighborhood of x 0 {\displaystyle \mathbf {x_{0}} } , for all x 0 ∈ U {\displaystyle \mathbf {x_{0}} \in U} . The requirement that the transition maps be analytic is significantly more restrictive than that they be infinitely differentiable; the analytic manifolds are a proper subset of the smooth, i.e. C ∞ {\displaystyle C^{\infty }} , manifolds. There are many similarities between the theory of analytic and smooth manifolds, but a critical difference is that analytic manifolds do not admit analytic partitions of unity, whereas smooth partitions of unity are an essential tool in the study of smooth manifolds. A fuller description of the definitions and general theory can be found at differentiable manifolds, for the real case, and at complex manifolds, for the complex case.
https://en.wikipedia.org/wiki/Real-analytic_manifold
In mathematics, an analytic proof is a proof of a theorem in analysis that only makes use of methods from analysis, and which does not predominantly make use of algebraic or geometrical methods. The term was first used by Bernard Bolzano, who first provided a non-analytic proof of his intermediate value theorem and then, several years later provided a proof of the theorem that was free from intuitions concerning lines crossing each other at a point, and so he felt happy calling it analytic (Bolzano 1817). Bolzano's philosophical work encouraged a more abstract reading of when a demonstration could be regarded as analytic, where a proof is analytic if it does not go beyond its subject matter (Sebastik 2007). In proof theory, an analytic proof has come to mean a proof whose structure is simple in a special way, due to conditions on the kind of inferences that ensure none of them go beyond what is contained in the assumptions and what is demonstrated.
https://en.wikipedia.org/wiki/Analytic_proof
In mathematics, an analytic semigroup is a particular kind of strongly continuous semigroup. Analytic semigroups are used in the solution of partial differential equations; compared to strongly continuous semigroups, analytic semigroups provide better regularity of solutions to initial value problems, better results concerning perturbations of the infinitesimal generator, and a relationship between the type of the semigroup and the spectrum of the infinitesimal generator.
https://en.wikipedia.org/wiki/Analytic_semigroup
In mathematics, an ancient solution to a differential equation is a solution that can be extrapolated backwards to all past times, without singularities; that is, a solution "that is defined on a time interval of the form (−∞, T)". The term was introduced by Richard Hamilton in his work on the Ricci flow. It has since been applied to other geometric flows as well as to other systems such as the Navier–Stokes equations and heat equation.
https://en.wikipedia.org/wiki/Ancient_solution
In mathematics, an annulus (plural annuli or annuluses) is the region between two concentric circles. Informally, it is shaped like a ring or a hardware washer. The word "annulus" is borrowed from the Latin word anulus or annulus meaning 'little ring'. The adjectival form is annular (as in annular eclipse). The open annulus is topologically equivalent to both the open cylinder S1 × (0,1) and the punctured plane.
https://en.wikipedia.org/wiki/Punctured_disk
In mathematics, an anti-diagonal matrix is a square matrix where all the entries are zero except those on the diagonal going from the lower left corner to the upper right corner (↗), known as the anti-diagonal (sometimes Harrison diagonal, secondary diagonal, trailing diagonal, minor diagonal, off diagonal or bad diagonal).
https://en.wikipedia.org/wiki/Anti-diagonal_matrix
In mathematics, an antihomomorphism is a type of function defined on sets with multiplication that reverses the order of multiplication. An antiautomorphism is a bijective antihomomorphism, i.e. an antiisomorphism, from a set to itself. From bijectivity it follows that antiautomorphisms have inverses, and that the inverse of an antiautomorphism is also an antiautomorphism.
https://en.wikipedia.org/wiki/Antiautomorphism
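A standard example of an antiautomorphism is matrix transposition on the ring of n×n matrices: it is linear and bijective, but reverses products, since (AB)ᵀ = BᵀAᵀ. A small numerical check (function names are ours):

```python
def mat_mul(a, b):
    # Product of two matrices given as lists of rows.
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(m):
    return [list(row) for row in zip(*m)]

# Transposition reverses the order of multiplication:
# transpose(mat_mul(A, B)) == mat_mul(transpose(B), transpose(A))
```

This order reversal is exactly the defining property of an antihomomorphism, and since transposition is its own inverse it is an antiautomorphism.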
In mathematics, an antimatroid is a formal system that describes processes in which a set is built up by including elements one at a time, and in which an element, once available for inclusion, remains available until it is included. Antimatroids are commonly axiomatized in two equivalent ways, either as a set system modeling the possible states of such a process, or as a formal language modeling the different sequences in which elements may be included. Dilworth (1940) was the first to study antimatroids, using yet another axiomatization based on lattice theory, and they have been frequently rediscovered in other contexts. The axioms defining antimatroids as set systems are very similar to those of matroids, but whereas matroids are defined by an exchange axiom, antimatroids are defined instead by an anti-exchange axiom, from which their name derives. Antimatroids can be viewed as a special case of greedoids and of semimodular lattices, and as a generalization of partial orders and of distributive lattices. Antimatroids are equivalent, by complementation, to convex geometries, a combinatorial abstraction of convex sets in geometry. Antimatroids have been applied to model precedence constraints in scheduling problems, potential event sequences in simulations, task planning in artificial intelligence, and the states of knowledge of human learners.
https://en.wikipedia.org/wiki/Antimatroid
In mathematics, an antiunitary transformation, is a bijective antilinear map U: H 1 → H 2 {\displaystyle U:H_{1}\to H_{2}\,} between two complex Hilbert spaces such that ⟨ U x , U y ⟩ = ⟨ x , y ⟩ ¯ {\displaystyle \langle Ux,Uy\rangle ={\overline {\langle x,y\rangle }}} for all x {\displaystyle x} and y {\displaystyle y} in H 1 {\displaystyle H_{1}} , where the horizontal bar represents the complex conjugate. If additionally one has H 1 = H 2 {\displaystyle H_{1}=H_{2}} then U {\displaystyle U} is called an antiunitary operator. Antiunitary operators are important in quantum theory because they are used to represent certain symmetries, such as time reversal. Their fundamental importance in quantum physics is further demonstrated by Wigner's theorem.
https://en.wikipedia.org/wiki/Antiunitary_operator
In mathematics, an anyonic Lie algebra is a U(1) graded vector space L {\displaystyle L} over C {\displaystyle \mathbb {C} } equipped with a bilinear operator [ ⋅ , ⋅ ] : L × L → L {\displaystyle [\cdot ,\cdot ]\colon L\times L\rightarrow L} and linear maps ε: L → C {\displaystyle \varepsilon \colon L\to \mathbb {C} } (some authors use | ⋅ |: L → C {\displaystyle |\cdot |\colon L\to \mathbb {C} } ) and Δ: L → L ⊗ L {\displaystyle \Delta \colon L\to L\otimes L} such that Δ X = X i ⊗ X i {\displaystyle \Delta X=X_{i}\otimes X^{i}} , satisfying certain axioms for pure graded elements X, Y, and Z.
https://en.wikipedia.org/wiki/Anyonic_Lie_algebra
In mathematics, an aperiodic semigroup is a semigroup S such that every element is aperiodic, that is, for each x in S there exists a positive integer n such that x^n = x^(n+1). An aperiodic monoid is an aperiodic semigroup which is a monoid.
https://en.wikipedia.org/wiki/Aperiodic_semigroup
In mathematics, an approximate group is a subset of a group which behaves like a subgroup "up to a constant error", in a precise quantitative sense (so the term approximate subgroup may be more correct). For example, it is required that the set of products of elements in the subset be not much bigger than the subset itself (while for a subgroup it is required that they be equal). The notion was introduced in the 2010s but can be traced to older sources in additive combinatorics.
https://en.wikipedia.org/wiki/Approximate_group
In mathematics, an approximately finite-dimensional (AF) C*-algebra is a C*-algebra that is the inductive limit of a sequence of finite-dimensional C*-algebras. Approximate finite-dimensionality was first defined and described combinatorially by Ola Bratteli. Later, George A. Elliott gave a complete classification of AF algebras using the K0 functor whose range consists of ordered abelian groups with sufficiently nice order structure. The classification theorem for AF-algebras serves as a prototype for classification results for larger classes of separable simple amenable stably finite C*-algebras.
https://en.wikipedia.org/wiki/AF_C*-algebra
Its proof divides into two parts. The invariant here is K0 with its natural order structure; this is a functor. First, one proves existence: a homomorphism between invariants must lift to a *-homomorphism of algebras.
https://en.wikipedia.org/wiki/AF_C*-algebra
Second, one shows uniqueness: the lift must be unique up to approximate unitary equivalence. Classification then follows from what is known as the intertwining argument. For unital AF algebras, both existence and uniqueness follow from the fact the Murray-von Neumann semigroup of projections in an AF algebra is cancellative. The counterpart of simple AF C*-algebras in the von Neumann algebra world are the hyperfinite factors, which were classified by Connes and Haagerup. In the context of noncommutative geometry and topology, AF C*-algebras are noncommutative generalizations of C0(X), where X is a totally disconnected metrizable space.
https://en.wikipedia.org/wiki/AF_C*-algebra
In mathematics, an argument of a function is a value provided to obtain the function's result. It is also called an independent variable. For example, the binary function f ( x , y ) = x 2 + y 2 {\displaystyle f(x,y)=x^{2}+y^{2}} has two arguments, x {\displaystyle x} and y {\displaystyle y} , in an ordered pair ( x , y ) {\displaystyle (x,y)} . The hypergeometric function is an example of a four-argument function. The number of arguments that a function takes is called the arity of the function.
https://en.wikipedia.org/wiki/Argument_of_a_function
A function that takes a single argument as input, such as f ( x ) = x 2 {\displaystyle f(x)=x^{2}} , is called a unary function. A function of two or more variables is considered to have a domain consisting of ordered pairs or tuples of argument values. The argument of a circular function is an angle.
https://en.wikipedia.org/wiki/Argument_of_a_function
The argument of a hyperbolic function is a hyperbolic angle. A mathematical function has one or more arguments in the form of independent variables designated in the definition, which can also contain parameters.
https://en.wikipedia.org/wiki/Argument_of_a_function
The independent variables are mentioned in the list of arguments that the function takes, whereas the parameters are not. For example, in the logarithmic function f ( x ) = log b ⁡ ( x ) , {\displaystyle f(x)=\log _{b}(x),} the base b {\displaystyle b} is considered a parameter. Sometimes, subscripts can be used to denote arguments. For example, we can use subscripts to denote the arguments with respect to which partial derivatives are taken. The use of the term "argument" in this sense developed from astronomy, which historically used tables to determine the spatial positions of planets from their positions in the sky (ephemerides). These tables were organized according to measured angles called arguments, literally "that which elucidates something else."
https://en.wikipedia.org/wiki/Argument_of_a_function
In mathematics, an arithmetic group is a group obtained as the integer points of an algebraic group, for example S L 2 ( Z ) . {\displaystyle \mathrm {SL} _{2}(\mathbb {Z} ).} They arise naturally in the study of arithmetic properties of quadratic forms and other classical topics in number theory. They also give rise to very interesting examples of Riemannian manifolds and hence are objects of interest in differential geometry and topology. Finally, these two topics join in the theory of automorphic forms which is fundamental in modern number theory.
https://en.wikipedia.org/wiki/Arithmetic_group
In mathematics, an arithmetic surface over a Dedekind domain R with fraction field K {\displaystyle K} is a geometric object having one conventional dimension, and one other dimension provided by the infinitude of the primes. When R is the ring of integers Z, this intuition depends on the prime ideal spectrum Spec(Z) being seen as analogous to a line. Arithmetic surfaces arise naturally in diophantine geometry, when an algebraic curve defined over K is thought of as having reductions over the fields R/P, where P is a prime ideal of R, for almost all P; and are helpful in specifying what should happen about the process of reducing to R/P when the most naive way fails to make sense. Such an object can be defined more formally as an R-scheme with a non-singular, connected projective curve C / K {\displaystyle C/K} for a generic fiber and unions of curves (possibly reducible, singular, non-reduced) over the appropriate residue field for special fibers.
https://en.wikipedia.org/wiki/Arithmetic_surface
In mathematics, an arithmetic variety is the quotient space of a Hermitian symmetric space by an arithmetic subgroup of the associated algebraic Lie group.
https://en.wikipedia.org/wiki/Arithmetic_variety
In mathematics, an associahedron Kn is an (n – 2)-dimensional convex polytope in which each vertex corresponds to a way of correctly inserting opening and closing parentheses in a string of n letters, and the edges correspond to single application of the associativity rule. Equivalently, the vertices of an associahedron correspond to the triangulations of a regular polygon with n + 1 sides and the edges correspond to edge flips in which a single diagonal is removed from a triangulation and replaced by a different diagonal. Associahedra are also called Stasheff polytopes after the work of Jim Stasheff, who rediscovered them in the early 1960s after earlier work on them by Dov Tamari.
https://en.wikipedia.org/wiki/Stasheff_polytope
In mathematics, an associative algebra A is an algebraic structure with compatible operations of addition, multiplication (assumed to be associative), and a scalar multiplication by elements in some field K. The addition and multiplication operations together give A the structure of a ring; the addition and scalar multiplication operations together give A the structure of a vector space over K. In this article we will also use the term K-algebra to mean an associative algebra over the field K. A standard first example of a K-algebra is a ring of square matrices over a field K, with the usual matrix multiplication. A commutative algebra is an associative algebra that has a commutative multiplication, or, equivalently, an associative algebra that is also a commutative ring. In this article associative algebras are assumed to have a multiplicative identity, denoted 1; they are sometimes called unital associative algebras for clarification. In some areas of mathematics this assumption is not made, and we will call such structures non-unital associative algebras.
https://en.wikipedia.org/wiki/Linear_associative_algebra
We will also assume that all rings are unital, and all ring homomorphisms are unital. Many authors consider the more general concept of an associative algebra over a commutative ring R, instead of a field: An R-algebra is an R-module with an associative R-bilinear binary operation, which also contains a multiplicative identity. For examples of this concept, if S is any ring with center C, then S is an associative C-algebra.
https://en.wikipedia.org/wiki/Linear_associative_algebra
In mathematics, an asymmetric norm on a vector space is a generalization of the concept of a norm.
https://en.wikipedia.org/wiki/Asymmetric_norm
In mathematics, an asymmetric relation is a binary relation R {\displaystyle R} on a set X {\displaystyle X} where for all a , b ∈ X , {\displaystyle a,b\in X,} if a {\displaystyle a} is related to b {\displaystyle b} then b {\displaystyle b} is not related to a . {\displaystyle a.}
https://en.wikipedia.org/wiki/Asymmetric_relation
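For a finite relation given as a set of ordered pairs, asymmetry is a one-line check; note that it also forbids any pair of the form (a, a), since (a, a) is its own reversal. A minimal sketch (the function name is ours):

```python
def is_asymmetric(relation):
    # relation: a set of ordered pairs (a, b). Asymmetry: (a, b) in R
    # implies (b, a) not in R, which in particular rules out (a, a).
    return all((b, a) not in relation for (a, b) in relation)
```

For example, strict less-than on {1, 2, 3} is asymmetric, while any relation containing a reflexive pair like (1, 1) is not.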
In mathematics, an asymptotic expansion, asymptotic series or Poincaré expansion (after Henri Poincaré) is a formal series of functions which has the property that truncating the series after a finite number of terms provides an approximation to a given function as the argument of the function tends towards a particular, often infinite, point. Investigations by Dingle (1973) revealed that the divergent part of an asymptotic expansion is latently meaningful, i.e. contains information about the exact value of the expanded function. The most common type of asymptotic expansion is a power series in either positive or negative powers. Methods of generating such expansions include the Euler–Maclaurin summation formula and integral transforms such as the Laplace and Mellin transforms.
https://en.wikipedia.org/wiki/Asymptotic_series
Repeated integration by parts will often lead to an asymptotic expansion. Since a convergent Taylor series fits the definition of asymptotic expansion as well, the phrase "asymptotic series" usually implies a non-convergent series. Despite non-convergence, the asymptotic expansion is useful when truncated to a finite number of terms.
https://en.wikipedia.org/wiki/Asymptotic_series
The approximation may provide benefits by being more mathematically tractable than the function being expanded, or by an increase in the speed of computation of the expanded function. Typically, the best approximation is given when the series is truncated at the smallest term. This way of optimally truncating an asymptotic expansion is known as superasymptotics.
https://en.wikipedia.org/wiki/Asymptotic_series
The error is then typically of the form ~ exp(−c/ε) where ε is the expansion parameter. The error is thus beyond all orders in the expansion parameter. It is possible to improve on the superasymptotic error, e.g. by employing resummation methods such as Borel resummation to the divergent tail. Such methods are often referred to as hyperasymptotic approximations. See asymptotic analysis and big O notation for the notation used in this article.
https://en.wikipedia.org/wiki/Asymptotic_series
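Optimal truncation and Borel resummation can be illustrated on Euler's classic divergent series ∑ (−1)ⁿ n! εⁿ, whose Borel sum is the convergent integral ∫₀^∞ e^(−t)/(1 + εt) dt. The sketch below (function names, step counts, and the choice ε = 0.1 are ours) compares truncated partial sums against the integral, computed by a simple midpoint rule:

```python
import math

def euler_integral(eps, steps=200_000, t_max=40.0):
    # Borel sum of Euler's divergent series sum_n (-1)^n n! eps^n:
    # the integral of exp(-t) / (1 + eps*t) over [0, infinity),
    # approximated by the midpoint rule on [0, t_max] (the tail
    # beyond t_max is negligible since exp(-40) ~ 4e-18).
    h = t_max / steps
    return h * sum(math.exp(-(k + 0.5) * h) / (1.0 + eps * (k + 0.5) * h)
                   for k in range(steps))

def truncated_series(eps, n_terms):
    # Partial sum of the asymptotic series; its terms shrink until
    # n is about 1/eps, then grow without bound.
    return sum((-1) ** n * math.factorial(n) * eps ** n for n in range(n_terms))
```

For ε = 0.1 the smallest term occurs near n = 10, so truncating there is superasymptotically optimal: the error is of order exp(−1/ε), far smaller than the error of an earlier truncation, even though the full series diverges.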
In mathematics, an atoroidal 3-manifold is one that does not contain an essential torus. There are two major variations in this terminology: an essential torus may be defined geometrically, as an embedded, non-boundary parallel, incompressible torus, or it may be defined algebraically, as a subgroup Z × Z {\displaystyle \mathbb {Z} \times \mathbb {Z} } of its fundamental group that is not conjugate to a peripheral subgroup (i.e., the image of the map on fundamental group induced by an inclusion of a boundary component). The terminology is not standardized, and different authors require atoroidal 3-manifolds to satisfy certain additional restrictions. For instance: Boris Apanasov (2000) gives a definition of atoroidality that combines both geometric and algebraic aspects, in terms of maps from a torus to the manifold and the induced maps on the fundamental group.
https://en.wikipedia.org/wiki/Atoroidal