Physically, the vertex operators arising from holomorphic field insertions at points in two-dimensional conformal field theory admit operator product expansions when insertions collide, and these satisfy precisely the relations specified in the definition of vertex operator algebra. Indeed, the axioms of a vertex operator algebra are a formal algebraic interpretation of what physicists call chiral algebras (not to be confused with the more precise notion with the same name in mathematics) or "algebras of chiral symmetries", where these symmetries describe the Ward identities satisfied by a given conformal field theory, including conformal invariance. Other formulations of the vertex algebra axioms include Borcherds's later work on singular commutative rings, algebras over certain operads on curves introduced by Huang, Kriz, and others, D-module-theoretic objects called chiral algebras introduced by Alexander Beilinson and Vladimir Drinfeld and factorization algebras, also introduced by Beilinson and Drinfeld. Important basic examples of vertex operator algebras include the lattice VOAs (modeling lattice conformal field theories), VOAs given by representations of affine Kac–Moody algebras (from the WZW model), the Virasoro VOAs, which are VOAs corresponding to representations of the Virasoro algebra, and the moonshine module V♮, which is distinguished by its monster symmetry. More sophisticated examples such as affine W-algebras and the chiral de Rham complex on a complex manifold arise in geometric representation theory and mathematical physics.
https://en.wikipedia.org/wiki/Vertex_algebra
In mathematics, a vexillary permutation is a permutation μ of the positive integers containing no subpermutation isomorphic to the permutation (2143); in other words, there do not exist four numbers i < j < k < l with μ(j) < μ(i) < μ(l) < μ(k). They were introduced by Lascoux and Schützenberger (1982, 1985). The word "vexillary" means flag-like, and comes from the fact that vexillary permutations are related to flags of modules. Guibert, Pergola & Pinzani (2001) showed that vexillary involutions are enumerated by Motzkin numbers.
https://en.wikipedia.org/wiki/Vexillary_involution
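The pattern-avoidance condition above is directly checkable. A minimal sketch (the function name `is_vexillary` is my own; a brute-force scan over index quadruples, fine for small permutations):

```python
from itertools import combinations

def is_vexillary(perm):
    """True if perm (a tuple of 1-indexed values) contains no occurrence
    of the pattern 2143, i.e. no indices i < j < k < l with
    perm[j] < perm[i] < perm[l] < perm[k]."""
    for i, j, k, l in combinations(range(len(perm)), 4):
        if perm[j] < perm[i] < perm[l] < perm[k]:
            return False
    return True

# (3, 1, 4, 2) realizes the pattern 3142, not 2143, so it is vexillary;
# (2, 1, 4, 3) is exactly the pattern 2143, so it is not.
print(is_vexillary((3, 1, 4, 2)))  # True
print(is_vexillary((2, 1, 4, 3)))  # False
```

Counting vexillary involutions of S4 with this check gives 9, matching the fourth Motzkin number, consistent with the Guibert–Pergola–Pinzani result quoted above.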
In mathematics, a volume element provides a means for integrating a function with respect to volume in various coordinate systems such as spherical coordinates and cylindrical coordinates. Thus a volume element is an expression of the form dV = ρ(u1, u2, u3) du1 du2 du3, where the ui are the coordinates, so that the volume of any set B can be computed by Volume(B) = ∫_B ρ(u1, u2, u3) du1 du2 du3. For example, in spherical coordinates dV = u1² sin(u2) du1 du2 du3, and so ρ = u1² sin(u2).
https://en.wikipedia.org/wiki/Volume_element
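The spherical volume element can be checked numerically: integrating ρ = r² sin(θ) over a ball of radius R should recover 4πR³/3. A small sketch (midpoint Riemann sums; grid sizes are arbitrary choices):

```python
import numpy as np

# Volume of a ball of radius R via the spherical volume element
# dV = r^2 sin(theta) dr dtheta dphi  (u1 = r, u2 = theta, u3 = phi).
R = 2.0
n = 200
r = (np.arange(n) + 0.5) * (R / n)          # midpoint grid in r
theta = (np.arange(n) + 0.5) * (np.pi / n)  # midpoint grid in theta
dr, dtheta, dphi = R / n, np.pi / n, 2 * np.pi  # phi integrates to 2*pi

# The integrand is separable, so the triple integral factors:
vol = (r**2).sum() * dr * np.sin(theta).sum() * dtheta * dphi
print(vol, 4 / 3 * np.pi * R**3)  # both ≈ 33.51
```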
The notion of a volume element is not limited to three dimensions: in two dimensions it is often known as the area element, and in this setting it is useful for doing surface integrals. Under changes of coordinates, the volume element changes by the absolute value of the Jacobian determinant of the coordinate transformation (by the change of variables formula). This fact allows volume elements to be defined as a kind of measure on a manifold. On an orientable differentiable manifold, a volume element typically arises from a volume form: a top degree differential form. On a non-orientable manifold, the volume element is typically the absolute value of a (locally defined) volume form: it defines a 1-density.
https://en.wikipedia.org/wiki/Volume_element
In mathematics, a volume form or top-dimensional form is a differential form of degree equal to the dimension of the differentiable manifold. Thus on a manifold M of dimension n, a volume form is an n-form. It is an element of the space of sections of the line bundle Λ^n(T*M), denoted Ω^n(M). A manifold admits a nowhere-vanishing volume form if and only if it is orientable.
https://en.wikipedia.org/wiki/Riemannian_volume_form
An orientable manifold has infinitely many volume forms, since multiplying a volume form by a nowhere-vanishing real valued function yields another volume form. On non-orientable manifolds, one may instead define the weaker notion of a density. A volume form provides a means to define the integral of a function on a differentiable manifold.
https://en.wikipedia.org/wiki/Riemannian_volume_form
In other words, a volume form gives rise to a measure with respect to which functions can be integrated by the appropriate Lebesgue integral. The absolute value of a volume form is a volume element, which is also known variously as a twisted volume form or pseudo-volume form.
https://en.wikipedia.org/wiki/Riemannian_volume_form
It also defines a measure, but exists on any differentiable manifold, orientable or not. Kähler manifolds, being complex manifolds, are naturally oriented, and so possess a volume form. More generally, the n {\displaystyle n} th exterior power of the symplectic form on a symplectic manifold is a volume form. Many classes of manifolds have canonical volume forms: they have extra structure which allows the choice of a preferred volume form. Oriented pseudo-Riemannian manifolds have an associated canonical volume form.
https://en.wikipedia.org/wiki/Riemannian_volume_form
In mathematics, a von Neumann algebra or W*-algebra is a *-algebra of bounded operators on a Hilbert space that is closed in the weak operator topology and contains the identity operator. It is a special type of C*-algebra. Von Neumann algebras were originally introduced by John von Neumann, motivated by his study of single operators, group representations, ergodic theory and quantum mechanics. His double commutant theorem shows that the analytic definition is equivalent to a purely algebraic definition as an algebra of symmetries.
https://en.wikipedia.org/wiki/Von_Neumann_algebras
Two basic examples of von Neumann algebras are as follows: The ring L^∞(R) of essentially bounded measurable functions on the real line is a commutative von Neumann algebra, whose elements act as multiplication operators by pointwise multiplication on the Hilbert space L^2(R) of square-integrable functions. The algebra B(H) of all bounded operators on a Hilbert space H is a von Neumann algebra, non-commutative if the Hilbert space has dimension at least 2. Von Neumann algebras were first studied by von Neumann (1930) in 1929; he and Francis Murray developed the basic theory, under the original name of rings of operators, in a series of papers written in the 1930s and 1940s (F.J. Murray & J. von Neumann 1936, 1937, 1943; J. von Neumann 1938, 1940, 1943, 1949), reprinted in the collected works of von Neumann (1961).
https://en.wikipedia.org/wiki/Von_Neumann_algebras
Introductory accounts of von Neumann algebras are given in the online notes of Jones (2003) and Wassermann (1991) and the books by Dixmier (1981), Schwartz (1967), Blackadar (2005) and Sakai (1971). The three volume work by Takesaki (1979) gives an encyclopedic account of the theory. The book by Connes (1994) discusses more advanced topics.
https://en.wikipedia.org/wiki/Von_Neumann_algebras
In mathematics, a von Neumann regular ring is a ring R (associative, with 1, not necessarily commutative) such that for every element a in R there exists an x in R with a = axa. One may think of x as a "weak inverse" of the element a; in general x is not uniquely determined by a. Von Neumann regular rings are also called absolutely flat rings, because these rings are characterized by the fact that every left R-module is flat. Von Neumann regular rings were introduced by von Neumann (1936) under the name of "regular rings", in the course of his study of von Neumann algebras and continuous geometry.
https://en.wikipedia.org/wiki/Von_Neumann_regular_element
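The defining condition a = axa is easy to test in a finite ring. A sketch (the helper name `weak_inverses` is my own): Z/6 is von Neumann regular, since 6 is squarefree, while Z/4 is not, because a = 2 has no weak inverse there.

```python
def weak_inverses(a, n):
    """All x in Z/n with a = a*x*a (mod n), i.e. weak inverses of a."""
    return [x for x in range(n) if (a * x * a) % n == a % n]

# Every element of Z/6 has at least one weak inverse...
for a in range(6):
    print(a, weak_inverses(a, 6))

# ...but 2 has none in Z/4, so Z/4 is not von Neumann regular.
print(weak_inverses(2, 4))  # []
```

Note also that x is generally not unique: for a = 0 every x works.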
Von Neumann regular rings should not be confused with the unrelated regular rings and regular local rings of commutative algebra. An element a of a ring is called a von Neumann regular element if there exists an x such that a = axa. An ideal 𝔦 is called a (von Neumann) regular ideal if for every element a in 𝔦 there exists an element x in 𝔦 such that a = axa.
https://en.wikipedia.org/wiki/Von_Neumann_regular_element
In mathematics, a wavelet series is a representation of a square-integrable (real- or complex-valued) function by a certain orthonormal series generated by a wavelet. This article provides a formal, mathematical definition of an orthonormal wavelet and of the integral wavelet transform.
https://en.wikipedia.org/wiki/Wavelet_transforms
In mathematics, a weak Hausdorff space or weakly Hausdorff space is a topological space where the image of every continuous map from a compact Hausdorff space into the space is closed. In particular, every Hausdorff space is weak Hausdorff. As a separation property, it is stronger than T1, which is equivalent to the statement that points are closed.
https://en.wikipedia.org/wiki/Weak_Hausdorff_space
Specifically, every weak Hausdorff space is a T1 space. The notion was introduced by M. C. McCord to remedy an inconvenience of working with the category of Hausdorff spaces. It is often used in tandem with compactly generated spaces in algebraic topology. For that, see the category of compactly generated weak Hausdorff spaces.
https://en.wikipedia.org/wiki/Weak_Hausdorff_space
In mathematics, a weak Lie algebra bundle ξ = (ξ, p, X, θ) is a vector bundle ξ over a base space X together with a morphism θ : ξ ⊗ ξ → ξ which induces a Lie algebra structure on each fibre ξ_x. A Lie algebra bundle ξ = (ξ, p, X) is a vector bundle in which each fibre is a Lie algebra and, for every x in X, there is an open set U containing x, a Lie algebra L and a homeomorphism φ : U × L → p⁻¹(U) such that φ_x : {x} × L → p⁻¹(x) is a Lie algebra isomorphism. Any Lie algebra bundle is a weak Lie algebra bundle, but the converse need not be true in general. As an example of a weak Lie algebra bundle that is not a strong Lie algebra bundle, consider the total space so(3) × R over the real line R.
https://en.wikipedia.org/wiki/Lie_algebra_bundle
Let [·,·] denote the Lie bracket of so(3) and deform it by the real parameter x as [X, Y]_x = x · [X, Y] for X, Y ∈ so(3) and x ∈ R. Lie's third theorem states that every bundle of Lie algebras can locally be integrated to a bundle of Lie groups.
https://en.wikipedia.org/wiki/Lie_algebra_bundle
In general globally the total space might fail to be Hausdorff. But if all fibres of a real Lie algebra bundle over a topological space are mutually isomorphic as Lie algebras, then it is a locally trivial Lie algebra bundle. This result was proved by proving that the real orbit of a real point under an algebraic group is open in the real part of its complex orbit.
https://en.wikipedia.org/wiki/Lie_algebra_bundle
Suppose the base space is Hausdorff and the fibres of the total space are mutually isomorphic as Lie algebras; then there exists a Hausdorff Lie group bundle over the same base space whose Lie algebra bundle is isomorphic to the given Lie algebra bundle. Every semisimple Lie algebra bundle is locally trivial, and hence there exists a Hausdorff Lie group bundle over the same base space whose Lie algebra bundle is isomorphic to the given one.
https://en.wikipedia.org/wiki/Lie_algebra_bundle
In mathematics, a weak Maass form is a smooth function f on the upper half plane, transforming like a modular form under the action of the modular group, being an eigenfunction of the corresponding hyperbolic Laplace operator, and having at most linear exponential growth at the cusps. If the eigenvalue of f under the Laplacian is zero, then f is called a harmonic weak Maass form, or briefly a harmonic Maass form. A weak Maass form which has actually moderate growth at the cusps is a classical Maass wave form. The Fourier expansions of harmonic Maass forms often encode interesting combinatorial, arithmetic, or geometric generating functions. Regularized theta lifts of harmonic Maass forms can be used to construct Arakelov Green functions for special divisors on orthogonal Shimura varieties.
https://en.wikipedia.org/wiki/Harmonic_Maass_form
In mathematics, a weak derivative is a generalization of the concept of the derivative of a function (strong derivative) for functions not assumed differentiable, but only integrable, i.e., lying in the Lebesgue space L^1([a, b]). The method of integration by parts holds that for differentiable functions u and φ we have ∫_a^b u(x) φ′(x) dx = u(b)φ(b) − u(a)φ(a) − ∫_a^b u′(x) φ(x) dx. A function u′ being the weak derivative of u is essentially defined by the requirement that this equation must hold for all infinitely differentiable functions φ vanishing at the boundary points (φ(a) = φ(b) = 0).
https://en.wikipedia.org/wiki/Weak_derivative
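A concrete instance: u(x) = |x| has no classical derivative at 0, yet sign(x) serves as its weak derivative. The sketch below (a numerical illustration with one arbitrarily chosen test function, not a proof) checks the defining identity ∫ u φ′ dx = −∫ u′ φ dx on [−1, 1]:

```python
import numpy as np

def trapezoid(f, x):
    """Composite trapezoidal rule for samples f on the grid x."""
    return float(np.sum((f[:-1] + f[1:]) / 2 * np.diff(x)))

# Candidate weak derivative of u(x) = |x| is v(x) = sign(x).
x = np.linspace(-1.0, 1.0, 20001)
u = np.abs(x)
v = np.sign(x)

# Smooth test function vanishing at both endpoints, and its derivative.
phi = (1 - x**2) ** 2 * np.exp(x)
phi_prime = np.exp(x) * (1 - x**2) * ((1 - x**2) - 4 * x)

lhs = trapezoid(u * phi_prime, x)
rhs = -trapezoid(v * phi, x)
print(lhs, rhs)   # the two sides agree to high accuracy
```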
In mathematics, a weak equivalence is a notion from homotopy theory that in some sense identifies objects that have the same "shape". This notion is formalized in the axiomatic definition of a model category. A model category is a category with classes of morphisms called weak equivalences, fibrations, and cofibrations, satisfying several axioms. The associated homotopy category of a model category has the same objects, but the morphisms are changed in order to make the weak equivalences into isomorphisms. It is a useful observation that the associated homotopy category depends only on the weak equivalences, not on the fibrations and cofibrations.
https://en.wikipedia.org/wiki/Weak_equivalence_(homotopy_theory)
In mathematics, a weak solution (also called a generalized solution) to an ordinary or partial differential equation is a function for which the derivatives may not all exist but which is nonetheless deemed to satisfy the equation in some precisely defined sense. There are many different definitions of weak solution, appropriate for different classes of equations. One of the most important is based on the notion of distributions. Avoiding the language of distributions, one starts with a differential equation and rewrites it in such a way that no derivatives of the solution of the equation show up (the new form is called the weak formulation, and the solutions to it are called weak solutions).
https://en.wikipedia.org/wiki/Generalized_solution
Somewhat surprisingly, a differential equation may have solutions which are not differentiable, and the weak formulation allows one to find such solutions. Weak solutions are important because many differential equations encountered in modelling real-world phenomena do not admit sufficiently smooth solutions, and the only way of solving such equations is using the weak formulation. Even in situations where an equation does have differentiable solutions, it is often convenient to first prove the existence of weak solutions and only later show that those solutions are in fact smooth enough.
https://en.wikipedia.org/wiki/Generalized_solution
In mathematics, a weak trace class operator is a compact operator on a separable Hilbert space H with singular values the same order as the harmonic sequence. When the dimension of H is infinite, the ideal of weak trace-class operators is strictly larger than the ideal of trace class operators, and has fundamentally different properties. The usual operator trace on the trace-class operators does not extend to the weak trace class. Instead the ideal of weak trace-class operators admits an infinite number of linearly independent quasi-continuous traces, and it is the smallest two-sided ideal for which all traces on it are singular traces. Weak trace-class operators feature in the noncommutative geometry of French mathematician Alain Connes.
https://en.wikipedia.org/wiki/Weak_trace-class_operator
In mathematics, a weakly compact cardinal is a certain kind of cardinal number introduced by Erdős & Tarski (1961); weakly compact cardinals are large cardinals, meaning that their existence cannot be proven from the standard axioms of set theory. (Tarski originally called them "not strongly incompact" cardinals.) Formally, a cardinal κ is defined to be weakly compact if it is uncountable and for every function f : [κ]² → {0, 1} there is a set of cardinality κ that is homogeneous for f. In this context, [κ]² means the set of 2-element subsets of κ, and a subset S of κ is homogeneous for f if and only if either all of [S]² maps to 0 or all of it maps to 1. The name "weakly compact" refers to the fact that if a cardinal is weakly compact then a certain related infinitary language satisfies a version of the compactness theorem.
https://en.wikipedia.org/wiki/Weakly_compact_cardinal
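The partition property above is an uncountable analogue of Ramsey's theorem. Its finite counterpart can be verified by brute force: every 2-coloring of the pairs from a 6-element set has a homogeneous 3-element subset, while 5 elements do not suffice (R(3, 3) = 6). An illustrative sketch (function names are my own):

```python
from itertools import combinations, product

def has_homogeneous(n, size, coloring):
    """coloring maps each 2-element frozenset of range(n) to 0 or 1."""
    for s in combinations(range(n), size):
        colors = {coloring[frozenset(p)] for p in combinations(s, 2)}
        if len(colors) == 1:           # all pairs got the same color
            return True
    return False

def every_coloring_has(n, size):
    pairs = [frozenset(p) for p in combinations(range(n), 2)]
    return all(
        has_homogeneous(n, size, dict(zip(pairs, assignment)))
        for assignment in product((0, 1), repeat=len(pairs))
    )

print(every_coloring_has(6, 3))  # True:  every coloring of [6]^2 has one
print(every_coloring_has(5, 3))  # False: e.g. the 5-cycle coloring fails
```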
In mathematics, a weakly holomorphic modular form is similar to a holomorphic modular form, except that it is allowed to have poles at cusps. Examples include modular functions and modular forms.
https://en.wikipedia.org/wiki/Weakly_holomorphic_modular_form
In mathematics, a weakly symmetric space is a notion introduced by the Norwegian mathematician Atle Selberg in the 1950s as a generalisation of symmetric space, due to Élie Cartan. Geometrically the spaces are defined as complete Riemannian manifolds such that any two points can be exchanged by an isometry, the symmetric case being when the isometry is required to have period two. The classification of weakly symmetric spaces relies on that of periodic automorphisms of complex semisimple Lie algebras. They provide examples of Gelfand pairs, although the corresponding theory of spherical functions in harmonic analysis, known for symmetric spaces, has not yet been developed.
https://en.wikipedia.org/wiki/Weakly_symmetric_space
In mathematics, a web permits an intrinsic characterization in terms of Riemannian geometry of the additive separation of variables in the Hamilton–Jacobi equation.
https://en.wikipedia.org/wiki/Web_(differential_geometry)
In mathematics, a weighing matrix of order n and weight w is a matrix W with entries from the set {0, 1, −1} such that WW^T = wI_n, where W^T is the transpose of W and I_n is the identity matrix of order n. The weight w is also called the degree of the matrix. For convenience, a weighing matrix of order n and weight w is often denoted by W(n, w). Weighing matrices are so called because of their use in optimally measuring the individual weights of multiple objects. When the weighing device is a balance scale, the statistical variance of the measurement can be minimized by weighing multiple objects at once, including some objects in the opposite pan of the scale, where they subtract from the measurement.
https://en.wikipedia.org/wiki/Weighing_matrix
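The defining identity WW^T = wI_n is mechanical to verify. A sketch with two small examples (the particular W(4, 3) below is one of several possible constructions):

```python
import numpy as np

# A weighing matrix W(4, 3): entries in {0, 1, -1} with W W^T = 3 I_4.
W = np.array([
    [ 0,  1,  1,  1],
    [ 1,  0,  1, -1],
    [ 1, -1,  0,  1],
    [ 1,  1, -1,  0],
])
print(W @ W.T)   # 3 on the diagonal, 0 elsewhere

# The order-2 Hadamard matrix is the weighing matrix W(2, 2):
H = np.array([[1, 1], [1, -1]])
print(H @ H.T)   # 2 * I_2
```

In general a W(n, n) is exactly a Hadamard matrix of order n.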
In mathematics, a weighted Voronoi diagram in n dimensions is a generalization of a Voronoi diagram. The Voronoi cells in a weighted Voronoi diagram are defined in terms of a distance function. The distance function may specify the usual Euclidean distance, or may be some other, special distance function. In weighted Voronoi diagrams, each site has a weight that influences the distance computation.
https://en.wikipedia.org/wiki/Weighted_Voronoi_diagram
The idea is that larger weights indicate more important sites, and such sites will get bigger Voronoi cells. In a multiplicatively weighted Voronoi diagram, the distance between a point and a site is divided by the (positive) weight of the site.
https://en.wikipedia.org/wiki/Weighted_Voronoi_diagram
In the plane under the ordinary Euclidean distance, the multiplicatively weighted Voronoi diagram is also called circular Dirichlet tessellation and its edges are circular arcs and straight line segments. A Voronoi cell may be non-convex, disconnected and may have holes. This diagram arises, e.g., as a model of crystal growth, where crystals from different points may grow with different speed.
https://en.wikipedia.org/wiki/Weighted_Voronoi_diagram
Since crystals may grow in empty space only and are continuous objects, a natural variation is the crystal Voronoi diagram, in which the cells are defined somewhat differently. In an additively weighted Voronoi diagram, weights are subtracted from the distances. In the plane under the ordinary Euclidean distance this diagram is also known as the hyperbolic Dirichlet tessellation and its edges are arcs of hyperbolas and straight line segments. The power diagram is defined when weights are subtracted from the squared Euclidean distance. It can also be defined using the power distance defined from a set of circles.
https://en.wikipedia.org/wiki/Weighted_Voronoi_diagram
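The three weighting schemes above differ only in the distance function used to assign a point to its nearest site. A minimal sketch (the `nearest` helper and the two-site configuration are illustrative choices of my own):

```python
import math

# site name -> (position, positive weight)
sites = {"a": ((0.0, 0.0), 1.0), "b": ((4.0, 0.0), 3.0)}

def nearest(point, sites, kind):
    def dist(site):
        (sx, sy), w = sites[site]
        d = math.hypot(point[0] - sx, point[1] - sy)
        if kind == "multiplicative":
            return d / w       # divide by the weight
        if kind == "additive":
            return d - w       # subtract the weight
        return d**2 - w        # power diagram: subtract from d^2
    return min(sites, key=dist)

# The midpoint (2, 0) is equidistant in the ordinary sense, but the
# heavier site "b" claims it under both weighted distances.
print(nearest((2.0, 0.0), sites, "multiplicative"))  # b
print(nearest((2.0, 0.0), sites, "additive"))        # b
```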
In mathematics, a well-defined expression or unambiguous expression is an expression whose definition assigns it a unique interpretation or value. Otherwise, the expression is said to be not well defined, ill defined or ambiguous. A function is well defined if it gives the same result when the representation of the input is changed without changing the value of the input. For instance, if f takes real numbers as input, and if f(0.5) does not equal f(1/2), then f is not well defined (and thus not a function).
https://en.wikipedia.org/wiki/Well-defined_expression
The term well defined can also be used to indicate that a logical expression is unambiguous or uncontradictory. A function that is not well defined is not the same as a function that is undefined. For example, if f(x) = 1/x, then the fact that f(0) is undefined does not mean that the function is not well defined; it simply means that 0 is not in the domain of f.
https://en.wikipedia.org/wiki/Well-defined_expression
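Well-definedness of a map on equivalence classes can be probed by sampling representatives. A sketch (helper name and sampling depth are my own): on residues mod 4, "n mod 2" is well defined because 2 divides 4, while "n mod 3" is not, since the answer depends on which representative is chosen.

```python
def well_defined_on_residues(g, modulus, reps_per_class=5):
    """Check whether g gives one value per residue class mod `modulus`,
    by evaluating g on several representatives of each class."""
    for cls in range(modulus):
        reps = [cls + k * modulus for k in range(reps_per_class)]
        if len({g(r) for r in reps}) > 1:
            return False
    return True

print(well_defined_on_residues(lambda n: n % 2, 4))  # True
print(well_defined_on_residues(lambda n: n % 3, 4))  # False: 1 % 3 != 5 % 3
```

Sampling representatives can only refute well-definedness, not prove it; here the positive case also follows from 2 dividing 4.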
In mathematics, a well-order (or well-ordering or well-order relation) on a set S is a total order on S with the property that every non-empty subset of S has a least element in this ordering. The set S together with the well-order relation is then called a well-ordered set. In some academic articles and textbooks these terms are instead written as wellorder, wellordered, and wellordering or well order, well ordered, and well ordering. Every non-empty well-ordered set has a least element.
https://en.wikipedia.org/wiki/Well-ordering_property
Every element s of a well-ordered set, except a possible greatest element, has a unique successor (next element), namely the least element of the subset of all elements greater than s. There may be elements besides the least element which have no predecessor. A well-ordered set S contains for every subset T with an upper bound a least upper bound, namely the least element of the subset of all upper bounds of T in S. If ≤ is a non-strict well ordering, then < is a strict well ordering. A relation is a strict well ordering if and only if it is a well-founded strict total order.
https://en.wikipedia.org/wiki/Well-ordering_property
The distinction between strict and non-strict well orders is often ignored since they are easily interconvertible. Every well-ordered set is order isomorphic to a unique ordinal number, called the order type of the well-ordered set. The well-ordering theorem, which is equivalent to the axiom of choice, states that every set can be well ordered. If a set is well ordered (or even if it merely admits a well-founded relation), the proof technique of transfinite induction can be used to prove that a given statement is true for all elements of the set. The observation that the natural numbers are well ordered by the usual less-than relation is commonly called the well-ordering principle (for natural numbers).
https://en.wikipedia.org/wiki/Well-ordering_property
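The successor construction described above ("the least element of the subset of all elements greater than s") is directly computable for finite sets. An illustrative sketch:

```python
def successor(s, ordered_set):
    """Successor of s in a well-ordered set: the least element
    greater than s, or None if s is the greatest element."""
    greater = [t for t in ordered_set if t > s]
    return min(greater) if greater else None

# In the well-ordered set {1, 2, 4, 8}, the successor of 2 is 4,
# and the greatest element 8 has no successor.
S = {1, 2, 4, 8}
print(successor(2, S))  # 4
print(successor(8, S))  # None
```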
In mathematics, a well-posed problem is one for which the following properties hold: the problem has a solution; the solution is unique; and the solution's behavior changes continuously with the initial conditions. Examples of archetypal well-posed problems include the Dirichlet problem for Laplace's equation, and the heat equation with specified initial conditions. These might be regarded as 'natural' problems in that there are physical processes modelled by these problems. Problems that are not well-posed in the sense of Hadamard are termed ill-posed. Inverse problems are often ill-posed.
https://en.wikipedia.org/wiki/Well-posed_problem_(numerical_analysis)
For example, the inverse heat equation, deducing a previous distribution of temperature from final data, is not well-posed in that the solution is highly sensitive to changes in the final data. Continuum models must often be discretized in order to obtain a numerical solution. While solutions may be continuous with respect to the initial conditions, they may suffer from numerical instability when solved with finite precision, or with errors in the data.
https://en.wikipedia.org/wiki/Well-posed_problem_(numerical_analysis)
Even if a problem is well-posed, it may still be ill-conditioned, meaning that a small error in the initial data can result in much larger errors in the answers. Problems in nonlinear complex systems (so-called chaotic systems) provide well-known examples of instability. An ill-conditioned problem is indicated by a large condition number.
https://en.wikipedia.org/wiki/Well-posed_problem_(numerical_analysis)
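Ill-conditioning is easy to exhibit with a nearly singular linear system: the problem is well-posed (a unique solution depends continuously on the data), yet a tiny perturbation of the right-hand side moves the answer a lot. A sketch with a hypothetical 2x2 example:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])           # exact solution x = (1, 1)
x = np.linalg.solve(A, b)

b_pert = b + np.array([0.0, 0.0001])  # relative change ~5e-5 in b
x_pert = np.linalg.solve(A, b_pert)   # solution jumps to (0, 2)

print(np.linalg.cond(A))              # condition number ~4e4
print(x, x_pert)                      # (1, 1) vs (0, 2): far apart
```

The large condition number predicts exactly this amplification: relative errors in the data can grow by a factor of about cond(A) in the solution.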
If the problem is well-posed, then it stands a good chance of solution on a computer using a stable algorithm. If it is not well-posed, it needs to be re-formulated for numerical treatment. Typically this involves including additional assumptions, such as smoothness of solution.
https://en.wikipedia.org/wiki/Well-posed_problem_(numerical_analysis)
This process is known as regularization. Tikhonov regularization is one of the most commonly used for regularization of linear ill-posed problems. The definition of a well-posed problem comes from the work of Jacques Hadamard on mathematical modeling of physical phenomena.
https://en.wikipedia.org/wiki/Well-posed_problem_(numerical_analysis)
In mathematics, a zero (also sometimes called a root) of a real-, complex-, or generally vector-valued function f is a member x of the domain of f such that f(x) vanishes; that is, the function f attains the value 0 at x, or equivalently, x is a solution to the equation f(x) = 0. A "zero" of a function is thus an input value that produces an output of 0. A root of a polynomial is a zero of the corresponding polynomial function. The fundamental theorem of algebra shows that any non-zero polynomial has a number of roots at most equal to its degree, and that the number of roots and the degree are equal when one considers the complex roots (or more generally, the roots in an algebraically closed extension) counted with their multiplicities.
https://en.wikipedia.org/wiki/Zero_of_a_function
For example, the polynomial f of degree two, defined by f(x) = x² − 5x + 6, has the two roots (or zeros) 2 and 3. If the function maps real numbers to real numbers, then its zeros are the x-coordinates of the points where its graph meets the x-axis. An alternative name for such a point (x, 0) in this context is an x-intercept.
https://en.wikipedia.org/wiki/Zero_of_a_function
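The quadratic example above can be confirmed directly, both by evaluating f at the claimed zeros and by computing the roots numerically:

```python
import numpy as np

# Zeros of f(x) = x^2 - 5x + 6: the inputs where the output is 0.
f = lambda x: x**2 - 5 * x + 6

roots = np.roots([1, -5, 6])   # coefficients of x^2 - 5x + 6
print(sorted(roots))           # [2.0, 3.0]
print(f(2), f(3))              # 0 0
```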
In mathematics, a zero element is one of several generalizations of the number zero to other algebraic structures. These alternate meanings may or may not reduce to the same thing, depending on the context.
https://en.wikipedia.org/wiki/Zero_vector
In mathematics, a zero-dimensional topological space (or nildimensional space) is a topological space that has dimension zero with respect to one of several inequivalent notions of assigning a dimension to a given topological space. A graphical illustration of a nildimensional space is a point.
https://en.wikipedia.org/wiki/Zero-dimensional_space
In mathematics, a zeta function is (usually) a function analogous to the original example, the Riemann zeta function ζ(s) = Σ_{n=1}^∞ 1/n^s. Zeta functions include:
Airy zeta function, related to the zeros of the Airy function
Arakawa–Kaneko zeta function
Arithmetic zeta function
Artin–Mazur zeta function of a dynamical system
Barnes zeta function or double zeta function
Beurling zeta function of Beurling generalized primes
Dedekind zeta function of a number field
Duursma zeta function of error-correcting codes
Epstein zeta function of a quadratic form
Goss zeta function of a function field
Hasse–Weil zeta function of a variety
Height zeta function of a variety
Hurwitz zeta function, a generalization of the Riemann zeta function
Igusa zeta function
Ihara zeta function of a graph
L-function, a "twisted" zeta function
Lefschetz zeta function of a morphism
Lerch zeta function, a generalization of the Riemann zeta function
Local zeta function of a characteristic-p variety
Matsumoto zeta function
Minakshisundaram–Pleijel zeta function of a Laplacian
Motivic zeta function of a motive
Multiple zeta function, or Mordell–Tornheim zeta function of several variables
p-adic zeta function of a p-adic number
Prime zeta function, like the Riemann zeta function, but only summed over primes
Riemann zeta function, the archetypal example
Ruelle zeta function
Selberg zeta function of a Riemann surface
Shimizu L-function
Shintani zeta function
Subgroup zeta function
Witten zeta function of a Lie group
Zeta function of an incidence algebra, a function that maps every interval of a poset to the constant value 1; despite not resembling a holomorphic function, the special case for the poset of integer divisibility is related as a formal Dirichlet series to the Riemann zeta function
Zeta function of an operator or spectral zeta function
https://en.wikipedia.org/wiki/Zeta_function
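The archetypal example can be approximated by its partial sums. A sketch evaluating the series at s = 2, where the exact value ζ(2) = π²/6 is known (the Basel problem):

```python
from math import pi

def zeta_partial(s, terms):
    """Partial sum of the Riemann zeta series: sum of 1/n^s for n <= terms."""
    return sum(1 / n**s for n in range(1, terms + 1))

print(zeta_partial(2, 100_000))  # ≈ 1.64492..., within ~1e-5 of the limit
print(pi**2 / 6)                 # 1.6449340668...
```

The tail of the sum at s = 2 is about 1/N, so the convergence is slow but visible.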
In mathematics, a zonal polynomial is a multivariate symmetric homogeneous polynomial. The zonal polynomials form a basis of the space of symmetric polynomials. They appear as zonal spherical functions of the Gelfand pairs (S_{2n}, H_n) (here, H_n is the hyperoctahedral group) and (GL_n(R), O_n), which means that they describe a canonical basis of the corresponding double coset algebras. They are applied in multivariate statistics. The zonal polynomials are the α = 2 case of the C normalization of the Jack function.
https://en.wikipedia.org/wiki/Zonal_polynomial
In mathematics, a zonal spherical function or often just spherical function is a function on a locally compact group G with compact subgroup K (often a maximal compact subgroup) that arises as the matrix coefficient of a K-invariant vector in an irreducible representation of G. The key examples are the matrix coefficients of the spherical principal series, the irreducible representations appearing in the decomposition of the unitary representation of G on L2(G/K). In this case the commutant of G is generated by the algebra of biinvariant functions on G with respect to K acting by right convolution. It is commutative if in addition G/K is a symmetric space, for example when G is a connected semisimple Lie group with finite centre and K is a maximal compact subgroup. The matrix coefficients of the spherical principal series describe precisely the spectrum of the corresponding C* algebra generated by the biinvariant functions of compact support, often called a Hecke algebra.
https://en.wikipedia.org/wiki/Zonal_spherical_function
The spectrum of the commutative Banach *-algebra of biinvariant L1 functions is larger; when G is a semisimple Lie group with maximal compact subgroup K, additional characters come from matrix coefficients of the complementary series, obtained by analytic continuation of the spherical principal series. Zonal spherical functions have been explicitly determined for real semisimple groups by Harish-Chandra. For special linear groups, they were independently discovered by Israel Gelfand and Mark Naimark.
https://en.wikipedia.org/wiki/Zonal_spherical_function
For complex groups, the theory simplifies significantly, because G is the complexification of K, and the formulas are related to analytic continuations of the Weyl character formula on K. The abstract functional analytic theory of zonal spherical functions was first developed by Roger Godement. Apart from their group theoretic interpretation, the zonal spherical functions for a semisimple Lie group G also provide a set of simultaneous eigenfunctions for the natural action of the centre of the universal enveloping algebra of G on L2(G/K), as differential operators on the symmetric space G/K. For semisimple p-adic Lie groups, the theory of zonal spherical functions and Hecke algebras was first developed by Satake and Ian G. Macdonald. The analogues of the Plancherel theorem and Fourier inversion formula in this setting generalise the eigenfunction expansions of Mehler, Weyl and Fock for singular ordinary differential equations: they were obtained in full generality in the 1960s in terms of Harish-Chandra's c-function. The name "zonal spherical function" comes from the case when G is SO(3,R) acting on a 2-sphere and K is the subgroup fixing a point: in this case the zonal spherical functions can be regarded as certain functions on the sphere invariant under rotation about a fixed axis.
https://en.wikipedia.org/wiki/Zonal_spherical_function
In mathematics, a Γ-object of a pointed category C is a contravariant functor from Γ to C. The basic example is Segal's so-called Γ-space, which may be thought of as a generalization of a simplicial abelian group (or a simplicial abelian monoid). More precisely, one can define a Gamma space as an O-monoid object in an infinity-category. The notion plays a role in the generalization of algebraic K-theory that replaces an abelian group by something higher.
https://en.wikipedia.org/wiki/Gamma-object
In mathematics, a Δ-set S, often called a Δ-complex or a semi-simplicial set, is a combinatorial object that is useful in the construction and triangulation of topological spaces, and also in the computation of related algebraic invariants of such spaces. A Δ-set is somewhat more general than a simplicial complex, yet not quite as general as a simplicial set. As an example, suppose we want to triangulate the 1-dimensional circle S^1. To do so with a simplicial complex, we need at least three vertices, and edges connecting them. But Δ-sets allow for a simpler triangulation: thinking of S^1 as the interval with the two endpoints identified, we can define a triangulation with a single vertex 0, and a single edge looping between 0 and 0.
https://en.wikipedia.org/wiki/Delta_set
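The one-vertex triangulation of the circle just described can be encoded directly. The sketch below (illustrative names, plain Python) records the single vertex and the single looping edge, builds the one boundary map of the associated chain complex, and recovers the Betti numbers of the circle:

```python
# Delta-set for S^1 with one vertex v and one edge e whose two face maps
# both send e to v (the interval with its endpoints identified).
vertices = ["v"]
edges = {"e": ("v", "v")}   # (d1(e), d0(e)): both endpoints are v

def boundary_column(endpoints):
    """Simplicial boundary of an edge: d0(e) - d1(e), as a column vector
    indexed by the vertices."""
    col = {v: 0 for v in vertices}
    tail, head = endpoints
    col[head] += 1
    col[tail] -= 1          # here head == tail, so the column is zero
    return [col[v] for v in vertices]

D = [boundary_column(ep) for ep in edges.values()]   # the 1x1 zero matrix

# Crude rank computation, valid only for this tiny example where nonzero
# columns would automatically be independent.
rank_D = sum(1 for col in D if any(col))
b0 = len(vertices) - rank_D   # 0th Betti number: connected components
b1 = len(edges) - rank_D      # 1st Betti number: independent loops
```

Both Betti numbers come out 1, matching H_0(S^1) = H_1(S^1) = Z, with only one cell in each dimension; a simplicial-complex triangulation would have needed at least three cells in each.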
In mathematics, a γ-space is a topological space that satisfies a certain basic selection principle. An infinite cover of a topological space is an ω-cover if every finite subset of this space is contained in some member of the cover, and the whole space is not a member of the cover. A cover of a topological space is a γ-cover if every point of this space belongs to all but finitely many members of this cover. A γ-space is a space in which every open ω-cover contains a γ-cover.
https://en.wikipedia.org/wiki/Γ-space
In mathematics, a π-system (or pi-system) on a set Ω is a non-empty collection P of subsets of Ω such that if A, B ∈ P then A ∩ B ∈ P. That is, P is a non-empty family of subsets of Ω that is closed under non-empty finite intersections.
https://en.wikipedia.org/wiki/Pi-system
The importance of π-systems arises from the fact that if two probability measures agree on a π-system, then they agree on the 𝜎-algebra generated by that π-system. Moreover, if other properties, such as equality of integrals, hold for the π-system, then they hold for the generated 𝜎-algebra as well. This is the case whenever the collection of subsets for which the property holds is a 𝜆-system.
https://en.wikipedia.org/wiki/Pi-system
π-systems are also useful for checking independence of random variables. This is desirable because in practice, π-systems are often simpler to work with than 𝜎-algebras. For example, it may be awkward to work with 𝜎-algebras generated by infinitely many sets σ(E_1, E_2, …).
https://en.wikipedia.org/wiki/Pi-system
So instead we may examine the union of all 𝜎-algebras generated by finitely many sets, ⋃_n σ(E_1, …, E_n). This forms a π-system that generates the desired 𝜎-algebra. Another example is the collection of all intervals of the real line, along with the empty set, which is a π-system that generates the very important Borel 𝜎-algebra of subsets of the real line.
https://en.wikipedia.org/wiki/Pi-system
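For finite examples, the defining closure property is easy to check mechanically. The helper below is an illustrative sketch (not a standard library function): it tests whether a family of subsets of a finite Ω is closed under pairwise intersection.

```python
def is_pi_system(collection):
    """Check that a non-empty family of frozensets is closed under pairwise
    intersection -- the defining property of a pi-system."""
    sets = set(collection)
    if not sets:
        return False
    return all((A & B) in sets for A in sets for B in sets)

# Initial segments {1,...,k} inside Omega = {1,2,3,4}: intersections nest,
# so this family is a pi-system (a finite analogue of the half-lines
# (-inf, a] on the real line).
nested = [frozenset(range(1, k + 1)) for k in range(1, 5)]

# Two overlapping, non-nested sets whose intersection {2} is missing from
# the family, so the closure property fails.
broken = [frozenset({1, 2}), frozenset({2, 3})]
```

Running `is_pi_system` on the two families returns True and False respectively, mirroring why intervals (which intersect to intervals or the empty set) form the standard generating π-system for the Borel 𝜎-algebra.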
In mathematics, abstract nonsense, general abstract nonsense, generalized abstract nonsense, and general nonsense are nonderogatory terms used by mathematicians to describe long, theoretical parts of a proof they skip over when readers are expected to be familiar with them. These terms are mainly used for abstract methods related to category theory and homological algebra. More generally, "abstract nonsense" may refer to a proof that relies on category-theoretic methods, or even to the study of category theory itself.
https://en.wikipedia.org/wiki/Generalized_abstract_nonsense
In mathematics, abuse of notation occurs when an author uses a mathematical notation in a way that is not entirely formally correct, but which might help simplify the exposition or suggest the correct intuition (while possibly minimizing errors and confusion at the same time). However, since the concept of formal/syntactical correctness depends on both time and context, certain notations in mathematics that are flagged as abuse in one context could be formally correct in one or more other contexts. Time-dependent abuses of notation may occur when novel notations are introduced to a theory some time before the theory is first formalized; these may be formally corrected by solidifying and/or otherwise improving the theory. Abuse of notation should be contrasted with misuse of notation, which does not have the presentational benefits of the former and should be avoided (such as the misuse of constants of integration).
https://en.wikipedia.org/wiki/Abuse_of_notation
A related concept is abuse of language or abuse of terminology, where a term — rather than a notation — is misused. Abuse of language is an almost synonymous expression for abuses that are non-notational by nature.
https://en.wikipedia.org/wiki/Abuse_of_notation
For example, while the word representation properly designates a group homomorphism from a group G to GL(V), where V is a vector space, it is common to call V "a representation of G". Another common abuse of language consists in identifying two mathematical objects that are different, but canonically isomorphic. Other examples include identifying a constant function with its value, identifying a group equipped with a binary operation with its underlying set, or identifying the Euclidean space of dimension three, equipped with a Cartesian coordinate system, with R^3.
https://en.wikipedia.org/wiki/Abuse_of_notation
In mathematics, addition and multiplication of real numbers is associative. By contrast, in computer science, the addition and multiplication of floating point numbers is not associative, as rounding errors are introduced when dissimilar-sized values are joined together: when a small value is added to a much larger one, the low-order bits of the small value are discarded, so the order in which additions are performed changes the result. Even though most computers compute with 24 or 53 bits of mantissa, this is an important source of rounding error, and approaches such as the Kahan summation algorithm are ways to minimise the errors. It can be especially problematic in parallel computing.
https://en.wikipedia.org/wiki/Left_associative_operator
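The non-associativity and the compensation idea can both be seen directly in double precision (53-bit mantissa), where the spacing between representable numbers near 10^16 is 2, so an added 1.0 is rounded away. The Kahan summation below follows the standard algorithm:

```python
# Grouping matters: adding 1.0 to 1e16 rounds straight back to 1e16.
left = (1e16 + 1.0) + 1.0    # each 1.0 is absorbed separately -> 1e16
right = 1e16 + (1.0 + 1.0)   # the 2.0 survives -> 1e16 + 2

def kahan_sum(xs):
    """Kahan (compensated) summation: track the low-order bits lost at
    each step and feed them back into the next addition."""
    total = 0.0
    comp = 0.0               # running compensation for lost low-order bits
    for x in xs:
        y = x - comp
        t = total + y
        comp = (t - total) - y
        total = t
    return total

data = [1e16] + [1.0] * 10
naive = 0.0
for x in data:
    naive += x               # every 1.0 is absorbed: the result stays 1e16
```

Naive left-to-right summation loses all ten small terms, while `kahan_sum` recovers the exact sum 10^16 + 10; this is the same effect that makes reduction order significant in parallel summation.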
In mathematics, additive K-theory means some version of algebraic K-theory in which, according to Spencer Bloch, the general linear group GL has everywhere been replaced by its Lie algebra gl. It is not, therefore, one theory but a way of creating additive or infinitesimal analogues of multiplicative theories.
https://en.wikipedia.org/wiki/Additive_K-theory
In mathematics, admissible representations are a well-behaved class of representations used in the representation theory of reductive Lie groups and locally compact totally disconnected groups. They were introduced by Harish-Chandra.
https://en.wikipedia.org/wiki/Admissible_representation
In mathematics, affine geometry is what remains of Euclidean geometry when ignoring (mathematicians often say "forgetting") the metric notions of distance and angle. As the notion of parallel lines is one of the main properties that is independent of any metric, affine geometry is often considered as the study of parallel lines. Therefore, Playfair's axiom (Given a line L and a point P not on L, there is exactly one line parallel to L that passes through P.) is fundamental in affine geometry.
https://en.wikipedia.org/wiki/Affine_geometry
Comparisons of figures in affine geometry are made with affine transformations, which are mappings that preserve alignment of points and parallelism of lines. Affine geometry can be developed in two ways that are essentially equivalent. In synthetic geometry, an affine space is a set of points to which is associated a set of lines satisfying some axioms (such as Playfair's axiom). Affine geometry can also be developed on the basis of linear algebra.
https://en.wikipedia.org/wiki/Affine_geometry
In this context an affine space is a set of points equipped with a set of transformations (that is, bijective mappings), the translations, which forms a vector space (over a given field, commonly the real numbers), and such that for any given ordered pair of points there is a unique translation sending the first point to the second; the composition of two translations is their sum in the vector space of the translations. In more concrete terms, this amounts to having an operation that associates to any ordered pair of points a vector and another operation that allows translation of a point by a vector to give another point; these operations are required to satisfy a number of axioms (notably that two successive translations have the effect of translation by the sum vector). By choosing any point as "origin", the points are in one-to-one correspondence with the vectors, but there is no preferred choice for the origin; thus an affine space may be viewed as obtained from its associated vector space by "forgetting" the origin (zero vector). The idea of forgetting the metric can be applied in the theory of manifolds. That is developed in the article on the affine connection.
https://en.wikipedia.org/wiki/Affine_geometry
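The concrete description above can be sketched for the affine plane over R² (illustrative helper names): points and translation vectors are both pairs of reals, translations act on points, and two successive translations act as the translation by the sum vector.

```python
# Minimal model of an affine space over R^2: points and translation
# vectors are both 2-tuples, but only vectors are ever added to each other.
def translate(p, v):
    """Action of the translation v on the point p."""
    return (p[0] + v[0], p[1] + v[1])

def vec_between(p, q):
    """The unique translation sending the point p to the point q."""
    return (q[0] - p[0], q[1] - p[1])

p = (1.0, 2.0)
u, v = (3.0, -1.0), (0.5, 4.0)

# Axiom check: translating by u then by v equals translating by u + v.
lhs = translate(translate(p, u), v)
rhs = translate(p, (u[0] + v[0], u[1] + v[1]))

# Uniqueness check: the vector from p to q does send p to q.
q = (7.0, 7.0)
recovered = translate(p, vec_between(p, q))
```

Choosing `p` as origin identifies points with vectors via `vec_between(p, ·)`, which is exactly the "forgetting the origin" picture described in the text.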
In mathematics, algebraic L-theory is the K-theory of quadratic forms; the term was coined by C. T. C. Wall, with L being used as the letter after K. Algebraic L-theory, also known as "Hermitian K-theory", is important in surgery theory.
https://en.wikipedia.org/wiki/L-theory
In mathematics, algebraic cobordism is an analogue of complex cobordism for smooth quasi-projective schemes over a field. It was introduced by Marc Levine and Fabien Morel (2001, 2001b). An oriented cohomology theory on the category of smooth quasi-projective schemes Sm over a field k consists of a contravariant functor A* from Sm to commutative graded rings, together with push-forward maps f* whenever f:Y→X has relative dimension d for some d. These maps have to satisfy various conditions similar to those satisfied by complex cobordism.
https://en.wikipedia.org/wiki/Algebraic_cobordism
In particular they are "oriented", which means roughly that they behave well on vector bundles; this is closely related to the condition that a generalized cohomology theory has a complex orientation. Over a field of characteristic 0, algebraic cobordism is the universal oriented cohomology theory for smooth varieties. In other words there is a unique morphism of oriented cohomology theories from algebraic cobordism to any other oriented cohomology theory. Levine (2002) and Levine & Morel (2007) give surveys of algebraic cobordism. The algebraic cobordism ring of generalized flag varieties has been computed by Hornbostel & Kiritchenko (2011).
https://en.wikipedia.org/wiki/Algebraic_cobordism
In mathematics, algebraic geometry and analytic geometry are two closely related subjects. While algebraic geometry studies algebraic varieties, analytic geometry deals with complex manifolds and the more general analytic spaces defined locally by the vanishing of analytic functions of several complex variables. The deep relation between these subjects has numerous applications in which algebraic techniques are applied to analytic spaces and analytic techniques to algebraic varieties.
https://en.wikipedia.org/wiki/Lefschetz_principle
In mathematics, algebraic spaces form a generalization of the schemes of algebraic geometry, introduced by Michael Artin for use in deformation theory. Intuitively, schemes are given by gluing together affine schemes using the Zariski topology, while algebraic spaces are given by gluing together affine schemes using the finer étale topology. Alternatively one can think of schemes as being locally isomorphic to affine schemes in the Zariski topology, while algebraic spaces are locally isomorphic to affine schemes in the étale topology. The resulting category of algebraic spaces extends the category of schemes and allows one to carry out several natural constructions that are used in the construction of moduli spaces but are not always possible in the smaller category of schemes, such as taking the quotient of a free action by a finite group (cf. the Keel–Mori theorem).
https://en.wikipedia.org/wiki/Algebraic_space
In mathematics, algebraically compact modules, also called pure-injective modules, are modules that have a certain "nice" property which allows the solution of infinite systems of equations in the module by finitary means. The solutions to these systems allow the extension of certain kinds of module homomorphisms. These algebraically compact modules are analogous to injective modules, where one can extend all module homomorphisms. All injective modules are algebraically compact, and the analogy between the two is made quite precise by a category embedding.
https://en.wikipedia.org/wiki/Pure_injective_module
In mathematics, algebras A, B over a field k inside some field extension Ω of k are said to be linearly disjoint over k if the following equivalent conditions are met: (i) the map A ⊗_k B → AB induced by (x, y) ↦ xy is injective; (ii) any k-basis of A remains linearly independent over B; (iii) if u_i, v_j are k-bases for A, B, then the products u_i v_j are linearly independent over k. Note that, since every subalgebra of Ω is a domain, (i) implies that A ⊗_k B is a domain (in particular reduced). Conversely, if A and B are fields, either A or B is an algebraic extension of k, and A ⊗_k B is a domain, then A ⊗_k B is a field and A and B are linearly disjoint.
https://en.wikipedia.org/wiki/Linearly_disjoint
However, there are examples where A ⊗_k B is a domain but A and B are not linearly disjoint: for example, A = B = k(t), the field of rational functions over k. One also has: A, B are linearly disjoint over k if and only if the subfields of Ω generated by A and B, respectively, are linearly disjoint over k (cf. Tensor product of fields). Suppose A, B are linearly disjoint over k. If A′ ⊂ A, B′ ⊂ B are subalgebras, then A′ and B′ are linearly disjoint over k. Conversely, if all finitely generated subalgebras of algebras A, B are linearly disjoint, then A, B are linearly disjoint (since the condition involves only finite sets of elements).
https://en.wikipedia.org/wiki/Linearly_disjoint
In mathematics, almost holomorphic modular forms, also called nearly holomorphic modular forms, are a generalization of modular forms that are polynomials in 1/Im(τ) with coefficients that are holomorphic functions of τ. A quasimodular form is the holomorphic part of an almost holomorphic modular form. An almost holomorphic modular form is determined by its holomorphic part, so the operation of taking the holomorphic part gives an isomorphism between the spaces of almost holomorphic modular forms and quasimodular forms. The archetypal examples of quasimodular forms are the Eisenstein series E2(τ) (the holomorphic part of the almost holomorphic modular form E2(τ) − 3/(π Im(τ))), and derivatives of modular forms. In terms of representation theory, modular forms correspond roughly to highest weight vectors of certain discrete series representations of SL2(R), while almost holomorphic or quasimodular forms correspond roughly to other (not necessarily highest weight) vectors of these representations.
https://en.wikipedia.org/wiki/Quasimodular_form
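As a concrete handle on the archetypal example, E2 has the standard q-expansion E2 = 1 − 24 ∑_{n≥1} σ1(n) qⁿ, where σ1(n) is the sum of the divisors of n (this normalization is standard, though not stated above). A short sketch computing the first coefficients:

```python
def sigma1(n):
    """Sum-of-divisors function sigma_1(n), by trial division."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def e2_coefficients(N):
    """First N+1 q-expansion coefficients of the quasimodular Eisenstein
    series E2 = 1 - 24 * sum_{n>=1} sigma_1(n) q^n."""
    return [1] + [-24 * sigma1(n) for n in range(1, N + 1)]

coeffs = e2_coefficients(4)   # 1, -24, -72, -96, -168
```

The failure of E2 to be modular (only quasimodular) is exactly what the 3/(π Im(τ)) correction term in the text repairs.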
In mathematics, almost modules and almost rings are certain objects interpolating between rings and their fields of fractions. They were introduced by Gerd Faltings (1988) in his study of p-adic Hodge theory.
https://en.wikipedia.org/wiki/Almost_ring_theory
In mathematics, amalgam spaces categorize functions with regard to their local and global behavior. While the concept of function spaces treating local and global behavior separately was already known earlier, Wiener amalgams, as the term is used today, were introduced by Hans Georg Feichtinger in 1980. The concept is named after Norbert Wiener. Let X be a normed space with norm ‖·‖_X.
https://en.wikipedia.org/wiki/Wiener_amalgam_space
Then the Wiener amalgam space with local component X and global component L^p_m, a weighted L^p space with non-negative weight m, is defined by W(X, L^p) = { f : ( ∫_{R^d} ‖f(·) ḡ(· − x)‖_X^p m(x)^p dx )^{1/p} < ∞ }, where g is a continuously differentiable, compactly supported function such that ∑_{x∈Z^d} g(z − x) = 1 for all z ∈ R^d. Again, the space defined is independent of g. As the definition suggests, Wiener amalgams are useful to describe functions showing characteristic local and global behavior.
https://en.wikipedia.org/wiki/Wiener_amalgam_space
In mathematics, an Abel equation of the first kind, named after Niels Henrik Abel, is any ordinary differential equation that is cubic in the unknown function. In other words, it is an equation of the form y′ = f_3(x)y^3 + f_2(x)y^2 + f_1(x)y + f_0(x), where f_3(x) ≠ 0. If f_3(x) = 0 and f_0(x) = 0, or f_2(x) = 0 and f_0(x) = 0, the equation reduces to a Bernoulli equation, while if f_3(x) = 0 the equation reduces to a Riccati equation.
https://en.wikipedia.org/wiki/Abel_equation_of_the_first_kind
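A minimal numerical sketch: taking f_3 = 1 and f_0 = f_1 = f_2 = 0 gives y′ = y³ (by the reductions above, also a Bernoulli case), whose exact solution through y(0) = 1 is y(x) = (1 − 2x)^(−1/2). A generic Runge–Kutta integrator reproduces it:

```python
def rk4(f, x0, y0, x1, steps=1000):
    """Classical 4th-order Runge-Kutta integration of y' = f(x, y)
    from x0 to x1 (a generic sketch, not tuned for stiffness)."""
    h = (x1 - x0) / steps
    x, y = x0, y0
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

# Abel equation of the first kind with f3 = 1, f0 = f1 = f2 = 0:
# y' = y^3, exact solution y = (1 - 2x)^(-1/2) through y(0) = 1.
approx = rk4(lambda x, y: y**3, 0.0, 1.0, 0.3)
exact = (1 - 2 * 0.3) ** -0.5
```

Note the blow-up of the exact solution as x → 1/2: the cubic nonlinearity makes solutions of Abel equations generically exist only on finite intervals.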
In mathematics, an Abelian 2-group is a higher dimensional analogue of an Abelian group, in the sense of higher algebra. They were originally introduced by Alexander Grothendieck while studying abstract structures surrounding Abelian varieties and Picard groups. More concretely, they are given by groupoids A which have a bifunctor + : A × A → A which acts formally like the addition of an Abelian group. Namely, the bifunctor + has a notion of commutativity, associativity, and an identity structure. Although this seems like a rather lofty and abstract structure, there are several very concrete examples of Abelian 2-groups. In fact, some of these provide prototypes for more complex examples of higher algebraic structures, such as Abelian n-groups.
https://en.wikipedia.org/wiki/Picard_stack
In mathematics, an Adams operation, denoted ψk for natural numbers k, is a cohomology operation in topological K-theory, or any allied operation in algebraic K-theory or other types of algebraic construction, defined on a pattern introduced by Frank Adams. The basic idea is to implement some fundamental identities in symmetric function theory, at the level of vector bundles or other representing object in more abstract theories. Adams operations can be defined more generally in any λ-ring.
https://en.wikipedia.org/wiki/Adams_operation
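The "fundamental identities in symmetric function theory" at work here are Newton's identities: on a sum of line bundles, ψ^k acts as the k-th power sum of the Chern roots, and the identities express these power sums through the elementary symmetric functions (i.e. the Chern classes). A sketch of that conversion, checked against explicit roots:

```python
def power_sums_from_elementary(e, kmax):
    """Newton's identities: recover the power sums p_k from the elementary
    symmetric functions e_1, e_2, ... (passed as the list e, with e[0]
    playing the role of e_1). This is the pattern behind defining psi^k
    on a sum of line bundles via power sums of Chern roots."""
    e = [0] + list(e)                       # shift to 1-based indexing
    def ei(i):
        return e[i] if i < len(e) else 0    # e_i = 0 beyond the degree
    p = [0] * (kmax + 1)
    for k in range(1, kmax + 1):
        # p_k = sum_{i=1}^{k-1} (-1)^(i-1) e_i p_{k-i} + (-1)^(k-1) k e_k
        p[k] = (-1) ** (k - 1) * k * ei(k) + sum(
            (-1) ** (i - 1) * ei(i) * p[k - i] for i in range(1, k))
    return p[1:]

# Check against explicit "Chern roots" 1, 2, 3:
roots = [1, 2, 3]
elem = [6, 11, 6]          # e1, e2, e3 of the roots
p_newton = power_sums_from_elementary(elem, 4)
p_direct = [sum(r ** k for r in roots) for k in (1, 2, 3, 4)]
```

Note that p_4 is still determined even though there are only three roots (e_4 = 0), matching the fact that ψ^k is defined for every k on a bundle of any rank.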
In mathematics, an Albert algebra is a 27-dimensional exceptional Jordan algebra. They are named after Abraham Adrian Albert, who pioneered the study of non-associative algebras, usually working over the real numbers. Over the real numbers, there are three such Jordan algebras up to isomorphism. One of them, which was first mentioned by Pascual Jordan, John von Neumann, and Eugene Wigner (1934) and studied by Albert (1934), is the set of 3×3 self-adjoint matrices over the octonions, equipped with the binary operation x ∘ y = (x·y + y·x)/2, where · denotes matrix multiplication.
https://en.wikipedia.org/wiki/Albert_algebra
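The product x ∘ y = (x·y + y·x)/2 can be illustrated with ordinary real symmetric 3×3 matrices in place of octonionic ones (a simplification for demonstration only; the genuine Albert algebra is exceptional precisely because it is not of this special form): the product is commutative and satisfies the Jordan identity (x∘y)∘x² = x∘(y∘x²).

```python
import random

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def jordan(x, y):
    """Jordan product x∘y = (x·y + y·x)/2 on square matrices."""
    xy, yx = mat_mul(x, y), mat_mul(y, x)
    n = len(x)
    return [[(xy[i][j] + yx[i][j]) / 2 for j in range(n)] for i in range(n)]

random.seed(0)

def rand_sym(n=3):
    """Random real symmetric matrix (stand-in for a self-adjoint matrix)."""
    m = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
    return [[(m[i][j] + m[j][i]) / 2 for j in range(n)] for i in range(n)]

x, y = rand_sym(), rand_sym()
x_sq = jordan(x, x)
lhs = jordan(jordan(x, y), x_sq)   # (x∘y)∘x²
rhs = jordan(x, jordan(y, x_sq))   # x∘(y∘x²)
diff = max(abs(lhs[i][j] - rhs[i][j]) for i in range(3) for j in range(3))
```

Any such matrix construction yields a "special" Jordan algebra; the content of Albert's theorem is that the octonionic 3×3 case is the exceptional one that cannot arise this way.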
Another is defined the same way, but using split octonions instead of octonions. The third is constructed from the non-split octonions using a different standard involution. Over any algebraically closed field, there is just one Albert algebra, and its automorphism group G is the simple split group of type F4.
https://en.wikipedia.org/wiki/Albert_algebra
(For example, the complexifications of the three Albert algebras over the real numbers are isomorphic Albert algebras over the complex numbers.) Because of this, for a general field F, the Albert algebras are classified by the Galois cohomology group H1(F,G). The Kantor–Koecher–Tits construction applied to an Albert algebra gives a form of the E7 Lie algebra. The split Albert algebra is used in a construction of a 56-dimensional structurable algebra whose automorphism group has identity component the simply connected algebraic group of type E6. The space of cohomological invariants of Albert algebras over a field F (of characteristic not 2) with coefficients in Z/2Z is a free module over the cohomology ring of F with a basis 1, f3, f5, of degrees 0, 3, 5. The cohomological invariants with 3-torsion coefficients have a basis 1, g3 of degrees 0, 3. The invariants f3 and g3 are the primary components of the Rost invariant.
https://en.wikipedia.org/wiki/Albert_algebra
In mathematics, an Alexander matrix is a presentation matrix for the Alexander invariant of a knot. The determinant of an Alexander matrix is the Alexander polynomial for the knot.
https://en.wikipedia.org/wiki/Alexander_matrix
In mathematics, an Apollonian gasket or Apollonian net is a fractal generated by starting with a triple of circles, each tangent to the other two, and successively filling in more circles, each tangent to another three. It is named after Greek mathematician Apollonius of Perga.
https://en.wikipedia.org/wiki/Apollonian_gasket
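The successive tangent circles are computed in practice via Descartes' circle theorem (the standard tool, though not stated above): if k1, k2, k3 are the curvatures of three mutually tangent circles, the fourth tangent circle has curvature k4 = k1 + k2 + k3 ± 2√(k1k2 + k2k3 + k3k1). A sketch:

```python
import math

def descartes_fourth(k1, k2, k3):
    """Descartes' circle theorem: the two possible curvatures of a fourth
    circle tangent to three mutually tangent circles with curvatures
    k1, k2, k3. A negative curvature denotes an enclosing circle."""
    s = k1 + k2 + k3
    root = 2 * math.sqrt(k1 * k2 + k2 * k3 + k3 * k1)
    return s + root, s - root

# The classic (-1, 2, 2, 3) gasket: a unit outer circle (curvature -1,
# negative because it encloses the others) and two circles of curvature 2
# force both choices of the fourth curvature to be 3.
k4_plus, k4_minus = descartes_fourth(-1, 2, 2)

# Filling in further: circles of curvature 2, 2, 3 admit tangent circles
# of curvature 15 (a new small circle) and -1 (the outer circle again).
next_pair = descartes_fourth(2, 2, 3)
```

Iterating the "minus" solution to "plus" replacement generates the whole gasket; when the initial curvatures are integers with integral root term, every curvature in the gasket is an integer.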
In mathematics, an Appell sequence, named after Paul Émile Appell, is any polynomial sequence {p_n(x)}, n = 0, 1, 2, …, satisfying the identity (d/dx) p_n(x) = n p_{n−1}(x), and in which p_0(x) is a non-zero constant. Among the most notable Appell sequences besides the trivial example {x^n} are the Hermite polynomials, the Bernoulli polynomials, and the Euler polynomials. Every Appell sequence is a Sheffer sequence, but most Sheffer sequences are not Appell sequences. Appell sequences have a probabilistic interpretation as systems of moments.
https://en.wikipedia.org/wiki/Appell_polynomials
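For a concrete instance, the Bernoulli polynomials can be generated directly from the Appell condition p_n′ = n·p_{n−1}, fixing each constant of integration by the standard normalization ∫₀¹ B_n = 0 (that normalization singles out the Bernoulli sequence; it is not part of the general Appell definition). A sketch in exact rational arithmetic:

```python
from fractions import Fraction

def poly_integral_coeffs(p):
    """Antiderivative with zero constant term; p[i] is the x^i coefficient."""
    return [Fraction(0)] + [c / Fraction(i + 1) for i, c in enumerate(p)]

def definite_integral_01(p):
    """Integral of the polynomial over [0, 1]."""
    return sum(c / Fraction(i + 1) for i, c in enumerate(p))

def bernoulli_polys(N):
    """B_0, ..., B_N built from the Appell property B_n' = n B_{n-1},
    normalized by integral_0^1 B_n = 0 for n >= 1."""
    polys = [[Fraction(1)]]
    for n in range(1, N + 1):
        p = poly_integral_coeffs([Fraction(n) * c for c in polys[-1]])
        p[0] = -definite_integral_01(p)   # fix the constant of integration
        polys.append(p)
    return polys

def derivative(p):
    return [Fraction(i) * c for i, c in enumerate(p)][1:] or [Fraction(0)]

polys = bernoulli_polys(3)   # B_1 = x - 1/2, B_2 = x^2 - x + 1/6, ...
```

The derivative check `derivative(polys[n]) == n * polys[n-1]` is exactly the defining Appell identity; the same generator with a different normalization rule would produce a different Appell sequence.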
In mathematics, an Arf ring was defined by Lipman (1971) to be a 1-dimensional commutative semi-local Macaulay ring satisfying some extra conditions studied by Cahit Arf (1948).
https://en.wikipedia.org/wiki/Arf_ring
In mathematics, an Artin L-function is a type of Dirichlet series associated to a linear representation ρ of a Galois group G. These functions were introduced in 1923 by Emil Artin, in connection with his research into class field theory. Their fundamental properties, in particular the Artin conjecture described below, have turned out to be resistant to easy proof. One of the aims of proposed non-abelian class field theory is to incorporate the complex-analytic nature of Artin L-functions into a larger framework, such as is provided by automorphic forms and the Langlands program. So far, only a small part of such a theory has been put on a firm basis.
https://en.wikipedia.org/wiki/Artin_L-function
In mathematics, an Artin–Schreier curve is a plane curve defined over an algebraically closed field of characteristic p by an equation y^p − y = f(x) for some rational function f over that field. One of the most important examples of such curves is hyperelliptic curves in characteristic 2, whose Jacobian varieties have been suggested for use in cryptography. It is common to write these curves in the form y^2 + h(x)y = f(x) for some polynomials f and h.
https://en.wikipedia.org/wiki/Artin–Schreier_curve
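Over the prime field F_p itself, the left-hand side is degenerate: Fermat's little theorem gives y^p − y = 0 for every y in F_p, so each x contributes p affine points when f(x) = 0 and none otherwise. A brute-force sketch confirming this for a hypothetical choice f(x) = x³ + 1 and p = 5:

```python
def affine_points(p, f):
    """Brute-force count of affine F_p-points on the Artin-Schreier curve
    y^p - y = f(x), working in the prime field F_p (f: polynomial with
    integer values)."""
    return sum(1 for x in range(p) for y in range(p)
               if (pow(y, p, p) - y) % p == f(x) % p)

# By Fermat's little theorem, y^p - y vanishes identically on F_p, so the
# point count is p times the number of roots of f in F_p.
p = 5
count = affine_points(p, lambda x: x**3 + 1)
roots = sum(1 for x in range(p) if (x**3 + 1) % p == 0)
```

The interesting point counts (and the cryptographic applications mentioned above) come from the extension fields F_{p^n}, where y ↦ y^p − y is no longer identically zero and the count is governed by the trace map.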
In mathematics, an Azumaya algebra is a generalization of central simple algebras to R-algebras where R need not be a field. Such a notion was introduced in a 1951 paper of Goro Azumaya, for the case where R is a commutative local ring. The notion was developed further in ring theory, and in algebraic geometry, where Alexander Grothendieck made it the basis for his geometric theory of the Brauer group in Bourbaki seminars from 1964–65. There are now several points of access to the basic definitions.
https://en.wikipedia.org/wiki/Azumaya_algebra
In mathematics, an E_n-algebra in a symmetric monoidal infinity category C consists of the following data: an object A(U) for any open subset U of R^n homeomorphic to an n-disk, and a multiplication map μ : A(U_1) ⊗ ⋯ ⊗ A(U_m) → A(V) for any disjoint open disks U_j contained in some open disk V, subject to the requirements that the multiplication maps are compatible with composition, and that μ is an equivalence if m = 1. An equivalent definition is that A is an algebra in C over the little n-disks operad.
https://en.wikipedia.org/wiki/E_n-ring
In mathematics, an EP matrix (or range-Hermitian matrix or RPN matrix) is a square matrix A whose range is equal to the range of its conjugate transpose A*. Another equivalent characterization of EP matrices is that the range of A is orthogonal to the nullspace of A. Thus, EP matrices are also known as RPN (Range Perpendicular to Nullspace) matrices. EP matrices were introduced in 1950 by Hans Schwerdtfeger, and since then, many equivalent characterizations of EP matrices have been investigated in the literature. The abbreviation EP originally stood for Equal Principal, but it is widely believed to stand for Equal Projectors instead, since an equivalent characterization of EP matrices is the equality of the projectors AA+ and A+A. The range of any matrix A is perpendicular to the null-space of A*, but is not necessarily perpendicular to the null-space of A. When A is an EP matrix, the range of A is precisely perpendicular to the null-space of A.
https://en.wikipedia.org/wiki/EP_matrix
In mathematics, an Eells–Kuiper manifold is a compactification of R^n by a sphere of dimension n/2, where n = 2, 4, 8, or 16. It is named after James Eells and Nicolaas Kuiper. If n = 2, the Eells–Kuiper manifold is diffeomorphic to the real projective plane RP^2. For n ≥ 4 it is simply-connected and has the integral cohomology structure of the complex projective plane CP^2 (n = 4), of the quaternionic projective plane HP^2 (n = 8) or of the Cayley projective plane (n = 16).
https://en.wikipedia.org/wiki/Eells–Kuiper_manifold
In mathematics, an Eichler order, named after Martin Eichler, is an order of a quaternion algebra that is the intersection of two maximal orders.
https://en.wikipedia.org/wiki/Eichler_order