There is a correspondence between certain systems of partial differential equations (linear and having very special properties for their solutions) and possible monodromies of their solutions. Such a result was proved for algebraic connections with regular singularities by Pierre Deligne (1970, generalizing existing work in the case of Riemann surfaces) and more generally for regular holonomic D-modules by Masaki Kashiwara (1980, 1984) and Zoghman Mebkhout (1980, 1984) independently. In the setting of nonabelian Hodge theory, the Riemann-Hilbert correspondence provides a complex analytic isomorphism between two of the three natural algebraic structures on the moduli spaces, and so is naturally viewed as a nonabelian analogue of the comparison isomorphism between De Rham cohomology and singular/Betti cohomology.
https://en.wikipedia.org/wiki/Riemann-Hilbert_correspondence
In mathematics, the term adjoint applies in several situations. Several of these share a similar formalism: if A is adjoint to B, then there is typically some formula of the type (Ax, y) = (x, By). Specifically, adjoint or adjunction may mean: Adjoint of a linear map, also called its transpose; Hermitian adjoint (adjoint of a linear operator) in functional analysis; Adjoint endomorphism of a Lie algebra; Adjoint representation of a Lie group; Adjoint functors in category theory; Adjunction (field theory); Adjunction formula (algebraic geometry); Adjunction space in topology; Conjugate transpose of a matrix in linear algebra; Adjugate matrix, related to its inverse; Adjoint equation; The upper and lower adjoints of a Galois connection in order theory; The adjoint of a differential operator with general polynomial coefficients; Kleisli adjunction; Monoidal adjunction; Quillen adjunction; Axiom of adjunction in set theory; Adjunction (rule of inference).
https://en.wikipedia.org/wiki/Adjoint
In mathematics, the term chaos game originally referred to a method of creating a fractal, using a polygon and an initial point selected at random inside it. The fractal is created by iteratively creating a sequence of points, starting with the initial random point, in which each point in the sequence is a given fraction of the distance between the previous point and one of the vertices of the polygon; the vertex is chosen at random in each iteration. Repeating this iterative process a large number of times, selecting the vertex at random on each iteration, and throwing out the first few points in the sequence, will often (but not always) produce a fractal shape. Using a regular triangle and the factor 1/2 will result in the Sierpinski triangle, while creating the proper arrangement with four points and a factor 1/2 will create a display of a "Sierpinski Tetrahedron", the three-dimensional analogue of the Sierpinski triangle.
https://en.wikipedia.org/wiki/Chaos_game
As the number of points is increased to a number N, the arrangement forms a corresponding (N-1)-dimensional Sierpinski Simplex. The term has been generalized to refer to a method of generating the attractor, or the fixed point, of any iterated function system (IFS). Starting with any point x0, successive iterations are formed as xk+1 = fr(xk), where fr is a member of the given IFS randomly selected for each iteration.
https://en.wikipedia.org/wiki/Chaos_game
The iterations converge to the fixed point of the IFS. Whenever x0 belongs to the attractor of the IFS, all iterations xk stay inside the attractor and, with probability 1, form a dense set in the latter. The "chaos game" method plots points in random order all over the attractor.
https://en.wikipedia.org/wiki/Chaos_game
This is in contrast to other methods of drawing fractals, which test each pixel on the screen to see whether it belongs to the fractal. The general shape of a fractal can be plotted quickly with the "chaos game" method, but it may be difficult to plot some areas of the fractal in detail. With the aid of the "chaos game" a new fractal can be made, and while making it some parameters can be obtained. These parameters are useful for applications of fractal theory such as classification and identification. The new fractal is self-similar to the original in some important features such as fractal dimension.
https://en.wikipedia.org/wiki/Chaos_game
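The iterative procedure described above fits in a few lines. The following is a minimal sketch (Python is chosen only for illustration; the function name and parameter values are my own), using the triangle-and-1/2 setup that produces the Sierpinski triangle:

```python
import random

def chaos_game(vertices, fraction=0.5, n_points=10000, n_discard=20):
    """Generate points of the chaos game, discarding the first few iterates."""
    x, y = random.random(), random.random()   # random initial point
    points = []
    for i in range(n_points + n_discard):
        vx, vy = random.choice(vertices)      # vertex chosen at random each step
        # move the given fraction of the way from the current point to the vertex
        x, y = x + (vx - x) * fraction, y + (vy - y) * fraction
        if i >= n_discard:                    # throw out the first few points
            points.append((x, y))
    return points

# Equilateral triangle with factor 1/2: the points fill in the Sierpinski triangle.
triangle = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]
pts = chaos_game(triangle)
```

Plotting `pts` as a scatter plot reveals the fractal; the discard step matters because the initial random point need not lie on the attractor.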
In mathematics, the term combinatorial proof is often used to mean either of two types of mathematical proof: A proof by double counting. A combinatorial identity is proven by counting the number of elements of some carefully chosen set in two different ways to obtain the different expressions in the identity. Since those expressions count the same objects, they must be equal to each other and thus the identity is established.
https://en.wikipedia.org/wiki/Combinatorial_proof
A bijective proof. Two sets are shown to have the same number of members by exhibiting a bijection, i.e. a one-to-one correspondence, between them. The term "combinatorial proof" may also be used more broadly to refer to any kind of elementary proof in combinatorics. However, as Glass (2003) writes in his review of Benjamin & Quinn (2003) (a book about combinatorial proofs), these two simple techniques are enough to prove many theorems in combinatorics and number theory.
https://en.wikipedia.org/wiki/Combinatorial_proof
In mathematics, the term cosocle (socle meaning pedestal in French) has several related meanings. In group theory, a cosocle of a group G, denoted by Cosoc(G), is the intersection of all maximal normal subgroups of G. If G is a quasisimple group, then Cosoc(G) = Z(G). In the context of Lie algebras, a cosocle of a symmetric Lie algebra is the eigenspace of its structural automorphism that corresponds to the eigenvalue +1. (A symmetric Lie algebra decomposes into the direct sum of its socle and cosocle.) In the context of module theory, the cosocle of a module over a ring R is defined to be the maximal semisimple quotient of the module.
https://en.wikipedia.org/wiki/Cosocle
In mathematics, the term essentially unique is used to describe a weaker form of uniqueness, where an object satisfying a property is "unique" only in the sense that all objects satisfying the property are equivalent to each other. The notion of essential uniqueness presupposes some form of "sameness", which is often formalized using an equivalence relation. A related notion is a universal property, where an object is not only essentially unique, but unique up to a unique isomorphism (meaning that it has trivial automorphism group). In general there can be more than one isomorphism between examples of an essentially unique object.
https://en.wikipedia.org/wiki/Essentially_unique
In mathematics, the term fiber (US English) or fibre (British English) can have two meanings, depending on the context: In naive set theory, the fiber of the element y in the set Y under a map f : X → Y is the inverse image of the singleton {y} under f. In algebraic geometry, the notion of a fiber of a morphism of schemes must be defined more carefully because, in general, not every point is closed.
https://en.wikipedia.org/wiki/Fiber_(mathematics)
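For finite sets, the set-theoretic definition translates directly into code. A minimal sketch (the function and example are my own, for illustration only):

```python
# The fiber of y under f is the inverse image of the singleton {y}.
def fiber(f, domain, y):
    return {x for x in domain if f(x) == y}

# Example: under f(x) = x mod 3 on {0, ..., 8}, the fiber of 1 is {1, 4, 7}.
print(fiber(lambda x: x % 3, range(9), 1))
```

Note that a fiber may be empty when y is not in the image of f.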
In mathematics, the term linear function refers to two distinct but related notions: In calculus and related areas, a linear function is a function whose graph is a straight line, that is, a polynomial function of degree zero or one. For distinguishing such a linear function from the other concept, the term affine function is often used. In linear algebra, mathematical analysis, and functional analysis, a linear function is a linear map.
https://en.wikipedia.org/wiki/Linear_growth
In mathematics, the term linear is used in two distinct senses for two different properties: linearity of a function (or mapping); linearity of a polynomial. An example of a linear function is the function defined by f(x) = (ax, bx) that maps the real line to a line in the Euclidean plane R2 that passes through the origin. An example of a linear polynomial in the variables X, Y and Z is aX + bY + cZ + d. Linearity of a mapping is closely related to proportionality.
https://en.wikipedia.org/wiki/Linearity
Examples in physics include the linear relationship of voltage and current in an electrical conductor (Ohm's law), and the relationship of mass and weight. By contrast, more complicated relationships, such as between velocity and kinetic energy, are nonlinear. Generalized for functions in more than one dimension, linearity means the property of a function of being compatible with addition and scaling, also known as the superposition principle.
https://en.wikipedia.org/wiki/Linearity
Linearity of a polynomial means that its degree is less than two. The use of the term for polynomials stems from the fact that the graph of a polynomial in one variable is a straight line. In the term "linear equation", the word refers to the linearity of the polynomials involved.
https://en.wikipedia.org/wiki/Linearity
Because a function such as f(x) = ax + b is defined by a linear polynomial in its argument, it is sometimes also referred to as being a "linear function", and the relationship between the argument and the function value may be referred to as a "linear relationship". This is potentially confusing, but usually the intended meaning will be clear from the context. The word linear comes from Latin linearis, "pertaining to or resembling a line".
https://en.wikipedia.org/wiki/Linearity
In mathematics, the term local analysis has at least two meanings, both derived from the idea of looking at a problem relative to each prime number p first, and then later trying to integrate the information gained at each prime into a 'global' picture. These are forms of the localization approach.
https://en.wikipedia.org/wiki/Local_analysis
In mathematics, the term maximal subgroup is used to mean slightly different things in different areas of algebra. In group theory, a maximal subgroup H of a group G is a proper subgroup, such that no proper subgroup K contains H strictly. In other words, H is a maximal element of the partially ordered set of subgroups of G that are not equal to G. Maximal subgroups are of interest because of their direct connection with primitive permutation representations of G. They are also much studied for the purposes of finite group theory: see for example Frattini subgroup, the intersection of the maximal subgroups. In semigroup theory, a maximal subgroup of a semigroup S is a subgroup (that is, a subsemigroup which forms a group under the semigroup operation) of S which is not properly contained in another subgroup of S. Notice that, here, there is no requirement that a maximal subgroup be proper, so if S is in fact a group then its unique maximal subgroup (as a semigroup) is S itself. Considering subgroups, and in particular maximal subgroups, of semigroups often allows one to apply group-theoretic techniques in semigroup theory. There is a one-to-one correspondence between idempotent elements of a semigroup and maximal subgroups of the semigroup: each idempotent element is the identity element of a unique maximal subgroup.
https://en.wikipedia.org/wiki/Maximal_subgroup
In mathematics, the term modulo ("with respect to a modulus of", the Latin ablative of modulus, which itself means "a small measure") is often used to assert that two distinct mathematical objects can be regarded as equivalent, if their difference is accounted for by an additional factor. It was initially introduced into mathematics in the context of modular arithmetic by Carl Friedrich Gauss in 1801. Since then, the term has gained many meanings, some exact and some imprecise (such as equating "modulo" with "except for"). For the most part, the term often occurs in statements of the form: A is the same as B modulo C, which means A and B are the same, except for differences accounted for or explained by C.
https://en.wikipedia.org/wiki/Modulo_(mathematics)
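In the original modular-arithmetic sense, "A is the same as B modulo C" means their difference is a multiple of C. A tiny numeric illustration (values chosen arbitrarily):

```python
# 17 is "the same as 5 modulo 12": the difference 12 is a multiple of the modulus.
a, b, n = 17, 5, 12
print((a - b) % n == 0)  # True: a ≡ b (mod n)
print(a % n, b % n)      # both reduce to 5
```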
In mathematics, the term permutation representation of a (typically finite) group G can refer to either of two closely related notions: a representation of G as a group of permutations, or as a group of permutation matrices. The term also refers to the combination of the two.
https://en.wikipedia.org/wiki/Permutation_representation
In mathematics, the term simple is used to describe an algebraic structure which in some sense cannot be divided by a smaller structure of the same type. Put another way, an algebraic structure is simple if the kernel of every homomorphism is either the whole structure or a single element. Some examples are: A group is called a simple group if it does not contain a nontrivial proper normal subgroup. A ring is called a simple ring if it does not contain a nontrivial two-sided ideal.
https://en.wikipedia.org/wiki/Simple_(abstract_algebra)
A module is called a simple module if it does not contain a nontrivial submodule. An algebra is called a simple algebra if it does not contain a nontrivial two-sided ideal. The general pattern is that the structure admits no non-trivial congruence relations.
https://en.wikipedia.org/wiki/Simple_(abstract_algebra)
The term is used differently in semigroup theory. A semigroup is said to be simple if it has no nontrivial ideals, or equivalently, if Green's relation J is the universal relation. Not every congruence on a semigroup is associated with an ideal, so a simple semigroup may have nontrivial congruences. A semigroup with no nontrivial congruences is called congruence simple.
https://en.wikipedia.org/wiki/Simple_(abstract_algebra)
In mathematics, the term socle has several related meanings.
https://en.wikipedia.org/wiki/Socle_of_a_module
In mathematics, the term standard L-function refers to a particular type of automorphic L-function described by Robert P. Langlands. Here, standard refers to the finite-dimensional representation r being the standard representation of the L-group as a matrix group.
https://en.wikipedia.org/wiki/Standard_L-function
In mathematics, the term undefined is often used to refer to an expression which is not assigned an interpretation or a value (such as an indeterminate form, which has the possibility of assuming different values). The term can take on several different meanings depending on the context. For example: In various branches of mathematics, certain concepts are introduced as primitive notions (e.g., the terms "point", "line" and "plane" in geometry).
https://en.wikipedia.org/wiki/Undefined_(mathematics)
As these terms are not defined in terms of other concepts, they may be referred to as "undefined terms". A function is said to be "undefined" at points outside of its domain; for example, the real-valued function f(x) = √x is undefined for negative x (i.e., it assigns no value to negative arguments). In algebra, some arithmetic operations may not assign a meaning to certain values of their operands (e.g., division by zero), in which case the expressions involving such operands are termed "undefined".
https://en.wikipedia.org/wiki/Undefined_(mathematics)
In mathematics, the term variational analysis usually denotes the combination and extension of methods from convex optimization and the classical calculus of variations to a more general theory. This includes the more general problems of optimization theory, including topics in set-valued analysis, e.g. generalized derivatives. In the Mathematics Subject Classification scheme (MSC2010), the field of "Set-valued and variational analysis" is coded by "49J53".
https://en.wikipedia.org/wiki/Variational_analysis
In mathematics, the term weak inverse is used with several meanings.
https://en.wikipedia.org/wiki/Weak_inverse
In mathematics, the terms continuity, continuous, and continuum are used in a variety of related ways.
https://en.wikipedia.org/wiki/List_of_continuity-related_mathematical_topics
In mathematics, the theorem of Bertini is an existence and genericity theorem for smooth connected hyperplane sections for smooth projective varieties over algebraically closed fields, introduced by Eugenio Bertini. This is the simplest and broadest of the "Bertini theorems" applying to a linear system of divisors; simplest because there is no restriction on the characteristic of the underlying field, while the extensions require characteristic 0.
https://en.wikipedia.org/wiki/Bertini's_theorem
In mathematics, the theorem of the cube is a condition for a line bundle over a product of three complete varieties to be trivial. It was a principle discovered, in the context of linear equivalence, by the Italian school of algebraic geometry. The final version of the theorem of the cube was first published by Lang (1959), who credited it to André Weil. A discussion of the history has been given by Kleiman (2005). A treatment by means of sheaf cohomology, and description in terms of the Picard functor, was given by Mumford (2008).
https://en.wikipedia.org/wiki/Theorem_of_the_cube
In mathematics, the theory of Latin squares is an active research area with many open problems. As in other areas of mathematics, such problems are often made public at professional conferences and meetings. Problems posed here appeared in, for instance, the Loops (Prague) conferences and the Milehigh (Denver) conferences.
https://en.wikipedia.org/wiki/Problems_in_Latin_squares
In mathematics, the theory of fiber bundles with a structure group G (a topological group) allows an operation of creating an associated bundle, in which the typical fiber of a bundle changes from F1 to F2, which are both topological spaces with a group action of G. For a fiber bundle F with structure group G, the transition functions of the fiber (i.e., the cocycle) in an overlap of two coordinate systems Uα and Uβ are given as a G-valued function gαβ on Uα∩Uβ. One may then construct a fiber bundle F′ as a new fiber bundle having the same transition functions, but possibly a different fiber.
https://en.wikipedia.org/wiki/Associated_bundle
In mathematics, the theory of finite sphere packing concerns the question of how a finite number of equally-sized spheres can be most efficiently packed. The question of packing finitely many spheres has only been investigated in detail in recent decades, with much of the groundwork being laid by László Fejes Tóth. The similar problem for infinitely many spheres has a longer history of investigation, from which the Kepler conjecture is most well-known.
https://en.wikipedia.org/wiki/Finite_sphere_packing
Atoms in crystal structures can be simplistically viewed as closely-packed spheres and treated as infinite sphere packings thanks to their large number. Sphere packing problems are distinguished between packings in given containers and free packings. This article primarily discusses free packings.
https://en.wikipedia.org/wiki/Finite_sphere_packing
In mathematics, the theory of optimal stopping or early stopping is concerned with the problem of choosing a time to take a particular action, in order to maximise an expected reward or minimise an expected cost. Optimal stopping problems can be found in areas of statistics, economics, and mathematical finance (related to the pricing of American options). A key example of an optimal stopping problem is the secretary problem. Optimal stopping problems can often be written in the form of a Bellman equation, and are therefore often solved using dynamic programming.
https://en.wikipedia.org/wiki/Optimal_Stopping
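For the secretary problem mentioned above, the classical optimal rule is to reject roughly the first n/e candidates and then accept the first candidate better than all of those. A rough Monte Carlo sketch of that rule (function names and parameters are my own, for illustration only):

```python
import math
import random

def secretary_trial(n, rng):
    """One run of the threshold rule; returns True if the best candidate is chosen."""
    ranks = list(range(n))            # 0 denotes the best candidate
    rng.shuffle(ranks)
    cutoff = round(n / math.e)        # observe, but reject, roughly the first n/e
    best_seen = min(ranks[:cutoff])
    for r in ranks[cutoff:]:
        if r < best_seen:             # first candidate better than all rejected ones
            return r == 0
    return ranks[-1] == 0             # otherwise forced to take the last candidate

rng = random.Random(0)
trials = 2000
wins = sum(secretary_trial(100, rng) for _ in range(trials))
print(wins / trials)                  # close to the theoretical 1/e ≈ 0.368
```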
In mathematics, the theta correspondence or Howe correspondence is a mathematical relation between representations of two groups of a reductive dual pair. The local theta correspondence relates irreducible admissible representations over a local field, while the global theta correspondence relates irreducible automorphic representations over a global field. The theta correspondence was introduced by Roger Howe in Howe (1979). Its name arose due to its origin in André Weil's representation theoretical formulation of the theory of theta series in Weil (1964). The Shimura correspondence as constructed by Jean-Loup Waldspurger in Waldspurger (1980) and Waldspurger (1991) may be viewed as an instance of the theta correspondence.
https://en.wikipedia.org/wiki/Theta_correspondence
In mathematics, the theta divisor Θ is the divisor in the sense of algebraic geometry defined on an abelian variety A over the complex numbers (and principally polarized) by the zero locus of the associated Riemann theta-function. It is therefore an algebraic subvariety of A of dimension dim A − 1.
https://en.wikipedia.org/wiki/Riemann–Kempf_singularity_theorem
In mathematics, the theta function of a lattice is a function whose coefficients give the number of vectors of a given norm.
https://en.wikipedia.org/wiki/Theta_function_of_a_lattice
In mathematics, the theta operator is a differential operator defined by θ = z d/dz. This is sometimes also called the homogeneity operator, because its eigenfunctions are the monomials in z: θ(z^k) = k z^k for k = 0, 1, 2, … In n variables the homogeneity operator is given by θ = ∑_{k=1}^{n} x_k ∂/∂x_k.
https://en.wikipedia.org/wiki/Theta_operator
As in one variable, the eigenspaces of θ are the spaces of homogeneous functions (Euler's homogeneous function theorem).
https://en.wikipedia.org/wiki/Theta_operator
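On polynomials the action of θ = z d/dz is completely determined by θ(z^k) = k z^k. A minimal sketch (the coefficient-list representation is my own choice):

```python
# A polynomial is stored as a coefficient list [c0, c1, c2, ...] where c_k
# multiplies z^k. Since theta(z^k) = k*z^k, the theta operator simply
# scales each coefficient by its index.
def theta(coeffs):
    return [k * c for k, c in enumerate(coeffs)]

# p(z) = 3 + 2z + 5z^3, so theta(p) = 2z + 15z^3
print(theta([3, 2, 0, 5]))  # [0, 2, 0, 15]
```

Applying `theta` to a monomial's coefficient list multiplies it by the degree, matching the eigenvalue statement above.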
In mathematics, the theta representation is a particular representation of the Heisenberg group of quantum mechanics. It gains its name from the fact that the Jacobi theta function is invariant under the action of a discrete subgroup of the Heisenberg group. The representation was popularized by David Mumford.
https://en.wikipedia.org/wiki/Theta_representation
In mathematics, the three classical Pythagorean means are the arithmetic mean (AM), the geometric mean (GM), and the harmonic mean (HM). These means were studied with proportions by Pythagoreans and later generations of Greek mathematicians because of their importance in geometry and music.
https://en.wikipedia.org/wiki/Pythagorean_mean
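The three means are easy to compute directly from their definitions. A minimal sketch (function names are my own):

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    return math.prod(xs) ** (1 / len(xs))

def harmonic_mean(xs):
    return len(xs) / sum(1 / x for x in xs)

data = [4.0, 9.0]
print(arithmetic_mean(data), geometric_mean(data), harmonic_mean(data))
# For positive inputs the classical inequality HM <= GM <= AM holds:
assert harmonic_mean(data) <= geometric_mean(data) <= arithmetic_mean(data)
```

For [4, 9] these give 6.5, 6.0, and 72/13 ≈ 5.54 respectively.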
In mathematics, the three spheres inequality bounds the L² norm of a harmonic function on a given sphere in terms of the L² norm of this function on two spheres, one with bigger radius and one with smaller radius.
https://en.wikipedia.org/wiki/Three_spheres_inequality
In mathematics, the three-gap theorem, three-distance theorem, or Steinhaus conjecture states that if one places n points on a circle, at angles of θ, 2θ, 3θ, ... from the starting point, then there will be at most three distinct distances between pairs of points in adjacent positions around the circle. When there are three distances, the largest of the three always equals the sum of the other two. Unless θ is a rational multiple of π, there will also be at least two distinct distances. This result was conjectured by Hugo Steinhaus, and proved in the 1950s by Vera T. Sós, János Surányi, and Stanisław Świerczkowski; more proofs were added by others later. Applications of the three-gap theorem include the study of plant growth and musical tuning systems, and the theory of light reflection within a mirrored square.
https://en.wikipedia.org/wiki/Three-gap_theorem
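The statement is easy to check numerically. A minimal sketch (angles are measured as fractions of a full turn; the function name and rounding tolerance are my own choices, the latter to absorb floating-point noise):

```python
def gap_count(theta, n):
    """Number of distinct gaps between adjacent points at theta, 2*theta, ..., n*theta."""
    points = sorted((k * theta) % 1.0 for k in range(1, n + 1))
    # consecutive gaps, including the wrap-around from the last point to the first
    gaps = [(b - a) % 1.0 for a, b in zip(points, points[1:] + points[:1])]
    return len({round(g, 6) for g in gaps})

print(gap_count(0.618033988, 50))  # the theorem guarantees at most 3
```

Trying various irrational values of theta never yields more than three distinct gaps, while rational theta can give a single gap.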
In mathematics, the tilde operator (which can be represented by a tilde or the dedicated character U+223C ∼ TILDE OPERATOR), sometimes called "twiddle", is often used to denote an equivalence relation between two objects. Thus "x ~ y" means "x is equivalent to y". It is a weaker statement than stating that x equals y. The expression "x ~ y" is sometimes read aloud as "x twiddles y", perhaps as an analogue to the verbal expression of "x = y". The tilde can indicate approximate equality in a variety of ways.
https://en.wikipedia.org/wiki/~
It can be used to denote the asymptotic equality of two functions. For example, f(x) ~ g(x) means that lim_{x→∞} f(x)/g(x) = 1. A tilde is also used to indicate "approximately equal to" (e.g. 1.902 ~= 2). This usage probably developed as a typed alternative to the libra symbol used for the same purpose in written mathematics, which is an equal sign with the upper bar replaced by a bar with an upward hump, bump, or loop in the middle (♎︎) or, sometimes, a tilde (≃).
https://en.wikipedia.org/wiki/~
The symbol "≈" is also used for this purpose. In physics and astronomy, a tilde can be used between two expressions (e.g. h ~ 10^−34 J s) to state that the two are of the same order of magnitude. In statistics and probability theory, the tilde means "is distributed as"; see random variable (e.g. X ~ B(n, p) for a binomial distribution). A tilde can also be used to represent geometric similarity (e.g. ∆ABC ~ ∆DEF, meaning triangle ABC is similar to DEF).
https://en.wikipedia.org/wiki/~
A triple tilde (≋) is often used to show congruence, an equivalence relation in geometry. In graph theory, the tilde can be used to represent adjacency between vertices. The edge (x, y) connects vertices x and y, which can be said to be adjacent, and this adjacency can be denoted x ∼ y.
https://en.wikipedia.org/wiki/~
In mathematics, the tombstone, halmos, end-of-proof, or Q.E.D. symbol "∎" (or "□") is a symbol used to denote the end of a proof, in place of the traditional abbreviation "Q.E.D." for the Latin phrase "quod erat demonstrandum". It is inspired by the typographic practice of end marks, an element that marks the end of an article. In Unicode, it is represented as character U+220E ∎ END OF PROOF.
https://en.wikipedia.org/wiki/Halmos_box
Its graphic form varies, as it may be a hollow or filled rectangle or square. In AMS-LaTeX, the symbol is automatically appended at the end of a proof environment \begin{proof} ... \end{proof}. It can also be obtained from the commands \qedsymbol, \qedhere or \qed (the latter causes the symbol to be right-aligned). It is sometimes called a "Halmos finality symbol" or "halmos" after the mathematician Paul Halmos, who first used it in a mathematical context in 1950.
https://en.wikipedia.org/wiki/Halmos_box
He got the idea of using it from seeing end marks in magazines, that is, typographic signs that indicate the end of an article. In his memoir I Want to Be a Mathematician, he wrote the following: The symbol is definitely not my invention — it appeared in popular magazines (not mathematical ones) before I adopted it, but, once again, I seem to have introduced it into mathematics. It is the symbol that sometimes looks like ▯, and is used to indicate an end, usually the end of a proof. It is most frequently called the 'tombstone', but at least one generous author referred to it as the 'halmos'.
https://en.wikipedia.org/wiki/Halmos_box
In mathematics, the topological entropy of a topological dynamical system is a nonnegative extended real number that is a measure of the complexity of the system. Topological entropy was first introduced in 1965 by Adler, Konheim and McAndrew. Their definition was modelled after the definition of the Kolmogorov–Sinai, or metric entropy.
https://en.wikipedia.org/wiki/Topological_entropy
Later, Dinaburg and Rufus Bowen gave a different, weaker definition reminiscent of the Hausdorff dimension. The second definition clarified the meaning of the topological entropy: for a system given by an iterated function, the topological entropy represents the exponential growth rate of the number of distinguishable orbits of the iterates. An important variational principle relates the notions of topological and measure-theoretic entropy.
https://en.wikipedia.org/wiki/Topological_entropy
In mathematics, the total derivative of a function f at a point is the best linear approximation near this point of the function with respect to its arguments. Unlike partial derivatives, the total derivative approximates the function with respect to all of its arguments, not just a single one. In many situations, this is the same as considering all partial derivatives simultaneously. The term "total derivative" is primarily used when f is a function of several variables, because when f is a function of a single variable, the total derivative is the same as the ordinary derivative of the function.
https://en.wikipedia.org/wiki/Total_derivative
In mathematics, the total variation identifies several slightly different concepts, related to the (local or global) structure of the codomain of a function or a measure. For a real-valued continuous function f, defined on an interval [a, b] ⊂ R, its total variation on the interval of definition is a measure of the one-dimensional arclength of the curve with parametric equation x ↦ f(x), for x ∈ [a, b]. Functions whose total variation is finite are called functions of bounded variation.
https://en.wikipedia.org/wiki/Total_variation_norm
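For a sampled function, the total variation can be approximated by summing the absolute differences of consecutive values. A minimal sketch (function name and sampling grid are my own):

```python
def total_variation(values):
    """Total variation of a sampled function: sum of absolute consecutive differences."""
    return sum(abs(b - a) for a, b in zip(values, values[1:]))

# f(x) = |x| on [-1, 1] falls by 1 and then rises by 1, so its total variation is 2.
xs = [i / 100 for i in range((-100), 101)]
tv = total_variation([abs(x) for x in xs])
print(tv)  # approximately 2.0
```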
In mathematics, the trace field of a linear group is the field generated by the traces of its elements. It is mostly studied for Kleinian and Fuchsian groups, though related objects are used in the theory of lattices in Lie groups, often under the name field of definition.
https://en.wikipedia.org/wiki/Trace_field
In mathematics, the trace operator extends the notion of the restriction of a function to the boundary of its domain to "generalized" functions in a Sobolev space. This is particularly important for the study of partial differential equations with prescribed boundary conditions (boundary value problems), where weak solutions may not be regular enough to satisfy the boundary conditions in the classical sense of functions.
https://en.wikipedia.org/wiki/Trace_operator
In mathematics, the transcendental law of homogeneity (TLH) is a heuristic principle enunciated by Gottfried Wilhelm Leibniz most clearly in a 1710 text entitled Symbolismus memorabilis calculi algebraici et infinitesimalis in comparatione potentiarum et differentiarum, et de lege homogeneorum transcendentali. Henk J. M. Bos describes it as the principle to the effect that in a sum involving infinitesimals of different orders, only the lowest-order term must be retained, and the remainder discarded. Thus, if a is finite and dx is infinitesimal, then one sets a + dx = a. Similarly, u dv + v du + du dv = u dv + v du, where the higher-order term du dv is discarded in accordance with the TLH. A recent study argues that Leibniz's TLH was a precursor of the standard part function over the hyperreals.
https://en.wikipedia.org/wiki/Transcendental_law_of_homogeneity
In mathematics, the transfer operator encodes information about an iterated map and is frequently used to study the behavior of dynamical systems, statistical mechanics, quantum chaos and fractals. In all usual cases, the largest eigenvalue is 1, and the corresponding eigenvector is the invariant measure of the system. The transfer operator is sometimes called the Ruelle operator, after David Ruelle, or the Perron–Frobenius operator or Ruelle–Perron–Frobenius operator, in reference to the applicability of the Perron–Frobenius theorem to the determination of the eigenvalues of the operator.
https://en.wikipedia.org/wiki/Bernoulli_operator
In mathematics, the transitive closure R+ of a homogeneous binary relation R on a set X is the smallest relation on X that contains R and is transitive. For finite sets, "smallest" can be taken in its usual sense, of having the fewest related pairs; for infinite sets R+ is the unique minimal transitive superset of R. For example, if X is a set of airports and x R y means "there is a direct flight from airport x to airport y" (for x and y in X), then the transitive closure of R on X is the relation R+ such that x R+ y means "it is possible to fly from x to y in one or more flights". More formally, the transitive closure of a binary relation R on a set X is the smallest (w.r.t. ⊆) transitive relation R+ on X such that R ⊆ R+; see Lidl & Pilz (1998, p. 337).
https://en.wikipedia.org/wiki/Transitive_closure_logic
337). We have R+ = R if, and only if, R itself is transitive. Conversely, transitive reduction produces a minimal relation S from a given relation R such that they have the same closure, that is, S+ = R+; however, many different S with this property may exist. Both transitive closure and transitive reduction are also used in the closely related area of graph theory.
https://en.wikipedia.org/wiki/Transitive_closure_logic
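For a finite relation, the transitive closure can be computed with a Warshall-style triple loop: R+ relates x to y exactly when y is reachable from x in one or more steps. The airport names below are made up for illustration.

```python
def transitive_closure(pairs, elements):
    """Smallest transitive relation containing the given pairs (Warshall)."""
    reach = set(pairs)
    for k in elements:           # try k as an intermediate element
        for i in elements:
            for j in elements:
                if (i, k) in reach and (k, j) in reach:
                    reach.add((i, j))
    return reach

# "direct flight" relation on a toy set of airports
airports = ["AAA", "BBB", "CCC"]
flights = [("AAA", "BBB"), ("BBB", "CCC")]
closure = transitive_closure(flights, airports)
print(("AAA", "CCC") in closure)  # True: reachable via two flights
```

The closure contains ("AAA", "CCC") even though no direct flight exists, matching the "one or more flights" reading above.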
In mathematics, the triangle inequality states that for any triangle, the sum of the lengths of any two sides must be greater than or equal to the length of the remaining side. This statement permits the inclusion of degenerate triangles, but some authors, especially those writing about elementary geometry, will exclude this possibility, thus leaving out the possibility of equality. If x, y, and z are the lengths of the sides of the triangle, with no side being greater than z, then the triangle inequality states that z ≤ x + y, with equality only in the degenerate case of a triangle with zero area. In Euclidean geometry and some other geometries, the triangle inequality is a theorem about distances, and it is written using vectors and vector lengths (norms): ‖x + y‖ ≤ ‖x‖ + ‖y‖, where the length z of the third side has been replaced by the vector sum x + y. When x and y are real numbers, they can be viewed as vectors in R1, and the triangle inequality expresses a relationship between absolute values.
https://en.wikipedia.org/wiki/Segment_addition_postulate
In Euclidean geometry, for right triangles the triangle inequality is a consequence of the Pythagorean theorem, and for general triangles, a consequence of the law of cosines, although it may be proved without these theorems. The inequality can be viewed intuitively in either R2 or R3.
https://en.wikipedia.org/wiki/Segment_addition_postulate
The figure at the right shows three examples beginning with clear inequality (top) and approaching equality (bottom). In the Euclidean case, equality occurs only if the triangle has a 180° angle and two 0° angles, making the three vertices collinear, as shown in the bottom example. Thus, in Euclidean geometry, the shortest distance between two points is a straight line. In spherical geometry, the shortest distance between two points is an arc of a great circle, but the triangle inequality holds provided the restriction is made that the distance between two points on a sphere is the length of a minor spherical line segment (that is, one with central angle in [0, π]) with those endpoints. The triangle inequality is a defining property of norms and measures of distance. This property must be established as a theorem for any function proposed for such purposes for each particular space: for example, spaces such as the real numbers, Euclidean spaces, the Lp spaces (p ≥ 1), and inner product spaces.
https://en.wikipedia.org/wiki/Segment_addition_postulate
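The vector form ‖x + y‖ ≤ ‖x‖ + ‖y‖ can be checked numerically for random vectors in R², along with the equality case for collinear, same-direction vectors. This is purely an illustration of the inequality, not a proof.

```python
import math
import random

def norm(v):
    """Euclidean length of a 2-vector."""
    return math.sqrt(sum(c * c for c in v))

random.seed(0)
for _ in range(1000):
    x = (random.uniform(-1, 1), random.uniform(-1, 1))
    y = (random.uniform(-1, 1), random.uniform(-1, 1))
    s = (x[0] + y[0], x[1] + y[1])
    assert norm(s) <= norm(x) + norm(y) + 1e-12   # tolerance for rounding

# equality in the degenerate (collinear) case:
print(norm((2.0, 0.0)), norm((1.5, 0.0)) + norm((0.5, 0.0)))  # both 2.0
```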
In mathematics, the tricorn, sometimes called the Mandelbar set, is a fractal defined in a similar way to the Mandelbrot set, but using the mapping z ↦ z̄² + c instead of the z ↦ z² + c used for the Mandelbrot set. It was introduced by W. D. Crowe, R. Hasson, P. J. Rippon, and P. E. D. Strain-Clark. John Milnor found tricorn-like sets as a prototypical configuration in the parameter space of real cubic polynomials, and in various other families of rational maps. The characteristic three-cornered shape created by this fractal repeats with variations at different scales, showing the same sort of self-similarity as the Mandelbrot set. In addition to smaller tricorns, smaller versions of the Mandelbrot set are also contained within the tricorn fractal.
https://en.wikipedia.org/wiki/Tricorn_(mathematics)
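The definition translates directly into an escape-time sketch: iterate z ↦ conj(z)² + c and count how many iterations stay bounded. The iteration cap, escape radius, and grid resolution below are arbitrary illustrative choices.

```python
def tricorn_iterations(c, max_iter=50):
    """Number of iterations of z -> conj(z)^2 + c before |z| exceeds 2."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:              # once |z| > 2 the orbit escapes
            return n
        z = z.conjugate() ** 2 + c  # conjugation distinguishes the tricorn
    return max_iter

# crude ASCII rendering of the parameter plane
for row in range(21):
    y = 1.5 - 3.0 * row / 20
    line = ""
    for col in range(41):
        x = -2.0 + 3.0 * col / 40
        line += "#" if tricorn_iterations(complex(x, y)) == 50 else " "
    print(line)
```

Points printed as "#" never escaped within the iteration budget and approximate the characteristic three-cornered set.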
In mathematics, the trigamma function, denoted ψ1(z) or ψ(1)(z), is the second of the polygamma functions, and is defined by ψ1(z) = d²/dz² ln Γ(z). It follows from this definition that ψ1(z) = d/dz ψ(z), where ψ(z) is the digamma function. It may also be defined as the sum of the series ψ1(z) = ∑_{n=0}^{∞} 1/(z + n)², making it a special case of the Hurwitz zeta function: ψ1(z) = ζ(2, z). Note that the last two formulas are valid when 1 − z is not a natural number.
https://en.wikipedia.org/wiki/Trigamma_function
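The series definition can be evaluated by direct truncation and checked against the known value ψ1(1) = ζ(2) = π²/6. The truncation point is an arbitrary accuracy/time trade-off; the error of the bare truncation is about 1/N, so this is a crude sketch rather than a production algorithm.

```python
import math

def trigamma(z, terms=200_000):
    """Truncated series psi_1(z) = sum_{n>=0} 1/(z + n)^2."""
    return sum(1.0 / (z + n) ** 2 for n in range(terms))

# psi_1(1) = zeta(2) = pi^2 / 6
print(trigamma(1.0), math.pi ** 2 / 6)   # agree to about 5 decimal places
```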
In mathematics, the trigonometric functions (also called circular functions, angle functions or goniometric functions) are real functions which relate an angle of a right-angled triangle to ratios of two side lengths. They are widely used in all sciences that are related to geometry, such as navigation, solid mechanics, celestial mechanics, geodesy, and many others. They are among the simplest periodic functions, and as such are also widely used for studying periodic phenomena through Fourier analysis. The trigonometric functions most widely used in modern mathematics are the sine, the cosine, and the tangent.
https://en.wikipedia.org/wiki/Cotangent_(trigonometric_function)
Their reciprocals are respectively the cosecant, the secant, and the cotangent, which are less used. Each of these six trigonometric functions has a corresponding inverse function, and an analog among the hyperbolic functions.
https://en.wikipedia.org/wiki/Cotangent_(trigonometric_function)
The oldest definitions of trigonometric functions, related to right-angle triangles, define them only for acute angles. To extend the sine and cosine functions to functions whose domain is the whole real line, geometrical definitions using the standard unit circle (i.e., a circle with radius 1 unit) are often used; then the domain of the other functions is the real line with some isolated points removed. Modern definitions express trigonometric functions as infinite series or as solutions of differential equations. This allows extending the domain of sine and cosine functions to the whole complex plane, and the domain of the other trigonometric functions to the complex plane with some isolated points removed.
https://en.wikipedia.org/wiki/Cotangent_(trigonometric_function)
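The series definitions mentioned above can be sketched directly: partial sums of the Taylor series for sine and cosine agree with the library functions on the whole real line, not just for acute angles. The number of terms is an arbitrary accuracy choice.

```python
import math

def sin_series(x, terms=20):
    # sin x = sum_{k>=0} (-1)^k x^(2k+1) / (2k+1)!
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

def cos_series(x, terms=20):
    # cos x = sum_{k>=0} (-1)^k x^(2k) / (2k)!
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(terms))

for x in (-3.0, 0.5, 2.0):
    print(abs(sin_series(x) - math.sin(x)) < 1e-12,
          abs(cos_series(x) - math.cos(x)) < 1e-12)
```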
In mathematics, the trigonometric moment problem is formulated as follows: given a finite sequence {α0, ..., αn}, does there exist a positive Borel measure μ on the interval [0, 2π] such that αk = (1/2π) ∫_{0}^{2π} e^{−ikt} dμ(t)? In other words, an affirmative answer to the problem means that {α0, ..., αn} are the first n + 1 Fourier coefficients of some positive Borel measure μ on [0, 2π].
https://en.wikipedia.org/wiki/Trigonometric_moment_problem
In mathematics, the truncated power function with exponent n is defined as x₊ⁿ = xⁿ for x > 0 and x₊ⁿ = 0 for x ≤ 0. In particular, x₊ = x for x > 0 and x₊ = 0 for x ≤ 0, where the exponent 1 is interpreted as the conventional power.
https://en.wikipedia.org/wiki/Truncated_power_function
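The definition is a one-liner in code: the function is xⁿ on the positive axis and zero elsewhere.

```python
def truncated_power(x, n):
    """The truncated power function x_+^n: x^n for x > 0, else 0."""
    return x ** n if x > 0 else 0.0

print(truncated_power(2.0, 3), truncated_power(-2.0, 3))  # 8.0 0.0
```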
In mathematics, the tunnel number of a knot, as first defined by Bradd Clark, is a knot invariant, given by the minimal number of arcs (called tunnels) that must be added to the knot so that the complement becomes a handlebody. The tunnel number can equally be defined for links. The boundary of a regular neighbourhood of the union of the link and its tunnels forms a Heegaard splitting of the link exterior.
https://en.wikipedia.org/wiki/Tunnel_number
In mathematics, the twisted Poincaré duality is a theorem removing the restriction on Poincaré duality to oriented manifolds. The existence of a global orientation is replaced by carrying along local information, by means of a local coefficient system.
https://en.wikipedia.org/wiki/Twisted_Poincaré_duality
In mathematics, the two families cλn(x;k) and Bλn(x;k) of sieved ultraspherical polynomials, introduced by Waleed Al-Salam, W.R. Allaway and Richard Askey in 1984, are the archetypal examples of sieved orthogonal polynomials. Their recurrence relations are a modified (or "sieved") version of the recurrence relations for ultraspherical polynomials.
https://en.wikipedia.org/wiki/Sieved_ultraspherical_polynomials
In mathematics, the two-sided Laplace transform or bilateral Laplace transform is an integral transform equivalent to probability's moment generating function. Two-sided Laplace transforms are closely related to the Fourier transform, the Mellin transform, the Z-transform and the ordinary or one-sided Laplace transform. If f(t) is a real- or complex-valued function of the real variable t defined for all real numbers, then the two-sided Laplace transform is defined by the integral B{f}(s) = F(s) = ∫_{−∞}^{∞} e^{−st} f(t) dt.
https://en.wikipedia.org/wiki/Bilateral_Laplace_transform
The integral is most commonly understood as an improper integral, which converges if and only if both integrals ∫_{0}^{∞} e^{−st} f(t) dt and ∫_{−∞}^{0} e^{−st} f(t) dt exist. There seems to be no generally accepted notation for the two-sided transform; the B used here recalls "bilateral". The two-sided transform used by some authors is T{f}(s) = sB{f}(s) = sF(s) = s ∫_{−∞}^{∞} e^{−st} f(t) dt.
https://en.wikipedia.org/wiki/Bilateral_Laplace_transform
In pure mathematics the argument t can be any variable, and Laplace transforms are used to study how differential operators transform the function. In science and engineering applications, the argument t often represents time (in seconds), and the function f(t) often represents a signal or waveform that varies with time.
https://en.wikipedia.org/wiki/Bilateral_Laplace_transform
In these cases, the signals are transformed by filters, which work like mathematical operators, but with a restriction: they must be causal, which means that the output at a given time t cannot depend on values of the input at later times. In population ecology, the argument t often represents spatial displacement in a dispersal kernel. When working with functions of time, f(t) is called the time domain representation of the signal, while F(s) is called the s-domain (or Laplace domain) representation. The inverse transformation then represents a synthesis of the signal as the sum of its frequency components taken over all frequencies, whereas the forward transformation represents the analysis of the signal into its frequency components.
https://en.wikipedia.org/wiki/Bilateral_Laplace_transform
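As a numerical sketch of the defining integral, the transform of f(t) = e^{−|t|} is 2/(1 − s²) on the strip |Re s| < 1, which a trapezoidal-rule approximation on a truncated domain should reproduce. The truncation limits and step count are arbitrary accuracy choices, not part of the definition.

```python
import math

def bilateral_laplace(f, s, lo=-40.0, hi=40.0, steps=200_000):
    """Trapezoidal approximation of F(s) = integral of e^{-st} f(t) dt."""
    h = (hi - lo) / steps
    total = 0.5 * (math.exp(-s * lo) * f(lo) + math.exp(-s * hi) * f(hi))
    for i in range(1, steps):
        t = lo + i * h
        total += math.exp(-s * t) * f(t)
    return total * h

def f(t):
    return math.exp(-abs(t))

s = 0.5
print(bilateral_laplace(f, s), 2 / (1 - s ** 2))  # both about 2.6667
```

For a real s the one-sided pieces over (−∞, 0] and [0, ∞) both converge only when |s| < 1, matching the convergence condition stated above.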
In mathematics, the uncertainty exponent is a method of measuring the fractal dimension of a basin boundary. In a chaotic scattering system, the invariant set of the system is usually not directly accessible because it is non-attracting and typically of measure zero. Therefore, the only way to infer the presence of members and to measure the properties of the invariant set is through the basins of attraction. Note that in a scattering system, basins of attraction are not limit cycles and therefore do not constitute members of the invariant set.
https://en.wikipedia.org/wiki/Uncertainty_exponent
Suppose we start with a random trajectory and perturb it by a small amount ε in a random direction. If the new trajectory ends up in a different basin from the old one, then it is called epsilon uncertain. If we take a large number of such trajectories, then the fraction of them that are epsilon uncertain is the uncertainty fraction, f(ε), and we expect it to scale as a power of ε: f(ε) ∼ ε^γ. Thus the uncertainty exponent, γ, is defined as follows: γ = lim_{ε→0} ln f(ε) / ln ε. The uncertainty exponent can be shown to approximate the box-counting dimension as follows: D0 = N − γ, where N is the embedding dimension. Please refer to the article on chaotic mixing for an example of numerical computation of the uncertainty dimension compared with that of a box-counting dimension.
https://en.wikipedia.org/wiki/Uncertainty_exponent
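The sampling procedure can be sketched on the basins of Newton's method for z³ = 1, a standard example of a fractal basin boundary. This specific map, the sampling region, and all the parameter choices below are illustrative assumptions, not from the source.

```python
import cmath
import math
import random

def attractor(z, iters=60):
    """Index (0, 1, or 2) of the cube root of unity that z converges to."""
    for _ in range(iters):
        if abs(z) < 1e-12:           # guard against the critical point
            break
        z = z - (z ** 3 - 1) / (3 * z ** 2)
    roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
    return min(range(3), key=lambda k: abs(z - roots[k]))

random.seed(1)

def uncertainty_fraction(eps, samples=2000):
    """Fraction of random points whose basin changes under an eps-perturbation."""
    uncertain = 0
    for _ in range(samples):
        z = complex(random.uniform(-2, 2), random.uniform(-2, 2))
        z2 = z + eps * cmath.exp(2j * cmath.pi * random.random())
        if attractor(z) != attractor(z2):
            uncertain += 1
    return uncertain / samples

e1, e2 = 1e-2, 1e-3
f1, f2 = uncertainty_fraction(e1), uncertainty_fraction(e2)
gamma = (math.log(f1) - math.log(f2)) / (math.log(e1) - math.log(e2))
print(f1, f2, gamma)   # gamma estimates N - D0 for the basin boundary
```

Fitting ln f(ε) against ln ε over more values of ε would give a more reliable slope; two values are used here only to keep the sketch short.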
In mathematics, the uniform boundedness principle or Banach–Steinhaus theorem is one of the fundamental results in functional analysis. Together with the Hahn–Banach theorem and the open mapping theorem, it is considered one of the cornerstones of the field. In its basic form, it asserts that for a family of continuous linear operators (and thus bounded operators) whose domain is a Banach space, pointwise boundedness is equivalent to uniform boundedness in operator norm. The theorem was first published in 1927 by Stefan Banach and Hugo Steinhaus, but it was also proven independently by Hans Hahn.
https://en.wikipedia.org/wiki/Banach–Steinhaus_theorem
In mathematics, the uniform limit theorem states that the uniform limit of any sequence of continuous functions is continuous.
https://en.wikipedia.org/wiki/Uniform_limit_theorem
In mathematics, the uniformization theorem says that every simply connected Riemann surface is conformally equivalent to one of three domains: the open unit disk, the complex plane, or the Riemann sphere. In particular it admits a Riemannian metric of constant curvature. This classifies Riemann surfaces as elliptic (admitting a metric of constant positive curvature), parabolic (flat), and hyperbolic (negatively curved) according to their universal cover. The uniformization theorem is a generalization of the Riemann mapping theorem from proper simply connected open subsets of the plane to arbitrary simply connected Riemann surfaces.
https://en.wikipedia.org/wiki/Low_dimensional_topology
In mathematics, the uniformization theorem says that every simply connected Riemann surface is conformally equivalent to one of three Riemann surfaces: the open unit disk, the complex plane, or the Riemann sphere. The theorem is a generalization of the Riemann mapping theorem from simply connected open subsets of the plane to arbitrary simply connected Riemann surfaces. Since every Riemann surface has a universal cover which is a simply connected Riemann surface, the uniformization theorem leads to a classification of Riemann surfaces into three types: those that have the Riemann sphere as universal cover ("elliptic"), those with the plane as universal cover ("parabolic") and those with the unit disk as universal cover ("hyperbolic").
https://en.wikipedia.org/wiki/Uniformization_theorem
It further follows that every Riemann surface admits a Riemannian metric of constant curvature, where the curvature can be taken to be 1 in the elliptic, 0 in the parabolic and -1 in the hyperbolic case. The uniformization theorem also yields a similar classification of closed orientable Riemannian 2-manifolds into elliptic/parabolic/hyperbolic cases. Each such manifold has a conformally equivalent Riemannian metric with constant curvature, where the curvature can be taken to be 1 in the elliptic, 0 in the parabolic and -1 in the hyperbolic case.
https://en.wikipedia.org/wiki/Uniformization_theorem
In mathematics, the unit doublet is the derivative of the Dirac delta function. It can be used to differentiate signals in electrical engineering: if u1 is the unit doublet, then (x ∗ u1)(t) = dx(t)/dt, where ∗ is the convolution operator. The function is zero for all values except zero, where its behaviour is interesting. Its integral over any interval enclosing zero is zero. However, the integral of its absolute value over any region enclosing zero goes to infinity.
https://en.wikipedia.org/wiki/Unit_doublet
The function can be thought of as the limiting case of two rectangles, one in the second quadrant, and the other in the fourth. The length of each rectangle is k, whereas their breadth is 1/k², where k tends to zero.
https://en.wikipedia.org/wiki/Unit_doublet
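A discrete analogue of the differentiating property (an illustrative sketch, not from the source) replaces the doublet by two adjacent spikes of opposite sign with height 1/h², in the spirit of the shrinking rectangles above: convolving a sampled signal with this kernel recovers its derivative. The grid spacing h is an arbitrary discretization choice.

```python
import math

h = 1e-3
# two adjacent samples of opposite sign and height 1/h^2, a discrete
# stand-in for the derivative of a narrow delta spike
doublet = [1.0 / h ** 2, -1.0 / h ** 2]

def convolve_at(signal, kernel, i):
    """Value of the discrete convolution (signal * kernel) at sample i."""
    return sum(kernel[k] * signal[i - k] for k in range(len(kernel)))

t = [i * h for i in range(2000)]
x = [math.sin(v) for v in t]

i = 1000
approx = convolve_at(x, doublet, i) * h   # (x * u1)(t) ~ dx/dt
print(approx, math.cos(t[i]))             # both about cos(1.0) = 0.5403
```

The factor h plays the role of dt in the discretized convolution integral, so the result reduces to the backward difference (x[i] − x[i−1])/h, a first-order approximation to the derivative.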
In mathematics, the unit interval is the closed interval [0, 1], that is, the set of all real numbers that are greater than or equal to 0 and less than or equal to 1. It is often denoted I (capital letter I). In addition to its role in real analysis, the unit interval is used to study homotopy theory in the field of topology. In the literature, the term "unit interval" is sometimes applied to the other shapes that an interval from 0 to 1 could take: (0,1], [0,1), and (0,1).
https://en.wikipedia.org/wiki/Closed_unit_interval
In mathematics, the unitary group of degree n, denoted U(n), is the group of n × n unitary matrices, with the group operation of matrix multiplication. The unitary group is a subgroup of the general linear group GL(n, C). Hyperorthogonal group is an archaic name for the unitary group, especially over finite fields. For the group of unitary matrices with determinant 1, see Special unitary group.
https://en.wikipedia.org/wiki/Unitary_symmetry
In the simple case n = 1, the group U(1) corresponds to the circle group, consisting of all complex numbers with absolute value 1, under multiplication. All the unitary groups contain copies of this group. The unitary group U(n) is a real Lie group of dimension n2. The Lie algebra of U(n) consists of n × n skew-Hermitian matrices, with the Lie bracket given by the commutator. The general unitary group (also called the group of unitary similitudes) consists of all matrices A such that A∗A is a nonzero multiple of the identity matrix, and is just the product of the unitary group with the group of all positive multiples of the identity matrix.
https://en.wikipedia.org/wiki/Unitary_symmetry
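The Lie algebra statement can be illustrated numerically (a sketch with a hand-picked matrix, not from the source): the matrix exponential of a skew-Hermitian 2×2 matrix is unitary, i.e. U∗U = I. The exponential is computed by its power series, with the truncation length an arbitrary accuracy choice.

```python
def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def mat_scale(a, c):
    return [[c * a[i][j] for j in range(2)] for i in range(2)]

def mat_exp(a, terms=30):
    """Matrix exponential via the truncated power series sum a^n / n!."""
    result = [[1 + 0j, 0j], [0j, 1 + 0j]]   # identity
    term = [[1 + 0j, 0j], [0j, 1 + 0j]]
    for n in range(1, terms):
        term = mat_scale(mat_mul(term, a), 1.0 / n)
        result = mat_add(result, term)
    return result

def conj_transpose(a):
    return [[a[j][i].conjugate() for j in range(2)] for i in range(2)]

# a skew-Hermitian matrix: X* = -X
x = [[1j, 2 + 1j], [-2 + 1j, -3j]]
u = mat_exp(x)
product = mat_mul(conj_transpose(u), u)
print(product)  # numerically the 2x2 identity matrix
```

In the n = 1 case this reduces to the circle group: exp(iθ) has absolute value 1 for every real θ.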
In mathematics, the universal bundle in the theory of fiber bundles with structure group a given topological group G is a specific bundle over a classifying space BG, such that every bundle with the given structure group G over M is a pullback of it by means of a continuous map M → BG.
https://en.wikipedia.org/wiki/Universal_bundle
In mathematics, the universal enveloping algebra of a Lie algebra is the unital associative algebra whose representations correspond precisely to the representations of that Lie algebra. Universal enveloping algebras are used in the representation theory of Lie groups and Lie algebras. For example, Verma modules can be constructed as quotients of the universal enveloping algebra. In addition, the enveloping algebra gives a precise definition for the Casimir operators.
https://en.wikipedia.org/wiki/Universal_enveloping_algebra
Because Casimir operators commute with all elements of a Lie algebra, they can be used to classify representations. The precise definition also allows the importation of Casimir operators into other areas of mathematics, specifically, those that have a differential algebra. They also play a central role in some recent developments in mathematics.
https://en.wikipedia.org/wiki/Universal_enveloping_algebra
In particular, their dual provides a commutative example of the objects studied in non-commutative geometry, the quantum groups. This dual can be shown, by the Gelfand–Naimark theorem, to contain the C* algebra of the corresponding Lie group. This relationship generalizes to the idea of Tannaka–Krein duality between compact topological groups and their representations. From an analytic viewpoint, the universal enveloping algebra of the Lie algebra of a Lie group may be identified with the algebra of left-invariant differential operators on the group.
https://en.wikipedia.org/wiki/Universal_enveloping_algebra
In mathematics, the universal invariant or u-invariant of a field describes the structure of quadratic forms over the field. The universal invariant u(F) of a field F is the largest dimension of an anisotropic quadratic space over F, or ∞ if this does not exist. Since formally real fields have anisotropic quadratic forms (sums of squares) in every dimension, the invariant is only of interest for other fields. An equivalent formulation is that u is the smallest number such that every form of dimension greater than u is isotropic, or that every form of dimension at least u is universal.
https://en.wikipedia.org/wiki/U-invariant
In mathematics, the universality of zeta functions is the remarkable ability of the Riemann zeta function and other similar functions (such as the Dirichlet L-functions) to approximate arbitrary non-vanishing holomorphic functions arbitrarily well. The universality of the Riemann zeta function was first proven by Sergei Mikhailovitch Voronin in 1975 and is sometimes known as Voronin's universality theorem.
https://en.wikipedia.org/wiki/Zeta_function_universality
In mathematics, the unknotting problem is the problem of algorithmically recognizing the unknot, given some representation of a knot, e.g., a knot diagram. There are several types of unknotting algorithms. A major unresolved challenge is to determine if the problem admits a polynomial time algorithm; that is, whether the problem lies in the complexity class P.
https://en.wikipedia.org/wiki/Unknotting_problem
In mathematics, the upper and lower incomplete gamma functions are types of special functions which arise as solutions to various mathematical problems such as certain integrals. Their respective names stem from their integral definitions, which are defined similarly to the gamma function but with different or "incomplete" integral limits. The gamma function is defined as an integral from zero to infinity. This contrasts with the lower incomplete gamma function, which is defined as an integral from zero to a variable upper limit. Similarly, the upper incomplete gamma function is defined as an integral from a variable lower limit to infinity.
https://en.wikipedia.org/wiki/Upper_incomplete_gamma_function
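The lower incomplete gamma function has the standard power series γ(s, x) = x^s e^{−x} ∑_{k≥0} x^k / (s(s+1)⋯(s+k)), which the sketch below evaluates by direct truncation; as the variable upper limit x grows, γ(s, x) should approach the ordinary gamma function, since the remaining upper tail vanishes. The number of terms is an arbitrary accuracy choice.

```python
import math

def lower_incomplete_gamma(s, x, terms=200):
    """Truncated series gamma(s, x) = x^s e^{-x} sum_k x^k / (s)_{k+1}."""
    term = 1.0 / s        # k = 0 term of the sum
    total = 0.0
    for k in range(terms):
        total += term
        term *= x / (s + k + 1)   # ratio between consecutive terms
    return x ** s * math.exp(-x) * total

# gamma(s, x) -> Gamma(s) as x -> infinity
print(lower_incomplete_gamma(2.5, 50.0), math.gamma(2.5))  # nearly equal
```

The complementary upper incomplete gamma function is then Γ(s, x) = Γ(s) − γ(s, x).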