| text | source |
|---|---|
Appel and Haken's approach started by showing that there is a particular set of 1,936 maps, each of which cannot be part of a smallest-sized counterexample to the four color theorem (i.e., if they did appear, one could make a smaller counterexample). Appel and Haken used a special-purpose computer program to confirm that each of these maps had this property. Additionally, any map that could potentially be a counterexample must have a portion that looks like one of these 1,936 maps.
|
https://en.wikipedia.org/wiki/Conjecture
|
Showing this required hundreds of pages of hand analysis. From these two results, Appel and Haken concluded that no smallest counterexample exists: any such counterexample must contain, yet cannot contain, one of these 1,936 maps. This contradiction means there are no counterexamples at all and that the theorem is therefore true. Initially, their proof was not accepted by all mathematicians, because the computer-assisted part was infeasible for a human to check by hand. However, the proof has since gained wider acceptance, although doubts still remain.
|
https://en.wikipedia.org/wiki/Conjecture
|
In mathematics, the four color theorem, or the four color map theorem, states that no more than four colors are required to color the regions of any map so that no two adjacent regions have the same color. Adjacent means that two regions share a common boundary curve segment, not merely a corner where three or more regions meet. It was the first major theorem to be proved using a computer.
|
https://en.wikipedia.org/wiki/Four-color_conjecture
|
Initially, this proof was not accepted by all mathematicians because the computer-assisted proof was infeasible for a human to check by hand. The proof has gained wide acceptance since then, although some doubters remain. The four color theorem was proved in 1976 by Kenneth Appel and Wolfgang Haken after many false proofs and counterexamples (unlike the five color theorem, proved in the 1800s, which states that five colors are enough to color a map). To dispel any remaining doubts about the Appel–Haken proof, a simpler proof using the same ideas and still relying on computers was published in 1997 by Robertson, Sanders, Seymour, and Thomas. In 2005, the theorem was also proved by Georges Gonthier with general-purpose theorem-proving software.
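To make the statement concrete, here is a small Python sketch that checks whether a given coloring of a planar map is proper, i.e. no two bordering regions share a color. The map, its adjacency structure, and the coloring below are illustrative inventions, not data from the sources.

```python
def is_proper_coloring(adjacency, coloring):
    """Check that no two adjacent regions share a color."""
    return all(coloring[u] != coloring[v]
               for u, nbrs in adjacency.items() for v in nbrs)

# A small hypothetical planar map: regions A..E with shared borders.
borders = {
    "A": {"B", "C", "D"},
    "B": {"A", "C", "E"},
    "C": {"A", "B", "D", "E"},
    "D": {"A", "C", "E"},
    "E": {"B", "C", "D"},
}
# The theorem guarantees a proper coloring with at most 4 colors exists;
# here 3 suffice.
coloring = {"A": 1, "B": 2, "C": 3, "D": 2, "E": 1}
print(is_proper_coloring(borders, coloring))  # True
```

Verifying a given coloring is easy; the hard content of the theorem is that such a coloring always exists for any planar map.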
|
https://en.wikipedia.org/wiki/Four-color_conjecture
|
In mathematics, the four-spiral semigroup is a special semigroup generated by four idempotent elements. This special semigroup was first studied by Karl Byleen in a doctoral dissertation submitted to the University of Nebraska in 1977. It has several interesting properties: it is one of the most important examples of bisimple but not completely simple semigroups; it is also an important example of a fundamental regular semigroup; it is an indispensable building block of bisimple, idempotent-generated regular semigroups. A certain semigroup, called the double four-spiral semigroup, generated by five idempotent elements, has also been studied along with the four-spiral semigroup.
|
https://en.wikipedia.org/wiki/Four-spiral_semigroup
|
In mathematics, the fractional Laplacian is an operator that generalizes the notion of the Laplacian, built from spatial derivatives, to fractional powers.
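One standard way to make this precise (stated here as background; it is one of several equivalent definitions) is as a Fourier multiplier: for suitable functions f and 0 < s < 1,

```latex
(-\Delta)^{s} f \;=\; \mathcal{F}^{-1}\!\left( |\xi|^{2s}\, \mathcal{F} f \right),
```

so that s = 1 recovers the symbol |\xi|^{2} of the ordinary (negative) Laplacian.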
|
https://en.wikipedia.org/wiki/Fractional_Laplacian
|
In mathematics, the free category or path category generated by a directed graph or quiver is the category that results from freely concatenating arrows together, whenever the target of one arrow is the source of the next. More precisely, the objects of the category are the vertices of the quiver, and the morphisms are paths between objects. Here, a path is defined as a finite sequence {\displaystyle V_{0}{\xrightarrow {\;\;E_{0}\;\;}}V_{1}{\xrightarrow {\;\;E_{1}\;\;}}\cdots {\xrightarrow {E_{n-1}}}V_{n}} where {\displaystyle V_{k}} is a vertex of the quiver, {\displaystyle E_{k}} is an edge of the quiver, and n ranges over the non-negative integers.
|
https://en.wikipedia.org/wiki/Free_category
|
For every vertex {\displaystyle V} of the quiver, there is an "empty path" which constitutes the identity morphism of the category. The composition operation is concatenation of paths. Given paths {\displaystyle V_{0}{\xrightarrow {E_{0}}}\cdots {\xrightarrow {E_{n-1}}}V_{n},\quad V_{n}{\xrightarrow {F_{0}}}W_{0}{\xrightarrow {F_{1}}}\cdots {\xrightarrow {F_{m}}}W_{m},} their composition is {\displaystyle \left(V_{n}{\xrightarrow {F_{0}}}W_{0}{\xrightarrow {F_{1}}}\cdots {\xrightarrow {F_{m}}}W_{m}\right)\circ \left(V_{0}{\xrightarrow {E_{0}}}\cdots {\xrightarrow {E_{n-1}}}V_{n}\right):=V_{0}{\xrightarrow {E_{0}}}\cdots {\xrightarrow {E_{n-1}}}V_{n}{\xrightarrow {F_{0}}}W_{0}{\xrightarrow {F_{1}}}\cdots {\xrightarrow {F_{m}}}W_{m}.} Note that the result of the composition starts with the right operand of the composition, and ends with its left operand.
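The objects-as-vertices, morphisms-as-paths picture can be sketched in a few lines of Python; the edge triples and the `compose` helper below are illustrative names invented for this example, not an established API.

```python
def compose(p, q):
    """Compose paths q then p (read right to left, as in the category)."""
    if not p:
        return q               # empty path acts as an identity morphism
    if not q:
        return p
    # each edge is (source, label, target); q's end must be p's start
    assert q[-1][2] == p[0][0], "target of q must equal source of p"
    return q + p

e0 = ("V0", "E0", "V1")
e1 = ("V1", "E1", "V2")
f0 = ("V2", "F0", "W0")
path1 = [e0, e1]               # a path V0 -> V2
path2 = [f0]                   # a path V2 -> W0
print(compose(path2, path1))   # the concatenated path V0 -> W0
```

The concatenation order in `compose` mirrors the convention in the text: the result starts with the right operand and ends with the left.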
|
https://en.wikipedia.org/wiki/Free_category
|
In mathematics, the free factor complex (sometimes also called the complex of free factors) is a free group counterpart of the notion of the curve complex of a finite type surface. The free factor complex was originally introduced in a 1998 paper of Allen Hatcher and Karen Vogtmann. Like the curve complex, the free factor complex is known to be Gromov-hyperbolic. The free factor complex plays a significant role in the study of large-scale geometry of Out ( F n ) {\displaystyle \operatorname {Out} (F_{n})} .
|
https://en.wikipedia.org/wiki/Free_factor_complex
|
In mathematics, the free group FS over a given set S consists of all words that can be built from members of S, considering two words to be different unless their equality follows from the group axioms (e.g. st = suu⁻¹t, but s ≠ t⁻¹ for s, t, u ∈ S). The members of S are called generators of FS, and the number of generators is the rank of the free group. An arbitrary group G is called free if it is isomorphic to FS for some subset S of G, that is, if there is a subset S of G such that every element of G can be written in exactly one way as a product of finitely many elements of S and their inverses (disregarding trivial variations such as st = suu⁻¹t). A related but different notion is a free abelian group; both notions are particular instances of a free object from universal algebra. As such, free groups are defined by their universal property.
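The "equality follows from the group axioms" condition amounts to free reduction of words: repeatedly cancel adjacent pairs x x⁻¹. A minimal illustrative sketch in Python (the letter encoding as (generator, exponent) pairs is an assumption of this example):

```python
def reduce_word(word):
    """Freely reduce a word over a free group's generators.
    Letters are (generator, exponent) pairs with exponent +1 or -1."""
    stack = []
    for g, e in word:
        if stack and stack[-1] == (g, -e):
            stack.pop()        # cancel an adjacent x x^-1 pair
        else:
            stack.append((g, e))
    return stack

# st and s u u^-1 t reduce to the same word, so they are equal in F_S:
w = [("s", 1), ("u", 1), ("u", -1), ("t", 1)]
print(reduce_word(w))  # [('s', 1), ('t', 1)]
```

Two words represent the same element of the free group exactly when their reduced forms coincide.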
|
https://en.wikipedia.org/wiki/Free_group
|
In mathematics, the free matroid over a given ground-set E is the matroid in which the independent sets are all subsets of E. It is a special case of a uniform matroid. The unique basis of this matroid is the ground-set itself, E. Among matroids on E, the free matroid on E has the most independent sets, the highest rank, and the fewest circuits.
|
https://en.wikipedia.org/wiki/Free_matroid
|
In mathematics, the fundamental class is a homology class associated to a connected orientable compact manifold of dimension n, which corresponds to the generator of the homology group H n ( M , ∂ M ; Z ) ≅ Z {\displaystyle H_{n}(M,\partial M;\mathbf {Z} )\cong \mathbf {Z} } . The fundamental class can be thought of as the orientation of the top-dimensional simplices of a suitable triangulation of the manifold.
|
https://en.wikipedia.org/wiki/Orientation_homology_class
|
In mathematics, the fundamental group scheme is a group scheme canonically attached to a scheme over a Dedekind scheme (e.g. the spectrum of a field or the spectrum of a discrete valuation ring). It is a generalisation of the étale fundamental group. Although its existence was conjectured by Alexander Grothendieck, the first proof of its existence is due, for schemes defined over fields, to Madhav Nori. A proof of its existence for schemes defined over Dedekind schemes is due to Marco Antei, Michel Emsalem and Carlo Gasbarri.
|
https://en.wikipedia.org/wiki/Fundamental_group_scheme
|
In mathematics, the fundamental theorem of Galois theory is a result that describes the structure of certain types of field extensions in relation to groups. It was proved by Évariste Galois in his development of Galois theory. In its most basic form, the theorem asserts that given a field extension E/F that is finite and Galois, there is a one-to-one correspondence between its intermediate fields and subgroups of its Galois group. (Intermediate fields are fields K satisfying F ⊆ K ⊆ E; they are also called subextensions of E/F.)
|
https://en.wikipedia.org/wiki/Fundamental_theorem_of_Galois_theory
|
In mathematics, the fundamental theorem of arithmetic, also called the unique factorization theorem and prime factorization theorem, states that every integer greater than 1 can be represented uniquely as a product of prime numbers, up to the order of the factors. For example, {\displaystyle 1200=2^{4}\cdot 3^{1}\cdot 5^{2}=(2\cdot 2\cdot 2\cdot 2)\cdot 3\cdot (5\cdot 5)=5\cdot 2\cdot 5\cdot 2\cdot 3\cdot 2\cdot 2=\ldots } The theorem says two things about this example: first, that 1200 can be represented as a product of primes, and second, that no matter how this is done, there will always be exactly four 2s, one 3, two 5s, and no other primes in the product. The requirement that the factors be prime is necessary: factorizations containing composite numbers may not be unique (for example, {\displaystyle 12=2\cdot 6=3\cdot 4}).
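The existence half of the theorem can be demonstrated by simple trial division; the sketch below is illustrative, not an efficient factorization method.

```python
def prime_factorization(n):
    """Trial division: return the prime factors of n (with multiplicity),
    in nondecreasing order."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:       # divide out each prime factor completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                   # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factorization(1200))  # [2, 2, 2, 2, 3, 5, 5]
```

The uniqueness half is the deeper statement: any other procedure would yield the same multiset of primes.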
|
https://en.wikipedia.org/wiki/Fundamental_Theorem_of_Arithmetic
|
This theorem is one of the main reasons why 1 is not considered a prime number: if 1 were prime, then factorization into primes would not be unique; for example, {\displaystyle 2=2\cdot 1=2\cdot 1\cdot 1=\ldots } The theorem generalizes to other algebraic structures that are called unique factorization domains and include principal ideal domains, Euclidean domains, and polynomial rings over a field. However, the theorem does not hold for algebraic integers. This failure of unique factorization is one of the reasons for the difficulty of the proof of Fermat's Last Theorem. The implicit use of unique factorization in rings of algebraic integers is behind the error of many of the numerous false proofs that have been written during the 358 years between Fermat's statement and Wiles's proof.
|
https://en.wikipedia.org/wiki/Fundamental_Theorem_of_Arithmetic
|
In mathematics, the fuzzy sphere is one of the simplest and most canonical examples of non-commutative geometry. Ordinarily, the functions defined on a sphere form a commuting algebra. A fuzzy sphere differs from an ordinary sphere because the algebra of functions on it is not commutative. It is generated by spherical harmonics whose spin l is at most equal to some j. The terms in the product of two spherical harmonics that involve spherical harmonics with spin exceeding j are simply omitted in the product.
|
https://en.wikipedia.org/wiki/Fuzzy_sphere
|
This truncation replaces an infinite-dimensional commutative algebra by a {\displaystyle j^{2}}-dimensional non-commutative algebra. The simplest way to see this sphere is to realize this truncated algebra of functions as a matrix algebra on some finite-dimensional vector space. Take the three j-dimensional matrices {\displaystyle J_{a},~a=1,2,3} that form a basis for the j-dimensional irreducible representation of the Lie algebra su(2).
|
https://en.wikipedia.org/wiki/Fuzzy_sphere
|
In mathematics, the gamma function (represented by Γ, the capital letter gamma from the Greek alphabet) is one commonly used extension of the factorial function to complex numbers. The gamma function is defined for all complex numbers except the non-positive integers. For every positive integer n, {\displaystyle \Gamma (n)=(n-1)!.} Derived by Daniel Bernoulli, for complex numbers with a positive real part, the gamma function is defined via a convergent improper integral: {\displaystyle \Gamma (z)=\int _{0}^{\infty }t^{z-1}e^{-t}\,dt.} The gamma function then is defined as the analytic continuation of this integral function to a meromorphic function that is holomorphic in the whole complex plane except zero and the negative integers, where the function has simple poles.
|
https://en.wikipedia.org/wiki/Gamma_function
|
The gamma function has no zeros, so the reciprocal gamma function 1/Γ(z) is an entire function. In fact, the gamma function corresponds to the Mellin transform of the negative exponential function: {\displaystyle \Gamma (z)={\mathcal {M}}\{e^{-x}\}(z).} Other extensions of the factorial function do exist, but the gamma function is the most popular and useful. It is a component in various probability-distribution functions, and as such it is applicable in the fields of probability and statistics, as well as combinatorics.
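As a quick numerical check of the factorial relation Γ(n) = (n − 1)! and of a classical special value, using Python's standard-library `math.gamma`:

```python
import math

# Gamma extends the factorial: gamma(n) == (n-1)! for positive integers n
for n in range(1, 7):
    assert math.isclose(math.gamma(n), math.factorial(n - 1))

# A well-known special value: gamma(1/2) = sqrt(pi)
print(math.isclose(math.gamma(0.5), math.sqrt(math.pi)))  # True
```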
|
https://en.wikipedia.org/wiki/Gamma_function
|
In mathematics, the general linear group of degree n is the set of n×n invertible matrices, together with the operation of ordinary matrix multiplication. This forms a group, because the product of two invertible matrices is again invertible, and the inverse of an invertible matrix is invertible, with the identity matrix as the identity element of the group. The group is so named because the columns (and also the rows) of an invertible matrix are linearly independent, hence the vectors/points they define are in general linear position, and matrices in the general linear group take points in general linear position to points in general linear position. To be more precise, it is necessary to specify what kind of objects may appear in the entries of the matrix.
|
https://en.wikipedia.org/wiki/Infinite_general_linear_group
|
For example, the general linear group over R (the set of real numbers) is the group of n×n invertible matrices of real numbers, and is denoted by GLn(R) or GL(n, R). More generally, the general linear group of degree n over any field F (such as the complex numbers), or a ring R (such as the ring of integers), is the set of n×n invertible matrices with entries from F (or R), again with matrix multiplication as the group operation. Typical notation is GLn(F) or GL(n, F), or simply GL(n) if the field is understood.
|
https://en.wikipedia.org/wiki/Infinite_general_linear_group
|
More generally still, the general linear group of a vector space GL(V) is the automorphism group, not necessarily written as matrices. The special linear group, written SL(n, F) or SLn(F), is the subgroup of GL(n, F) consisting of matrices with a determinant of 1. The group GL(n, F) and its subgroups are often called linear groups or matrix groups (the automorphism group GL(V) is a linear group but not a matrix group).
|
https://en.wikipedia.org/wiki/Infinite_general_linear_group
|
These groups are important in the theory of group representations, and also arise in the study of spatial symmetries and symmetries of vector spaces in general, as well as the study of polynomials. The modular group may be realised as a quotient of the special linear group SL(2, Z). If n ≥ 2, then the group GL(n, F) is not abelian.
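The closure property mentioned above follows from multiplicativity of the determinant: det(AB) = det(A)det(B), so a product of invertible matrices is invertible. A small pure-Python illustration for GL(2, R); the matrix helpers are ad hoc to this example.

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 2], [3, 4]]   # det = -2, so A is in GL(2, R)
B = [[0, 1], [1, 1]]   # det = -1, so B is in GL(2, R)
AB = mat_mul(A, B)
# det(AB) = det(A) det(B) != 0, so AB is again invertible
print(det2(AB) == det2(A) * det2(B))  # True
```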
|
https://en.wikipedia.org/wiki/Infinite_general_linear_group
|
In mathematics, the generalized Pochhammer symbol of parameter {\displaystyle \alpha >0} and partition {\displaystyle \kappa =(\kappa _{1},\kappa _{2},\ldots ,\kappa _{m})} generalizes the classical Pochhammer symbol, named after Leo August Pochhammer, and is defined as {\displaystyle (a)_{\kappa }^{(\alpha )}=\prod _{i=1}^{m}\prod _{j=1}^{\kappa _{i}}\left(a-{\frac {i-1}{\alpha }}+j-1\right).} It is used in multivariate analysis.
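The defining double product transcribes directly into code; the sketch below uses exact rational arithmetic, the function name is illustrative, and α is taken to be a positive integer for simplicity.

```python
from fractions import Fraction

def gen_pochhammer(a, kappa, alpha):
    """(a)_kappa^(alpha) = prod_i prod_{j=1..kappa_i} (a - (i-1)/alpha + j - 1),
    computed exactly for rational a and integer alpha > 0."""
    result = Fraction(1)
    for i, k_i in enumerate(kappa, start=1):
        for j in range(1, k_i + 1):
            result *= Fraction(a) - Fraction(i - 1, alpha) + (j - 1)
    return result

# With alpha = 1 and a one-part partition (m,), this is the classical
# rising factorial (a)_m = a (a+1) ... (a+m-1):
print(gen_pochhammer(3, (4,), 1))  # 3*4*5*6 = 360
```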
|
https://en.wikipedia.org/wiki/Generalized_Pochhammer_symbol
|
In mathematics, the generalized dihedral groups are a family of groups with algebraic structures similar to that of the dihedral groups. They include the finite dihedral groups, the infinite dihedral group, and the orthogonal group O(2). Dihedral groups play an important role in group theory, geometry, and chemistry.
|
https://en.wikipedia.org/wiki/Generalized_dihedral_group
|
In mathematics, the generalized minimal residual method (GMRES) is an iterative method for the numerical solution of an indefinite nonsymmetric system of linear equations. The method approximates the solution by the vector in a Krylov subspace with minimal residual. The Arnoldi iteration is used to find this vector. The GMRES method was developed by Yousef Saad and Martin H. Schultz in 1986.
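A minimal, unrestarted GMRES sketch, assuming NumPy is available: the Arnoldi iteration builds an orthonormal Krylov basis, and a small least-squares problem minimizes the residual over that subspace. This is an illustration of the structure, not a production solver.

```python
import numpy as np

def gmres(A, b, m=None, tol=1e-12):
    """Minimal GMRES (no restarts): minimize ||b - A x|| over the
    Krylov subspace span{b, Ab, ..., A^(m-1) b} via Arnoldi iteration."""
    n = len(b)
    m = m or n
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(m):
        v = A @ Q[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < tol:           # "happy breakdown": subspace is invariant
            m = j + 1
            break
        Q[:, j + 1] = v / H[j + 1, j]
    # small (m+1) x m least-squares problem: min ||beta e1 - H y||
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
    return Q[:, :m] @ y

A = np.array([[4.0, 1.0, 0.0],
              [2.0, 5.0, 1.0],          # nonsymmetric system
              [0.0, 1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])
x = gmres(A, b)
print(np.linalg.norm(b - A @ x) < 1e-8)  # True: full Krylov space gives the exact solution
```

With m equal to the dimension of the system, GMRES recovers the exact solution (in exact arithmetic); in practice one uses m far smaller than n, often with restarts.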
|
https://en.wikipedia.org/wiki/Generalized_minimal_residual_method
|
It is a generalization and improvement of the MINRES method due to Paige and Saunders in 1975. The MINRES method requires that the matrix be symmetric, but has the advantage that it only requires handling of three vectors. GMRES is a special case of the DIIS method developed by Peter Pulay in 1980; DIIS is also applicable to non-linear systems.
|
https://en.wikipedia.org/wiki/Generalized_minimal_residual_method
|
In mathematics, the generalized symmetric group is the wreath product {\displaystyle S(m,n):=Z_{m}\wr S_{n}} of the cyclic group of order m and the symmetric group of degree n.
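Concretely, elements of S(m, n) can be represented as pairs (a vector in (Z_m)^n, a permutation of n points), multiplied by one common semidirect-product convention; the helper names below are ad hoc to this sketch.

```python
from itertools import product, permutations

def ws_elements(m, n):
    """All elements of S(m, n) = Z_m wr S_n: a vector in (Z_m)^n
    together with a permutation of {0, ..., n-1}."""
    return [(vec, perm) for vec in product(range(m), repeat=n)
                        for perm in permutations(range(n))]

def ws_mul(a, b, m):
    """One standard convention: (u, s)(v, t) = (u + s.v, s o t),
    where (s.v)_i = v_{s^{-1}(i)} permutes the coordinates of v."""
    (u, s), (v, t) = a, b
    sv = tuple(v[s.index(i)] for i in range(len(v)))   # s acting on v
    w = tuple((ui + x) % m for ui, x in zip(u, sv))    # add in (Z_m)^n
    st = tuple(s[t[i]] for i in range(len(s)))         # compose s after t
    return (w, st)

G = ws_elements(2, 2)
print(len(G))  # 8 = 2^2 * 2!, the order of the hyperoctahedral group B_2
```

The group order is m^n · n!, as the enumeration makes visible.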
|
https://en.wikipedia.org/wiki/Generalized_symmetric_group
|
In mathematics, the generalized taxicab number Taxicab(k, j, n) is the smallest number — if it exists — that can be expressed as the sum of j kth positive powers in n different ways. For k = 3 and j = 2, these coincide with the taxicab numbers. {\displaystyle \mathrm {Taxicab} (1,2,2)=4=1+3=2+2.}
|
https://en.wikipedia.org/wiki/Generalized_taxicab_number
|
{\displaystyle \mathrm {Taxicab} (2,2,2)=50=1^{2}+7^{2}=5^{2}+5^{2}.} {\displaystyle \mathrm {Taxicab} (3,2,2)=1729=1^{3}+12^{3}=9^{3}+10^{3}} — the number 1729 made famous by Ramanujan. Euler showed that {\displaystyle \mathrm {Taxicab} (4,2,2)=635318657=59^{4}+158^{4}=133^{4}+134^{4}.} However, Taxicab(5, 2, n) is not known for any n ≥ 2: no positive integer is known that can be written as the sum of two 5th powers in more than one way, and it is not known whether such a number exists. The largest variable of {\displaystyle a^{5}+b^{5}=c^{5}+d^{5}} must be at least 3450.
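The small cases can be reproduced by brute force; the search bound `limit` is an assumption of this sketch and must be chosen large enough to cover the true minimum.

```python
from collections import defaultdict
from itertools import combinations_with_replacement

def generalized_taxicab(k, j, n, limit):
    """Smallest number expressible as a sum of j kth powers in >= n ways,
    brute-forcing over bases 1..limit."""
    ways = defaultdict(int)
    for combo in combinations_with_replacement(range(1, limit + 1), j):
        ways[sum(b ** k for b in combo)] += 1
    candidates = [s for s, count in ways.items() if count >= n]
    return min(candidates) if candidates else None

print(generalized_taxicab(2, 2, 2, 10))  # 50
print(generalized_taxicab(3, 2, 2, 15))  # 1729
```

Every representation counted is genuine, so the only risk of a too-small `limit` is missing representations, never inventing them.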
|
https://en.wikipedia.org/wiki/Generalized_taxicab_number
|
In mathematics, the genus is a classification of quadratic forms and lattices over the ring of integers. An integral quadratic form is a quadratic form on Zn, or equivalently a free Z-module of finite rank. Two such forms are in the same genus if they are equivalent over the local rings Zp for each prime p and also equivalent over R. Equivalent forms are in the same genus, but the converse does not hold. For example, x2 + 82y2 and 2x2 + 41y2 are in the same genus but not equivalent over Z. Forms in the same genus have equal discriminant and hence there are only finitely many equivalence classes in a genus. The Smith–Minkowski–Siegel mass formula gives the weight or mass of the quadratic forms in a genus, the count of equivalence classes weighted by the reciprocals of the orders of their automorphism groups.
|
https://en.wikipedia.org/wiki/Genus_of_a_quadratic_form
|
In mathematics, the geodesic equations are second-order non-linear differential equations, and are commonly presented in the form of Euler–Lagrange equations of motion. However, they can also be presented as a set of coupled first-order equations, in the form of Hamilton's equations. This latter formulation is developed in this article.
|
https://en.wikipedia.org/wiki/Geodesics_as_Hamiltonian_flows
|
In mathematics, the geometric Langlands correspondence is a reformulation of the Langlands correspondence obtained by replacing the number fields appearing in the original number theoretic version by function fields and applying techniques from algebraic geometry. The geometric Langlands correspondence relates algebraic geometry and representation theory.
|
https://en.wikipedia.org/wiki/Geometric_Langlands_correspondence
|
In mathematics, the geometric mean is a mean or average which indicates a central tendency of a finite set of real numbers by using the product of their values (as opposed to the arithmetic mean which uses their sum). The geometric mean is defined as the nth root of the product of n numbers, i.e., for a set of numbers a1, a2, ..., an, the geometric mean is defined as {\displaystyle \left(\prod _{i=1}^{n}a_{i}\right)^{\frac {1}{n}}={\sqrt[{n}]{a_{1}a_{2}\cdots a_{n}}}} or, equivalently, as the arithmetic mean in logscale: {\displaystyle \exp {\left({{\frac {1}{n}}\sum \limits _{i=1}^{n}\ln a_{i}}\right)}} Most commonly the numbers are restricted to being non-negative, to avoid complications related to negative numbers not having real roots, and frequently they are restricted to being positive, to enable the use of logarithms. For instance, the geometric mean of two numbers, say 2 and 8, is just the square root of their product, that is, {\displaystyle {\sqrt {2\cdot 8}}=4}. As another example, the geometric mean of the three numbers 4, 1, and 1/32 is the cube root of their product (1/8), which is 1/2, that is, {\displaystyle {\sqrt[{3}]{4\cdot 1\cdot 1/32}}=1/2}.
|
https://en.wikipedia.org/wiki/Geometric_mean
|
The geometric mean is often used for a set of numbers whose values are meant to be multiplied together or are exponential in nature, such as a set of growth figures: values of the human population or interest rates of a financial investment over time. It also applies to benchmarking, where it is particularly useful for computing means of speedup ratios: the geometric mean of 0.5x (half as fast) and 2x (twice as fast) is 1 (i.e., no speedup overall). The geometric mean can be understood in terms of geometry.
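The speedup example can be checked directly; `geometric_mean` here is an illustrative helper using the log-scale formulation from the definition above.

```python
import math

def geometric_mean(xs):
    """Arithmetic mean in log scale, then exponentiate."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

speedups = [0.5, 2.0]            # half as fast, twice as fast
print(geometric_mean(speedups))  # 1.0 -> no speedup overall
print(sum(speedups) / 2)         # 1.25 -> the arithmetic mean overstates it
```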
|
https://en.wikipedia.org/wiki/Geometric_mean
|
The geometric mean of two numbers, {\displaystyle a} and {\displaystyle b}, is the length of one side of a square whose area is equal to the area of a rectangle with sides of lengths {\displaystyle a} and {\displaystyle b}. Similarly, the geometric mean of three numbers, {\displaystyle a}, {\displaystyle b}, and {\displaystyle c}, is the length of one edge of a cube whose volume is the same as that of a cuboid with sides whose lengths are equal to the three given numbers. The geometric mean is one of the three classical Pythagorean means, together with the arithmetic mean and the harmonic mean. For all positive data sets containing at least one pair of unequal values, the harmonic mean is always the least of the three means, while the arithmetic mean is always the greatest of the three and the geometric mean is always in between (see Inequality of arithmetic and geometric means).
|
https://en.wikipedia.org/wiki/Geometric_mean
|
In mathematics, the geometric topology is a topology one can put on the set H of hyperbolic 3-manifolds of finite volume.
|
https://en.wikipedia.org/wiki/Geometric_topology_(object)
|
In mathematics, the geometric–harmonic mean M(x, y) of two positive real numbers x and y is defined as follows: we form the geometric mean of g0 = x and h0 = y and call it g1, i.e. g1 is the square root of xy. We also form the harmonic mean of x and y and call it h1, i.e. h1 is the reciprocal of the arithmetic mean of the reciprocals of x and y. These may be done sequentially (in any order) or simultaneously. Now we can iterate this operation with g1 taking the place of x and h1 taking the place of y. In this way, two interdependent sequences (gn) and (hn) are defined: {\displaystyle g_{n+1}={\sqrt {g_{n}h_{n}}}} and {\displaystyle h_{n+1}={\frac {2}{{\frac {1}{g_{n}}}+{\frac {1}{h_{n}}}}}} Both of these sequences converge to the same number, which we call the geometric–harmonic mean M(x, y) of x and y. The geometric–harmonic mean is also designated as the harmonic–geometric mean (cf. Wolfram MathWorld below). The existence of the limit can be proved by means of the Bolzano–Weierstrass theorem in a manner almost identical to the proof of existence of the arithmetic–geometric mean.
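The two-sequence iteration translates directly into code; the stopping tolerance and helper name below are choices of this sketch.

```python
import math

def geometric_harmonic_mean(x, y, tol=1e-12):
    """Iterate g <- sqrt(g h), h <- harmonic mean of (g, h) until the
    two sequences agree to within a relative tolerance."""
    g, h = x, y
    while abs(g - h) > tol * max(g, h):
        g, h = math.sqrt(g * h), 2 / (1 / g + 1 / h)  # simultaneous update
    return g

m = geometric_harmonic_mean(1.0, 2.0)
# The limit lies strictly between the harmonic mean 4/3 and the
# geometric mean sqrt(2) of 1 and 2:
print(4 / 3 < m < math.sqrt(2))  # True
```

Since (hn) increases and (gn) decreases toward the common limit, M(x, y) is always sandwiched between the harmonic and geometric means of x and y.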
|
https://en.wikipedia.org/wiki/Geometric-harmonic_mean
|
For example, a vector field is a section of a tangent bundle on a smooth manifold; this says that a vector field on the union of two open sets is (no more and no less than) vector fields on the two sets that agree where they overlap. Given this basic understanding, there are further issues in the theory, and some will be addressed here. A different direction is that of the Grothendieck topology, and yet another is the logical status of 'local existence' (see Kripke–Joyal semantics).
|
https://en.wikipedia.org/wiki/Gluing_axiom
|
In mathematics, the goal of lattice basis reduction is to find a basis with short, nearly orthogonal vectors when given an integer lattice basis as input. This is realized using different algorithms, whose running time is usually at least exponential in the dimension of the lattice.
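In dimension 2 the goal is achieved exactly by the classical Lagrange (Gauss) reduction algorithm; a minimal sketch follows (general lattices need LLL or stronger methods, which this does not implement).

```python
def lagrange_reduce(u, v):
    """2D lattice reduction (Lagrange/Gauss): return a basis of the same
    lattice whose vectors are as short as possible."""
    dot = lambda a, b: a[0] * b[0] + a[1] * b[1]
    if dot(u, u) > dot(v, v):
        u, v = v, u                      # keep u as the shorter vector
    while True:
        m = round(dot(u, v) / dot(u, u)) # size-reduce v against u
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if dot(v, v) >= dot(u, u):
            return u, v                  # already reduced: done
        u, v = v, u

# (1,1) and (3,4) generate the integer lattice Z^2; reduction recovers
# a basis of unit vectors (up to sign).
print(lagrange_reduce((1, 1), (3, 4)))
```

Unlike the exponential behavior mentioned above for high dimensions, the 2D algorithm runs in time comparable to Euclid's gcd algorithm.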
|
https://en.wikipedia.org/wiki/Lattice_reduction
|
In mathematics, the gonality of an algebraic curve C is defined as the lowest degree of a nonconstant rational map from C to the projective line. In more algebraic terms, if C is defined over the field K and K(C) denotes the function field of C, then the gonality is the minimum value taken by the degrees of field extensions K(C)/K(f) of the function field over its subfields generated by single functions f. If K is algebraically closed, then the gonality is 1 precisely for curves of genus 0. The gonality is 2 for curves of genus 1 (elliptic curves) and for hyperelliptic curves (this includes all curves of genus 2). For genus g ≥ 3 it is no longer the case that the genus determines the gonality.
|
https://en.wikipedia.org/wiki/Gonality_of_an_algebraic_curve
|
The gonality of the generic curve of genus g is the floor function of (g + 3)/2. Trigonal curves are those with gonality 3, and this case gave rise to the name in general. Trigonal curves include the Picard curves, of genus three and given by an equation y³ = Q(x), where Q is of degree 4. The gonality conjecture, of M. Green and R. Lazarsfeld, predicts that the gonality of the algebraic curve C can be calculated by homological algebra means, from a minimal resolution of an invertible sheaf of high degree.
|
https://en.wikipedia.org/wiki/Gonality_of_an_algebraic_curve
|
In many cases the gonality is two more than the Clifford index. The Green–Lazarsfeld conjecture is an exact formula in terms of the graded Betti numbers for a degree d embedding in r dimensions, for d large with respect to the genus. Writing b(C), with respect to a given such embedding of C and the minimal free resolution for its homogeneous coordinate ring, for the minimum index i for which β_{i,i+1} is zero, then the conjectured formula for the gonality is r + 1 − b(C). According to the 1900 ICM talk of Federico Amodeo, the notion (but not the terminology) originated in Section V of Riemann's Theory of Abelian Functions. Amodeo used the term "gonalità" as early as 1893.
|
https://en.wikipedia.org/wiki/Gonality_of_an_algebraic_curve
|
In mathematics, the gradient conjecture, due to René Thom (1989), was proved in 2000 by three Polish mathematicians, Krzysztof Kurdyka (University of Savoie, France), Tadeusz Mostowski (Warsaw University, Poland) and Adam Parusiński (University of Angers, France). The conjecture states that given a real-valued analytic function f defined on Rn and a trajectory x(t) of the gradient vector field of f having a limit point x0 ∈ Rn, where f has an isolated critical point at x0, there exists a limit (in the projective space PRn-1) for the secant lines from x(t) to x0, as t tends to zero. The proof depends on a theorem due to Stanisław Łojasiewicz.
|
https://en.wikipedia.org/wiki/Gradient_conjecture
|
In mathematics, the grand Riemann hypothesis is a generalisation of the Riemann hypothesis and generalized Riemann hypothesis. It states that the nontrivial zeros of all automorphic L-functions lie on the critical line {\displaystyle {\frac {1}{2}}+it} with {\displaystyle t} a real number variable and {\displaystyle i} the imaginary unit. The modified grand Riemann hypothesis is the assertion that the nontrivial zeros of all automorphic L-functions lie on the critical line or the real line.
|
https://en.wikipedia.org/wiki/Grand_Riemann_hypothesis
|
In mathematics, the graph Fourier transform is a mathematical transform which eigendecomposes the Laplacian matrix of a graph into eigenvalues and eigenvectors. Analogously to the classical Fourier transform, the eigenvalues represent frequencies and the eigenvectors form what is known as a graph Fourier basis. The graph Fourier transform is important in spectral graph theory. It is widely applied in the recent study of graph-structured learning algorithms, such as the widely employed convolutional networks.
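A minimal illustration, assuming NumPy is available: build the Laplacian of a small path graph, eigendecompose it, and transform a signal into and back out of the resulting graph Fourier basis.

```python
import numpy as np

# Laplacian of the path graph 1-2-3-4: L = D - A
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

# Graph Fourier basis: eigenvectors of L; eigenvalues act as frequencies
eigvals, eigvecs = np.linalg.eigh(L)

# Forward transform = expansion in the eigenbasis; inverse = resynthesis
x = np.array([1.0, 2.0, 3.0, 4.0])
x_hat = eigvecs.T @ x
x_rec = eigvecs @ x_hat
print(np.allclose(x, x_rec))        # True: the transform is invertible

# Smallest eigenvalue is 0, with a constant eigenvector ("DC" component)
print(np.isclose(eigvals[0], 0.0))  # True
```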
|
https://en.wikipedia.org/wiki/Graph_Fourier_Transform
|
In mathematics, the graph of a function {\displaystyle f} is the set of ordered pairs {\displaystyle (x,y)}, where {\displaystyle f(x)=y.} In the common case where {\displaystyle x} and {\displaystyle f(x)} are real numbers, these pairs are Cartesian coordinates of points in two-dimensional space and thus form a subset of this plane. In the case of functions of two variables, that is functions whose domain consists of pairs {\displaystyle (x,y),} the graph usually refers to the set of ordered triples {\displaystyle (x,y,z)} where {\displaystyle f(x,y)=z,} instead of the pairs {\displaystyle ((x,y),z)} as in the definition above.
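For a function with a finite domain, the definition can be written down literally as a set of pairs; a tiny illustrative sketch:

```python
# The graph of f(x) = x**2 on a small finite domain, as a set of pairs (x, f(x))
f = lambda x: x ** 2
domain = range(-2, 3)
graph = {(x, f(x)) for x in domain}
print(sorted(graph))  # [(-2, 4), (-1, 1), (0, 0), (1, 1), (2, 4)]
```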
|
https://en.wikipedia.org/wiki/Graph_(function)
|
This set is a subset of three-dimensional space; for a continuous real-valued function of two real variables, it is a surface. In science, engineering, technology, finance, and other areas, graphs are tools used for many purposes.
|
https://en.wikipedia.org/wiki/Graph_(function)
|
In the simplest case one variable is plotted as a function of another, typically using rectangular axes; see Plot (graphics) for details. A graph of a function is a special case of a relation. In the modern foundations of mathematics, and, typically, in set theory, a function is actually equal to its graph.
|
https://en.wikipedia.org/wiki/Graph_(function)
|
However, it is often useful to see functions as mappings, which consist not only of the relation between input and output, but also of which set is the domain and which set is the codomain. For example, to say whether a function is onto (surjective) or not, the codomain must be taken into account. The graph of a function on its own does not determine the codomain. It is common to use both terms function and graph of a function since even if considered the same object, they indicate viewing it from a different perspective.
|
https://en.wikipedia.org/wiki/Graph_(function)
|
In mathematics, the graph structure theorem is a major result in the area of graph theory. The result establishes a deep and fundamental connection between the theory of graph minors and topological embeddings. The theorem is stated in the seventeenth of a series of 23 papers by Neil Robertson and Paul Seymour. Its proof is very long and involved. Kawarabayashi & Mohar (2007) and Lovász (2006) are surveys accessible to nonspecialists, describing the theorem and its consequences.
|
https://en.wikipedia.org/wiki/Graph_structure_theorem
|
In mathematics, the greatest common divisor (GCD) of two or more integers, which are not all zero, is the largest positive integer that divides each of the integers. For two integers x, y, the greatest common divisor of x and y is denoted {\displaystyle \gcd(x,y)}. For example, the GCD of 8 and 12 is 4, that is, {\displaystyle \gcd(8,12)=4}. In the name "greatest common divisor", the adjective "greatest" may be replaced by "highest", and the word "divisor" may be replaced by "factor", so that other names include highest common factor (hcf), etc. Historically, other names for the same concept have included greatest common measure. This notion can be extended to polynomials (see Polynomial greatest common divisor) and other commutative rings (see § In commutative rings below).
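The example gcd(8, 12) = 4 can be computed with Euclid's algorithm, which repeatedly replaces the pair (x, y) by (y, x mod y) until the second entry is zero:

```python
def gcd(x, y):
    """Euclid's algorithm for the greatest common divisor."""
    while y:
        x, y = y, x % y
    return abs(x)

print(gcd(8, 12))  # 4
```

(Python's standard library offers the same operation as `math.gcd`.)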
|
https://en.wikipedia.org/wiki/Greatest_Common_Divisor
|
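The GCD described above can be computed with the classical Euclidean algorithm; a minimal Python sketch (not part of the article):

```python
def gcd(a, b):
    """Greatest common divisor via the Euclidean algorithm:
    repeatedly replace (a, b) by (b, a mod b) until b is 0."""
    while b:
        a, b = b, a % b
    return abs(a)

print(gcd(8, 12))  # -> 4, matching the article's example
```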
In mathematics, the greedy algorithm for Egyptian fractions is a greedy algorithm, first described by Fibonacci, for transforming rational numbers into Egyptian fractions. An Egyptian fraction is a representation of an irreducible fraction as a sum of distinct unit fractions, such as 5/6 = 1/2 + 1/3. As the name indicates, these representations have been used as long ago as ancient Egypt, but the first published systematic method for constructing such expansions was described in 1202 in the Liber Abaci of Leonardo of Pisa (Fibonacci). It is called a greedy algorithm because at each step the algorithm chooses greedily the largest possible unit fraction that can be used in any representation of the remaining fraction.
|
https://en.wikipedia.org/wiki/Greedy_algorithm_for_Egyptian_fractions
|
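The greedy step (always take the largest unit fraction not exceeding the remainder) can be sketched in Python with exact rational arithmetic; a minimal illustration, not Fibonacci's own procedure:

```python
from fractions import Fraction
import math

def egyptian_greedy(frac):
    """Expand a rational in (0, 1) as a sum of distinct unit fractions,
    greedily choosing the largest possible unit fraction at each step."""
    terms = []
    while frac > 0:
        d = math.ceil(1 / frac)  # smallest denominator d with 1/d <= frac
        terms.append(d)
        frac -= Fraction(1, d)
    return terms

print(egyptian_greedy(Fraction(5, 6)))  # -> [2, 3], i.e. 5/6 = 1/2 + 1/3
```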
Fibonacci actually lists several different methods for constructing Egyptian fraction representations. He includes the greedy method as a last resort for situations when several simpler methods fail; see Egyptian fraction for a more detailed listing of these methods. As Salzer (1948) details, the greedy method, and extensions of it for the approximation of irrational numbers, have been rediscovered several times by modern mathematicians, earliest and most notably by J. J. Sylvester (1880). A closely related expansion method that produces closer approximations at each step by allowing some unit fractions in the sum to be negative dates back to Lambert (1770). The expansion produced by this method for a number x is called the greedy Egyptian expansion, Sylvester expansion, or Fibonacci–Sylvester expansion of x. However, the term Fibonacci expansion usually refers, not to this method, but to the representation of integers as sums of Fibonacci numbers.
|
https://en.wikipedia.org/wiki/Greedy_algorithm_for_Egyptian_fractions
|
In mathematics, the group Hopf algebra of a given group is a certain construct related to the symmetries of group actions. Deformations of group Hopf algebras are foundational in the theory of quantum groups.
|
https://en.wikipedia.org/wiki/Group_Hopf_algebra
|
In mathematics, the group of rotations about a fixed point in four-dimensional Euclidean space is denoted SO(4). The name comes from the fact that it is the special orthogonal group of order 4. In this article rotation means rotational displacement. For the sake of uniqueness, rotation angles are assumed to be in the segment [0, π] except where mentioned or clearly implied by the context otherwise. A "fixed plane" is a plane for which every vector in the plane is unchanged after the rotation. An "invariant plane" is a plane for which every vector in the plane, although it may be affected by the rotation, remains in the plane after the rotation.
|
https://en.wikipedia.org/wiki/Rotations_in_4-dimensional_Euclidean_space
|
In mathematics, the hafnian of an adjacency matrix of a graph is the number of perfect matchings in the graph. It was so named by Eduardo R. Caianiello "to mark the fruitful period of stay in Copenhagen (Hafnia in Latin)". The hafnian of a 2n × 2n symmetric matrix A is computed as haf(A) = (1/(n! 2ⁿ)) ∑_{σ∈S_{2n}} ∏_{j=1}^{n} A_{σ(2j−1),σ(2j)}, where S_{2n} is the symmetric group on {1, 2, .
|
https://en.wikipedia.org/wiki/Hafnian
|
..., 2n}. Equivalently, haf(A) = ∑_{M∈𝓜} ∏_{(u,v)∈M} A_{u,v}, where 𝓜 is the set of all 1-factors (perfect matchings) on the complete graph K_{2n}, namely the set of all (2n)!
|
https://en.wikipedia.org/wiki/Hafnian
|
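For small matrices the permutation formula above can be evaluated directly by brute force. A sketch (exponential time; integer entries are assumed so the final division is exact):

```python
from itertools import permutations
from math import factorial

def hafnian(A):
    """Hafnian of a 2n x 2n symmetric matrix via the permutation formula
    haf(A) = 1/(n! 2^n) * sum over sigma of prod_j A[sigma(2j)][sigma(2j+1)].
    Exponential time; suitable only for small integer matrices."""
    m = len(A)
    n = m // 2
    total = 0
    for sigma in permutations(range(m)):
        p = 1
        for j in range(n):
            p *= A[sigma[2 * j]][sigma[2 * j + 1]]
        total += p
    return total // (factorial(n) * 2**n)  # exact for integer matrices

# K_4 has 3 perfect matchings:
K4 = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
print(hafnian(K4))  # -> 3
```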
In mathematics, the half-period ratio τ of an elliptic function is the ratio τ = ω₂/ω₁ of the two half-periods ω₁/2 and ω₂/2 of the elliptic function, where the elliptic function is defined in such a way that ℑ(τ) > 0, i.e. τ lies in the upper half-plane. Quite often in the literature, ω₁ and ω₂ are defined to be the periods of an elliptic function rather than its half-periods. Regardless of the choice of notation, the ratio ω₂/ω₁ of periods is identical to the ratio (ω₂/2)/(ω₁/2) of half-periods.
|
https://en.wikipedia.org/wiki/Half-period_ratio
|
Hence, the period ratio is the same as the "half-period ratio". Note that the half-period ratio can be thought of as a simple number, namely, one of the parameters to elliptic functions, or it can be thought of as a function itself, because the half periods can be given in terms of the elliptic modulus or in terms of the nome. See the pages on quarter period and elliptic integrals for additional definitions and relations on the arguments and parameters to elliptic functions.
|
https://en.wikipedia.org/wiki/Half-period_ratio
|
In mathematics, the harmonic mean is one of several kinds of average, and in particular, one of the Pythagorean means. It is sometimes appropriate for situations when the average rate is desired. The harmonic mean can be expressed as the reciprocal of the arithmetic mean of the reciprocals of the given set of observations. As a simple example, the harmonic mean of 1, 4, and 4 is ((1⁻¹ + 4⁻¹ + 4⁻¹)/3)⁻¹ = 3/(1/1 + 1/4 + 1/4) = 3/1.5 = 2.
|
https://en.wikipedia.org/wiki/Weighted_harmonic_mean
|
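The definition (reciprocal of the arithmetic mean of the reciprocals) reads directly as code; a minimal sketch reproducing the worked example:

```python
def harmonic_mean(xs):
    """Reciprocal of the arithmetic mean of the reciprocals."""
    return len(xs) / sum(1 / x for x in xs)

print(harmonic_mean([1, 4, 4]))  # -> 2.0, as in the example
```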
In mathematics, the harmonic series is the infinite series formed by summing all positive unit fractions: 1 + 1/2 + 1/3 + 1/4 + ⋯. The first n terms of the series sum to approximately ln n + γ, where ln is the natural logarithm and γ ≈ 0.577 is the Euler–Mascheroni constant. Because the logarithm has arbitrarily large values, the harmonic series does not have a finite limit: it is a divergent series. Its divergence was proven in the 14th century by Nicole Oresme using a precursor to the Cauchy condensation test for the convergence of infinite series. It can also be proven to diverge by comparing the sum to an integral, according to the integral test for convergence. Applications of the harmonic series and its partial sums include Euler's proof that there are infinitely many prime numbers, the analysis of the coupon collector's problem on how many random trials are needed to provide a complete range of responses, the connected components of random graphs, the block-stacking problem on how far over the edge of a table a stack of blocks can be cantilevered, and the average case analysis of the quicksort algorithm.
|
https://en.wikipedia.org/wiki/Alternating_harmonic_series
|
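The approximation H_n ≈ ln n + γ can be checked numerically; a small sketch (the value of γ is hard-coded):

```python
import math

GAMMA = 0.5772156649015329  # Euler–Mascheroni constant

def harmonic_number(n):
    """Partial sum H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1 / k for k in range(1, n + 1))

n = 10**5
h = harmonic_number(n)
approx = math.log(n) + GAMMA
# The error is about 1/(2n), so the two agree to roughly five decimals.
print(h, approx)
```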
In mathematics, the height of an element g of an abelian group A is an invariant that captures its divisibility properties: it is the largest natural number N such that the equation Nx = g has a solution x ∈ A, or the symbol ∞ if there is no such N. The p-height considers only divisibility properties by the powers of a fixed prime number p. The notion of height admits a refinement so that the p-height becomes an ordinal number. Height plays an important role in Prüfer theorems and also in Ulm's theorem, which describes the classification of certain infinite abelian groups in terms of their Ulm factors or Ulm invariants.
|
https://en.wikipedia.org/wiki/Ulm's_theorem
|
In mathematics, the height zeta function of an algebraic variety or more generally a subset of a variety encodes the distribution of points of given height.
|
https://en.wikipedia.org/wiki/Height_zeta_function
|
In mathematics, the homology or cohomology of an algebra may refer to: Banach algebra cohomology of a bimodule over a Banach algebra; cyclic homology of an associative algebra; group cohomology of a module over a group ring or a representation of a group; Hochschild homology of a bimodule over an associative algebra; Lie algebra cohomology of a module over a Lie algebra; or supplemented algebra cohomology of a module over a supplemented associative algebra.
|
https://en.wikipedia.org/wiki/Cohomology_of_algebras
|
In mathematics, the homotopy category is a category built from the category of topological spaces which in a sense identifies two spaces that have the same shape. The phrase is in fact used for two different (but related) categories, as discussed below. More generally, instead of starting with the category of topological spaces, one may start with any model category and define its associated homotopy category, with a construction introduced by Quillen in 1967. In this way, homotopy theory can be applied to many other categories in geometry and algebra.
|
https://en.wikipedia.org/wiki/Homotopy_category_of_topological_spaces
|
In mathematics, the homotopy principle (or h-principle) is a very general way to solve partial differential equations (PDEs), and more generally partial differential relations (PDRs). The h-principle is good for underdetermined PDEs or PDRs, such as the immersion problem, isometric immersion problem, fluid dynamics, and other areas. The theory was started by Yakov Eliashberg, Mikhail Gromov and Anthony V. Phillips.
|
https://en.wikipedia.org/wiki/Homotopy_principle
|
It was based on earlier results that reduced partial differential relations to homotopy, particularly for immersions. The first evidence of the h-principle appeared in the Whitney–Graustein theorem. This was followed by the Nash–Kuiper isometric C1 embedding theorem and the Smale–Hirsch immersion theorem.
|
https://en.wikipedia.org/wiki/Homotopy_principle
|
In mathematics, the horizontal line test is a test used to determine whether a function is injective (i.e., one-to-one).
|
https://en.wikipedia.org/wiki/Horizontal_line_test
|
In mathematics, the hypergeometric function of a matrix argument is a generalization of the classical hypergeometric series. It is a function defined by an infinite summation which can be used to evaluate certain multivariate integrals. Hypergeometric functions of a matrix argument have applications in random matrix theory. For example, the distributions of the extreme eigenvalues of random matrices are often expressed in terms of the hypergeometric function of a matrix argument.
|
https://en.wikipedia.org/wiki/Hypergeometric_function_of_a_matrix_argument
|
In mathematics, the hypergraph regularity method is a powerful tool in extremal graph theory that refers to the combined application of the hypergraph regularity lemma and the associated counting lemma. It is a generalization of the graph regularity method, which refers to the use of Szemerédi's regularity and counting lemmas. Very informally, the hypergraph regularity lemma decomposes any given k-uniform hypergraph into a random-like object with bounded parts (with appropriate notions of boundedness and randomness) that is usually easier to work with. On the other hand, the hypergraph counting lemma estimates the number of hypergraphs of a given isomorphism class in some collections of the random-like parts.
|
https://en.wikipedia.org/wiki/Hypergraph_regularity_method
|
This is an extension of Szemerédi's regularity lemma, which partitions any given graph into a bounded number of parts such that edges between the parts behave almost randomly. Similarly, the hypergraph counting lemma is a generalization of the graph counting lemma that estimates the number of copies of a fixed graph as a subgraph of a larger graph. There are several distinct formulations of the method, all of which imply the hypergraph removal lemma and a number of other powerful results, such as Szemerédi's theorem, as well as some of its multidimensional extensions. The following formulations are due to V. Rödl, B. Nagle, J. Skokan, M. Schacht, and Y. Kohayakawa; for alternative versions see Tao (2006) and Gowers (2007).
|
https://en.wikipedia.org/wiki/Hypergraph_regularity_method
|
In mathematics, the hyperkähler quotient of a hyperkähler manifold acted on by a Lie group G is the quotient of a fiber of a hyperkähler moment map M → 𝔤 ⊗ ℝ³ over a G-fixed point by the action of G. It was introduced by Nigel Hitchin, Anders Karlhede, Ulf Lindström, and Martin Roček in 1987. It is a hyperkähler analogue of the Kähler quotient.
|
https://en.wikipedia.org/wiki/Hyperkähler_quotient
|
In mathematics, the hyperoperation sequence is an infinite sequence of arithmetic operations (called hyperoperations in this context) that starts with a unary operation (the successor function with n = 0). The sequence continues with the binary operations of addition (n = 1), multiplication (n = 2), and exponentiation (n = 3). After that, the sequence proceeds with further binary operations extending beyond exponentiation, using right-associativity. For the operations beyond exponentiation, the nth member of this sequence is named by Reuben Goodstein after the Greek prefix of n suffixed with -ation (such as tetration (n = 4), pentation (n = 5), hexation (n = 6), etc.) and can be written as a ↑ⁿ⁻² b using n − 2 arrows in Knuth's up-arrow notation. Each hyperoperation may be understood recursively in terms of the previous one (writing a[n]b for the nth hyperoperation applied to a and b): a[n]b = a[n−1](a[n−1](⋯(a[n−1](a[n−1]a))⋯)) with b copies of a, for n ≥ 2. It may also be defined according to the recursion rule part of the definition, as in Knuth's up-arrow version of the Ackermann function: a[n]b = a[n−1](a[n](b−1)), for n ≥ 1. This can be used to easily express numbers much larger than those which scientific notation can, such as Skewes's number and googolplexplex (e.g. 50[50]50 is much larger than Skewes's number and googolplexplex), but there are some numbers which even they cannot easily express, such as Graham's number and TREE(3). This recursion rule is common to many variants of hyperoperations.
|
https://en.wikipedia.org/wiki/Hyperoperation
|
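The recursion a[n]b = a[n−1](a[n](b−1)) can be sketched directly; a naive recursive version (it only terminates for very small arguments, since the values grow explosively):

```python
def hyper(n, a, b):
    """Hyperoperation H_n(a, b): n=0 successor, n=1 addition,
    n=2 multiplication, n=3 exponentiation, n=4 tetration, ..."""
    if n == 0:
        return b + 1
    if n == 1:
        return a + b
    if n == 2:
        return a * b
    if n == 3:
        return a ** b
    if b == 0:
        return 1  # base case a[n]0 = 1 for n >= 4 (e.g. a tetrated to 0)
    return hyper(n - 1, a, hyper(n, a, b - 1))

print(hyper(4, 2, 3))  # tetration: 2^(2^2) = 16
```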
In mathematics, the hypograph or subgraph of a function f: ℝⁿ → ℝ is the set of points lying on or below its graph. A related definition is that of such a function's epigraph, which is the set of points on or above the function's graph. The domain (rather than the codomain) of the function is not particularly important for this definition; it can be an arbitrary set instead of ℝⁿ.
|
https://en.wikipedia.org/wiki/Hypograph_(mathematics)
|
In mathematics, the idea of a free object is one of the basic concepts of abstract algebra. Informally, a free object over a set A can be thought of as being a "generic" algebraic structure over A: the only equations that hold between elements of the free object are those that follow from the defining axioms of the algebraic structure. Examples include free groups, tensor algebras, or free lattices. The concept is a part of universal algebra, in the sense that it relates to all types of algebraic structure (with finitary operations). It also has a formulation in terms of category theory, although this is in yet more abstract terms.
|
https://en.wikipedia.org/wiki/Free_functor
|
In mathematics, the idea of descent extends the intuitive idea of 'gluing' in topology. Since the topologists' glue is the use of equivalence relations on topological spaces, the theory starts with some ideas on identification.
|
https://en.wikipedia.org/wiki/Descent_theory
|
In mathematics, the idea of geometric scaling can be generalized. The scale between two mathematical objects need not be a fixed ratio but may vary in some systematic way; this is part of mathematical projection, which generally defines a point by point relationship between two mathematical objects. (Generally, these may be mathematical sets and may not represent geometric objects.)
|
https://en.wikipedia.org/wiki/Scale_ratio
|
In mathematics, the identity theorem for Riemann surfaces is a theorem that states that a holomorphic function is completely determined by its values on any subset of its domain that has a limit point.
|
https://en.wikipedia.org/wiki/Identity_theorem_for_Riemann_surfaces
|
In mathematics, the image of a function is the set of all output values it may produce. More generally, evaluating a given function f at each element of a given subset A of its domain produces a set, called the "image of A under (or through) f". Similarly, the inverse image (or preimage) of a given subset B of the codomain of f is the set of all elements of the domain that map to the members of B. Image and inverse image may also be defined for general binary relations, not just functions.
|
https://en.wikipedia.org/wiki/Inverse_image
|
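For finite sets, image and preimage translate directly into set comprehensions; a minimal sketch:

```python
def image(f, A):
    """Image of subset A of the domain under f."""
    return {f(a) for a in A}

def preimage(f, domain, B):
    """Inverse image of subset B of the codomain: all domain
    elements that f maps into B."""
    return {x for x in domain if f(x) in B}

square = lambda x: x * x
print(image(square, {-2, -1, 0, 1, 2}))        # the set {0, 1, 4}
print(preimage(square, {-2, -1, 0, 1, 2}, {4}))  # the set {-2, 2}
```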
In mathematics, the imaginary unit i is the square root of −1, such that i² is defined to be −1. A number which is a direct multiple of i is known as an imaginary number. In certain physical theories, periods of time are multiplied by i in this way.
|
https://en.wikipedia.org/wiki/Imaginary_time
|
Mathematically, an imaginary time period τ may be obtained from real time t via a Wick rotation by π/2 in the complex plane: τ = it. Stephen Hawking popularized the concept of imaginary time in his book The Universe in a Nutshell. "One might think this means that imaginary numbers are just a mathematical game having nothing to do with the real world.
|
https://en.wikipedia.org/wiki/Imaginary_time
|
From the viewpoint of positivist philosophy, however, one cannot determine what is real. All one can do is find which mathematical models describe the universe we live in. It turns out that a mathematical model involving imaginary time predicts not only effects we have already observed but also effects we have not been able to measure yet nevertheless believe in for other reasons.
|
https://en.wikipedia.org/wiki/Imaginary_time
|
So what is real and what is imaginary? Is the distinction just in our minds?" In fact, the terms "real" and "imaginary" for numbers are just a historical accident, much like the terms "rational" and "irrational": "...the words real and imaginary are picturesque relics of an age when the nature of complex numbers was not properly understood."
|
https://en.wikipedia.org/wiki/Imaginary_time
|
In mathematics, the immanant of a matrix was defined by Dudley E. Littlewood and Archibald Read Richardson as a generalisation of the concepts of determinant and permanent. Let λ = (λ₁, λ₂, …) be a partition of an integer n and let χ_λ be the corresponding irreducible representation-theoretic character of the symmetric group S_n. The immanant of an n × n matrix A = (a_ij) associated with the character χ_λ is defined as the expression Imm_λ(A) = ∑_{σ∈S_n} χ_λ(σ) a_{1σ(1)} a_{2σ(2)} ⋯ a_{nσ(n)}.
|
https://en.wikipedia.org/wiki/Immanant
|
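The definition specializes to the determinant (sign character) and the permanent (trivial character). A brute-force sketch over all permutations, with the character supplied as a function (computing χ_λ for a general partition λ is not attempted here):

```python
from itertools import permutations

def sign(perm):
    """Sign of a permutation (as a tuple of 0..n-1), via inversion count."""
    n = len(perm)
    inv = sum(1 for i in range(n) for j in range(i + 1, n)
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def immanant(A, char):
    """Sum over all permutations sigma of char(sigma) * prod_i A[i][sigma(i)].
    With char = sign this is the determinant; with char = 1 the permanent."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        term = char(sigma)
        for i in range(n):
            term *= A[i][sigma[i]]
        total += term
    return total

A = [[1, 2], [3, 4]]
print(immanant(A, sign))          # determinant: -2
print(immanant(A, lambda s: 1))   # permanent: 10
```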
In mathematics, the incomplete Fermi–Dirac integral for an index j is given by F_j(x, b) = (1/Γ(j+1)) ∫_b^∞ t^j / (exp(t − x) + 1) dt. This is an alternate definition of the incomplete polylogarithm.
|
https://en.wikipedia.org/wiki/Incomplete_Fermi–Dirac_integral
|
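The integral can be approximated by truncating the upper limit and applying a simple midpoint rule; for j = 0 there is the closed form F₀(x, b) = ln(1 + e^{x−b}), which gives a check. A rough numerical sketch (the cutoff and step count are ad hoc choices):

```python
import math

def incomplete_fermi_dirac(j, x, b, upper=60.0, steps=100000):
    """Approximate F_j(x, b) = 1/Gamma(j+1) * integral from b to infinity
    of t^j / (exp(t - x) + 1) dt, truncated at `upper`, midpoint rule."""
    h = (upper - b) / steps
    s = 0.0
    for k in range(steps):
        t = b + (k + 0.5) * h
        s += t**j / (math.exp(t - x) + 1.0)
    return s * h / math.gamma(j + 1)

# Check against the j = 0 closed form F_0(x, b) = ln(1 + e^(x - b)):
approx = incomplete_fermi_dirac(0, 1.0, 0.5)
exact = math.log(1 + math.exp(1.0 - 0.5))
print(approx, exact)  # agree to several decimal places
```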
In mathematics, the incompressibility method is a proof method like the probabilistic method, the counting method or the pigeonhole principle. To prove that an object in a certain class (on average) satisfies a certain property, select an object of that class that is incompressible. If it does not satisfy the property, it can be compressed by computable coding. Since it can be generally proven that almost all objects in a given class are incompressible, the argument demonstrates that almost all objects in the class have the property involved (not just the average). To select an incompressible object is ineffective, and cannot be done by a computer program. However, a simple counting argument usually shows that almost all objects of a given class cannot be compressed by more than a few bits (are incompressible).
|
https://en.wikipedia.org/wiki/Incompressibility_method
|
In mathematics, the ind-completion or ind-construction is the process of freely adding filtered colimits to a given category C. The objects in this ind-completed category, denoted Ind(C), are known as direct systems; they are functors from a small filtered category I to C. The dual concept is the pro-completion, Pro(C).
|
https://en.wikipedia.org/wiki/Pro-category
|
In mathematics, the indefinite orthogonal group, O(p, q) is the Lie group of all linear transformations of an n-dimensional real vector space that leave invariant a nondegenerate, symmetric bilinear form of signature (p, q), where n = p + q. It is also called the pseudo-orthogonal group or generalized orthogonal group. The dimension of the group is n(n − 1)/2. The indefinite special orthogonal group, SO(p, q) is the subgroup of O(p, q) consisting of all elements with determinant 1. Unlike in the definite case, SO(p, q) is not connected – it has 2 components – and there are two additional finite index subgroups, namely the connected SO+(p, q) and O+(p, q), which has 2 components – see § Topology for definition and discussion.
|
https://en.wikipedia.org/wiki/Indefinite_orthogonal_group
|
The signature of the form determines the group up to isomorphism; interchanging p with q amounts to replacing the metric by its negative, and so gives the same group. If either p or q equals zero, then the group is isomorphic to the ordinary orthogonal group O(n). We assume in what follows that both p and q are positive.
|
https://en.wikipedia.org/wiki/Indefinite_orthogonal_group
|
The group O(p, q) is defined for vector spaces over the reals. For complex spaces, all groups O(p, q; C) are isomorphic to the usual orthogonal group O(p + q; C), since the transform z j ↦ i z j {\displaystyle z_{j}\mapsto iz_{j}} changes the signature of a form. This should not be confused with the indefinite unitary group U(p, q) which preserves a sesquilinear form of signature (p, q). In even dimension n = 2p, O(p, p) is known as the split orthogonal group.
|
https://en.wikipedia.org/wiki/Indefinite_orthogonal_group
|
In mathematics, the indefinite product operator is the inverse operator of Q(f(x)) = f(x+1)/f(x). It is a discrete version of the geometric integral of geometric calculus, one of the non-Newtonian calculi. Some authors use the term discrete multiplicative integration. Thus Q(∏_x f(x)) = f(x).
|
https://en.wikipedia.org/wiki/Indefinite_product
|
More explicitly, if ∏_x f(x) = F(x), then F(x+1)/F(x) = f(x). If F(x) is a solution of this functional equation for a given f(x), then so is CF(x) for any constant C. Therefore, each indefinite product actually represents a family of functions, differing by a multiplicative constant.
|
https://en.wikipedia.org/wiki/Indefinite_product
|
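The defining property F(x+1)/F(x) = f(x) is easy to verify for a partial product over integer arguments; a small sketch:

```python
def indefinite_product(f, x0, x):
    """Partial product F(x) = f(x0) * f(x0+1) * ... * f(x-1), a discrete
    antiderivative for the operator Q(F)(x) = F(x+1)/F(x)."""
    p = 1
    for k in range(x0, x):
        p *= f(k)
    return p

f = lambda k: k + 1
F = lambda x: indefinite_product(f, 0, x)

# Check the defining property Q(F) = F(x+1)/F(x) = f(x):
for x in range(1, 6):
    assert F(x + 1) // F(x) == f(x)

print(F(5))  # -> 120, i.e. 1*2*3*4*5
```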
In mathematics, the indicator vector or characteristic vector or incidence vector of a subset T of a set S is the vector x_T := (x_s)_{s∈S} such that x_s = 1 if s ∈ T and x_s = 0 if s ∉ T. If S is countable and its elements are numbered so that S = {s₁, s₂, …, s_n}, then x_T = (x₁, x₂, …, x_n) where x_i = 1 if s_i ∈ T and x_i = 0 if s_i ∉ T. To put it more simply, the indicator vector of T is a vector with one element for each element in S, with that element being one if the corresponding element of S is in T, and zero if it is not. An indicator vector is a special (countable) case of an indicator function.
|
https://en.wikipedia.org/wiki/Indicator_vector
|
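For a finite ordered S this is a one-liner; a minimal sketch:

```python
def indicator_vector(S, T):
    """Indicator vector of subset T of an ordered finite set S:
    one entry per element of S, 1 if that element is in T, else 0."""
    return [1 if s in T else 0 for s in S]

S = ["a", "b", "c", "d"]
print(indicator_vector(S, {"b", "d"}))  # -> [0, 1, 0, 1]
```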
In mathematics, the inequality of arithmetic and geometric means, or more briefly the AM–GM inequality, states that the arithmetic mean of a list of non-negative real numbers is greater than or equal to the geometric mean of the same list; and further, that the two means are equal if and only if every number in the list is the same (in which case they are both that number). The simplest non-trivial case – i.e., with more than one variable – for two non-negative numbers x and y, is the statement that (x + y)/2 ≥ √(xy), with equality if and only if x = y. This case can be seen from the fact that the square of a real number is always non-negative (greater than or equal to zero) and from the elementary case (a ± b)² = a² ± 2ab + b² of the binomial formula: 0 ≤ (x − y)² = x² − 2xy + y² = x² + 2xy + y² − 4xy = (x + y)² − 4xy. Hence (x + y)² ≥ 4xy, with equality precisely when (x − y)² = 0, i.e. x = y. The AM–GM inequality then follows from taking the positive square root of both sides and then dividing both sides by 2. For a geometrical interpretation, consider a rectangle with sides of length x and y; hence it has perimeter 2x + 2y and area xy.
|
https://en.wikipedia.org/wiki/AM–GM_inequality
|
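A quick numerical spot-check of the inequality and its equality case (random sampling, not a proof):

```python
import math
import random

def am(xs):
    """Arithmetic mean."""
    return sum(xs) / len(xs)

def gm(xs):
    """Geometric mean of non-negative numbers."""
    return math.prod(xs) ** (1 / len(xs))

# AM >= GM on random non-negative inputs (small tolerance for rounding),
# with equality when all entries agree.
random.seed(0)
for _ in range(1000):
    xs = [random.uniform(0, 10) for _ in range(5)]
    assert am(xs) >= gm(xs) - 1e-9

print(am([2, 8]), gm([2, 8]))  # -> 5.0 4.0
```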
Similarly, a square with all sides of length √(xy) has the perimeter 4√(xy) and the same area as the rectangle. The simplest non-trivial case of the AM–GM inequality implies for the perimeters that 2x + 2y ≥ 4√(xy), and that the square has the smallest perimeter amongst all rectangles of equal area. Extensions of the AM–GM inequality are available to include weights or generalized means.
|
https://en.wikipedia.org/wiki/AM–GM_inequality
|
In mathematics, the infimum (abbreviated inf; plural infima) of a subset S of a partially ordered set P is the greatest element in P that is less than or equal to each element of S, if such an element exists. In other words, it is the greatest of the lower bounds of S in P. Consequently, the term greatest lower bound (abbreviated as GLB) is also commonly used. The supremum (abbreviated sup; plural suprema) of a subset S of a partially ordered set P is the least element in P that is greater than or equal to each element of S, if such an element exists.
|
https://en.wikipedia.org/wiki/Infimum_and_supremum
|
In other words, it is the least of the upper bounds of S in P. Consequently, the supremum is also referred to as the least upper bound (or LUB). The infimum is in a precise sense dual to the concept of a supremum. Infima and suprema of real numbers are common special cases that are important in analysis, and especially in Lebesgue integration.
|
https://en.wikipedia.org/wiki/Infimum_and_supremum
|
However, the general definitions remain valid in the more abstract setting of order theory where arbitrary partially ordered sets are considered. The concepts of infimum and supremum are close to minimum and maximum, but are more useful in analysis because they better characterize special sets which may have no minimum or maximum. For instance, the set of positive real numbers ℝ⁺ (not including 0) does not have a minimum, because any given element of ℝ⁺ could simply be divided in half, resulting in a smaller number that is still in ℝ⁺.
|
https://en.wikipedia.org/wiki/Infimum_and_supremum
|