| text | source |
|---|---|
Equivalently, it is the ratio of the infinitesimal change of the logarithm of a function with respect to the infinitesimal change of the logarithm of the argument. Generalisations to multi-input-multi-output cases also exist in the literature. The elasticity of a function is a constant α if and only if the function has the form f(x) = Cx^α for a constant C > 0. The elasticity at a point is the limit of the arc elasticity between two points as the separation between those two points approaches zero. The concept of elasticity is widely used in economics and metabolic control analysis; see elasticity (economics) and elasticity coefficient respectively for details.
|
https://en.wikipedia.org/wiki/Elasticity_of_a_function
|
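The point elasticity above can be checked numerically; a minimal Python sketch, using a central finite difference (step size chosen arbitrarily), verifies that for f(x) = Cx^α the elasticity is the constant α:

```python
# Numerical sketch of the elasticity of a function, Ef(x) = x * f'(x) / f(x),
# via a central finite difference. For f(x) = C * x**alpha the elasticity
# should come out (approximately) equal to the constant alpha at every x > 0.

def elasticity(f, x, h=1e-6):
    """Approximate x * f'(x) / f(x) with a central difference."""
    fprime = (f(x + h) - f(x - h)) / (2 * h)
    return x * fprime / f(x)

C, alpha = 3.0, 2.5
f = lambda x: C * x**alpha

for x in (0.5, 1.0, 4.0):
    print(round(elasticity(f, x), 6))  # each close to alpha = 2.5
```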
In mathematics, the elliptic gamma function is a generalization of the q-gamma function, which is itself the q-analog of the ordinary gamma function. It is closely related to a function studied by Jackson (1905), and can be expressed in terms of the triple gamma function. It is given by Γ(z; p, q) = ∏_{m=0}^{∞} ∏_{n=0}^{∞} (1 − p^{m+1} q^{n+1}/z) / (1 − p^m q^n z).
|
https://en.wikipedia.org/wiki/Elliptic_gamma_function
|
It obeys several identities: Γ(z; p, q) = 1/Γ(pq/z; p, q), Γ(pz; p, q) = θ(z; q) Γ(z; p, q), and Γ(qz; p, q) = θ(z; p) Γ(z; p, q), where θ is the q-theta function. When p = 0, it essentially reduces to the infinite q-Pochhammer symbol: Γ(z; 0, q) = 1/(z; q)_∞.
|
https://en.wikipedia.org/wiki/Elliptic_gamma_function
|
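The defining double product converges for |p|, |q| < 1, so the reduction at p = 0 can be checked numerically with a truncated product; a rough Python sketch (the truncation depth is an arbitrary choice):

```python
# Truncated numerical sketch of the elliptic gamma function for |p|, |q| < 1:
# Gamma(z; p, q) = prod_{m,n>=0} (1 - p^{m+1} q^{n+1} / z) / (1 - p^m q^n z),
# together with a check that at p = 0 it reduces to 1 / (z; q)_inf.

def elliptic_gamma(z, p, q, terms=60):
    result = 1.0
    for m in range(terms):
        for n in range(terms):
            result *= (1 - p**(m + 1) * q**(n + 1) / z) / (1 - p**m * q**n * z)
    return result

def q_pochhammer(z, q, terms=60):
    """(z; q)_inf approximated by prod_{n=0}^{terms-1} (1 - z q^n)."""
    result = 1.0
    for n in range(terms):
        result *= 1 - z * q**n
    return result

z, q = 0.3, 0.4
print(elliptic_gamma(z, 0.0, q))  # should match 1 / (z; q)_inf
print(1 / q_pochhammer(z, q))
```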
In mathematics, the empty set is the unique set having no elements; its size or cardinality (count of elements in a set) is zero. Some axiomatic set theories ensure that the empty set exists by including an axiom of empty set, while in other theories, its existence can be deduced. Many possible properties of sets are vacuously true for the empty set.
|
https://en.wikipedia.org/wiki/Nonempty_set
|
Any set other than the empty set is called non-empty. In some textbooks and popularizations, the empty set is referred to as the "null set". However, null set is a distinct notion within the context of measure theory, in which it describes a set of measure zero (which is not necessarily empty). The empty set may also be called the void set.
|
https://en.wikipedia.org/wiki/Nonempty_set
|
In mathematics, the endomorphisms of an abelian group X form a ring. This ring is called the endomorphism ring of X, denoted by End(X); it consists of all homomorphisms of X into itself. Addition of endomorphisms arises naturally in a pointwise manner, and multiplication arises via endomorphism composition.
|
https://en.wikipedia.org/wiki/Endomorphism_algebra
|
Using these operations, the set of endomorphisms of an abelian group forms a (unital) ring, with the zero map 0 : x ↦ 0 as additive identity and the identity map 1 : x ↦ x as multiplicative identity. The functions involved are restricted to what is defined as a homomorphism in the context, which depends upon the category of the object under consideration. The endomorphism ring consequently encodes several internal properties of the object. As the resulting object is often an algebra over some ring R, this may also be called the endomorphism algebra.
|
https://en.wikipedia.org/wiki/Endomorphism_algebra
|
An abelian group is the same thing as a module over the ring of integers, which is the initial object in the category of rings. In a similar fashion, if R is any commutative ring, the endomorphisms of an R-module form an algebra over R by the same axioms and derivation. In particular, if R is a field, its modules M are vector spaces and their endomorphism rings are algebras over the field R.
|
https://en.wikipedia.org/wiki/Endomorphism_algebra
|
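As a concrete finite illustration of the two operations, here is a small Python sketch of the endomorphism ring of the abelian group Z/4Z, where every endomorphism is multiplication by some k mod 4 (the group and representation are my choice, not from the source):

```python
# Endomorphisms of Z/4Z: every endomorphism is x -> k*x mod 4. Addition of
# endomorphisms is pointwise; multiplication is composition. The resulting
# ring is isomorphic to Z/4Z itself.
n = 4
endos = [lambda x, k=k: (k * x) % n for k in range(n)]  # all endomorphisms

def add(f, g):
    return lambda x: (f(x) + g(x)) % n   # pointwise addition

def compose(f, g):
    return lambda x: f(g(x))             # ring multiplication

f, g = endos[2], endos[3]                # x -> 2x and x -> 3x
h_add = add(f, g)                        # x -> 5x = x mod 4
h_mul = compose(f, g)                    # x -> 6x = 2x mod 4
print([h_add(x) for x in range(n)])      # [0, 1, 2, 3]
print([h_mul(x) for x in range(n)])      # [0, 2, 0, 2]
```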
In mathematics, the energy of a graph is the sum of the absolute values of the eigenvalues of the adjacency matrix of the graph. This quantity is studied in the context of spectral graph theory. More precisely, let G be a graph with n vertices.
|
https://en.wikipedia.org/wiki/Graph_energy
|
It is assumed that G is simple, that is, it does not contain loops or parallel edges. Let A be the adjacency matrix of G and let λ_i, i = 1, …, n, be the eigenvalues of A. Then the energy of the graph is defined as E(G) = ∑_{i=1}^{n} |λ_i|.
|
https://en.wikipedia.org/wiki/Graph_energy
|
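The definition translates directly into code; a short sketch using NumPy, checked on the complete graph K_n, whose adjacency spectrum is n − 1 (once) and −1 (with multiplicity n − 1), giving energy 2(n − 1):

```python
# Energy of a graph: the sum of absolute values of the adjacency-matrix
# eigenvalues, computed here for the complete graph K_5 (energy 2*(5-1) = 8).
import numpy as np

def graph_energy(adjacency):
    eigenvalues = np.linalg.eigvalsh(adjacency)  # adjacency is symmetric
    return float(np.sum(np.abs(eigenvalues)))

n = 5
K_n = np.ones((n, n)) - np.eye(n)  # adjacency matrix of the complete graph
print(graph_energy(K_n))           # approximately 8.0
```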
In mathematics, the entropy influence conjecture is a statement about Boolean functions originally conjectured by Ehud Friedgut and Gil Kalai in 1996.
|
https://en.wikipedia.org/wiki/Entropy_influence_conjecture
|
In mathematics, the epigraph or supergraph of a function f : X → [−∞, ∞] valued in the extended real numbers [−∞, ∞] = ℝ ∪ {±∞} is the set, denoted by epi f, of all points in the Cartesian product X × ℝ lying on or above its graph. The strict epigraph epi_S f is the set of points in X × ℝ lying strictly above its graph. Importantly, although both the graph and epigraph of f consist of points in X × [−∞, ∞], the epigraph consists entirely of points in the subset X × ℝ, which is not necessarily true of the graph of f.
|
https://en.wikipedia.org/wiki/Epigraph_(mathematics)
|
If the function takes ±∞ as a value then graph f will not be a subset of its epigraph epi f. For example, if f(x₀) = ∞ then the point (x₀, f(x₀)) = (x₀, ∞) will belong to graph f but not to epi f.
|
https://en.wikipedia.org/wiki/Epigraph_(mathematics)
|
These two sets are nevertheless closely related because the graph can always be reconstructed from the epigraph, and vice versa. The study of continuous real-valued functions in real analysis has traditionally been closely associated with the study of their graphs, which are sets that provide geometric information (and intuition) about these functions.
|
https://en.wikipedia.org/wiki/Epigraph_(mathematics)
|
Epigraphs serve this same purpose in the fields of convex analysis and variational analysis, in which the primary focus is on convex functions valued in [−∞, ∞] instead of continuous functions valued in a vector space (such as ℝ or ℝ²). This is because, in general, for such functions, geometric intuition is more readily obtained from a function's epigraph than from its graph. Similarly to how graphs are used in real analysis, the epigraph can often be used to give geometrical interpretations of a convex function's properties, to help formulate or prove hypotheses, or to aid in constructing counterexamples.
|
https://en.wikipedia.org/wiki/Epigraph_(mathematics)
|
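For a real-valued f on ℝ, membership in the epigraph is just the inequality r ≥ f(x); a small Python sketch (the example function f(x) = x² is my choice) also checks the standard fact that a convex function has a convex epigraph:

```python
# Membership test for the epigraph of f: R -> R,
# epi f = {(x, r) : r >= f(x)}, illustrated with the convex f(x) = x^2.

def in_epigraph(f, x, r):
    return r >= f(x)

f = lambda x: x * x
print(in_epigraph(f, 2.0, 5.0))   # True: 5 >= f(2) = 4, on or above the graph
print(in_epigraph(f, 2.0, 3.0))   # False: 3 < 4, strictly below the graph

# For convex f the epigraph is a convex set: the segment between two
# epigraph points stays inside the epigraph.
(x1, r1), (x2, r2) = (-1.0, 2.0), (2.0, 5.0)
assert all(in_epigraph(f, (1 - t) * x1 + t * x2, (1 - t) * r1 + t * r2)
           for t in [i / 10 for i in range(11)])
```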
In mathematics, the epsilon numbers are a collection of transfinite numbers whose defining property is that they are fixed points of an exponential map. Consequently, they are not reachable from 0 via a finite series of applications of the chosen exponential map and of "weaker" operations like addition and multiplication. The original epsilon numbers were introduced by Georg Cantor in the context of ordinal arithmetic; they are the ordinal numbers ε that satisfy the equation ε = ω^ε, in which ω is the smallest infinite ordinal. The least such ordinal is ε₀ (pronounced epsilon nought or epsilon zero), which can be viewed as the "limit" obtained by transfinite recursion from a sequence of smaller limit ordinals: ε₀ = ω^(ω^(ω^⋯)) = sup{ω, ω^ω, ω^(ω^ω), ω^(ω^(ω^ω)), …}, where sup is the supremum function, which is equivalent to set union in the case of the von Neumann representation of ordinals.
|
https://en.wikipedia.org/wiki/Epsilon_number
|
Larger ordinal fixed points of the exponential map are indexed by ordinal subscripts, resulting in ε₁, ε₂, …, ε_ω, ε_{ω+1}, …, ε_{ε₀}, …, ε_{ε₁}, …, ε_{ε_{ε_⋯}}, …. The ordinal ε₀ is still countable, as is any epsilon number whose index is countable (there exist uncountable ordinals, and uncountable epsilon numbers whose index is an uncountable ordinal).
|
https://en.wikipedia.org/wiki/Epsilon_number
|
The smallest epsilon number ε0 appears in many induction proofs, because for many purposes, transfinite induction is only required up to ε0 (as in Gentzen's consistency proof and the proof of Goodstein's theorem). Its use by Gentzen to prove the consistency of Peano arithmetic, along with Gödel's second incompleteness theorem, show that Peano arithmetic cannot prove the well-foundedness of this ordering (it is in fact the least ordinal with this property, and as such, in proof-theoretic ordinal analysis, is used as a measure of the strength of the theory of Peano arithmetic). Many larger epsilon numbers can be defined using the Veblen function.
|
https://en.wikipedia.org/wiki/Epsilon_number
|
A more general class of epsilon numbers has been identified by John Horton Conway and Donald Knuth in the surreal number system, consisting of all surreals that are fixed points of the base-ω exponential map x → ω^x. Hessenberg (1906) defined gamma numbers (see additively indecomposable ordinal) to be numbers γ > 0 such that α + γ = γ whenever α < γ, delta numbers (see multiplicatively indecomposable ordinals) to be numbers δ > 1 such that α·δ = δ whenever 0 < α < δ, and epsilon numbers to be numbers ε > 2 such that α^ε = ε whenever 1 < α < ε. His gamma numbers are those of the form ω^β, and his delta numbers are those of the form ω^(ω^β).
|
https://en.wikipedia.org/wiki/Epsilon_number
|
In mathematics, the equal sign can be used as a simple statement of fact in a specific case ("x = 2"), or to create definitions ("let x = 2"), conditional statements ("if x = 2, then ..."), or to express a universal equivalence ("(x + 1)² = x² + 2x + 1"). The first important computer programming language to use the equal sign was the original version of Fortran, FORTRAN I, designed in 1954 and implemented in 1957. In Fortran, = serves as an assignment operator: X = 2 sets the value of X to 2. This somewhat resembles the use of = in a mathematical definition, but with different semantics: the expression following = is evaluated first, and may refer to a previous value of X. For example, the assignment X = X + 2 increases the value of X by 2.
|
https://en.wikipedia.org/wiki/Not_equal_sign
|
A rival programming-language usage was pioneered by the original version of ALGOL, which was designed in 1958 and implemented in 1960. ALGOL included a relational operator that tested for equality, allowing constructions like if x = 2 with essentially the same meaning of = as the conditional usage in mathematics. The equal sign was reserved for this usage.
|
https://en.wikipedia.org/wiki/Not_equal_sign
|
Both usages have remained common in different programming languages into the early 21st century. As well as Fortran, = is used for assignment in such languages as C, Perl, Python, awk, and their descendants. But = is used for equality and not assignment in the Pascal family, Ada, Eiffel, APL, and other languages.
|
https://en.wikipedia.org/wiki/Not_equal_sign
|
A few languages, such as BASIC and PL/I, have used the equal sign to mean both assignment and equality, distinguished by context. However, in most languages where = has one of these meanings, a different character or, more often, a sequence of characters is used for the other meaning. Following ALGOL, most languages that use = for equality use := for assignment, although APL, with its special character set, uses a left-pointing arrow.
|
https://en.wikipedia.org/wiki/Not_equal_sign
|
Fortran did not have an equality operator (it was only possible to compare an expression to zero, using the arithmetic IF statement) until FORTRAN IV was released in 1962; since then it has used the four characters .EQ. to test for equality. The language B introduced the use of == with this meaning, which has been copied by its descendant C and most later languages where = means assignment. The equal sign is also used in defining attribute–value pairs, in which an attribute is assigned a value.
|
https://en.wikipedia.org/wiki/Not_equal_sign
|
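Both conventions described above are visible in a few lines of Python, which follows the Fortran/C lineage (= for assignment, == for the equality test):

```python
# Python follows the Fortran/C convention: "=" assigns, "==" tests equality.
x = 2          # assignment: bind the name x to the value 2
x = x + 2      # the right-hand side is evaluated first, using the old x
print(x)       # 4
print(x == 4)  # True: "==" is the equality operator, as introduced by B
```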
In mathematics, the equations governing the isomonodromic deformation of meromorphic linear systems of ordinary differential equations are, in a fairly precise sense, the most fundamental exact nonlinear differential equations. As a result, their solutions and properties lie at the heart of the field of exact nonlinearity and integrable systems. Isomonodromic deformations were first studied by Richard Fuchs, with early pioneering contributions from Lazarus Fuchs, Paul Painlevé, René Garnier, and Ludwig Schlesinger. Inspired by results in statistical mechanics, a seminal contribution to the theory was made by Michio Jimbo, Tetsuji Miwa, and Kimio Ueno, who studied cases involving irregular singularities.
|
https://en.wikipedia.org/wiki/Schlesinger_equations
|
In mathematics, the equidistribution theorem is the statement that the sequence a, 2a, 3a, … mod 1 is uniformly distributed on the circle ℝ/ℤ when a is an irrational number. It is a special case of the ergodic theorem where one takes the normalized angle measure μ = dθ/2π.
|
https://en.wikipedia.org/wiki/Equidistribution_theorem
|
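The theorem can be observed empirically; a Python sketch with a = √2 (the choice of irrational, interval, and sample size are arbitrary) checks that the fraction of points landing in an interval [l, r) approaches its length r − l:

```python
# Empirical sketch of the equidistribution theorem: the fractional parts of
# n*a for irrational a (here a = sqrt(2)) fill [0, 1) uniformly, so the
# fraction of points landing in [l, r) tends to the interval length r - l.
import math

a = math.sqrt(2)
N = 100_000
points = [(n * a) % 1.0 for n in range(1, N + 1)]

l, r = 0.25, 0.75
fraction = sum(l <= x < r for x in points) / N
print(fraction)  # close to r - l = 0.5
```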
In mathematics, the equilateral dimension of a metric space is the maximum size of any subset of the space whose points are all at equal distances to each other. Equilateral dimension has also been called "metric dimension", but the term "metric dimension" also has many other inequivalent usages. The equilateral dimension of a d-dimensional Euclidean space is d + 1, achieved by a regular simplex, and the equilateral dimension of a d-dimensional vector space with the Chebyshev distance (L^∞ norm) is 2^d, achieved by a hypercube. However, the equilateral dimension of a space with the Manhattan distance (L¹ norm) is not known; Kusner's conjecture, named after Robert B. Kusner, states that it is exactly 2d, achieved by a cross polytope.
|
https://en.wikipedia.org/wiki/Equilateral_dimension
|
In mathematics, the equioscillation theorem concerns the approximation of continuous functions using polynomials when the merit function is the maximum difference (uniform norm). Its discovery is attributed to Chebyshev.
|
https://en.wikipedia.org/wiki/Equioscillation_theorem
|
In mathematics, the equivariant algebraic K-theory is an algebraic K-theory associated to the category Coh^G(X) of equivariant coherent sheaves on an algebraic scheme X with action of a linear algebraic group G, via Quillen's Q-construction; thus, by definition, K_i^G(X) = π_i(B⁺ Coh^G(X)). In particular, K_0^G(X) is the Grothendieck group of Coh^G(X). The theory was developed by R. W. Thomason in the 1980s.
|
https://en.wikipedia.org/wiki/Equivariant_algebraic_K-theory
|
Specifically, he proved equivariant analogs of fundamental theorems such as the localization theorem. Equivalently, K_i^G(X) may be defined as the K_i of the category of coherent sheaves on the quotient stack [X/G]. (Hence, equivariant K-theory is a specific case of the K-theory of a stack.) A version of the Lefschetz fixed-point theorem holds in the setting of equivariant (algebraic) K-theory.
|
https://en.wikipedia.org/wiki/Equivariant_algebraic_K-theory
|
In mathematics, the error function (also called the Gauss error function), often denoted by erf, is a complex function of a complex variable defined as erf z = (2/√π) ∫₀^z e^{−t²} dt. Some authors define erf without the factor of 2/√π. This nonelementary integral is a sigmoid function that occurs often in probability, statistics, and partial differential equations.
|
https://en.wikipedia.org/wiki/Complementary_error_function
|
In many of these applications, the function argument is a real number. If the function argument is real, then the function value is also real. In statistics, for non-negative values of x, the error function has the following interpretation: for a random variable Y that is normally distributed with mean 0 and standard deviation 1/√2, erf x is the probability that Y falls in the range [−x, x]. Two closely related functions are the complementary error function (erfc), defined as erfc z = 1 − erf z, and the imaginary error function (erfi), defined as erfi z = −i erf(iz), where i is the imaginary unit.
|
https://en.wikipedia.org/wiki/Complementary_error_function
|
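Python's standard library exposes both erf and erfc, so the defining integral and the complementary relation can be checked directly (the midpoint quadrature below is a rough sketch, with an arbitrary step count):

```python
# The error function and its complement from the standard library, plus a
# numerical check of the defining integral (2/sqrt(pi)) * int_0^x e^{-t^2} dt.
import math

x = 1.0
print(math.erf(x))                 # about 0.8427
print(math.erfc(x))                # about 0.1573
print(math.erf(x) + math.erfc(x))  # 1, since erfc z = 1 - erf z

# Midpoint-rule approximation of the defining integral.
steps = 10_000
h = x / steps
integral = sum(math.exp(-((i + 0.5) * h) ** 2) for i in range(steps)) * h
print(2 / math.sqrt(math.pi) * integral)  # again about 0.8427
```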
In mathematics, the essence of counting a set and finding a result n is that it establishes a one-to-one correspondence (or bijection) of the set with the subset of positive integers {1, 2, ..., n}. A fundamental fact, which can be proved by mathematical induction, is that no bijection can exist between {1, 2, ..., n} and {1, 2, ..., m} unless n = m; this fact (together with the fact that two bijections can be composed to give another bijection) ensures that counting the same set in different ways can never result in different numbers (unless an error is made). This is the fundamental mathematical theorem that gives counting its purpose: however you count a (finite) set, the answer is the same. In a broader context, the theorem is an example of a theorem in the mathematical field of (finite) combinatorics; hence (finite) combinatorics is sometimes referred to as "the mathematics of counting."
|
https://en.wikipedia.org/wiki/Inclusive_counting
|
Many sets that arise in mathematics do not allow a bijection to be established with {1, 2, ..., n} for any natural number n; these are called infinite sets, while those sets for which such a bijection does exist (for some n) are called finite sets. Infinite sets cannot be counted in the usual sense; for one thing, the mathematical theorems which underlie this usual sense for finite sets are false for infinite sets. Furthermore, different definitions of the concepts in terms of which these theorems are stated, while equivalent for finite sets, are inequivalent in the context of infinite sets.
|
https://en.wikipedia.org/wiki/Inclusive_counting
|
The notion of counting may be extended to them in the sense of establishing (the existence of) a bijection with some well-understood set. For instance, if a set can be brought into bijection with the set of all natural numbers, then it is called "countably infinite." This kind of counting differs in a fundamental way from counting of finite sets, in that adding new elements to a set does not necessarily increase its size, because the possibility of a bijection with the original set is not excluded.
|
https://en.wikipedia.org/wiki/Inclusive_counting
|
For instance, the set of all integers (including negative numbers) can be brought into bijection with the set of natural numbers, and even seemingly much larger sets like that of all finite sequences of rational numbers are still (only) countably infinite. Nevertheless, there are sets, such as the set of real numbers, that can be shown to be "too large" to admit a bijection with the natural numbers, and these sets are called "uncountable." Sets for which there exists a bijection between them are said to have the same cardinality, and in the most general sense counting a set can be taken to mean determining its cardinality.
|
https://en.wikipedia.org/wiki/Inclusive_counting
|
Beyond the cardinalities given by each of the natural numbers, there is an infinite hierarchy of infinite cardinalities, although only very few such cardinalities occur in ordinary mathematics (that is, outside set theory that explicitly studies possible cardinalities). Counting, mostly of finite sets, has various applications in mathematics.
|
https://en.wikipedia.org/wiki/Inclusive_counting
|
One important principle is that if two sets X and Y have the same finite number of elements, and a function f : X → Y is known to be injective, then it is also surjective, and vice versa. A related fact is known as the pigeonhole principle, which states that if two sets X and Y have finite numbers of elements n and m with n > m, then any map f : X → Y is not injective (so there exist two distinct elements of X that f sends to the same element of Y). This follows from the former principle: if f were injective, then so would be its restriction to a strict subset S of X with m elements; this restriction would then be surjective, contradicting the fact that for x in X outside S, f(x) cannot be in the image of the restriction. Similar counting arguments can prove the existence of certain objects without explicitly providing an example. In the case of infinite sets this can even apply in situations where it is impossible to give an example. The domain of enumerative combinatorics deals with computing the number of elements of finite sets, without actually counting them; the latter is usually impossible because infinite families of finite sets are considered at once, such as the set of permutations of {1, 2, ..., n} for any natural number n.
|
https://en.wikipedia.org/wiki/Inclusive_counting
|
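For small sets these principles can be verified exhaustively; a Python sketch over all functions between two 3-element sets (the sets themselves are arbitrary choices):

```python
# Finite-set sketch: when |X| == |Y|, a function f: X -> Y is injective if
# and only if it is surjective; and the pigeonhole principle forces a
# collision for any map from a larger set into Y.
from itertools import product

X = [0, 1, 2]
Y = ['a', 'b', 'c']

injective_count = 0
for images in product(Y, repeat=len(X)):       # all 27 functions f: X -> Y
    injective = len(set(images)) == len(images)
    surjective = set(images) == set(Y)
    assert injective == surjective             # equivalent since |X| == |Y|
    injective_count += injective

print(injective_count)  # 6: the 3! bijections of a 3-element set

# Pigeonhole: any f from a 4-element set into Y sends two elements to one.
assert all(len(set(images)) < 4 for images in product(Y, repeat=4))
```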
In mathematics, the essential spectrum of a bounded operator (or, more generally, of a densely defined closed linear operator) is a certain subset of its spectrum, defined by a condition of the type that says, roughly speaking, "fails badly to be invertible".
|
https://en.wikipedia.org/wiki/Essential_spectrum
|
In mathematics, the eta invariant of a self-adjoint elliptic differential operator on a compact manifold is formally the number of positive eigenvalues minus the number of negative eigenvalues. In practice both numbers are often infinite so are defined using zeta function regularization. It was introduced by Atiyah, Patodi, and Singer (1973, 1975) who used it to extend the Hirzebruch signature theorem to manifolds with boundary. The name comes from the fact that it is a generalization of the Dirichlet eta function. They also later used the eta invariant of a self-adjoint operator to define the eta invariant of a compact odd-dimensional smooth manifold. Michael Francis Atiyah, H. Donnelly, and I. M. Singer (1983) defined the signature defect of the boundary of a manifold as the eta invariant, and used this to show that Hirzebruch's signature defect of a cusp of a Hilbert modular surface can be expressed in terms of the value at s=0 or 1 of a Shimizu L-function.
|
https://en.wikipedia.org/wiki/Eta_invariant
|
In mathematics, the excluded point topology is a topology where exclusion of a particular point defines openness. Formally, let X be any non-empty set and p ∈ X. The collection T = {S ⊆ X : p ∉ S} ∪ {X} of subsets of X is then the excluded point topology on X. There are a variety of cases which are individually named: if X has two points, it is called the Sierpiński space (this case is somewhat special and is handled separately); if X is finite (with at least 3 points), the topology on X is called the finite excluded point topology; if X is countably infinite, the countable excluded point topology; and if X is uncountable, the uncountable excluded point topology. A generalization is the open extension topology: if X ∖ {p} has the discrete topology, then the open extension topology on (X ∖ {p}) ∪ {p} is the excluded point topology. This topology is used to provide interesting examples and counterexamples.
|
https://en.wikipedia.org/wiki/Excluded_point_topology
|
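On a small finite set the topology axioms can be checked by brute force; a Python sketch (the 4-element set and choice of p are arbitrary):

```python
# Brute-force check that the excluded point topology is a topology on a
# small finite X: the open sets are the subsets avoiding p, plus X itself.
from itertools import chain, combinations

X = frozenset({0, 1, 2, 3})
p = 0

def powerset(s):
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

T = {S for S in powerset(X) if p not in S} | {X}

# Topology axioms: empty set and X are open; unions and intersections of
# opens are open (pairwise checks suffice on a finite set).
assert frozenset() in T and X in T
for U in T:
    for V in T:
        assert U | V in T
        assert U & V in T
print(len(T))  # 2^(|X|-1) + 1 = 9 open sets
```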
In mathematics, the explicit formulae for L-functions are relations between sums over the complex number zeroes of an L-function and sums over prime powers, introduced by Riemann (1859) for the Riemann zeta function. Such explicit formulae have been applied also to questions on bounding the discriminant of an algebraic number field, and the conductor of a number field.
|
https://en.wikipedia.org/wiki/Riemann's_explicit_formula
|
In mathematics, the exponential function can be characterized in many ways. The following characterizations (definitions) are most common. This article discusses why each characterization makes sense, and why the characterizations are independent of and equivalent to each other. As a special case of these considerations, it will be demonstrated that the three most common definitions given for the mathematical constant e are equivalent to each other.
|
https://en.wikipedia.org/wiki/Characterizations_of_the_exponential_function
|
In mathematics, the exponential integral Ei is a special function on the complex plane. It is defined as one particular definite integral of the ratio between an exponential function and its argument.
|
https://en.wikipedia.org/wiki/Well_function
|
In mathematics, the exponential response formula (ERF), also known as exponential response and complex replacement, is a method used to find a particular solution of a non-homogeneous linear ordinary differential equation of any order. The exponential response formula is applicable to non-homogeneous linear ordinary differential equations with constant coefficients if the function is polynomial, sinusoidal, exponential or the combination of the three. The general solution of a non-homogeneous linear ordinary differential equation is a superposition of the general solution of the associated homogeneous ODE and a particular solution to the non-homogeneous ODE. Alternative methods for solving ordinary differential equations of higher order are method of undetermined coefficients and method of variation of parameters.
|
https://en.wikipedia.org/wiki/Exponential_response_formula
|
In mathematics, the exponential sheaf sequence is a fundamental short exact sequence of sheaves used in complex geometry. Let M be a complex manifold, and write O_M for the sheaf of holomorphic functions on M. Let O_M* be the subsheaf consisting of the non-vanishing holomorphic functions. These are both sheaves of abelian groups. The exponential function gives a sheaf homomorphism exp : O_M → O_M*, because for a holomorphic function f, exp(f) is a non-vanishing holomorphic function, and exp(f + g) = exp(f)exp(g).
|
https://en.wikipedia.org/wiki/Exponential_sequence
|
Its kernel is the sheaf 2πiℤ of locally constant functions on M taking the values 2πin, with n an integer. The exponential sheaf sequence is therefore 0 → 2πiℤ → O_M → O_M* → 0.
|
https://en.wikipedia.org/wiki/Exponential_sequence
|
The exponential mapping here is not always a surjective map on sections; this can be seen for example when M is a punctured disk in the complex plane. The exponential map is surjective on the stalks: given a germ g of a holomorphic function at a point P such that g(P) ≠ 0, one can take the logarithm of g in a neighborhood of P. The long exact sequence of sheaf cohomology shows that there is an exact sequence ⋯ → H⁰(O_U) → H⁰(O_U*) → H¹(2πiℤ|_U) → ⋯ for any open set U of M. Here H⁰ means simply the sections over U, and the sheaf cohomology H¹(2πiℤ|_U) is the singular cohomology of U. One can think of H¹(2πiℤ|_U) as associating an integer to each loop in U. For each section of O_M*, the connecting homomorphism to H¹(2πiℤ|_U) gives the winding number for each loop. So this homomorphism is a generalized winding number and measures the failure of U to be contractible.
|
https://en.wikipedia.org/wiki/Exponential_sequence
|
In other words, there is a potential topological obstruction to taking a global logarithm of a non-vanishing holomorphic function, something that is always locally possible. A further consequence of the sequence is the exactness of ⋯ → H¹(O_M) → H¹(O_M*) → H²(2πiℤ) → ⋯. Here H¹(O_M*) can be identified with the Picard group of holomorphic line bundles on M. The connecting homomorphism sends a line bundle to its first Chern class.
|
https://en.wikipedia.org/wiki/Exponential_sequence
|
In mathematics, the extended natural numbers form a set which contains the values 0, 1, 2, … and ∞ (infinity). That is, it is the result of adding a maximum element ∞ to the natural numbers. Addition and multiplication work as normal for finite values, and are extended by the rules n + ∞ = ∞ + n = ∞ (for n ∈ ℕ ∪ {∞}), 0 × ∞ = ∞ × 0 = 0, and m × ∞ = ∞ × m = ∞ for m ≠ 0. With addition and multiplication, ℕ ∪ {∞} is a semiring but not a ring, as ∞ lacks an additive inverse. The set can be denoted by ℕ̄, ℕ_∞ or ℕ^∞. It is a subset of the extended real number line, which extends the real numbers by adding −∞ and +∞.
|
https://en.wikipedia.org/wiki/Extended_natural_numbers
|
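The extended rules are easy to sketch in Python, using `math.inf` as a stand-in for the adjoined maximum element (a modeling choice, not an exact match, since `math.inf` is a float); the one rule that needs explicit handling is 0 × ∞ = 0:

```python
# Minimal sketch of extended-natural arithmetic with math.inf standing in
# for the adjoined maximum element. The semiring convention 0 * inf = 0
# must be imposed by hand (float arithmetic would give nan).
import math

INF = math.inf

def ext_add(a, b):
    return a + b  # n + inf = inf + n = inf already holds for math.inf

def ext_mul(a, b):
    if a == 0 or b == 0:  # the convention 0 * inf = inf * 0 = 0
        return 0
    return a * b

print(ext_add(3, INF))  # inf
print(ext_mul(0, INF))  # 0, by the semiring convention
print(ext_mul(5, INF))  # inf
```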
In mathematics, the exterior algebra, or Grassmann algebra, named after Hermann Grassmann, is an algebra that uses the exterior product or wedge product as its multiplication. The exterior product or wedge product of vectors is an algebraic construction used in geometry to study areas, volumes, and their higher-dimensional analogues. The exterior product of two vectors u and v, denoted by u ∧ v, is called a bivector and lives in a space called the exterior square, a vector space that is distinct from the original space of vectors. The magnitude of u ∧ v can be interpreted as the area of the parallelogram with sides u and v, which in three dimensions can also be computed using the cross product of the two vectors.
|
https://en.wikipedia.org/wiki/Grassmann_algebra
|
More generally, all parallel plane surfaces with the same orientation and area have the same bivector as a measure of their oriented area. Like the cross product, the exterior product is anticommutative, meaning that u ∧ v = − ( v ∧ u ) {\displaystyle u\wedge v=-(v\wedge u)} for all vectors u {\displaystyle u} and v , {\displaystyle v,} but, unlike the cross product, the exterior product is associative (after introducing the exterior cubic, that is, oriented volume). When regarded in this manner, the exterior product of two vectors is called a 2-blade.
|
https://en.wikipedia.org/wiki/Grassmann_algebra
|
More generally, the exterior product of any number k {\displaystyle k} of vectors can be defined and is sometimes called a k {\displaystyle k} -blade (or decomposable, or simple, by some authors). It lives in a space known as the k {\displaystyle k} -th exterior power (generalizing exterior square and exterior cubic). Blades are basic objects in Projective Geometry, where no measure for length or angle (hence no parallelism) is assumed, but the main structure in there is linearity.
|
https://en.wikipedia.org/wiki/Grassmann_algebra
|
If a Euclidean inner product is given on the vectors, the magnitude (that is, a scalar) of the resulting k {\displaystyle k} -blade is the oriented hypervolume of the k {\displaystyle k} -dimensional parallelotope whose edges are the given vectors, just as the magnitude of the scalar triple product of vectors in three dimensions gives the volume of the parallelepiped generated by those vectors. The exterior algebra provides an algebraic setting in which to answer some types of geometric questions. For instance, blades have a concrete geometric interpretation, and objects in the exterior algebra can be manipulated according to a set of unambiguous rules.
|
https://en.wikipedia.org/wiki/Grassmann_algebra
|
The exterior algebra contains objects that are not only k {\displaystyle k} -blades, but sums of k {\displaystyle k} -blades; such a sum is called a k-vector. Combining k {\displaystyle k} -blades in a linear structure by adding and scalar multiplication is the core of Projective Geometry, (see Plücker coordinates). k {\displaystyle k} -vectors are somehow similar to homogeneous polynomials, just being skew-commutative or anticommutative; naturally, k {\displaystyle k} is called the degree of the k {\displaystyle k} -vector.
|
https://en.wikipedia.org/wiki/Grassmann_algebra
|
For any k {\displaystyle k} -vector, further associated objects exist: the rank is defined to be the smallest number of simple elements of which it is a sum; the support is defined as the minimal subspace the k {\displaystyle k} -vector lives in; and the divisor space (some authors use other names for it, such as kernel or factor) consists of the set of vectors that factor out. Once defined for two vectors (in a bilinear way), the exterior product extends to the full exterior algebra, so that it makes sense to multiply any two elements of the algebra. Equipped with this product, the exterior algebra is an associative algebra, which means that α ∧ ( β ∧ γ ) = ( α ∧ β ) ∧ γ {\displaystyle \alpha \wedge (\beta \wedge \gamma )=(\alpha \wedge \beta )\wedge \gamma } for any elements α , β , γ .
|
https://en.wikipedia.org/wiki/Grassmann_algebra
|
{\displaystyle \alpha ,\beta ,\gamma .}
|
https://en.wikipedia.org/wiki/Grassmann_algebra
|
As said, the k {\displaystyle k} -vectors are a lot like homogeneous polynomials of degree k {\displaystyle k} , such that when elements of different degrees are multiplied, the degrees add for the degree of the product. This means that the exterior algebra is a graded algebra. Exterior algebra emerged on two paths: as abstract skew(anti)-commuting objects in a vector space setting and also as antisymmetric (also called alternating) tensors.
|
https://en.wikipedia.org/wiki/Grassmann_algebra
|
This twofold approach exists in almost all cases, but an exception has to be singled out: when the vector spaces in the construction are over a field of characteristic 2. Hence, whenever antisymmetric/alternating tensors are considered in connection to exterior algebra, the base field is assumed to have characteristic 0 or odd characteristic, but not 2. Also note that the exterior algebra has two basic ingredients: first, the vector spaces involved, then the exterior product.
|
https://en.wikipedia.org/wiki/Grassmann_algebra
|
On the first path, the abstract one, both ingredients are clearly given (if rather abstractly), and this is its main strength. On the second path, the vector space is concrete and less abstract, but the exterior product can be defined in several (equivalent) ways, and care is needed to avoid mistakes. The definition of the exterior algebra makes sense for spaces not just of geometric vectors, but of other vector-like objects (possibly infinite-dimensional) such as vector fields or functions.
|
https://en.wikipedia.org/wiki/Grassmann_algebra
|
Moreover, the field the vector spaces are based on need not be numeric, such as the real or complex numbers, but may be another (less usual) field (finite or not) of zero or positive characteristic. In full generality, the exterior algebra can be defined for modules over a commutative ring, and for other structures of interest in abstract algebra. It is one of these more general constructions where the exterior algebra finds one of its most important applications, where it appears as the algebra of differential forms that is fundamental in areas that use differential geometry.
|
https://en.wikipedia.org/wiki/Grassmann_algebra
|
The exterior algebra also has many algebraic properties that make it a convenient tool in algebra itself. The association of the exterior algebra to a vector space is a type of functor on vector spaces, which means that it is compatible in a certain way with linear transformations of vector spaces. The exterior algebra is one example of a bialgebra, meaning that its dual space also possesses a product, and this dual product is compatible with the exterior product. This dual algebra is precisely the algebra of alternating multilinear forms, and the pairing between the exterior algebra and its dual is given by the interior product.
|
https://en.wikipedia.org/wiki/Grassmann_algebra
|
In mathematics, the factorial of a non-negative integer n {\displaystyle n} , denoted by n ! {\displaystyle n!} , is the product of all positive integers less than or equal to n {\displaystyle n} . The factorial of n {\displaystyle n} also equals the product of n {\displaystyle n} with the next smaller factorial: n ! = n × ( n − 1 ) ! {\displaystyle n!=n\times (n-1)!} . For example, 5 ! = 5 × 4 ! = 5 × 24 = 120 {\displaystyle 5!=5\times 4!=5\times 24=120} . The value of 0!
|
https://en.wikipedia.org/wiki/Factorial_function
|
is 1, according to the convention for an empty product. Factorials have been discovered in several ancient cultures, notably in Indian mathematics in the canonical works of Jain literature, and by Jewish mystics in the Talmudic book Sefer Yetzirah. The factorial operation is encountered in many areas of mathematics, notably in combinatorics, where its most basic use counts the possible distinct sequences – the permutations – of n {\displaystyle n} distinct objects: there are n ! {\displaystyle n!} of them.
|
https://en.wikipedia.org/wiki/Factorial_function
|
In mathematical analysis, factorials are used in power series for the exponential function and other functions, and they also have applications in algebra, number theory, probability theory, and computer science. Much of the mathematics of the factorial function was developed beginning in the late 18th and early 19th centuries. Stirling's approximation provides an accurate approximation to the factorial of large numbers, showing that it grows more quickly than exponential growth.
|
https://en.wikipedia.org/wiki/Factorial_function
|
Legendre's formula describes the exponents of the prime numbers in a prime factorization of the factorials, and can be used to count the trailing zeros of the factorials. Daniel Bernoulli and Leonhard Euler interpolated the factorial function to a continuous function of complex numbers, except at the negative integers, the (offset) gamma function. Many other notable functions and number sequences are closely related to the factorials, including the binomial coefficients, double factorials, falling factorials, primorials, and subfactorials. Implementations of the factorial function are commonly used as an example of different computer programming styles, and are included in scientific calculators and scientific computing software libraries. Although directly computing large factorials using the product formula or recurrence is not efficient, faster algorithms are known, matching to within a constant factor the time for fast multiplication algorithms for numbers with the same number of digits.
|
https://en.wikipedia.org/wiki/Factorial_function
|
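The passage above notes that factorial implementations are a standard exercise in programming style. A minimal Python sketch of the two most common styles, the iterative product formula and the recurrence n! = n·(n−1)!, checked against the standard library's `math.factorial` (the function names here are illustrative):

```python
import math

def fact_iter(n):
    """Iterative product formula: n! = 1 * 2 * ... * n."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

def fact_rec(n):
    """Recurrence n! = n * (n - 1)!, with base case 0! = 1 (empty product)."""
    return 1 if n == 0 else n * fact_rec(n - 1)
```

Both run in time quadratic in the number of digits for large n; as the text says, asymptotically faster algorithms based on fast multiplication exist.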
In mathematics, the falling factorial (sometimes called the descending factorial, falling sequential product, or lower factorial) is defined as the polynomial ( x ) n = x ( x − 1 ) ⋯ ( x − n + 1 ) {\displaystyle (x)_{n}=x(x-1)\cdots (x-n+1)} . The rising factorial (sometimes called the Pochhammer function, Pochhammer polynomial, ascending factorial, rising sequential product, or upper factorial) is defined as x ( n ) = x ( x + 1 ) ⋯ ( x + n − 1 ) {\displaystyle x^{(n)}=x(x+1)\cdots (x+n-1)} . The value of each is taken to be 1 (an empty product) when n = 0 . These symbols are collectively called factorial powers. The Pochhammer symbol, introduced by Leo August Pochhammer, is the notation (x)n , where n is a non-negative integer. It may represent either the rising or the falling factorial, with different articles and authors using different conventions. Pochhammer himself actually used (x)n with yet another meaning, namely to denote the binomial coefficient ( x n ) .
|
https://en.wikipedia.org/wiki/Falling_factorial_power
|
{\displaystyle {\tbinom {x}{n}}.} In this article, the symbol (x)n is used to represent the falling factorial, and the symbol x(n) is used for the rising factorial. These conventions are used in combinatorics, although Knuth's underline and overline notations x n _ {\displaystyle x^{\underline {n}}} and x n ¯ {\displaystyle x^{\overline {n}}} are increasingly popular. In the theory of special functions (in particular the hypergeometric function) and in the standard reference work Abramowitz and Stegun, the Pochhammer symbol (x)n is used to represent the rising factorial.When x is a positive integer, (x)n gives the number of n-permutations (sequences of distinct elements) from an x-element set, or equivalently the number of injective functions from a set of size n to a set of size x. The rising factorial x(n) gives the number of partitions of an n-element set into x ordered sequences (possibly empty).
|
https://en.wikipedia.org/wiki/Falling_factorial_power
|
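The two factorial powers defined above are simple products and can be sketched directly in Python (the helper names `falling` and `rising` are made up for this illustration). The combinatorial interpretation gives an easy check: (5)₂ = 20 is the number of 2-permutations of a 5-element set.

```python
def falling(x, n):
    """Falling factorial (x)_n = x(x-1)...(x-n+1); empty product is 1."""
    p = 1
    for k in range(n):
        p *= x - k
    return p

def rising(x, n):
    """Rising factorial x^(n) = x(x+1)...(x+n-1); empty product is 1."""
    p = 1
    for k in range(n):
        p *= x + k
    return p
```

Note the reflection identity x^(n) = (x + n − 1)_n, which the test below exercises.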
In mathematics, the family of Debye functions is defined by D n ( x ) = n x n ∫ 0 x t n e t − 1 d t . {\displaystyle D_{n}(x)={\frac {n}{x^{n}}}\int _{0}^{x}{\frac {t^{n}}{e^{t}-1}}\,dt.} The functions are named in honor of Peter Debye, who came across this function (with n = 3) in 1912 when he analytically computed the heat capacity of what is now called the Debye model.
|
https://en.wikipedia.org/wiki/Debye_function
|
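The Debye integral above has no closed form for general x, but it is easy to evaluate numerically. A sketch using Simpson's rule (the name `debye` and the step count are choices made here, not a standard API); the only subtlety is the removable singularity of the integrand at t = 0, handled by its known limit:

```python
import math

def debye(n, x, steps=2000):
    """Evaluate D_n(x) = (n/x^n) * integral_0^x t^n/(e^t - 1) dt by Simpson's rule."""
    def integrand(t):
        if t == 0.0:
            # limit of t^n/(e^t - 1) as t -> 0: equals 1 for n = 1, else 0
            return 1.0 if n == 1 else 0.0
        return t**n / math.expm1(t)  # expm1 keeps accuracy for small t
    h = x / steps  # steps must be even for Simpson's rule
    s = integrand(0.0) + integrand(x)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * integrand(i * h)
    return n / x**n * (s * h / 3)
```

For small x the result should approach 1, consistent with the leading term of the small-x expansion D_n(x) ≈ 1 − nx/(2(n+1)).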
In mathematics, the fibbinary numbers are the numbers whose binary representation does not contain two consecutive ones. That is, they are sums of distinct and non-consecutive powers of two.
|
https://en.wikipedia.org/wiki/Fibbinary_number
|
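The defining condition, no two consecutive ones in binary, can be tested with a single bitwise operation: shifting n right by one and AND-ing with n is nonzero exactly when some pair of adjacent bits are both set. A short sketch (the function name is illustrative):

```python
def is_fibbinary(n: int) -> bool:
    """True if the binary representation of n has no two adjacent 1 bits."""
    return (n & (n >> 1)) == 0
```

The first few fibbinary numbers are 0, 1, 2, 4, 5, 8, 9, 10, ...; counting those below a power of two recovers Fibonacci numbers, since they correspond to fixed-length bit strings without consecutive ones.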
In mathematics, the fiber bundle construction theorem is a theorem which constructs a fiber bundle from a given base space, fiber and a suitable set of transition functions. The theorem also gives conditions under which two such bundles are isomorphic. The theorem is important in the associated bundle construction where one starts with a given bundle and surgically replaces the fiber with a new space while keeping all other data the same.
|
https://en.wikipedia.org/wiki/Fibre_bundle_construction_theorem
|
In mathematics, the field T L E {\displaystyle \mathbb {T} ^{LE}} of logarithmic-exponential transseries is a non-Archimedean ordered differential field which extends comparability of asymptotic growth rates of elementary nontrigonometric functions to a much broader class of objects. Each log-exp transseries represents a formal asymptotic behavior, and it can be manipulated formally, and when it converges (or in every case if using special semantics such as through infinite surreal numbers), corresponds to actual behavior. Transseries can also be convenient for representing functions. Through their inclusion of exponentiation and logarithms, transseries are a strong generalization of the power series at infinity ( ∑ n = 0 ∞ a n x n {\textstyle \sum _{n=0}^{\infty }{\frac {a_{n}}{x^{n}}}} ) and other similar asymptotic expansions.
|
https://en.wikipedia.org/wiki/Transseries
|
The field T L E {\displaystyle \mathbb {T} ^{LE}} was introduced independently by Dahn-Göring and Ecalle in the respective contexts of model theory or exponential fields and of the study of analytic singularity and proof by Ecalle of the Dulac conjectures. It constitutes a formal object, extending the field of exp-log functions of Hardy and the field of accelerando-summable series of Ecalle. The field T L E {\displaystyle \mathbb {T} ^{LE}} enjoys a rich structure: an ordered field with a notion of generalized series and sums, with a compatible derivation with distinguished antiderivation, compatible exponential and logarithm functions and a notion of formal composition of series.
|
https://en.wikipedia.org/wiki/Transseries
|
In mathematics, the field of definition of an algebraic variety V is essentially the smallest field to which the coefficients of the polynomials defining V can belong. Given polynomials, with coefficients in a field K, it may not be obvious whether there is a smaller field k, and other polynomials defined over k, which still define V. The issue of field of definition is of concern in diophantine geometry.
|
https://en.wikipedia.org/wiki/Field_of_definition
|
In mathematics, the field trace is a particular function defined with respect to a finite field extension L/K, which is a K-linear map from L onto K.
|
https://en.wikipedia.org/wiki/Field_trace
|
In mathematics, the field with one element is a suggestive name for an object that should behave similarly to a finite field with a single element, if such a field could exist. This object is denoted F1, or, in a French–English pun, Fun. The name "field with one element" and the notation F1 are only suggestive, as there is no field with one element in classical abstract algebra. Instead, F1 refers to the idea that there should be a way to replace sets and operations, the traditional building blocks for abstract algebra, with other, more flexible objects.
|
https://en.wikipedia.org/wiki/Field_with_one_element
|
Many theories of F1 have been proposed, but it is not clear which, if any, of them give F1 all the desired properties. While there is still no field with a single element in these theories, there is a field-like object whose characteristic is one.
|
https://en.wikipedia.org/wiki/Field_with_one_element
|
Most proposed theories of F1 replace abstract algebra entirely. Mathematical objects such as vector spaces and polynomial rings can be carried over into these new theories by mimicking their abstract properties. This allows the development of commutative algebra and algebraic geometry on new foundations.
|
https://en.wikipedia.org/wiki/Field_with_one_element
|
One of the defining features of theories of F1 is that these new foundations allow more objects than classical abstract algebra does, one of which behaves like a field of characteristic one. The possibility of studying the mathematics of F1 was originally suggested in 1956 by Jacques Tits, published in Tits 1957, on the basis of an analogy between symmetries in projective geometry and the combinatorics of simplicial complexes. F1 has been connected to noncommutative geometry and to a possible proof of the Riemann hypothesis.
|
https://en.wikipedia.org/wiki/Field_with_one_element
|
In mathematics, the finite lattice representation problem, or finite congruence lattice problem, asks whether every finite lattice is isomorphic to the congruence lattice of some finite algebra.
|
https://en.wikipedia.org/wiki/Finite_lattice_representation_problem
|
In mathematics, the finite-dimensional representations of the complex classical Lie groups G L ( n , C ) {\displaystyle GL(n,\mathbb {C} )} , S L ( n , C ) {\displaystyle SL(n,\mathbb {C} )} , O ( n , C ) {\displaystyle O(n,\mathbb {C} )} , S O ( n , C ) {\displaystyle SO(n,\mathbb {C} )} , S p ( 2 n , C ) {\displaystyle Sp(2n,\mathbb {C} )} , can be constructed using the general representation theory of semisimple Lie algebras. The groups S L ( n , C ) {\displaystyle SL(n,\mathbb {C} )} , S O ( n , C ) {\displaystyle SO(n,\mathbb {C} )} , S p ( 2 n , C ) {\displaystyle Sp(2n,\mathbb {C} )} are indeed simple Lie groups, and their finite-dimensional representations coincide with those of their maximal compact subgroups, respectively S U ( n ) {\displaystyle SU(n)} , S O ( n ) {\displaystyle SO(n)} , S p ( n ) {\displaystyle Sp(n)} . In the classification of simple Lie algebras, the corresponding algebras are S L ( n , C ) → A n − 1 S O ( n odd , C ) → B n − 1 2 S O ( n even ) → D n 2 S p ( 2 n , C ) → C n {\displaystyle {\begin{aligned}SL(n,\mathbb {C} )&\to A_{n-1}\\SO(n_{\text{odd}},\mathbb {C} )&\to B_{\frac {n-1}{2}}\\SO(n_{\text{even}})&\to D_{\frac {n}{2}}\\Sp(2n,\mathbb {C} )&\to C_{n}\end{aligned}}} However, since the complex classical Lie groups are linear groups, their representations are tensor representations. Each irreducible representation is labelled by a Young diagram, which encodes its structure and properties.
|
https://en.wikipedia.org/wiki/Representations_of_classical_Lie_groups
|
In mathematics, the first Blakers–Massey theorem, named after Albert Blakers and William S. Massey, gave vanishing conditions for certain triad homotopy groups of spaces.
|
https://en.wikipedia.org/wiki/Blakers–Massey_theorem
|
In mathematics, the first uncountable ordinal, traditionally denoted by ω 1 {\displaystyle \omega _{1}} or sometimes by Ω {\displaystyle \Omega } , is the smallest ordinal number that, considered as a set, is uncountable. It is the supremum (least upper bound) of all countable ordinals. When considered as a set, the elements of ω 1 {\displaystyle \omega _{1}} are the countable ordinals (including finite ordinals), of which there are uncountably many.
|
https://en.wikipedia.org/wiki/First_uncountable_ordinal
|
Like any ordinal number (in von Neumann's approach), ω 1 {\displaystyle \omega _{1}} is a well-ordered set, with set membership serving as the order relation. ω 1 {\displaystyle \omega _{1}} is a limit ordinal, i.e. there is no ordinal α {\displaystyle \alpha } such that ω 1 = α + 1 {\displaystyle \omega _{1}=\alpha +1} . The cardinality of the set ω 1 {\displaystyle \omega _{1}} is the first uncountable cardinal number, ℵ 1 {\displaystyle \aleph _{1}} (aleph-one).
|
https://en.wikipedia.org/wiki/First_uncountable_ordinal
|
The ordinal ω 1 {\displaystyle \omega _{1}} is thus the initial ordinal of ℵ 1 {\displaystyle \aleph _{1}} . Under the continuum hypothesis, the cardinality of ω 1 {\displaystyle \omega _{1}} is ℶ 1 {\displaystyle \beth _{1}} , the same as that of R {\displaystyle \mathbb {R} } —the set of real numbers.In most constructions, ω 1 {\displaystyle \omega _{1}} and ℵ 1 {\displaystyle \aleph _{1}} are considered equal as sets. To generalize: if α {\displaystyle \alpha } is an arbitrary ordinal, we define ω α {\displaystyle \omega _{\alpha }} as the initial ordinal of the cardinal ℵ α {\displaystyle \aleph _{\alpha }} . The existence of ω 1 {\displaystyle \omega _{1}} can be proven without the axiom of choice. For more, see Hartogs number.
|
https://en.wikipedia.org/wiki/First_uncountable_ordinal
|
In mathematics, the fixed-point index is a concept in topological fixed-point theory, and in particular Nielsen theory. The fixed-point index can be thought of as a multiplicity measurement for fixed points. The index can be easily defined in the setting of complex analysis: Let f(z) be a holomorphic mapping on the complex plane, and let z0 be a fixed point of f. Then the function f(z) − z is holomorphic, and has an isolated zero at z0. We define the fixed-point index of f at z0, denoted i(f, z0), to be the multiplicity of the zero of the function f(z) − z at the point z0.
|
https://en.wikipedia.org/wiki/Fixed-point_index
|
In real Euclidean space, the fixed-point index is defined as follows: If x0 is an isolated fixed point of f, then let g be the function defined by g ( x ) = x − f ( x ) | | x − f ( x ) | | . {\displaystyle g(x)={\frac {x-f(x)}{||x-f(x)||}}.} Then g has an isolated singularity at x0, and maps the boundary of some deleted neighborhood of x0 to the unit sphere. We define i(f, x0) to be the Brouwer degree of the mapping induced by g on some suitably chosen small sphere around x0.
|
https://en.wikipedia.org/wiki/Fixed-point_index
|
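The index at an isolated fixed point can be computed numerically as the winding number of f(z) − z around a small circle about z₀, which in the holomorphic case recovers the multiplicity of the zero. A sketch under the assumption that the circle radius r is small enough to isolate the fixed point (the function name and defaults are choices made here):

```python
import cmath

def fixed_point_index(f, z0, r=1e-3, samples=2000):
    """Winding number of g(z) = f(z) - z around a small circle centered at z0."""
    total = 0.0
    prev = None
    for k in range(samples + 1):
        z = z0 + r * cmath.exp(2j * cmath.pi * k / samples)
        ang = cmath.phase(f(z) - z)
        if prev is not None:
            d = ang - prev
            # unwrap jumps across the branch cut at +/- pi
            if d > cmath.pi:
                d -= 2 * cmath.pi
            elif d < -cmath.pi:
                d += 2 * cmath.pi
            total += d
        prev = ang
    return round(total / (2 * cmath.pi))
```

For f(z) = z², both fixed points 0 and 1 are simple zeros of z² − z, so each has index 1; for f(z) = z + z³ the fixed point at 0 has index 3.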
In mathematics, the flat topology is a Grothendieck topology used in algebraic geometry. It is used to define the theory of flat cohomology; it also plays a fundamental role in the theory of descent (faithfully flat descent). The term flat here comes from flat modules. There are several slightly different flat topologies, the most common of which are the fppf topology and the fpqc topology.
|
https://en.wikipedia.org/wiki/Flat_cohomology
|
fppf stands for fidèlement plate de présentation finie, and in this topology, a morphism of affine schemes is a covering morphism if it is faithfully flat and of finite presentation. fpqc stands for fidèlement plate et quasi-compacte, and in this topology, a morphism of affine schemes is a covering morphism if it is faithfully flat. In both categories, a covering family is defined to be a family which is a cover on Zariski open subsets.
|
https://en.wikipedia.org/wiki/Flat_cohomology
|
In the fpqc topology, any faithfully flat and quasi-compact morphism is a cover. These topologies are closely related to descent. The "pure" faithfully flat topology without any further finiteness conditions such as quasi-compactness or finite presentation is not used much, as it is not subcanonical; in other words, representable functors need not be sheaves.
|
https://en.wikipedia.org/wiki/Flat_cohomology
|
Unfortunately the terminology for flat topologies is not standardized. Some authors use the term "topology" for a pretopology, and there are several slightly different pretopologies sometimes called the fppf or fpqc (pre)topology, which sometimes give the same topology. Flat cohomology was introduced by Grothendieck in about 1960.
|
https://en.wikipedia.org/wiki/Flat_cohomology
|
In mathematics, the flatness (symbol: ⏥) of a surface is the degree to which it approximates a mathematical plane. The term is often generalized for higher-dimensional manifolds to describe the degree to which they approximate the Euclidean space of the same dimensionality. (See curvature.) Flatness in homological algebra and algebraic geometry means, of an object A {\displaystyle A} in an abelian category, that − ⊗ A {\displaystyle -\otimes A} is an exact functor. See flat module or, for more generality, flat morphism.
|
https://en.wikipedia.org/wiki/Flatness_(mathematics)
|
In mathematics, the folded spectrum method (FSM) is an iterative method for solving large eigenvalue problems. The iteration always converges to a vector whose eigenvalue is close to a chosen search value ε {\displaystyle \varepsilon } . This means one can obtain a vector Ψ {\displaystyle \Psi } from the middle of the spectrum without diagonalizing the full matrix.
|
https://en.wikipedia.org/wiki/Folded_spectrum_method
|
Ψ i + 1 = Ψ i − α ( H − ε 1 ) 2 Ψ i {\displaystyle \Psi _{i+1}=\Psi _{i}-\alpha (H-\varepsilon \mathbf {1} )^{2}\Psi _{i}} , with 0 < α < 1 {\displaystyle 0<\alpha ^{\,}<1} and 1 {\displaystyle \mathbf {1} } the identity matrix. In contrast to the Conjugate gradient method, here the gradient is computed by applying the matrix H twice: G ∼ H → G ∼ H 2 . {\displaystyle H:\;G\sim H\rightarrow G\sim H^{2}.}
|
https://en.wikipedia.org/wiki/Folded_spectrum_method
|
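The update rule above is easy to sketch in pure Python for a small symmetric matrix. This is an illustration, not a production eigensolver: the function names, the step size α, the iteration count, and the uniform starting vector are all choices made here. Each step applies (H − εI)² via two matrix-vector products and renormalizes, so components with eigenvalues far from ε decay and the iterate converges toward the eigenvector whose eigenvalue is closest to ε.

```python
import math

def mat_vec(H, v):
    """Plain matrix-vector product for a list-of-lists matrix."""
    return [sum(H[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def folded_spectrum(H, eps, alpha=0.1, iters=5000):
    """Iterate psi <- psi - alpha * (H - eps*I)^2 psi, normalizing each step."""
    n = len(H)
    A = [[H[i][j] - (eps if i == j else 0.0) for j in range(n)] for i in range(n)]
    psi = [1.0 / math.sqrt(n)] * n  # arbitrary nonzero starting vector
    for _ in range(iters):
        w = mat_vec(A, mat_vec(A, psi))  # (H - eps I)^2 psi via two products
        psi = [p - alpha * x for p, x in zip(psi, w)]
        norm = math.sqrt(sum(p * p for p in psi))
        psi = [p / norm for p in psi]
    return psi
```

For convergence, α must be small enough that α(λ − ε)² < 2 for every eigenvalue λ of H.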
In mathematics, the following matrix was given by Indian mathematician Brahmagupta: B ( x , y ) = {\displaystyle B(x,y)={\begin{bmatrix}x&y\\\pm ty&\pm x\end{bmatrix}}.} It satisfies B ( x 1 , y 1 ) B ( x 2 , y 2 ) = B ( x 1 x 2 ± t y 1 y 2 , x 1 y 2 ± y 1 x 2 ) . {\displaystyle B(x_{1},y_{1})B(x_{2},y_{2})=B(x_{1}x_{2}\pm ty_{1}y_{2},x_{1}y_{2}\pm y_{1}x_{2}).\,} Powers of the matrix are defined by
|
https://en.wikipedia.org/wiki/Brahmagupta_matrix
|
{\displaystyle B^{n}={\begin{bmatrix}x&y\\ty&x\end{bmatrix}}^{n}={\begin{bmatrix}x_{n}&y_{n}\\ty_{n}&x_{n}\end{bmatrix}}\equiv B_{n}.} The x n {\displaystyle \ x_{n}} and y n {\displaystyle \ y_{n}} are called Brahmagupta polynomials. The Brahmagupta matrices can be extended to negative integers: {\displaystyle B^{-n}={\begin{bmatrix}x&y\\ty&x\end{bmatrix}}^{-n}={\begin{bmatrix}x_{-n}&y_{-n}\\ty_{-n}&x_{-n}\end{bmatrix}}\equiv B_{-n}.}
|
https://en.wikipedia.org/wiki/Brahmagupta_matrix
|
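The multiplication identity for Brahmagupta matrices can be checked directly with 2×2 matrix arithmetic. A sketch taking the + sign throughout (the helper names `bmat` and `mmul` are made up here):

```python
def bmat(x, y, t):
    """Brahmagupta matrix B(x, y) = [[x, y], [t*y, x]], taking the + sign."""
    return [[x, y], [t * y, x]]

def mmul(A, B):
    """Product of two 2x2 matrices given as lists of lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

The composition law B(x₁, y₁)B(x₂, y₂) = B(x₁x₂ + t y₁y₂, x₁y₂ + y₁x₂) is what makes these matrices useful: it encodes Brahmagupta's identity for composing solutions of x² − t y² = k.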
In mathematics, the formal derivative is an operation on elements of a polynomial ring or a ring of formal power series that mimics the form of the derivative from calculus. Though they appear similar, the algebraic advantage of a formal derivative is that it does not rely on the notion of a limit, which is in general impossible to define for a ring. Many of the properties of the derivative are true of the formal derivative, but some, especially those that make numerical statements, are not. Formal differentiation is used in algebra to test for multiple roots of a polynomial.
|
https://en.wikipedia.org/wiki/Formal_derivative
|
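Since the formal derivative is defined purely on coefficients, d/dx of Σ aᵢxⁱ being Σ i·aᵢx^(i−1), it is a one-line operation on a coefficient list. A sketch (the names `formal_derivative` and `ev` are illustrative), including the multiple-root test mentioned above: a repeated root of a polynomial is also a root of its formal derivative.

```python
def formal_derivative(coeffs):
    """Formal derivative of sum a_i x^i, given as coefficient list [a_0, a_1, ...]."""
    return [i * a for i, a in enumerate(coeffs)][1:]

def ev(coeffs, x):
    """Evaluate the polynomial at x (used to check roots)."""
    return sum(a * x**i for i, a in enumerate(coeffs))
```

For example, (x − 1)² = x² − 2x + 1 has coefficient list [1, −2, 1]; both it and its formal derivative [−2, 2] vanish at x = 1, exposing the repeated root without any limit process.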
In mathematics, the four color theorem, or the four color map theorem, states that given any separation of a plane into contiguous regions, producing a figure called a map, no more than four colors are required to color the regions of the map—so that no two adjacent regions have the same color. Two regions are called adjacent if they share a common boundary that is not a corner, where corners are the points shared by three or more regions. For example, in the map of the United States of America, Utah and Arizona are adjacent, but Utah and New Mexico, which only share a point that also belongs to Arizona and Colorado, are not.
|
https://en.wikipedia.org/wiki/Conjecture
|
Möbius mentioned the problem in his lectures as early as 1840. The conjecture was first proposed on October 23, 1852 when Francis Guthrie, while trying to color the map of counties of England, noticed that only four different colors were needed. The five color theorem, which has a short elementary proof, states that five colors suffice to color a map and was proven in the late 19th century; however, proving that four colors suffice turned out to be significantly harder.
|
https://en.wikipedia.org/wiki/Conjecture
|
A number of false proofs and false counterexamples have appeared since the first statement of the four color theorem in 1852. The four color theorem was ultimately proven in 1976 by Kenneth Appel and Wolfgang Haken. It was the first major theorem to be proved using a computer.
|
https://en.wikipedia.org/wiki/Conjecture
|