| text | source |
|---|---|
In mathematics, specifically transcendental number theory, the six exponentials theorem is a result that, given the right conditions on the exponents, guarantees the transcendence of at least one of a set of exponentials.
|
https://en.wikipedia.org/wiki/Six_exponentials_theorem
|
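For concreteness, the standard formulation of the theorem (not quoted in the excerpt above) is the following:

```latex
\textbf{Theorem (six exponentials).}
Let $x_1, x_2$ be complex numbers that are linearly independent over $\mathbb{Q}$,
and let $y_1, y_2, y_3$ be complex numbers that are also linearly independent
over $\mathbb{Q}$. Then at least one of the six numbers
\[
  e^{x_i y_j} \qquad (i = 1, 2;\; j = 1, 2, 3)
\]
is transcendental.
```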
In mathematics, specifically in category theory, a 2-functor is a morphism between 2-categories. They may be defined formally using enrichment by saying that a 2-category is exactly a Cat-enriched category and a 2-functor is a Cat-functor. Explicitly, if C and D are 2-categories, then a 2-functor $F\colon C\to D$ consists of a function $F\colon \mathrm{Ob}\,C\to \mathrm{Ob}\,D$ and, for each pair of objects $c,c'\in \mathrm{Ob}\,C$, a functor $F_{c,c'}\colon \mathrm{Hom}_{C}(c,c')\to \mathrm{Hom}_{D}(Fc,Fc')$, such that each $F_{c,c}$ strictly preserves identity objects and these functors commute with horizontal composition in C and D. Lax versions of this notion also exist.
|
https://en.wikipedia.org/wiki/2-functor
|
In mathematics, spectral graph theory is the study of the properties of a graph in relationship to the characteristic polynomial, eigenvalues, and eigenvectors of matrices associated with the graph, such as its adjacency matrix or Laplacian matrix. The adjacency matrix of a simple undirected graph is a real symmetric matrix and is therefore orthogonally diagonalizable; its eigenvalues are real algebraic integers. While the adjacency matrix depends on the vertex labeling, its spectrum is a graph invariant, although not a complete one. Spectral graph theory is also concerned with graph parameters that are defined via multiplicities of eigenvalues of matrices associated to the graph, such as the Colin de Verdière number.
|
https://en.wikipedia.org/wiki/Graph_spectrum
|
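As a small illustration (not from the excerpt above), the spectrum of a concrete graph can be computed directly; here the 4-cycle C4, whose adjacency eigenvalues are 2, 0, 0, -2, is analyzed with NumPy:

```python
import numpy as np

# Adjacency matrix of the 4-cycle C4 (vertices 0-1-2-3-0).
A = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
], dtype=float)

# A is real symmetric, so eigvalsh applies; it returns real eigenvalues
# in ascending order -- these are algebraic integers, as the text notes.
eigenvalues = np.linalg.eigvalsh(A)

# Laplacian L = D - A, with D the diagonal matrix of vertex degrees.
L = np.diag(A.sum(axis=1)) - A
laplacian_eigenvalues = np.linalg.eigvalsh(L)
```

Relabeling the vertices permutes rows and columns of `A` but leaves `eigenvalues` unchanged, illustrating that the spectrum is a graph invariant.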
In mathematics, spectral theory is an inclusive term for theories extending the eigenvector and eigenvalue theory of a single square matrix to a much broader theory of the structure of operators in a variety of mathematical spaces. It is a result of studies of linear algebra and the solutions of systems of linear equations and their generalizations. The theory is connected to that of analytic functions because the spectral properties of an operator are related to analytic functions of the spectral parameter.
|
https://en.wikipedia.org/wiki/Spectral_theory
|
In mathematics, spin geometry is the area of differential geometry and topology where objects like spin manifolds and Dirac operators, and the various associated index theorems have come to play a fundamental role both in mathematics and in mathematical physics. An important generalisation is the theory of symplectic Dirac operators in symplectic spin geometry and symplectic topology, which have become important fields of mathematical research.
|
https://en.wikipedia.org/wiki/Spin_geometry
|
In mathematics, spin networks have been used to study skein modules and character varieties, which correspond to spaces of connections.
|
https://en.wikipedia.org/wiki/Spin_network
|
In mathematics, stability theory addresses the stability of solutions of differential equations and of trajectories of dynamical systems under small perturbations of initial conditions. The heat equation, for example, is a stable partial differential equation because small perturbations of initial data lead to small variations in temperature at a later time, as a result of the maximum principle. In partial differential equations one may measure the distances between functions using Lp norms or the sup norm, while in differential geometry one may measure the distance between spaces using the Gromov–Hausdorff distance. In dynamical systems, an orbit is called Lyapunov stable if the forward orbit of any point that starts in a small enough neighborhood of the orbit stays in a small (but perhaps larger) neighborhood.
|
https://en.wikipedia.org/wiki/Stability_theory
|
Various criteria have been developed to prove stability or instability of an orbit. Under favorable circumstances, the question may be reduced to a well-studied problem involving eigenvalues of matrices. A more general method involves Lyapunov functions. In practice, any one of a number of different stability criteria is applied.
|
https://en.wikipedia.org/wiki/Stability_theory
|
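The eigenvalue criterion mentioned above can be sketched in a few lines (a minimal illustration, not from the excerpt): an equilibrium of a smooth system is asymptotically stable when every eigenvalue of the linearization has negative real part.

```python
import numpy as np

def is_linearly_stable(jacobian):
    """An equilibrium of x' = f(x) is asymptotically stable when every
    eigenvalue of the Jacobian of f at the equilibrium has negative real part."""
    return bool(np.all(np.linalg.eigvals(jacobian).real < 0))

# Damped harmonic oscillator x'' + 2x' + x = 0, as a first-order system:
# eigenvalues are -1 (double), so the origin is stable.
A_stable = np.array([[0.0, 1.0], [-1.0, -2.0]])

# Linearized inverted pendulum x'' - x = 0: eigenvalues +1 and -1,
# so the origin is unstable.
A_unstable = np.array([[0.0, 1.0], [1.0, 0.0]])
```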
In mathematics, stable homotopy theory is the part of homotopy theory (and thus algebraic topology) concerned with all structure and phenomena that remain after sufficiently many applications of the suspension functor. A founding result was the Freudenthal suspension theorem, which states that given any pointed space $X$, the homotopy groups $\pi_{n+k}(\Sigma^n X)$ stabilize for $n$ sufficiently large. In particular, the homotopy groups of spheres $\pi_{n+k}(S^n)$ stabilize for $n \geq k+2$. For example,
$$\langle \mathrm{id}_{S^1}\rangle = \mathbb{Z} = \pi_1(S^1) \cong \pi_2(S^2) \cong \pi_3(S^3) \cong \cdots$$
$$\langle \eta \rangle = \mathbb{Z} = \pi_3(S^2) \to \pi_4(S^3) \cong \pi_5(S^4) \cong \cdots$$
In the two examples above all the maps between homotopy groups are applications of the suspension functor.
|
https://en.wikipedia.org/wiki/Stable_homotopy
|
The first example is a standard corollary of the Hurewicz theorem, that $\pi_n(S^n) \cong \mathbb{Z}$. In the second example the Hopf map, $\eta$, is mapped to its suspension $\Sigma\eta$, which generates $\pi_4(S^3) \cong \mathbb{Z}/2$. One of the most important problems in stable homotopy theory is the computation of stable homotopy groups of spheres.
|
https://en.wikipedia.org/wiki/Stable_homotopy
|
According to Freudenthal's theorem, in the stable range the homotopy groups of spheres depend not on the specific dimensions of the spheres in the domain and target, but on the difference in those dimensions. With this in mind the k-th stable stem is $\pi_k^s := \lim_n \pi_{n+k}(S^n)$. This is an abelian group for all k. It is a theorem of Jean-Pierre Serre that these groups are finite for $k \neq 0$. In fact, composition makes $\pi_*^s$ into a graded ring.
|
https://en.wikipedia.org/wiki/Stable_homotopy
|
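For concreteness, the first few stable stems (standard values, not listed in the excerpt above) are:

```latex
\pi_0^s \cong \mathbb{Z}, \quad
\pi_1^s \cong \mathbb{Z}/2, \quad
\pi_2^s \cong \mathbb{Z}/2, \quad
\pi_3^s \cong \mathbb{Z}/24,
\]
\[
\pi_4^s = 0, \quad
\pi_5^s = 0, \quad
\pi_6^s \cong \mathbb{Z}/2, \quad
\pi_7^s \cong \mathbb{Z}/240.
```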
A theorem of Goro Nishida states that all elements of positive grading in this ring are nilpotent. Thus the only prime ideals are the primes in $\pi_0^s \cong \mathbb{Z}$.
|
https://en.wikipedia.org/wiki/Stable_homotopy
|
So the structure of $\pi_*^s$ is quite complicated. In the modern treatment of stable homotopy theory, spaces are typically replaced by spectra.
|
https://en.wikipedia.org/wiki/Stable_homotopy
|
Following this line of thought, an entire stable homotopy category can be created. This category has many nice properties that are not present in the (unstable) homotopy category of spaces, following from the fact that the suspension functor becomes invertible. For example, the notion of cofibration sequence and fibration sequence are equivalent.
|
https://en.wikipedia.org/wiki/Stable_homotopy
|
In mathematics, statistics and elsewhere, sums of squares occur in a number of contexts:
|
https://en.wikipedia.org/wiki/Sum_of_squares
|
In mathematics, statistics, and computational modelling, a grey box model combines a partial theoretical structure with data to complete the model. The theoretical structure may vary from information on the smoothness of results, to models that need only parameter values from data or existing literature. Thus, almost all models are grey box models, as opposed to black box models, where no model form is assumed, or white box models, which are purely theoretical. Some models assume a special form, such as a linear regression or neural network.
|
https://en.wikipedia.org/wiki/Grey_box_model
|
These special forms have dedicated analysis methods; in particular, linear regression techniques are much more efficient than most non-linear techniques. The model can be deterministic or stochastic (i.e. containing random components) depending on its planned use.
|
https://en.wikipedia.org/wiki/Grey_box_model
|
In mathematics, statistics, finance, and computer science, particularly in machine learning and inverse problems, regularization is a process that converts the answer to a problem to a simpler one. It is often used to obtain results for ill-posed problems or to prevent overfitting. Although regularization procedures can be divided in many ways, the following delineation is particularly helpful: Explicit regularization is regularization whenever one explicitly adds a term to the optimization problem. These terms could be priors, penalties, or constraints. Explicit regularization is commonly employed with ill-posed optimization problems.
|
https://en.wikipedia.org/wiki/Regularization_(machine_learning)
|
The regularization term, or penalty, imposes a cost on the optimization function to make the optimal solution unique. Implicit regularization is all other forms of regularization. This includes, for example, early stopping, using a robust loss function, and discarding outliers.
|
https://en.wikipedia.org/wiki/Regularization_(machine_learning)
|
Implicit regularization is essentially ubiquitous in modern machine learning approaches, including stochastic gradient descent for training deep neural networks, and ensemble methods (such as random forests and gradient boosted trees). In explicit regularization, independent of the problem or model, there is always a data term, which corresponds to a likelihood of the measurement, and a regularization term, which corresponds to a prior. By combining both using Bayesian statistics, one can compute a posterior that includes both information sources and therefore stabilizes the estimation process. By trading off both objectives, one chooses to adhere more closely to the data or to enforce generalization (to prevent overfitting).
|
https://en.wikipedia.org/wiki/Regularization_(machine_learning)
|
There is a whole research branch dealing with all possible regularizations. In practice, one usually tries a specific regularization and then figures out the probability density that corresponds to that regularization to justify the choice. It can also be physically motivated by common sense or intuition. In machine learning, the data term corresponds to the training data and the regularization is either the choice of the model or modifications to the algorithm. It is always intended to reduce the generalization error, i.e. the error score with the trained model on the evaluation set rather than the training data. One of the earliest uses of regularization is Tikhonov regularization, related to the method of least squares.
|
https://en.wikipedia.org/wiki/Regularization_(machine_learning)
|
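A minimal sketch of Tikhonov regularization in its ridge-regression form (an illustration, not from the excerpt): the penalty $\lambda\lVert w\rVert^2$ is added to the least-squares objective, which amounts to adding $\lambda I$ to the normal equations.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Tikhonov-regularized least squares:
    minimize ||X w - y||^2 + lam * ||w||^2, solved in closed form as
    w = (X^T X + lam * I)^{-1} X^T y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# Tiny example on exact data: lam = 0 recovers ordinary least squares,
# while lam > 0 shrinks the solution toward zero.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = X @ np.array([2.0, -1.0])
w_ols = ridge_fit(X, y, 0.0)
w_ridge = ridge_fit(X, y, 10.0)
```

Here the regularization term makes the solution unique and stable even when `X.T @ X` is ill-conditioned, at the cost of biasing the estimate.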
In mathematics, stochastic analysis on manifolds or stochastic differential geometry is the study of stochastic analysis over smooth manifolds. It is therefore a synthesis of stochastic analysis and differential geometry. The connection between analysis and stochastic processes stems from the fundamental relation that the infinitesimal generator of a continuous strong Markov process is a second-order elliptic operator. The infinitesimal generator of Brownian motion is the Laplace operator, and the transition probability density $p(t,x,y)$ of Brownian motion is the minimal heat kernel of the heat equation.
|
https://en.wikipedia.org/wiki/Stochastic_analysis_on_manifolds
|
Interpreting the paths of Brownian motion as characteristic curves of the operator, Brownian motion can be seen as a stochastic counterpart of a flow to a second-order partial differential operator. Stochastic analysis on manifolds investigates stochastic processes on non-linear state spaces or manifolds. Classical theory can be reformulated in a coordinate-free representation.
|
https://en.wikipedia.org/wiki/Stochastic_analysis_on_manifolds
|
In this setting it is often complicated, or not possible, to formulate objects with the coordinates of $\mathbb{R}^d$. Thus, we require an additional structure in the form of a linear connection or Riemannian metric to define martingales and Brownian motion on manifolds. Controlled by the Riemannian metric, Brownian motion is therefore a local object by definition.
|
https://en.wikipedia.org/wiki/Stochastic_analysis_on_manifolds
|
However, its stochastic behaviour determines global aspects of the topology and geometry of the manifold. Brownian motion is defined to be the diffusion process generated by the Laplace–Beltrami operator $\tfrac{1}{2}\Delta_M$ with respect to a manifold $M$, and can be constructed as the solution to a non-canonical stochastic differential equation on a Riemannian manifold. As there is no Hörmander representation of the operator $\Delta_M$ if the manifold is not parallelizable, i.e. if the tangent bundle is not trivial, there is no canonical procedure to construct Brownian motion.
|
https://en.wikipedia.org/wiki/Stochastic_analysis_on_manifolds
|
However, this obstacle can be overcome if the manifold is equipped with a connection: we can then introduce the stochastic horizontal lift of a semimartingale and the stochastic development by the so-called Eells–Elworthy–Malliavin construction. The latter is a generalisation of a horizontal lift of smooth curves to horizontal curves in the frame bundle, such that the anti-development and the horizontal lift are connected by a stochastic differential equation. Using this, we can consider an SDE on the orthonormal frame bundle of a Riemannian manifold, whose solution is Brownian motion and projects down to the (base) manifold via stochastic development. A visual representation of this construction corresponds to the construction of a spherical Brownian motion by rolling the manifold without slipping along the paths (or footprints) of Brownian motion left in Euclidean space. Stochastic differential geometry provides insight into classical analytic problems, and offers new approaches to prove results by means of probability.
|
https://en.wikipedia.org/wiki/Stochastic_analysis_on_manifolds
|
For example, one can apply Brownian motion to the Dirichlet problem at infinity for Cartan-Hadamard manifolds or give a probabilistic proof of the Atiyah-Singer index theorem. Stochastic differential geometry also applies in other areas of mathematics (e.g. mathematical finance). For example, we can convert classical arbitrage theory into differential-geometric language (also called geometric arbitrage theory).
|
https://en.wikipedia.org/wiki/Stochastic_analysis_on_manifolds
|
In mathematics, stochastic geometry is the study of random spatial patterns. At the heart of the subject lies the study of random point patterns. This leads to the theory of spatial point processes, hence notions of Palm conditioning, which extend to the more abstract setting of random measures.
|
https://en.wikipedia.org/wiki/Stochastic_geometry
|
In mathematics, stratified Morse theory is an analogue of Morse theory for general stratified spaces, originally developed by Mark Goresky and Robert MacPherson. The main point of the theory is to consider functions $f\colon M\to \mathbb{R}$ and how the stratified space $f^{-1}(-\infty, c]$ changes as the real number $c\in \mathbb{R}$ changes. Morse theory of stratified spaces has uses everywhere from pure mathematics topics such as braid groups and representations to robot motion planning and potential theory. A popular application in pure mathematics is Morse theory on manifolds with boundary, and manifolds with corners.
|
https://en.wikipedia.org/wiki/Stratified_Morse_theory
|
In mathematics, strict differentiability is a modification of the usual notion of differentiability of functions that is particularly suited to p-adic analysis. In short, the definition is made more restrictive by allowing both points used in the difference quotient to "move".
|
https://en.wikipedia.org/wiki/Strict_differentiability
|
In mathematics, strict positivity is a concept in measure theory. Intuitively, a strictly positive measure is one that is "nowhere zero", or that is zero "only on points".
|
https://en.wikipedia.org/wiki/Strictly_positive_measure
|
In mathematics, structural Ramsey theory is a categorical generalisation of Ramsey theory, rooted in the idea that many important results of Ramsey theory have "similar" logical structures. The key observation is noting that these Ramsey-type theorems can be expressed as the assertion that a certain category (or class of finite structures) has the Ramsey property (defined below). Structural Ramsey theory began in the 1970s with the work of Nešetřil and Rödl, and is intimately connected to Fraïssé theory. It received some renewed interest in the mid-2000s due to the discovery of the Kechris–Pestov–Todorčević correspondence, which connected structural Ramsey theory to topological dynamics.
|
https://en.wikipedia.org/wiki/Structural_Ramsey_theory
|
In mathematics, structural stability is a fundamental property of a dynamical system which means that the qualitative behavior of the trajectories is unaffected by small perturbations (to be exact C1-small perturbations). Examples of such qualitative properties are numbers of fixed points and periodic orbits (but not their periods). Unlike Lyapunov stability, which considers perturbations of initial conditions for a fixed system, structural stability deals with perturbations of the system itself.
|
https://en.wikipedia.org/wiki/Structural_stability
|
Variants of this notion apply to systems of ordinary differential equations, vector fields on smooth manifolds and flows generated by them, and diffeomorphisms. Structurally stable systems were introduced by Aleksandr Andronov and Lev Pontryagin in 1937 under the name "systèmes grossiers", or rough systems. They announced a characterization of rough systems in the plane, the Andronov–Pontryagin criterion.
|
https://en.wikipedia.org/wiki/Structural_stability
|
In this case, structurally stable systems are typical: they form an open dense set in the space of all systems endowed with an appropriate topology. In higher dimensions this is no longer true, indicating that typical dynamics can be very complex (cf. strange attractor). An important class of structurally stable systems in arbitrary dimensions is given by Anosov diffeomorphisms and flows. During the late 1950s and the early 1960s, Maurício Peixoto and Marília Chaves Peixoto, motivated by the work of Andronov and Pontryagin, developed and proved Peixoto's theorem, the first global characterization of structural stability.
|
https://en.wikipedia.org/wiki/Structural_stability
|
In mathematics, subadditivity is a property of a function that states, roughly, that evaluating the function for the sum of two elements of the domain always returns something less than or equal to the sum of the function's values at each element. There are numerous examples of subadditive functions in various areas of mathematics, particularly norms and square roots. Additive maps are special cases of subadditive functions.
|
https://en.wikipedia.org/wiki/Subadditive_function
|
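The square-root example mentioned above can be spot-checked numerically (a sampled illustration, not a proof):

```python
import math
import random

def is_subadditive_sample(f, trials=1000, seed=0):
    """Spot-check the subadditivity inequality f(a + b) <= f(a) + f(b)
    on random nonnegative inputs. Passing is evidence, not a proof."""
    rng = random.Random(seed)
    for _ in range(trials):
        a, b = rng.uniform(0, 100), rng.uniform(0, 100)
        if f(a + b) > f(a) + f(b) + 1e-12:
            return False
    return True

# sqrt is subadditive on [0, inf): sqrt(a+b) <= sqrt(a) + sqrt(b).
# Squaring is not: (a+b)^2 = a^2 + 2ab + b^2 > a^2 + b^2 for a, b > 0.
```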
In mathematics, subgroup growth is a branch of group theory dealing with quantitative questions about subgroups of a given group. Let $G$ be a finitely generated group. Then, for each integer $n$, define $a_n(G)$ to be the number of subgroups $H$ of index $n$ in $G$. Similarly, if $G$ is a topological group, $s_n(G)$ denotes the number of open subgroups $U$ of index $n$ in $G$.
|
https://en.wikipedia.org/wiki/Subgroup_growth
|
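As a concrete illustration (not from the excerpt), the subgroup counts $a_n(\mathbb{Z}^2)$ can be computed using the standard bijection between finite-index sublattices of $\mathbb{Z}^2$ and integer matrices in Hermite normal form $\begin{pmatrix}a & b\\ 0 & d\end{pmatrix}$ with $ad = n$ and $0 \le b < d$; this gives $a_n(\mathbb{Z}^2) = \sigma(n)$, the sum of divisors of $n$.

```python
def subgroup_count_Z2(n):
    """a_n(Z^2): count index-n subgroups of Z^2 via Hermite-normal-form
    matrices [[a, b], [0, d]] with a * d = n and 0 <= b < d."""
    count = 0
    for d in range(1, n + 1):
        if n % d == 0:
            count += d  # d choices of b for the divisor pair (a, d), a = n // d
    return count

def sigma(n):
    """Sum of the divisors of n."""
    return sum(d for d in range(1, n + 1) if n % d == 0)
```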
One similarly defines $m_n(G)$ and $s_n^{\triangleleft}(G)$ to denote the number of maximal and normal subgroups of index $n$, respectively. Subgroup growth studies these functions, their interplay, and the characterization of group-theoretical properties in terms of these functions. The theory was motivated by the desire to enumerate finite groups of given order, and by the analogy with Mikhail Gromov's notion of word growth.
|
https://en.wikipedia.org/wiki/Subgroup_growth
|
In mathematics, subharmonic and superharmonic functions are important classes of functions used extensively in partial differential equations, complex analysis and potential theory. Intuitively, subharmonic functions are related to convex functions of one variable as follows. If the graph of a convex function and a line intersect at two points, then the graph of the convex function is below the line between those points. In the same way, if the values of a subharmonic function are no larger than the values of a harmonic function on the boundary of a ball, then the values of the subharmonic function are no larger than the values of the harmonic function also inside the ball. Superharmonic functions can be defined by the same description, only replacing "no larger" with "no smaller". Alternatively, a superharmonic function is just the negative of a subharmonic function, and for this reason any property of subharmonic functions can be easily transferred to superharmonic functions.
|
https://en.wikipedia.org/wiki/Subharmonic_function
|
In mathematics, subshifts of finite type are used to model dynamical systems, and in particular are the objects of study in symbolic dynamics and ergodic theory. They also describe the set of all possible sequences executed by a finite state machine. The most widely studied shift spaces are the subshifts of finite type.
|
https://en.wikipedia.org/wiki/Subshifts_of_finite_type
|
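A tiny illustration (not from the excerpt): the golden mean shift is the subshift of finite type over {0, 1} in which the block "11" is forbidden, and its allowed words of length n are counted by Fibonacci numbers.

```python
from itertools import product

def allowed_words(length, forbidden=("11",)):
    """All binary words of the given length avoiding the forbidden blocks --
    the finite words of the golden mean shift when '11' is forbidden."""
    words = []
    for bits in product("01", repeat=length):
        word = "".join(bits)
        if not any(block in word for block in forbidden):
            words.append(word)
    return words

# Word counts grow like Fibonacci numbers: 2, 3, 5, 8, 13, ...
counts = [len(allowed_words(n)) for n in range(1, 6)]
```

The brute-force enumeration here is exponential in the word length; for real use one would count paths in the transition graph of the finite state machine instead.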
In mathematics, subtle cardinals and ethereal cardinals are closely related kinds of large cardinal number. A cardinal κ is called subtle if for every closed and unbounded C ⊂ κ and for every sequence A = ⟨A_δ : δ < κ⟩ with A_δ ⊂ δ for each δ, there exist α, β belonging to C, with α < β, such that A_α = A_β ∩ α. A cardinal κ is called ethereal if for every closed and unbounded C ⊂ κ and for every sequence A = ⟨A_δ : δ < κ⟩ with A_δ ⊂ δ and A_δ of the same cardinality as δ for each δ, there exist α, β belonging to C, with α < β, such that card(α) = card(A_β ∩ A_α). Subtle cardinals were introduced by Jensen & Kunen (1969). Ethereal cardinals were introduced by Ketonen (1974). Any subtle cardinal is ethereal, and any strongly inaccessible ethereal cardinal is subtle.
|
https://en.wikipedia.org/wiki/Ethereal_cardinal
|
In mathematics, summation by parts transforms the summation of products of sequences into other summations, often simplifying the computation or (especially) estimation of certain types of sums. It is also called Abel's lemma or Abel transformation, named after Niels Henrik Abel who introduced it in 1826.
|
https://en.wikipedia.org/wiki/Summation_by_parts
|
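The transformation itself can be stated compactly. Given sequences $(a_k)$ and $(b_k)$ with partial sums $B_k = \sum_{j=0}^{k} b_j$, Abel's lemma reads:

```latex
\sum_{k=0}^{n} a_k b_k
  \;=\; a_n B_n \;-\; \sum_{k=0}^{n-1} \left(a_{k+1} - a_k\right) B_k .
```

This is the discrete analogue of integration by parts: the sum of products is traded for a boundary term plus a sum involving the differences of one sequence and the partial sums of the other, which is often easier to estimate.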
In mathematics, summation is the addition of a sequence of any kind of numbers, called addends or summands; the result is their sum or total. Beside numbers, other types of values can be summed as well: functions, vectors, matrices, polynomials and, in general, elements of any type of mathematical objects on which an operation denoted "+" is defined. Summations of infinite sequences are called series. They involve the concept of limit, and are not considered in this article.
|
https://en.wikipedia.org/wiki/Sigma_notation
|
The summation of an explicit sequence is denoted as a succession of additions. For example, summation of the sequence (1, 2, 4, 2) is denoted 1 + 2 + 4 + 2, and results in 9, that is, 1 + 2 + 4 + 2 = 9. Because addition is associative and commutative, there is no need for parentheses, and the result is the same irrespective of the order of the summands.
|
https://en.wikipedia.org/wiki/Sigma_notation
|
Summation of a sequence of only one element results in this element itself. Summation of an empty sequence (a sequence with no elements), by convention, results in 0. Very often, the elements of a sequence are defined, through a regular pattern, as a function of their place in the sequence.
|
https://en.wikipedia.org/wiki/Sigma_notation
|
For simple patterns, summation of long sequences may be represented with most summands replaced by ellipses. For example, summation of the first 100 natural numbers may be written as 1 + 2 + 3 + 4 + ⋯ + 99 + 100. Otherwise, summation is denoted using Σ notation, where $\sum$ is an enlarged capital Greek letter sigma.
|
https://en.wikipedia.org/wiki/Sigma_notation
|
For example, the sum of the first n natural numbers can be denoted as $\sum_{i=1}^{n} i$. For long summations, and summations of variable length (defined with ellipses or Σ notation), it is a common problem to find closed-form expressions for the result.
|
https://en.wikipedia.org/wiki/Sigma_notation
|
For example, $\sum_{i=1}^{n} i = \frac{n(n+1)}{2}$. Although such formulas do not always exist, many summation formulas have been discovered, with some of the most common and elementary ones being listed in the remainder of this article.
|
https://en.wikipedia.org/wiki/Sigma_notation
|
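The closed form above is easy to verify against direct summation (a quick sanity check, not from the article):

```python
def sum_first_n(n):
    """Closed form for 1 + 2 + ... + n, namely n(n + 1)/2."""
    return n * (n + 1) // 2

# Compare the closed form with a direct summation for many values of n.
checks = all(sum_first_n(n) == sum(range(1, n + 1)) for n in range(200))
```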
In mathematics, superfunction is a nonstandard name for an iterated function with a complexified continuous iteration index. Roughly, for some function f and for some variable x, the superfunction could be defined by the expression
$$S(z;x)=\underbrace{f{\Big (}f{\big (}\dots f(x)\dots {\big )}{\Big )}}_{z\text{ evaluations of the function }f}.$$
Then S(z; x) can be interpreted as the superfunction of the function f(x).
|
https://en.wikipedia.org/wiki/Superfunction
|
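For the positive-integer case of the definition above, the superfunction is just repeated application of f (a minimal sketch; the interesting extensions to complex z are beyond a few lines of code):

```python
def superfunction(f, z, x):
    """S(z; x): apply f to x exactly z times, matching the definition
    above for a positive-integer iteration index z."""
    for _ in range(z):
        x = f(x)
    return x

# With f(x) = x + 1, S(z; x) = x + z; with f(x) = 2x, S(z; x) = x * 2**z.
```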
Such a definition is valid only for a positive integer index z. The variable x is often omitted. Much study and many applications of superfunctions employ various extensions of these superfunctions to complex and continuous indices; and the analysis of existence, uniqueness and their evaluation. The Ackermann functions and tetration can be interpreted in terms of superfunctions.
|
https://en.wikipedia.org/wiki/Superfunction
|
In mathematics, surfaces of class VII are non-algebraic complex surfaces studied by (Kodaira 1964, 1968) that have Kodaira dimension −∞ and first Betti number 1. Minimal surfaces of class VII (those with no rational curves with self-intersection −1) are called surfaces of class VII0. Every class VII surface is birational to a unique minimal class VII surface, and can be obtained from this minimal surface by blowing up points a finite number of times.
|
https://en.wikipedia.org/wiki/Global_spherical_shell
|
The name "class VII" comes from (Kodaira 1964, theorem 21), which divided minimal surfaces into 7 classes numbered I0 to VII0. However Kodaira's class VII0 did not have the condition that the Kodaira dimension is −∞, but instead had the condition that the geometric genus is 0. As a result, his class VII0 also included some other surfaces, such as secondary Kodaira surfaces, that are no longer considered to be class VII as they do not have Kodaira dimension −∞. The minimal surfaces of class VII are the class numbered "7" on the list of surfaces in (Kodaira 1968, theorem 55).
|
https://en.wikipedia.org/wiki/Global_spherical_shell
|
In mathematics, symbolic dynamics is the practice of modeling a topological or smooth dynamical system by a discrete space consisting of infinite sequences of abstract symbols, each of which corresponds to a state of the system, with the dynamics (evolution) given by the shift operator. Formally, a Markov partition is used to provide a finite cover for the smooth system; each set of the cover is associated with a single symbol, and the sequences of symbols result as a trajectory of the system moves from one covering set to another.
|
https://en.wikipedia.org/wiki/Symbolic_dynamics
|
In mathematics, symmetric cones, sometimes called domains of positivity, are open convex self-dual cones in Euclidean space which have a transitive group of symmetries, i.e. invertible operators that take the cone onto itself. By the Koecher–Vinberg theorem these correspond to the cone of squares in finite-dimensional real Euclidean Jordan algebras, originally studied and classified by Jordan, von Neumann & Wigner (1934). The tube domain associated with a symmetric cone is a noncompact Hermitian symmetric space of tube type. All the algebraic and geometric structures associated with the symmetric space can be expressed naturally in terms of the Jordan algebra. The other irreducible Hermitian symmetric spaces of noncompact type correspond to Siegel domains of the second kind. These can be described in terms of more complicated structures called Jordan triple systems, which generalize Jordan algebras without identity.
|
https://en.wikipedia.org/wiki/Jordan_frame_(Jordan_algebra)
|
In mathematics, symmetric convolution is a special subset of convolution operations in which the convolution kernel is symmetric across its zero point. Many common convolution-based processes such as Gaussian blur and taking the derivative of a signal in frequency-space are symmetric and this property can be exploited to make these convolutions easier to evaluate.
|
https://en.wikipedia.org/wiki/Symmetric_convolution
|
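A small sketch of the Gaussian blur example (an illustration, not from the excerpt): the kernel is symmetric about its center, which is the property symmetric convolution exploits.

```python
import numpy as np

def gaussian_kernel(radius, sigma):
    """Discrete Gaussian kernel, symmetric about its center (zero point),
    normalized to sum to 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

kernel = gaussian_kernel(3, 1.0)
signal = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0])
blurred = np.convolve(signal, kernel, mode="same")  # Gaussian blur of the signal
```

In frequency space, convolution with such a symmetric kernel has a purely real transfer function, which is the basis of the fast symmetric-convolution evaluation via discrete cosine and sine transforms.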
In mathematics, symmetrization is a process that converts any function in $n$ variables to a symmetric function in $n$ variables. Similarly, antisymmetrization converts any function in $n$ variables into an antisymmetric function.
|
https://en.wikipedia.org/wiki/Symmetric_map
|
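For two variables, symmetrization and antisymmetrization can be written in a couple of lines (a minimal sketch; the n-variable case averages over all permutations with appropriate signs):

```python
def symmetrize(f):
    """Symmetric part of a two-variable function: (f(x,y) + f(y,x)) / 2."""
    return lambda x, y: (f(x, y) + f(y, x)) / 2

def antisymmetrize(f):
    """Antisymmetric part of a two-variable function: (f(x,y) - f(y,x)) / 2."""
    return lambda x, y: (f(x, y) - f(y, x)) / 2

f = lambda x, y: x * x * y  # neither symmetric nor antisymmetric
s, a = symmetrize(f), antisymmetrize(f)
# Every function decomposes as its symmetric plus antisymmetric part.
```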
In mathematics, synthetic differential geometry is a formalization of the theory of differential geometry in the language of topos theory. There are several insights that allow for such a reformulation. The first is that most of the analytic data for describing the class of smooth manifolds can be encoded into certain fibre bundles on manifolds: namely bundles of jets (see also jet bundle). The second insight is that the operation of assigning a bundle of jets to a smooth manifold is functorial in nature.
|
https://en.wikipedia.org/wiki/Synthetic_differential_geometry
|
The third insight is that over a certain category, these are representable functors. Furthermore, their representatives are related to the algebras of dual numbers, so that smooth infinitesimal analysis may be used. Synthetic differential geometry can serve as a platform for formulating certain otherwise obscure or confusing notions from differential geometry. For example, the meaning of natural (or invariant) has a particularly simple expression, even though the formulation in classical differential geometry may be quite difficult.
|
https://en.wikipedia.org/wiki/Synthetic_differential_geometry
|
In mathematics, systolic geometry is the study of systolic invariants of manifolds and polyhedra, as initially conceived by Charles Loewner and developed by Mikhail Gromov, Michael Freedman, Peter Sarnak, Mikhail Katz, Larry Guth, and others, in its arithmetical, ergodic, and topological manifestations. See also a slower-paced Introduction to systolic geometry.
|
https://en.wikipedia.org/wiki/Systolic_geometry
|
In mathematics, systolic inequalities for curves on surfaces were first studied by Charles Loewner in 1949 (unpublished; see remark at end of P. M. Pu's paper in '52). Given a closed surface, its systole, denoted sys, is defined to be the least length of a loop that cannot be contracted to a point on the surface. The systolic area of a metric is defined to be the ratio area/sys². The systolic ratio SR is the reciprocal quantity sys²/area. See also Introduction to systolic geometry.
|
https://en.wikipedia.org/wiki/Systoles_of_surfaces
|
In mathematics, t-norms are a special kind of binary operations on the real unit interval [0, 1]. Various constructions of t-norms, either by explicit definition or by transformation from previously known functions, provide a plenitude of examples and classes of t-norms. This is important, e.g., for finding counter-examples or supplying t-norms with particular properties for use in engineering applications of fuzzy logic. The main ways of construction of t-norms include using generators, defining parametric classes of t-norms, rotations, or ordinal sums of t-norms. Relevant background can be found in the article on t-norms.
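Three standard t-norms, together with a brute-force axiom check on a coarse grid, give a concrete sketch of what a t-norm is (illustrative only; the function names and the grid resolution are assumptions, not from the article):

```python
# Three classical t-norms on [0, 1] and a grid check of the t-norm axioms:
# commutativity, associativity, monotonicity, and 1 as identity element.

def t_min(a, b):            # Gödel (minimum) t-norm
    return min(a, b)

def t_prod(a, b):           # product t-norm
    return a * b

def t_luk(a, b):            # Łukasiewicz t-norm
    return max(0.0, a + b - 1.0)

def is_tnorm(t, steps=5):
    grid = [i / steps for i in range(steps + 1)]
    for a in grid:
        if abs(t(a, 1.0) - a) > 1e-12:          # 1 is the identity
            return False
        for b in grid:
            if abs(t(a, b) - t(b, a)) > 1e-12:  # commutativity
                return False
            for c in grid:
                if abs(t(a, t(b, c)) - t(t(a, b), c)) > 1e-12:  # associativity
                    return False
                if b <= c and t(a, b) > t(a, c) + 1e-12:        # monotonicity
                    return False
    return True
```

The grid check is only a necessary condition, of course, but it quickly rejects non-t-norms such as the arithmetic mean, which fails the identity axiom.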
|
https://en.wikipedia.org/wiki/Construction_of_t-norms
|
In mathematics, tables of trigonometric functions are useful in a number of areas. Before the existence of pocket calculators, trigonometric tables were essential for navigation, science and engineering. The calculation of mathematical tables was an important area of study, which led to the development of the first mechanical computing devices. Modern computers and pocket calculators now generate trigonometric function values on demand, using special libraries of mathematical code.
|
https://en.wikipedia.org/wiki/Trigonometric_tables
|
Often, these libraries use pre-calculated tables internally, and compute the required value by using an appropriate interpolation method. Interpolation of simple look-up tables of trigonometric functions is still used in computer graphics, where only modest accuracy may be required and speed is often paramount. Another important application of trigonometric tables and generation schemes is for fast Fourier transform (FFT) algorithms, where the same trigonometric function values (called twiddle factors) must be evaluated many times in a given transform, especially in the common case where many transforms of the same size are computed.
|
https://en.wikipedia.org/wiki/Trigonometric_tables
|
In this case, calling generic library routines every time is unacceptably slow. One option is to call the library routines once, to build up a table of those trigonometric values that will be needed, but this requires significant memory to store the table. The other possibility, since a regular sequence of values is required, is to use a recurrence formula to compute the trigonometric values on the fly. Significant research has been devoted to finding accurate, stable recurrence schemes in order to preserve the accuracy of the FFT (which is very sensitive to trigonometric errors).
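One common recurrence scheme (a sketch of the general idea, not necessarily the scheme any particular FFT library uses) generates the twiddle factors cos(2πk/n), sin(2πk/n) from the angle-addition formulas, writing the increment as α = 2 sin²(θ/2) rather than 1 − cos θ to reduce cancellation error for small θ:

```python
import math

def twiddles(n):
    # Generate (cos(2*pi*k/n), sin(2*pi*k/n)) for k = 0..n-1 on the fly.
    # c' = c - (alpha*c + beta*s) = c*cos(theta) - s*sin(theta)
    # s' = s - (alpha*s - beta*c) = s*cos(theta) + c*sin(theta)
    theta = 2.0 * math.pi / n
    alpha = 2.0 * math.sin(theta / 2.0) ** 2   # equals 1 - cos(theta)
    beta = math.sin(theta)
    c, s = 1.0, 0.0
    out = []
    for _ in range(n):
        out.append((c, s))
        dc = alpha * c + beta * s
        ds = alpha * s - beta * c
        c, s = c - dc, s - ds
    return out
```

Only two library trig calls are needed to seed the whole table; for long sequences the error still grows slowly with k, which is why research on stable recurrences matters.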
|
https://en.wikipedia.org/wiki/Trigonometric_tables
|
In mathematics, taking the nth root is an operation involving two numbers, the radicand and the index or degree. Taking the nth root is written as x n {\displaystyle {\sqrt[n]{x}}} , where x is the radicand and n is the index (also sometimes called the degree). This is pronounced as "the nth root of x". An nth root of a number x is then defined as a number r (the root) which, when raised to the power of the positive integer n, yields x: r n = x .
|
https://en.wikipedia.org/wiki/Seventh_root
|
{\displaystyle r^{n}=x.} A root of degree 2 is called a square root (usually written without the n as just x {\displaystyle {\sqrt {x}}} ) and a root of degree 3, a cube root (written x 3 {\displaystyle {\sqrt[3]{x}}} ). Roots of higher degree are referred to by using ordinal numbers, as in fourth root, twentieth root, etc. The computation of an nth root is a root extraction.
|
https://en.wikipedia.org/wiki/Seventh_root
|
For example, 3 is a square root of 9, since 3² = 9, and −3 is also a square root of 9, since (−3)² = 9. Any non-zero number considered as a complex number has n different complex nth roots, including the real ones (at most two). The nth root of 0 is zero for all positive integers n, since 0ⁿ = 0.
|
https://en.wikipedia.org/wiki/Seventh_root
|
In particular, if n is even and x is a positive real number, one of its nth roots is real and positive, one is negative, and the others (when n > 2) are non-real complex numbers; if n is even and x is a negative real number, none of the nth roots is real. If n is odd and x is real, one nth root is real and has the same sign as x, while the other (n – 1) roots are not real. Finally, if x is not real, then none of its nth roots are real.
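These counts can be verified by enumerating all n complex nth roots directly (an illustrative sketch; the function names and the tolerance for "real" are assumptions): the roots of a nonzero x are evenly spaced on a circle of radius |x|^(1/n).

```python
import cmath
import math

def nth_roots(x, n):
    # All n complex nth roots of a nonzero number x: same modulus
    # |x|**(1/n), arguments (arg(x) + 2*pi*k)/n for k = 0..n-1.
    r = abs(x) ** (1.0 / n)
    phi = cmath.phase(complex(x))
    return [cmath.rect(r, (phi + 2.0 * math.pi * k) / n) for k in range(n)]

def real_count(roots, tol=1e-9):
    # Number of (numerically) real roots in the list.
    return sum(1 for z in roots if abs(z.imag) < tol)
```

Counting real roots for 16, −16 and −8 reproduces the three cases described above: two, zero and one real root respectively.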
|
https://en.wikipedia.org/wiki/Seventh_root
|
Roots of real numbers are usually written using the radical symbol or radix {\displaystyle {\sqrt {~~}}} , with x {\displaystyle {\sqrt {x}}} denoting the positive square root of x if x is positive; for higher roots, x n {\displaystyle {\sqrt[n]{x}}} denotes the real nth root if n is odd, and the positive nth root if n is even and x is positive. In the other cases, the symbol is not commonly used, as it would be ambiguous.
|
https://en.wikipedia.org/wiki/Seventh_root
|
When complex nth roots are considered, it is often useful to choose one of the roots, called principal root, as a principal value. The common choice is to choose the principal nth root of x as the nth root with the greatest real part, and when there are two (for x real and negative), the one with a positive imaginary part. This makes the nth root a function that is real and positive for x real and positive, and is continuous in the whole complex plane, except for values of x that are real and negative.
|
https://en.wikipedia.org/wiki/Seventh_root
|
A difficulty with this choice is that, for a negative real number and an odd index, the principal nth root is not the real one. For example, − 8 {\displaystyle -8} has three cube roots, − 2 {\displaystyle -2} , 1 + i 3 {\displaystyle 1+i{\sqrt {3}}} and 1 − i 3 . {\displaystyle 1-i{\sqrt {3}}.}
|
https://en.wikipedia.org/wiki/Seventh_root
|
The real cube root is − 2 {\displaystyle -2} and the principal cube root is 1 + i 3 . {\displaystyle 1+i{\sqrt {3}}.} An unresolved root, especially one using the radical symbol, is sometimes referred to as a surd or a radical.
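The distinction shows up directly in floating-point arithmetic (a sketch; the helper name is an assumption): in Python, raising a negative base to a fractional power yields the principal complex root, so the real odd root needs explicit sign handling.

```python
def real_odd_root(x, n):
    # Real nth root for odd n. Python's ** on a negative base with a
    # fractional exponent returns the principal complex root instead,
    # so we take the root of |x| and restore the sign.
    if n % 2 == 0 or n < 1:
        raise ValueError("n must be a positive odd integer")
    return -((-x) ** (1.0 / n)) if x < 0 else x ** (1.0 / n)

principal = (-8) ** (1.0 / 3.0)   # principal cube root, approx 1 + i*sqrt(3)
real_root = real_odd_root(-8, 3)  # the real cube root, -2.0
```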
|
https://en.wikipedia.org/wiki/Seventh_root
|
Any expression containing a radical, whether it is a square root, a cube root, or a higher root, is called a radical expression, and if it contains no transcendental functions or transcendental numbers it is called an algebraic expression. Taking the positive root of a number is the inverse operation of exponentiation with positive integer exponents.
|
https://en.wikipedia.org/wiki/Seventh_root
|
Roots can also be defined as special cases of exponentiation, where the exponent is a fraction: x n = x 1 / n . {\displaystyle {\sqrt[n]{x}}=x^{1/n}.} Roots are used for determining the radius of convergence of a power series with the root test. The nth roots of 1 are called roots of unity and play a fundamental role in various areas of mathematics, such as number theory, theory of equations, and Fourier transform.
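The nth roots of unity are easy to generate explicitly (illustrative sketch; the function name is an assumption): they are e^(2πik/n) for k = 0, …, n−1, and for n > 1 they sum to zero, a fact underlying the orthogonality relations used in the Fourier transform.

```python
import cmath

def roots_of_unity(n):
    # The n complex nth roots of 1: e^(2*pi*i*k/n) for k = 0..n-1.
    return [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]
```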
|
https://en.wikipedia.org/wiki/Seventh_root
|
In mathematics, tautness is a rigidity property of foliations. A taut foliation is a codimension 1 foliation of a closed manifold with the property that every leaf meets a transverse circle. By transverse circle is meant a closed loop that is always transverse to the tangent field of the foliation. If the foliated manifold has non-empty tangential boundary, then a codimension 1 foliation is taut if every leaf meets a transverse circle or a transverse arc with endpoints on the tangential boundary.
|
https://en.wikipedia.org/wiki/Rummler–Sullivan_theorem
|
Equivalently, by a result of Dennis Sullivan, a codimension 1 foliation is taut if there exists a Riemannian metric that makes each leaf a minimal surface. Furthermore, for compact manifolds the existence, for every leaf L {\displaystyle L} , of a transverse circle meeting L {\displaystyle L} , implies the existence of a single transverse circle meeting every leaf. Taut foliations were brought to prominence by the work of William Thurston and David Gabai.
|
https://en.wikipedia.org/wiki/Rummler–Sullivan_theorem
|
In mathematics, tensor calculus, tensor analysis, or Ricci calculus is an extension of vector calculus to tensor fields (tensors that may vary over a manifold, e.g. in spacetime). Developed by Gregorio Ricci-Curbastro and his student Tullio Levi-Civita, it was used by Albert Einstein to develop his general theory of relativity. Unlike the infinitesimal calculus, tensor calculus allows presentation of physics equations in a form that is independent of the choice of coordinates on the manifold. Tensor calculus has many applications in physics, engineering and computer science including elasticity, continuum mechanics, electromagnetism (see mathematical descriptions of the electromagnetic field), general relativity (see mathematics of general relativity), quantum field theory, and machine learning.
|
https://en.wikipedia.org/wiki/Tensor_calculus
|
Working with Élie Cartan, a main proponent of the exterior calculus, the influential geometer Shiing-Shen Chern summarizes the role of tensor calculus: In our subject of differential geometry, where you talk about manifolds, one difficulty is that the geometry is described by coordinates, but the coordinates do not have meaning. They are allowed to undergo transformation. And in order to handle this kind of situation, an important tool is the so-called tensor analysis, or Ricci calculus, which was new to mathematicians.
|
https://en.wikipedia.org/wiki/Tensor_calculus
|
In mathematics you have a function, you write down the function, you calculate, or you add, or you multiply, or you can differentiate. You have something very concrete. In geometry the geometric situation is described by numbers, but you can change your numbers arbitrarily. So to handle this, you need the Ricci calculus.
|
https://en.wikipedia.org/wiki/Tensor_calculus
|
In mathematics, tetration (or hyper-4) is an operation based on iterated, or repeated, exponentiation. There is no standard notation for tetration, though ↑↑ {\displaystyle \uparrow \uparrow } and the left-exponent, as in n a {\displaystyle {^{n}a}} , are common. Under the definition as repeated exponentiation, n a {\displaystyle {^{n}a}} means a a ⋅ ⋅ a {\displaystyle {a^{a^{\cdot ^{\cdot ^{a}}}}}} , where n copies of a are iterated via exponentiation, right-to-left, i.e. the application of exponentiation n − 1 {\displaystyle n-1} times. n is called the "height" of the function, while a is called the "base", analogous to exponentiation.
|
https://en.wikipedia.org/wiki/Iterated_exponentiation
|
It would be read as "the nth tetration of a". It is the next hyperoperation after exponentiation, but before pentation. The word was coined by Reuben Louis Goodstein from tetra- (four) and iteration.
|
https://en.wikipedia.org/wiki/Iterated_exponentiation
|
Tetration is also defined recursively as a ↑↑ n := { 1 if n = 0 , a a ↑↑ ( n − 1 ) if n > 0 , {\displaystyle {a\uparrow \uparrow n}:={\begin{cases}1&{\text{if }}n=0,\\a^{a\uparrow \uparrow (n-1)}&{\text{if }}n>0,\end{cases}}} allowing for attempts to extend tetration to non-natural numbers such as real and complex numbers. The two inverses of tetration are called super-root and super-logarithm, analogous to the nth root and the logarithmic functions. None of the three functions are elementary. Tetration is used for the notation of very large numbers.
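The recursive definition translates directly into code (a sketch for natural-number heights only; the function name is an assumption):

```python
def tetration(a, n):
    # Direct transcription of the recursive definition:
    # a↑↑0 = 1, and a↑↑n = a ** (a↑↑(n-1)) for n > 0.
    if n == 0:
        return 1
    return a ** tetration(a, n - 1)
```

For example, tetration(2, 3) = 2^(2^2) = 16 and tetration(2, 4) = 2^16 = 65536; the next value, 2↑↑5 = 2^65536, already has nearly twenty thousand decimal digits, which is why tetration is used for notating very large numbers.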
|
https://en.wikipedia.org/wiki/Iterated_exponentiation
|
In mathematics, the "happy ending problem" (so named by Paul Erdős because it led to the marriage of George Szekeres and Esther Klein) is the following statement: any set of five points in the plane in general position has a subset of four points that form the vertices of a convex quadrilateral. This was one of the original results that led to the development of Ramsey theory. The happy ending theorem can be proven by a simple case analysis: if four or more points are vertices of the convex hull, any four such points can be chosen. If, on the other hand, the convex hull has the form of a triangle with two points inside it, the two inner points and one of the triangle sides can be chosen.
|
https://en.wikipedia.org/wiki/Erdős–Szekeres_conjecture
|
See Peterson (2000) for an illustrated explanation of this proof, and Morris & Soltan (2000) for a more detailed survey of the problem. The Erdős–Szekeres conjecture states precisely a more general relationship between the number of points in a general-position point set and its largest subset forming a convex polygon, namely that the smallest number of points for which any general position arrangement contains a convex subset of n {\displaystyle n} points is 2 n − 2 + 1 {\displaystyle 2^{n-2}+1} . It remains unproven, but less precise bounds are known.
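The theorem can be checked computationally with a brute-force search (an illustrative sketch, not the proof; the function names and the orientation-based convexity test are assumptions): four points in general position form a convex quadrilateral exactly when none of them lies inside the triangle spanned by the other three.

```python
from itertools import combinations

def cross(o, a, b):
    # z-component of the cross product of vectors o->a and o->b.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_triangle(p, a, b, c):
    # Strict interior test via consistent orientation signs;
    # assumes the points are in general position (no three collinear).
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return (d1 > 0) == (d2 > 0) == (d3 > 0)

def convex_quadrilateral(points):
    # Search all 4-subsets; a subset is convex iff no point of it lies
    # inside the triangle of the other three. The happy ending theorem
    # guarantees a hit for any 5 points in general position.
    for quad in combinations(points, 4):
        if not any(in_triangle(p, *[q for q in quad if q != p]) for p in quad):
            return quad
    return None
```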
|
https://en.wikipedia.org/wiki/Erdős–Szekeres_conjecture
|
In mathematics, the "strong law of small numbers" is the humorous law that proclaims, in the words of Richard K. Guy (1988): There aren't enough small numbers to meet the many demands made of them. In other words, any given small number appears in far more contexts than may seem reasonable, leading to many apparently surprising coincidences in mathematics, simply because small numbers appear so often and yet are so few. Earlier (1980) this "law" was reported by Martin Gardner. Guy's subsequent 1988 paper of the same title gives numerous examples in support of this thesis. (This paper earned him the MAA Lester R. Ford Award.)
|
https://en.wikipedia.org/wiki/Strong_law_of_small_numbers
|
In mathematics, the 'extension' of a mathematical concept C {\displaystyle C} is the set that is specified by C {\displaystyle C} . (That set might be empty, currently.) For example, the extension of a function is a set of ordered pairs that pair up the arguments and values of the function; in other words, the function's graph. The extension of an object in abstract algebra, such as a group, is the underlying set of the object.
|
https://en.wikipedia.org/wiki/Extension_(semantics)
|
The extension of a set is the set itself. That a set can capture the notion of the extension of anything is the idea behind the axiom of extensionality in axiomatic set theory. This kind of extension is used so constantly in contemporary mathematics based on set theory that it can be called an implicit assumption. A typical effort in mathematics evolves out of an observed mathematical object requiring description, the challenge being to find a characterization for which the object becomes the extension.
|
https://en.wikipedia.org/wiki/Extension_(semantics)
|
In mathematics, the (exponential) shift theorem is a theorem about polynomial differential operators (D-operators) and exponential functions. It permits one to eliminate, in certain cases, the exponential from under the D-operators.
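A quick numerical check illustrates the statement p(D)[e^{ax} f(x)] = e^{ax} p(D + a) f(x) (a sketch only; the choices p(D) = D², f(x) = x³, a = 2, and the finite-difference step are arbitrary assumptions):

```python
import math

def second_derivative(g, x, h=1e-4):
    # Central-difference approximation to g''(x).
    return (g(x + h) - 2.0 * g(x) + g(x - h)) / (h * h)

a = 2.0
x0 = 1.0

def f(x):
    return x ** 3

# Left side: apply D^2 directly to e^(a x) * f(x), numerically.
lhs = second_derivative(lambda x: math.exp(a * x) * f(x), x0)

# Right side: the exponential pulled out, with the operator shifted:
# e^(a x) * (D + a)^2 f = e^(a x) * (f'' + 2 a f' + a^2 f),
# where f'' = 6x and f' = 3x^2 for f = x^3.
rhs = math.exp(a * x0) * (6 * x0 + 2 * a * 3 * x0 ** 2 + a * a * x0 ** 3)
```

The two sides agree to within the finite-difference error, showing how the shift theorem moves the exponential out from under the D-operators.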
|
https://en.wikipedia.org/wiki/Shift_theorem
|
In mathematics, the (field) norm is a particular mapping defined in field theory, which maps elements of a larger field into a subfield.
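For the quadratic case this is easy to make concrete (a sketch, not from the article; the pair representation and function names are assumptions): in Q(√d), the norm of a + b√d is a² − db², the product of the element with its conjugate, and it is multiplicative.

```python
def norm(x, d):
    # Field norm from Q(sqrt(d)) down to Q: for x = a + b*sqrt(d),
    # N(x) = (a + b*sqrt(d)) * (a - b*sqrt(d)) = a^2 - d*b^2.
    a, b = x
    return a * a - d * b * b

def mul(x, y, d):
    # Multiplication in Q(sqrt(d)); elements stored as pairs (a, b).
    a1, b1 = x
    a2, b2 = y
    return (a1 * a2 + d * b1 * b2, a1 * b2 + a2 * b1)
```

The defining property N(xy) = N(x)N(y) can then be spot-checked on arbitrary elements.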
|
https://en.wikipedia.org/wiki/Relative_norm
|
In mathematics, the (linear) Peetre theorem, named after Jaak Peetre, is a result of functional analysis that gives a characterisation of differential operators in terms of their effect on generalized function spaces, and without mentioning differentiation in explicit terms. The Peetre theorem is an example of a finite order theorem in which a function or a functor, defined in a very general way, can in fact be shown to be a polynomial because of some extraneous condition or symmetry imposed upon it. This article treats two forms of the Peetre theorem. The first is the original version which, although quite useful in its own right, is actually too general for most applications.
|
https://en.wikipedia.org/wiki/Peetre_theorem
|
In mathematics, the (right) Ziegler spectrum of a ring R is a topological space whose points are (isomorphism classes of) indecomposable pure-injective right R-modules. Its closed subsets correspond to theories of modules closed under arbitrary products and direct summands. Ziegler spectra are named after Martin Ziegler, who first defined and studied them in 1984.
|
https://en.wikipedia.org/wiki/Ziegler_spectrum
|
In mathematics, the (signed and unsigned) Lah numbers are coefficients expressing rising factorials in terms of falling factorials and vice versa. They were discovered by Ivo Lah in 1954. Explicitly, the unsigned Lah numbers L ( n , k ) {\displaystyle L(n,k)} are given by the formula L ( n , k ) = ( n − 1 k − 1 ) n ! k ! {\displaystyle L(n,k)={\binom {n-1}{k-1}}{\frac {n!}{k!}}} involving the binomial coefficient, for n ≥ k ≥ 1 {\displaystyle n\geq k\geq 1} . Unsigned Lah numbers have an interesting meaning in combinatorics: they count the number of ways a set of n {\textstyle n} elements can be partitioned into k {\textstyle k} nonempty linearly ordered subsets.
|
https://en.wikipedia.org/wiki/Lah_number
|
Lah numbers are related to Stirling numbers. For n ≥ 1 {\textstyle n\geq 1} , the Lah number L ( n , 1 ) {\textstyle L(n,1)} is equal to the factorial n ! {\textstyle n!}
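The closed form C(n−1, k−1)·n!/k! is straightforward to compute (a sketch; the function name is an assumption), and the small values L(3,1) = 6, L(3,2) = 6, L(3,3) = 1 discussed here, together with the standard recurrence L(n+1, k) = (n+k)L(n, k) + L(n, k−1), give handy sanity checks:

```python
from math import comb, factorial

def lah(n, k):
    # Unsigned Lah number via the closed form C(n-1, k-1) * n! / k!.
    # The division is exact, so integer arithmetic can be used throughout.
    return comb(n - 1, k - 1) * factorial(n) // factorial(k)
```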
|
https://en.wikipedia.org/wiki/Lah_number
|
In the interpretation above: the only partition of { 1 , 2 , 3 } {\textstyle \{1,2,3\}} into 1 set can have its set ordered in 6 ways, so L ( 3 , 1 ) = 6 {\textstyle L(3,1)=6} . L ( 3 , 2 ) {\textstyle L(3,2)} is equal to 6, because there are six partitions of { 1 , 2 , 3 } {\textstyle \{1,2,3\}} into two ordered parts. L ( n , n ) {\textstyle L(n,n)} is always 1 because the only way to partition { 1 , 2 , … , n } {\textstyle \{1,2,\ldots ,n\}} into n {\displaystyle n} non-empty subsets results in subsets of size 1, each of which can be ordered in only one way. In the more recent literature, Karamata–Knuth style notation has taken over, and Lah numbers are now often written with a bracket notation analogous to that for Stirling numbers.
|
https://en.wikipedia.org/wiki/Lah_number
|
In mathematics, the 15 theorem or Conway–Schneeberger Fifteen Theorem, proved by John H. Conway and W. A. Schneeberger in 1993, states that if a positive definite quadratic form with integer matrix represents all positive integers up to 15, then it represents all positive integers. The proof was complicated, and was never published. Manjul Bhargava found a much simpler proof which was published in 2000. Bhargava used the occasion of his receiving the 2005 SASTRA Ramanujan Prize to announce that he and Jonathan P. Hanke had cracked Conway's conjecture that a similar theorem holds for integral quadratic forms, with the constant 15 replaced by 290. The proof has since appeared in preprint form.
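The hypothesis of the theorem can be verified by brute force for a given diagonal form (an illustrative sketch; the function names and the finite search bound are assumptions, and the bound only needs to exceed √15 for targets up to 15):

```python
from itertools import product

def represents(diag, target, bound=4):
    # Brute-force check whether the diagonal form sum(c_i * x_i^2)
    # represents 'target' with all |x_i| <= bound.
    rng = range(-bound, bound + 1)
    return any(sum(c * x * x for c, x in zip(diag, xs)) == target
               for xs in product(rng, repeat=len(diag)))

four_squares = [1, 1, 1, 1]   # x^2 + y^2 + z^2 + w^2, an integer-matrix form
hits_1_to_15 = all(represents(four_squares, m) for m in range(1, 16))
```

Since the sum of four squares represents every integer from 1 to 15, the 15 theorem implies it represents all positive integers, consistent with Lagrange's four-square theorem; by contrast, three squares already miss 7.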
|
https://en.wikipedia.org/wiki/15_and_290_theorems
|
In mathematics, the 2π theorem of Gromov and Thurston states a sufficient condition for Dehn filling on a cusped hyperbolic 3-manifold to result in a negatively curved 3-manifold. Let M be a cusped hyperbolic 3-manifold. Disjoint horoball neighborhoods of each cusp can be selected. The boundaries of these neighborhoods are quotients of horospheres and thus have Euclidean metrics.
|
https://en.wikipedia.org/wiki/2π_theorem
|
A slope, i.e. unoriented isotopy class of simple closed curves on these boundaries, thus has a well-defined length by taking the minimal Euclidean length over all curves in the isotopy class. The 2π theorem states: a Dehn filling of M with each filling slope greater than 2π results in a 3-manifold with a complete metric of negative sectional curvature. In fact, this metric can be selected to be identical to the original hyperbolic metric outside the horoball neighborhoods.
|
https://en.wikipedia.org/wiki/2π_theorem
|
The basic idea of the proof is to explicitly construct a negatively curved metric inside each horoball neighborhood that matches the metric near the horospherical boundary. This construction, using cylindrical coordinates, works when the filling slope is greater than 2π. See Bleiler & Hodgson (1996) for complete details.
|
https://en.wikipedia.org/wiki/2π_theorem
|
According to the geometrization conjecture, these negatively curved 3-manifolds must actually admit a complete hyperbolic metric. A horoball packing argument due to Thurston shows that there are at most 48 slopes to avoid on each cusp to get a hyperbolic 3-manifold. For one-cusped hyperbolic 3-manifolds, an improvement due to Colin Adams gives 24 exceptional slopes.
|
https://en.wikipedia.org/wiki/2π_theorem
|
This result was later improved independently by Ian Agol (2000) and Marc Lackenby (2000) with the 6 theorem. The "6 theorem" states that Dehn filling along slopes of length greater than 6 results in a hyperbolike 3-manifold, i.e. an irreducible, atoroidal, non-Seifert-fibered 3-manifold with infinite word hyperbolic fundamental group. Yet again assuming the geometrization conjecture, these manifolds have a complete hyperbolic metric. An argument of Agol's shows that there are at most 12 exceptional slopes.
|
https://en.wikipedia.org/wiki/2π_theorem
|