| text | source |
|---|---|
In mathematics, a skeleton of a category is a subcategory that, roughly speaking, does not contain any extraneous isomorphisms. In a certain sense, the skeleton of a category is the "smallest" equivalent category, which captures all "categorical properties" of the original. In fact, two categories are equivalent if and only if they have isomorphic skeletons. A category is called skeletal if isomorphic objects are necessarily identical.
|
https://en.wikipedia.org/wiki/Skeleton_(category_theory)
|
In mathematics, a skew gradient of a harmonic function over a simply connected domain with two real dimensions is a vector field that is everywhere orthogonal to the gradient of the function and that has the same magnitude as the gradient.
|
https://en.wikipedia.org/wiki/Skew_gradient
|
In mathematics, a slender group is a torsion-free abelian group that is "small" in a sense that is made precise in the definition below.
|
https://en.wikipedia.org/wiki/Slender_group
|
In mathematics, a smooth algebraic curve C in the complex projective plane, of degree d, has genus given by the genus–degree formula g = (d − 1)(d − 2)/2. The Thom conjecture, named after French mathematician René Thom, states that if Σ is any smoothly embedded connected curve representing the same class in homology as C, then the genus g of Σ satisfies the inequality g ≥ (d − 1)(d − 2)/2. In particular, C is known as a genus-minimizing representative of its homology class. It was first proved by Peter Kronheimer and Tomasz Mrowka in October 1994, using the then-new Seiberg–Witten invariants. Assuming that Σ has nonnegative self-intersection number, this was generalized to Kähler manifolds (an example being the complex projective plane) by John Morgan, Zoltán Szabó, and Clifford Taubes, also using the Seiberg–Witten invariants.
|
https://en.wikipedia.org/wiki/Thom_conjecture
|
There is at least one generalization of this conjecture, known as the symplectic Thom conjecture (which is now a theorem, as proved for example by Peter Ozsváth and Szabó in 2000). It states that a symplectic surface of a symplectic 4-manifold is genus minimizing within its homology class. This would imply the previous result because algebraic curves (complex dimension 1, real dimension 2) are symplectic surfaces within the complex projective plane, which is a symplectic 4-manifold.
|
https://en.wikipedia.org/wiki/Thom_conjecture
|
In mathematics, a smooth compact manifold M is called almost flat if for any ε > 0 there is a Riemannian metric g_ε on M such that diam(M, g_ε) ≤ 1 and g_ε is ε-flat, i.e. its sectional curvature satisfies |K_{g_ε}| < ε. Given n, there is a positive number ε_n > 0 such that if an n-dimensional manifold admits an ε_n-flat metric with diameter ≤ 1, then it is almost flat. On the other hand, one can fix the bound on sectional curvature and let the diameter go to zero, so an almost-flat manifold is a special case of a collapsing manifold, one which collapses along all directions. According to the Gromov–Ruh theorem, M is almost flat if and only if it is infranil. In particular, it is a finite quotient of a nilmanifold, which is the total space of a principal torus bundle over a principal torus bundle over a torus.
|
https://en.wikipedia.org/wiki/Gromov-Ruh_theorem
|
In mathematics, a smooth maximum of an indexed family x₁, ..., xₙ of numbers is a smooth approximation to the maximum function max(x₁, ..., xₙ), meaning a parametric family of functions m_α(x₁, ..., xₙ) such that for every α, the function m_α is smooth, and the family converges to the maximum function, m_α → max, as α → ∞. The concept of smooth minimum is similarly defined. In many cases, a single family approximates both: maximum as the parameter goes to positive infinity, minimum as the parameter goes to negative infinity; in symbols, m_α → max as α → ∞ and m_α → min as α → −∞. The term can also be used loosely for a specific smooth function that behaves similarly to a maximum, without necessarily being part of a parametrized family.
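One widely used family of this kind is the LogSumExp smooth maximum, m_α(x₁, ..., xₙ) = (1/α) log Σᵢ exp(α xᵢ), which for α > 0 differs from the true maximum by at most (log n)/α. The Python sketch below (with illustrative values) shows the same family approaching the maximum as α → +∞ and the minimum as α → −∞.

```python
import math

def smooth_max(xs, alpha):
    # LogSumExp family: m_alpha(x) = (1/alpha) * log(sum_i exp(alpha * x_i)).
    # Converges to max as alpha -> +inf and to min as alpha -> -inf.
    shift = max(xs) if alpha > 0 else min(xs)  # shift for numerical stability
    s = sum(math.exp(alpha * (x - shift)) for x in xs)
    return shift + math.log(s) / alpha

xs = [1.0, 3.0, 2.0]
print(smooth_max(xs, 100.0))   # close to max(xs) = 3.0
print(smooth_max(xs, -100.0))  # close to min(xs) = 1.0
```

The shift by max(xs) (or min(xs)) avoids overflow in exp for large |α| without changing the result.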
|
https://en.wikipedia.org/wiki/Smooth_maximum
|
In mathematics, a smooth structure on a manifold allows for an unambiguous notion of smooth function. In particular, a smooth structure allows one to perform mathematical analysis on the manifold.
|
https://en.wikipedia.org/wiki/Smooth_structure
|
In mathematics, a sober space is a topological space X such that every (nonempty) irreducible closed subset of X is the closure of exactly one point of X: that is, every irreducible closed subset has a unique generic point.
|
https://en.wikipedia.org/wiki/Sober_space
|
In mathematics, a sofic group is a group whose Cayley graph is an initially subamenable graph, or equivalently a subgroup of an ultraproduct of finite-rank symmetric groups in which every two distinct elements of the group are at distance 1. They were introduced by Gromov (1999) as a common generalization of amenable and residually finite groups. The name "sofic", from the Hebrew word סופי meaning "finite", was later applied by Weiss (2000), following Weiss's earlier use of the same word to indicate a generalization of finiteness in sofic subshifts. The class of sofic groups is closed under the operations of taking subgroups, extensions by amenable groups, and free products.
|
https://en.wikipedia.org/wiki/Sofic_group
|
A finitely generated group is sofic if it is the limit (in the space of marked groups) of a sequence of sofic groups. The limit of a sequence of amenable groups (that is, an initially subamenable group) is necessarily sofic, but there exist sofic groups that are not initially subamenable. As Gromov proved, sofic groups are surjunctive. That is, they obey a form of the Garden of Eden theorem for cellular automata defined over the group (dynamical systems whose states are mappings from the group to a finite set and whose state transitions are translation-invariant and continuous), stating that every injective automaton is surjective and therefore also reversible.
|
https://en.wikipedia.org/wiki/Sofic_group
|
In mathematics, a solid Klein bottle is a three-dimensional topological space (a 3-manifold) whose boundary is the Klein bottle. It is homeomorphic to the quotient space obtained by gluing the top disk of a cylinder D² × I to the bottom disk by a reflection across a diameter of the disk. Alternatively, one can visualize the solid Klein bottle as the trivial product Mö × I of the Möbius strip Mö and the interval I = [0, 1]. In this model one can see that the core central curve at 1/2 has a regular neighborhood which is again a trivial Cartesian product, Mö × [1/2 − ε, 1/2 + ε], and whose boundary is a Klein bottle.
|
https://en.wikipedia.org/wiki/Solid_Klein_bottle
|
In mathematics, a solid torus is the topological space formed by sweeping a disk around a circle. It is homeomorphic to the Cartesian product S¹ × D² of the circle and the disk, endowed with the product topology. A standard way to visualize a solid torus is as a toroid, embedded in 3-space.
|
https://en.wikipedia.org/wiki/Solid_tori
|
However, it should be distinguished from a torus, which has the same visual appearance: the torus is the two-dimensional space on the boundary of a toroid, while the solid torus also includes the compact interior space enclosed by the torus. A solid torus is a torus plus the volume inside the torus. Real-world objects that approximate a solid torus include O-rings, non-inflatable lifebuoys, ring doughnuts, and bagels.
|
https://en.wikipedia.org/wiki/Solid_tori
|
In mathematics, a solution set is the set of values that satisfy a given set of equations or inequalities. For example, for a set {f_i}_{i∈I} of polynomials over a ring R, the solution set is the subset of R on which the polynomials all vanish (evaluate to 0); formally, {x ∈ R : f_i(x) = 0 for all i ∈ I}. The feasible region of a constrained optimization problem is the solution set of the constraints.
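For a finite ring, the solution set can be computed by direct enumeration; the following Python sketch (the function name and the choice of Z/6Z are illustrative) does exactly that.

```python
def solution_set(polys, ring):
    # Brute-force {x in R : f_i(x) = 0 for all i} over a finite ring,
    # here Z/6Z represented as range(6).
    return {x for x in ring if all(f(x) == 0 for f in polys)}

# Solutions of x^2 - x = 0 in Z/6Z: the idempotent elements.
sols = solution_set([lambda x: (x * x - x) % 6], range(6))
print(sorted(sols))  # [0, 1, 3, 4]
```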
|
https://en.wikipedia.org/wiki/Solution_set
|
In mathematics, a solvmanifold is a homogeneous space of a connected solvable Lie group. It may also be characterized as a quotient of a connected solvable Lie group by a closed subgroup. (Some authors also require that the Lie group be simply-connected, or that the quotient be compact.) A special class of solvmanifolds, nilmanifolds, was introduced by Anatoly Maltsev, who proved the first structural theorems. Properties of general solvmanifolds are similar, but somewhat more complicated.
|
https://en.wikipedia.org/wiki/Sol_manifold
|
In mathematics, a source for the representation theory of the group of diffeomorphisms of a smooth manifold M is the initial observation that (for M connected) that group acts transitively on M.
|
https://en.wikipedia.org/wiki/Representation_theory_of_the_diffeomorphism_group
|
In mathematics, a space form is a complete Riemannian manifold M of constant sectional curvature K. The three most fundamental examples are Euclidean n-space, the n-dimensional sphere, and hyperbolic space, although a space form need not be simply connected.
|
https://en.wikipedia.org/wiki/Space_form
|
In mathematics, a space is a set (sometimes called a universe) with some added structure. While modern mathematics uses many types of spaces, such as Euclidean spaces, linear spaces, topological spaces, Hilbert spaces, or probability spaces, it does not define the notion of "space" itself. A space consists of selected mathematical objects that are treated as points, and selected relationships between these points. The nature of the points can vary widely: for example, the points can be elements of a set, functions on another space, or subspaces of another space.
|
https://en.wikipedia.org/wiki/Space_(mathematics)
|
It is the relationships that define the nature of the space. More precisely, isomorphic spaces are considered identical, where an isomorphism between two spaces is a one-to-one correspondence between their points that preserves the relationships. For example, the relationships between the points of a three-dimensional Euclidean space are uniquely determined by Euclid's axioms, and all three-dimensional Euclidean spaces are considered identical.
|
https://en.wikipedia.org/wiki/Space_(mathematics)
|
Topological notions such as continuity have natural definitions in every Euclidean space. However, topology does not distinguish straight lines from curved lines, and the relation between Euclidean and topological spaces is thus "forgetful".
|
https://en.wikipedia.org/wiki/Space_(mathematics)
|
Relations of this kind are treated in more detail in the Section "Types of spaces". It is not always clear whether a given mathematical object should be considered as a geometric "space", or an algebraic "structure". A general definition of "structure", proposed by Bourbaki, embraces all common types of spaces, provides a general definition of isomorphism, and justifies the transfer of properties between isomorphic structures.
|
https://en.wikipedia.org/wiki/Space_(mathematics)
|
In mathematics, a space of convolution quotients is a field of fractions of a convolution ring of functions: a convolution quotient is to the operation of convolution as a quotient of integers is to multiplication. The construction of convolution quotients allows easy algebraic representation of the Dirac delta function, integral operator, and differential operator without having to deal directly with integral transforms, which are often subject to technical difficulties with respect to whether they converge. Convolution quotients were introduced by Mikusiński (1949), and their theory is sometimes called Mikusiński's operational calculus.
|
https://en.wikipedia.org/wiki/Convolution_quotient
|
The kind of convolution (f, g) ↦ f ∗ g with which this theory is concerned is defined by (f ∗ g)(x) = ∫₀ˣ f(u) g(x − u) du. It follows from the Titchmarsh convolution theorem that if the convolution f ∗ g of two functions f, g that are continuous on [0, +∞) is equal to 0 everywhere on that interval, then at least one of f, g is 0 everywhere on that interval.
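A quick numerical sketch of this convolution in Python (a midpoint-rule discretization, chosen here purely for illustration): convolving the constant function 1 with itself yields (1 ∗ 1)(x) = x, which is why convolution by 1 behaves like the integration operator in Mikusiński's calculus.

```python
def conv(f, g, x, n=1000):
    # Midpoint-rule approximation of (f*g)(x) = integral_0^x f(u) g(x-u) du.
    h = x / n
    return h * sum(f((i + 0.5) * h) * g(x - (i + 0.5) * h) for i in range(n))

one = lambda u: 1.0
print(conv(one, one, 2.0))  # approximately 2.0, since (1*1)(x) = x
```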
|
https://en.wikipedia.org/wiki/Convolution_quotient
|
A consequence is that if f, g, h are continuous on [0, +∞) and h is not identically 0, then h ∗ f = h ∗ g only if f = g. This fact makes it possible to define convolution quotients by saying that, for two functions f, g, the pair (f, g) has the same convolution quotient as the pair (h ∗ f, h ∗ g).
|
https://en.wikipedia.org/wiki/Convolution_quotient
|
As with the construction of the rational numbers from the integers, the field of convolution quotients is a direct extension of the convolution ring from which it was built. Every "ordinary" function f in the original space embeds canonically into the space of convolution quotients as the (equivalence class of the) pair (f ∗ g, g), in the same way that ordinary integers embed canonically into the rational numbers. Non-function elements of the new space can be thought of as "operators", or generalized functions, whose algebraic action on functions is always well-defined even if they have no representation in "ordinary" function space. If we start with the convolution ring of positive half-line functions, the above construction is identical in behavior to the Laplace transform, and ordinary Laplace-space conversion charts can be used to map expressions involving non-function operators to ordinary functions (if they exist). Yet, as mentioned above, the algebraic approach to the construction of the space bypasses the need to explicitly define the transform or its inverse, sidestepping a number of technically challenging convergence problems with the "traditional" integral transform construction.
|
https://en.wikipedia.org/wiki/Convolution_quotient
|
In mathematics, a sparse polynomial (also lacunary polynomial or fewnomial) is a polynomial that has far fewer terms than its degree and number of variables would suggest. For example, x¹⁰ + 3x³ − 1 is a sparse polynomial, as it is a trinomial of degree 10. The motivation for studying sparse polynomials is to concentrate on the structure of a polynomial's monomials instead of its degree, as one can see, for instance, by comparing the Bernstein–Kushnirenko theorem with Bézout's theorem. Research on sparse polynomials has also included work on algorithms whose running time grows as a function of the number of terms rather than of the degree, for problems including polynomial multiplication, division, root-finding algorithms, and polynomial greatest common divisors.
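A sketch of the term-count-driven approach in Python (the exponent-to-coefficient dictionary representation is one illustrative choice, not a canonical one): multiplying two sparse polynomials touches only pairs of existing terms, never the full range of degrees.

```python
def sparse_mul(p, q):
    # Represent a sparse polynomial as an {exponent: coefficient} dict;
    # the running time depends on the number of terms, not the degree.
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] = r.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in r.items() if c != 0}

p = {10: 1, 3: 3, 0: -1}  # x^10 + 3x^3 - 1
q = {5: 2, 0: 1}          # 2x^5 + 1
print(sparse_mul(p, q))   # six terms, even though the product has degree 15
```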
|
https://en.wikipedia.org/wiki/Sparse_polynomial
|
Sparse polynomials have also been used in pure mathematics, especially in the study of Galois groups, because it has been easier to determine the Galois groups of certain families of sparse polynomials than it is for other polynomials. The algebraic varieties determined by sparse polynomials have a simple structure, which is also reflected in the structure of the solutions of certain related differential equations. Additionally, a sparse positivstellensatz exists for univariate sparse polynomials.
|
https://en.wikipedia.org/wiki/Sparse_polynomial
|
It states that the non-negativity of a polynomial can be certified by sum-of-squares (SOS) polynomials whose degree only depends on the number of monomials of the polynomial. Sparse polynomials often come up in sum or difference of powers equations. For instance, the sum-of-two-cubes factorization states that (a + b)(a² − ab + b²) = a³ + b³; here a³ + b³ is a sparse polynomial.
|
https://en.wikipedia.org/wiki/Sparse_polynomial
|
In mathematics, a sparsely totient number is a certain kind of natural number. A natural number n is sparsely totient if φ(m) > φ(n) for all m > n, where φ is Euler's totient function. The first few sparsely totient numbers are: 2, 6, 12, 18, 30, 42, 60, 66, 90, 120, 126, 150, 210, 240, 270, 330, 420, 462, 510, 630, 660, 690, 840, 870, 1050, 1260, 1320, 1470, 1680, 1890, 2310, 2730, 2940, 3150, 3570, 3990, 4620, 4830, 5460, 5610, 5670, 6090, 6930, 7140, 7350, 8190, 9240, 9660, 9870, ... (sequence A036913 in the OEIS). The concept was introduced by David Masser and Peter Man-Kit Shiu in 1986. As they showed, every primorial is sparsely totient.
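The definition can be checked computationally with a totient sieve. The Python sketch below tests φ(m) > φ(n) only up to a finite bound, so it only approximates the true "for all m > n" condition; the bound is kept well above the values reported to make false positives implausible.

```python
def totient_sieve(limit):
    # phi[n] = Euler's totient of n, computed by a standard sieve.
    phi = list(range(limit + 1))
    for p in range(2, limit + 1):
        if phi[p] == p:  # p is prime
            for k in range(p, limit + 1, p):
                phi[k] -= phi[k] // p
    return phi

def sparsely_totient(limit):
    # n is sparsely totient if phi(m) > phi(n) for every m > n; checking
    # m only up to `limit` is an approximation, so report n well below it.
    phi = totient_sieve(limit)
    return [n for n in range(2, limit // 4)
            if all(phi[m] > phi[n] for m in range(n + 1, limit + 1))]

print(sparsely_totient(2000))  # begins 2, 6, 12, 18, 30, 42, 60, ...
```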
|
https://en.wikipedia.org/wiki/Sparsely_totient_number
|
In mathematics, a spectral space is a topological space that is homeomorphic to the spectrum of a commutative ring. It is sometimes also called a coherent space because of the connection to coherent topoi.
|
https://en.wikipedia.org/wiki/Spectral_spaces
|
In mathematics, a spherical 3-manifold M is a 3-manifold of the form M = S³/Γ, where Γ is a finite subgroup of SO(4) acting freely by rotations on the 3-sphere S³. All such manifolds are prime, orientable, and closed. Spherical 3-manifolds are sometimes called elliptic 3-manifolds or Clifford–Klein manifolds.
|
https://en.wikipedia.org/wiki/Spherical_3-manifold
|
In mathematics, a spherical conic or sphero-conic is a curve on the sphere, the intersection of the sphere with a concentric elliptic cone. It is the spherical analog of a conic section (ellipse, parabola, or hyperbola) in the plane, and as in the planar case, a spherical conic can be defined as the locus of points the sum or difference of whose great-circle distances to two foci is constant. By taking the antipodal point to one focus, every spherical ellipse is also a spherical hyperbola, and vice versa. As a space curve, a spherical conic is a quartic, though its orthogonal projections in three principal axes are planar conics.
|
https://en.wikipedia.org/wiki/Spherical_conic
|
Like planar conics, spherical conics also satisfy a "reflection property": the great-circle arcs from the two foci to any point on the conic have the tangent and normal to the conic at that point as their angle bisectors. Many theorems about conics in the plane extend to spherical conics. For example, Graves's theorem and Ivory's theorem about confocal conics can also be proven on the sphere; see confocal conic sections for the planar versions. Just as the arc length of an ellipse is given by an incomplete elliptic integral of the second kind, the arc length of a spherical conic is given by an incomplete elliptic integral of the third kind. An orthogonal coordinate system in Euclidean space based on concentric spheres and quadratic cones is called a conical or sphero-conical coordinate system.
|
https://en.wikipedia.org/wiki/Spherical_conic
|
When restricted to the surface of a sphere, the remaining coordinates are confocal spherical conics. Sometimes this is called an elliptic coordinate system on the sphere, by analogy to a planar elliptic coordinate system. Such coordinates can be used in the computation of conformal maps from the sphere to the plane.
|
https://en.wikipedia.org/wiki/Spherical_conic
|
In mathematics, a spherical coordinate system is a coordinate system for three-dimensional space where the position of a point is specified by three numbers: the radial distance of that point from a fixed origin; its polar angle measured from a fixed polar axis or zenith direction; and the azimuthal angle of its orthogonal projection on a reference plane that passes through the origin and is orthogonal to the fixed axis, measured from another fixed reference direction on that plane. When radius is fixed, the two angular coordinates make a coordinate system on the sphere sometimes called spherical polar coordinates. The radial distance is also called the radius or radial coordinate. The polar angle may be called colatitude, zenith angle, normal angle, or inclination angle.
|
https://en.wikipedia.org/wiki/Spherical_coordinates
|
The polar angle is often replaced by the elevation angle measured from the reference plane towards the positive Z axis; the depression angle is the negative of the elevation angle. The use of symbols and the order of the coordinates differs among sources and disciplines. This article will use the ISO convention frequently encountered in physics, where (r, θ, φ) gives the radial distance, polar angle, and azimuthal angle.
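Under the ISO convention just described, conversion to Cartesian coordinates is x = r sin θ cos φ, y = r sin θ sin φ, z = r cos θ; a minimal Python sketch:

```python
import math

def spherical_to_cartesian(r, theta, phi):
    # ISO convention: r = radial distance, theta = polar angle from the
    # +z axis, phi = azimuthal angle from the +x axis in the xy-plane.
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

# theta = pi/2 puts the point in the xy-plane; phi = 0 points along +x.
x, y, z = spherical_to_cartesian(1.0, math.pi / 2, 0.0)
print(round(x, 12), round(y, 12), round(z, 12))  # 1.0 0.0 0.0
```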
|
https://en.wikipedia.org/wiki/Spherical_coordinates
|
By contrast, in many mathematics books, (ρ, θ, φ) or (r, θ, φ) gives the radial distance, azimuthal angle, and polar angle, switching the meanings of θ and φ. Other conventions are also used, such as r for radius from the z-axis, so great care needs to be taken to check the meaning of the symbols. According to the conventions of geographical coordinate systems, positions are measured by latitude, longitude, and height (altitude).
|
https://en.wikipedia.org/wiki/Spherical_coordinates
|
There are a number of celestial coordinate systems based on different fundamental planes and with different terms for the various coordinates. The spherical coordinate systems used in mathematics normally use radians rather than degrees and measure the azimuthal angle counterclockwise from the x-axis to the y-axis rather than clockwise from north (0°) to east (+90°) like the horizontal coordinate system. The spherical coordinate system can be seen as one possible generalization of the polar coordinate system in three-dimensional space. It can also be further extended to higher-dimensional spaces and is then referred to as a hyperspherical coordinate system.
|
https://en.wikipedia.org/wiki/Spherical_coordinates
|
In mathematics, a spiral is a curve which emanates from a point, moving farther away as it revolves around the point. It is a subtype of whorled patterns, a broad group that also includes concentric objects.
|
https://en.wikipedia.org/wiki/Spherical_spiral
|
In mathematics, a spline is a special function defined piecewise by polynomials. In interpolating problems, spline interpolation is often preferred to polynomial interpolation because it yields similar results, even when using low degree polynomials, while avoiding Runge's phenomenon for higher degrees. In the computer science subfields of computer-aided design and computer graphics, the term spline more frequently refers to a piecewise polynomial (parametric) curve. Splines are popular curves in these subfields because of the simplicity of their construction, their ease and accuracy of evaluation, and their capacity to approximate complex shapes through curve fitting and interactive curve design. The term spline comes from the flexible spline devices used by shipbuilders and draftsmen to draw smooth shapes.
|
https://en.wikipedia.org/wiki/Spline_curve
|
In mathematics, a split exact sequence is a short exact sequence in which the middle term is built out of the two outer terms in the simplest possible way.
|
https://en.wikipedia.org/wiki/Split_exact_sequence
|
In mathematics, a split-biquaternion is a hypercomplex number of the form q = w + xi + yj + zk, where w, x, y, and z are split-complex numbers and i, j, and k multiply as in the quaternion group. Since each coefficient w, x, y, z spans two real dimensions, the split-biquaternion is an element of an eight-dimensional vector space. Considering that it carries a multiplication, this vector space is an algebra over the real field, or an algebra over a ring where the split-complex numbers form the ring.
|
https://en.wikipedia.org/wiki/Split-biquaternion
|
This algebra was introduced by William Kingdon Clifford in an 1873 article for the London Mathematical Society. It has been repeatedly noted in mathematical literature since then, variously as a deviation in terminology, an illustration of the tensor product of algebras, and as an illustration of the direct sum of algebras. The split-biquaternions have been identified in various ways by algebraists; see § Synonyms below.
|
https://en.wikipedia.org/wiki/Split-biquaternion
|
In mathematics, a sporadic group is one of the 26 exceptional groups found in the classification of finite simple groups. A simple group is a group G that does not have any normal subgroups except for the trivial group and G itself. The classification theorem states that the list of finite simple groups consists of 18 countably infinite families plus 26 exceptions that do not follow such a systematic pattern.
|
https://en.wikipedia.org/wiki/Sporadic_groups
|
These 26 exceptions are the sporadic groups. They are also known as the sporadic simple groups, or the sporadic finite groups. Because it is not strictly a group of Lie type, the Tits group is sometimes regarded as a sporadic group, in which case there would be 27 sporadic groups. The monster group, or friendly giant, is the largest of the sporadic groups, and all but six of the other sporadic groups are subquotients of it.
|
https://en.wikipedia.org/wiki/Sporadic_groups
|
In mathematics, a square is the result of multiplying a number by itself. The verb "to square" is used to denote this operation. Squaring is the same as raising to the power 2, and is denoted by a superscript 2; for instance, the square of 3 may be written as 3², which is the number 9. In some cases when superscripts are not available, as for instance in programming languages or plain text files, the notations x^2 (caret) or x**2 may be used in place of x².
|
https://en.wikipedia.org/wiki/Modulus_squared
|
The adjective which corresponds to squaring is quadratic. The square of an integer may also be called a square number or a perfect square. In algebra, the operation of squaring is often generalized to polynomials, other expressions, or values in systems of mathematical values other than the numbers.
|
https://en.wikipedia.org/wiki/Modulus_squared
|
For instance, the square of the linear polynomial x + 1 is the quadratic polynomial (x + 1)² = x² + 2x + 1. One of the important properties of squaring, for numbers as well as in many other mathematical systems, is that (for all numbers x) the square of x is the same as the square of its additive inverse −x. That is, the square function satisfies the identity x² = (−x)². This can also be expressed by saying that the square function is an even function.
|
https://en.wikipedia.org/wiki/Modulus_squared
|
In mathematics, a square matrix is a matrix with the same number of rows and columns. An n-by-n matrix is known as a square matrix of order n. Any two square matrices of the same order can be added and multiplied.
|
https://en.wikipedia.org/wiki/Square_matrices
|
Square matrices are often used to represent simple linear transformations, such as shearing or rotation. For example, if R is a square matrix representing a rotation (rotation matrix) and v is a column vector describing the position of a point in space, the product Rv yields another column vector describing the position of that point after that rotation. If v is a row vector, the same transformation can be obtained using vR^T, where R^T is the transpose of R.
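A minimal sketch of this in Python (plain nested lists rather than any particular matrix library; the helper names are illustrative), rotating a point 90° counterclockwise in the plane:

```python
import math

def rotation2d(angle):
    # 2x2 rotation matrix, stored as nested lists.
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s], [s, c]]

def matvec(m, v):
    # Product R v of a square matrix and a column vector.
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

R = rotation2d(math.pi / 2)   # rotate 90 degrees counterclockwise
print(matvec(R, [1.0, 0.0]))  # approximately [0.0, 1.0]
```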
|
https://en.wikipedia.org/wiki/Square_matrices
|
In mathematics, a square matrix is said to be diagonally dominant if, for every row of the matrix, the magnitude of the diagonal entry in a row is larger than or equal to the sum of the magnitudes of all the other (non-diagonal) entries in that row. More precisely, the matrix A is diagonally dominant if |a_ii| ≥ Σ_{j≠i} |a_ij| for all i, where a_ij denotes the entry in the ith row and jth column. This definition uses a weak inequality, and is therefore sometimes called weak diagonal dominance. If a strict inequality (>) is used, this is called strict diagonal dominance. The unqualified term diagonal dominance can mean both strict and weak diagonal dominance, depending on the context.
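The row condition translates directly into code; a Python sketch (function name illustrative) checking both the weak and strict variants:

```python
def is_diagonally_dominant(a, strict=False):
    # Weak: |a_ii| >= sum_{j != i} |a_ij| for every row i; strict uses >.
    for i, row in enumerate(a):
        off_diagonal = sum(abs(x) for j, x in enumerate(row) if j != i)
        diag = abs(row[i])
        if diag < off_diagonal or (strict and diag == off_diagonal):
            return False
    return True

a = [[3, -2, 1], [1, -3, 2], [-1, 2, 4]]
print(is_diagonally_dominant(a))               # True: weakly dominant
print(is_diagonally_dominant(a, strict=True))  # False: row 0 has 3 = |-2| + |1|
```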
|
https://en.wikipedia.org/wiki/Diagonally_dominant_matrix
|
In mathematics, a square number or perfect square is an integer that is the square of an integer; in other words, it is the product of some integer with itself. For example, 9 is a square number, since it equals 3² and can be written as 3 × 3. The usual notation for the square of a number n is not the product n × n, but the equivalent exponentiation n², usually pronounced as "n squared". The name square number comes from the name of the shape.
|
https://en.wikipedia.org/wiki/Square_number
|
The unit of area is defined as the area of a unit square (1 × 1). Hence, a square with side length n has area n². If a square number is represented by n points, the points can be arranged in rows as a square each side of which has the same number of points as the square root of n; thus, square numbers are a type of figurate number (other examples being cube numbers and triangular numbers).
|
https://en.wikipedia.org/wiki/Square_number
|
In the real number system, square numbers are non-negative. A non-negative integer is a square number when its square root is again an integer. For example, √9 = 3, so 9 is a square number.
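This characterization gives a simple computational test; a Python sketch using the integer square root:

```python
import math

def is_square(n):
    # A non-negative integer is a square number exactly when its integer
    # square root squares back to it.
    r = math.isqrt(n)
    return r * r == n

print([n for n in range(50) if is_square(n)])  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Using math.isqrt avoids the floating-point rounding issues that math.sqrt can introduce for large integers.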
|
https://en.wikipedia.org/wiki/Square_number
|
A positive integer that has no square divisors except 1 is called square-free. For a non-negative integer n, the nth square number is n², with 0² = 0 being the zeroth one.
|
https://en.wikipedia.org/wiki/Square_number
|
The concept of square can be extended to some other number systems. If rational numbers are included, then a square is the ratio of two square integers, and, conversely, the ratio of two square integers is a square; for example, 4/9 = (2/3)². Starting with 1, there are ⌊√m⌋ square numbers up to and including m, where the expression ⌊x⌋ represents the floor of the number x.
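The ⌊√m⌋ counting formula is one line of code; a Python sketch (function name illustrative):

```python
import math

def count_squares(m):
    # There are floor(sqrt(m)) square numbers in 1..m (starting with 1).
    return math.isqrt(m)

print(count_squares(100))  # 10: the squares 1, 4, 9, ..., 100
print(count_squares(99))   # 9
```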
|
https://en.wikipedia.org/wiki/Square_number
|
In mathematics, a square root of a number x is a number y such that y² = x; in other words, a number y whose square (the result of multiplying the number by itself, or y · y) is x. For example, 4 and −4 are square roots of 16 because 4² = (−4)² = 16. Every nonnegative real number x has a unique nonnegative square root, called the principal square root or simply the square root (with a definite article), which is denoted by √x, where the symbol √ is called the radical sign or radix. For example, to express the fact that the principal square root of 9 is 3, we write √9 = 3. The term (or number) whose square root is being considered is known as the radicand.
|
https://en.wikipedia.org/wiki/Square_root
|
The radicand is the number or expression underneath the radical sign, in this case, 9. For non-negative x, the principal square root can also be written in exponent notation, as x 1 / 2 {\displaystyle x^{1/2}} . Every positive number x has two square roots: x {\displaystyle {\sqrt {x}}} (which is positive) and − x {\displaystyle -{\sqrt {x}}} (which is negative).
|
https://en.wikipedia.org/wiki/Square_root
|
The two roots can be written more concisely using the ± sign as ± x {\displaystyle \pm {\sqrt {x}}} . Although the principal square root of a positive number is only one of its two square roots, the designation "the square root" is often used to refer to the principal square root. Square roots of negative numbers can be discussed within the framework of complex numbers. More generally, square roots can be considered in any context in which a notion of the "square" of a mathematical object is defined. These include function spaces and square matrices, among other mathematical structures.
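A brief sketch in Python (an illustration, not from the original text) of the two square roots of a positive number and the principal root:

```python
import math

x = 16
principal = math.isqrt(x)            # principal (non-negative) square root
roots = (principal, -principal)      # the two square roots, +/- sqrt(x)

assert all(r * r == x for r in roots)
# For non-negative x, exponent notation x**(1/2) also gives the principal root:
assert math.isclose(x ** 0.5, principal)
```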
|
https://en.wikipedia.org/wiki/Square_root
|
In mathematics, a square triangular number (or triangular square number) is a number which is both a triangular number and a square number. There are infinitely many square triangular numbers; the first few are: 0, 1, 36, 1225, 41616, 1413721, 48024900, 1631432881, 55420693056, 1882672131025 (sequence A001110 in the OEIS)
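The square triangular numbers satisfy the known recurrence N(k+1) = 34·N(k) − N(k−1) + 2. A short Python sketch (not part of the original text) generates the sequence and verifies that each term is both square and triangular:

```python
import math

def square_triangular(count):
    """First `count` square triangular numbers, from the recurrence
    N(k+1) = 34*N(k) - N(k-1) + 2 with N(0) = 0, N(1) = 1."""
    seq = [0, 1]
    while len(seq) < count:
        seq.append(34 * seq[-1] - seq[-2] + 2)
    return seq[:count]

def is_square(n):
    r = math.isqrt(n)
    return r * r == n

def is_triangular(n):
    return is_square(8 * n + 1)   # n = t(t+1)/2 iff 8n + 1 is a perfect square

nums = square_triangular(6)
assert nums == [0, 1, 36, 1225, 41616, 1413721]
assert all(is_square(n) and is_triangular(n) for n in nums)
```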
|
https://en.wikipedia.org/wiki/Square_triangular_number
|
In mathematics, a square-difference-free set is a set of natural numbers, no two of which differ by a square number. Hillel Furstenberg and András Sárközy proved in the late 1970s the Furstenberg–Sárközy theorem of additive number theory showing that, in a certain sense, these sets cannot be very large. In the game of subtract a square, the positions where the next player loses form a square-difference-free set.
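The connection to subtract a square can be made concrete: a position is losing (a P-position) when every move to a smaller position by removing a positive square lands on a winning position. By construction, no two losing positions can differ by a positive square, so they form a square-difference-free set. A Python sketch (illustrative, not from the original text):

```python
import math

def losing_positions(limit):
    """P-positions of 'subtract a square' up to `limit`: positions from
    which every legal move (removing a positive square) hands the
    opponent a winning position."""
    is_losing = [False] * (limit + 1)
    losing = []
    for n in range(limit + 1):
        moves = [n - k * k for k in range(1, math.isqrt(n) + 1)]
        if not any(is_losing[m] for m in moves):   # vacuously true at n = 0
            is_losing[n] = True
            losing.append(n)
    return losing

P = losing_positions(60)
assert P[:10] == [0, 2, 5, 7, 10, 12, 15, 17, 20, 22]
# By construction, no two losing positions differ by a positive square,
# so they form a square-difference-free set:
assert all(math.isqrt(a - b) ** 2 != a - b for a in P for b in P if a > b)
```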
|
https://en.wikipedia.org/wiki/Square-difference-free_set
|
Another square-difference-free set is obtained by doubling the Moser–de Bruijn sequence. The best known upper bound on the size of a square-difference-free set of numbers up to n {\displaystyle n} is only slightly sublinear, but the largest known sets of this form are significantly smaller, of size ≈ n 0.733412 {\displaystyle \approx n^{0.733412}} . Closing the gap between these upper and lower bounds remains an open problem. The sublinear size bounds on square-difference-free sets can be generalized to sets where certain other polynomials are forbidden as differences between pairs of elements.
|
https://en.wikipedia.org/wiki/Square-difference-free_set
|
In mathematics, a square-free element is an element r of a unique factorization domain R that is not divisible by a non-trivial square. This means that every s such that s 2 ∣ r {\displaystyle s^{2}\mid r} is a unit of R.
|
https://en.wikipedia.org/wiki/Square-free_element
|
In mathematics, a square-free integer (or squarefree integer) is an integer which is divisible by no square number other than 1. That is, its prime factorization has exactly one factor for each prime that appears in it. For example, 10 = 2 ⋅ 5 is square-free, but 18 = 2 ⋅ 3 ⋅ 3 is not, because 18 is divisible by 9 = 3². The smallest positive square-free numbers are
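A minimal trial-division test for square-freeness, as a Python sketch (illustrative, not part of the original text):

```python
def is_squarefree(n):
    """True if no square number other than 1 divides n (trial division)."""
    if n < 1:
        return False
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        while n % d == 0:
            n //= d
        d += 1
    return True

assert is_squarefree(10)        # 10 = 2 * 5
assert not is_squarefree(18)    # 18 is divisible by 9 = 3^2
# The smallest positive square-free numbers:
assert [n for n in range(1, 16) if is_squarefree(n)] == [1, 2, 3, 5, 6, 7, 10, 11, 13, 14, 15]
```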
|
https://en.wikipedia.org/wiki/Squarefree_integer
|
In mathematics, a square-free polynomial is a polynomial defined over a field (or more generally, an integral domain) that does not have as a divisor any square of a non-constant polynomial. A univariate polynomial is square-free if and only if it has no multiple root in an algebraically closed field containing its coefficients. This is why, in applications in physics and engineering, a square-free polynomial is commonly called a polynomial with no repeated roots.
|
https://en.wikipedia.org/wiki/Square-free_factorization
|
In the case of univariate polynomials, the product rule implies that, if p² divides f, then p divides the formal derivative f ' of f. The converse is also true and hence, f {\displaystyle f} is square-free if and only if 1 {\displaystyle 1} is a greatest common divisor of the polynomial and its derivative. A square-free decomposition or square-free factorization of a polynomial is a factorization into powers of square-free polynomials f = a 1 a 2 2 a 3 3 ⋯ a n n = ∏ k = 1 n a k k {\displaystyle f=a_{1}a_{2}^{2}a_{3}^{3}\cdots a_{n}^{n}=\prod _{k=1}^{n}a_{k}^{k}\,} where those of the ak that are non-constant are pairwise coprime square-free polynomials (here, two polynomials are said to be coprime if their greatest common divisor is a constant; in other words, it is coprimality over the field of fractions of the coefficients that is considered). Every non-zero polynomial admits a square-free factorization, which is unique up to the multiplication and division of the factors by non-zero constants. The square-free factorization is much easier to compute than the complete factorization into irreducible factors, and is thus often preferred when the complete factorization is not really needed, as for the partial fraction decomposition and the symbolic integration of rational fractions.
|
https://en.wikipedia.org/wiki/Square-free_factorization
|
Square-free factorization is the first step of the polynomial factorization algorithms that are implemented in computer algebra systems. Therefore, the algorithm of square-free factorization is basic in computer algebra. Over a field of characteristic 0, the quotient of f {\displaystyle f} by its GCD with its derivative is the product of the a i {\displaystyle a_{i}} in the above square-free decomposition.
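The claim that f / gcd(f, f') is the product of the a_i (in characteristic 0) can be illustrated with a small pure-Python sketch using exact rational arithmetic (coefficients stored low-degree-first; illustrative code, not from the original text):

```python
from fractions import Fraction

def trim(p):
    """Drop trailing zero coefficients; p is low-degree-first."""
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def deriv(p):
    """Formal derivative of p."""
    return trim([Fraction(i) * c for i, c in enumerate(p)][1:] or [Fraction(0)])

def divmod_poly(a, b):
    """Polynomial division with remainder over the rationals."""
    a, b = trim(list(a)), trim(list(b))
    q = [Fraction(0)] * max(len(a) - len(b) + 1, 1)
    while len(a) >= len(b) and a != [Fraction(0)]:
        k = len(a) - len(b)
        c = a[-1] / b[-1]
        q[k] = c
        for i, bc in enumerate(b):
            a[k + i] -= c * bc
        a = trim(a)
    return trim(q), a

def gcd_poly(a, b):
    """Monic polynomial GCD via the Euclidean algorithm."""
    a, b = trim(list(a)), trim(list(b))
    while b != [Fraction(0)]:
        a, b = b, divmod_poly(a, b)[1]
    return [c / a[-1] for c in a]

F = lambda *cs: [Fraction(c) for c in cs]
f = F(2, -3, 0, 1)                 # x^3 - 3x + 2 = (x - 1)^2 (x + 2)
g = gcd_poly(f, deriv(f))
assert g == F(-1, 1)               # gcd(f, f') = x - 1
q, r = divmod_poly(f, g)
assert (q, r) == (F(-2, 1, 1), F(0))   # f / gcd = (x - 1)(x + 2): square-free
```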
|
https://en.wikipedia.org/wiki/Square-free_factorization
|
Over a perfect field of non-zero characteristic p, this quotient is the product of the a i {\displaystyle a_{i}} such that i is not a multiple of p. Further GCD computations and exact divisions allow computing the square-free factorization (see square-free factorization over a finite field). In characteristic zero, a better algorithm is known, Yun's algorithm, which is described below. Its computational complexity is, at most, twice that of the GCD computation of the input polynomial and its derivative. More precisely, if T n {\displaystyle T_{n}} is the time needed to compute the GCD of two polynomials of degree n {\displaystyle n} and the quotient of these polynomials by the GCD, then 2 T n {\displaystyle 2T_{n}} is an upper bound for the time needed to compute the square-free decomposition. There are also known algorithms for computing the square-free decomposition of multivariate polynomials; these generally proceed by considering a multivariate polynomial as a univariate polynomial with polynomial coefficients, and applying a univariate algorithm recursively.
|
https://en.wikipedia.org/wiki/Square-free_factorization
|
In mathematics, a square-integrable function, also called a quadratically integrable function or L 2 {\displaystyle L^{2}} function or square-summable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite. Thus, square-integrability on the real line ( − ∞ , + ∞ ) {\displaystyle (-\infty ,+\infty )} is defined as follows. One may also speak of quadratic integrability over bounded intervals such as [ a , b ] {\displaystyle [a,b]} for a ≤ b {\displaystyle a\leq b} . An equivalent definition is to say that the square of the function itself (rather than of its absolute value) is Lebesgue integrable.
|
https://en.wikipedia.org/wiki/Square_integrable
|
For this to be true, the integrals of the positive and negative portions of the real part must both be finite, as well as those for the imaginary part. The vector space of (equivalence classes of) square integrable functions (with respect to Lebesgue measure) forms the L p {\displaystyle L^{p}} space with p = 2. {\displaystyle p=2.}
|
https://en.wikipedia.org/wiki/Square_integrable
|
Among the L p {\displaystyle L^{p}} spaces, the class of square integrable functions is unique in being compatible with an inner product, which allows notions like angle and orthogonality to be defined. Along with this inner product, the square integrable functions form a Hilbert space, since all of the L p {\displaystyle L^{p}} spaces are complete under their respective p {\displaystyle p} -norms. Often the term is used not to refer to a specific function, but to equivalence classes of functions that are equal almost everywhere.
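A numeric sketch (Python, illustrative only): f(x) = x^(−1/3) is unbounded near 0 yet square integrable on (0, 1], since the integral of x^(−2/3) over [0, 1] equals 3. A midpoint-rule estimate of the squared L² norm approaches that value:

```python
def l2_norm_sq(f, a, b, n=200000):
    """Midpoint-rule estimate of the integral of |f(x)|^2 over [a, b]."""
    h = (b - a) / n
    return sum(abs(f(a + (i + 0.5) * h)) ** 2 for i in range(n)) * h

# f(x) = x^(-1/3) blows up at 0 but lies in L^2(0, 1):
f = lambda x: x ** (-1.0 / 3.0)
approx = l2_norm_sq(f, 0.0, 1.0)
assert abs(approx - 3.0) < 0.05   # exact value of the integral is 3
```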
|
https://en.wikipedia.org/wiki/Square_integrable
|
In mathematics, a stable vector bundle is a (holomorphic or algebraic) vector bundle that is stable in the sense of geometric invariant theory. Any holomorphic vector bundle may be built from stable ones using Harder–Narasimhan filtration. Stable bundles were defined by David Mumford in Mumford (1963) and later built upon by David Gieseker, Fedor Bogomolov, Thomas Bridgeland and many others.
|
https://en.wikipedia.org/wiki/Stable_holomorphic_vector_bundle
|
In mathematics, a stably free module is a module which is close to being free.
|
https://en.wikipedia.org/wiki/Stably_free_module
|
In mathematics, a stacky curve is an object in algebraic geometry that is roughly an algebraic curve with potentially "fractional points" called stacky points. A stacky curve is a type of stack used in studying Gromov–Witten theory, enumerative geometry, and rings of modular forms. Stacky curves are deeply related to 1-dimensional orbifolds and are therefore sometimes called orbifold curves or orbicurves.
|
https://en.wikipedia.org/wiki/Stacky_curve
|
In mathematics, a standard Borel space is the Borel space associated to a Polish space. Discounting Borel spaces of discrete Polish spaces, there is, up to isomorphism of measurable spaces, only one standard Borel space.
|
https://en.wikipedia.org/wiki/Standard_Borel_space
|
In mathematics, a statistical manifold is a Riemannian manifold, each of whose points is a probability distribution. Statistical manifolds provide a setting for the field of information geometry. The Fisher information metric provides a metric on these manifolds. Following this definition, the log-likelihood function is a differentiable map and the score is an inclusion.
|
https://en.wikipedia.org/wiki/Statistical_manifold
|
In mathematics, a stella octangula number is a figurate number based on the stella octangula, of the form n(2n² − 1). The sequence of stella octangula numbers is 0, 1, 14, 51, 124, 245, 426, 679, 1016, 1449, 1990, ... (sequence A007588 in the OEIS). Only two of these numbers are square.
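A quick Python sketch (illustrative, not part of the original text) generating the sequence and locating the two square values, 1 = 1² (at n = 1) and 9653449 = 3107² (at n = 169):

```python
import math

def stella_octangula(n):
    return n * (2 * n * n - 1)

seq = [stella_octangula(n) for n in range(11)]
assert seq == [0, 1, 14, 51, 124, 245, 426, 679, 1016, 1449, 1990]

def is_square(m):
    r = math.isqrt(m)
    return r * r == m

# The two square stella octangula numbers occur at n = 1 and n = 169:
assert [n for n in range(1, 300) if is_square(stella_octangula(n))] == [1, 169]
```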
|
https://en.wikipedia.org/wiki/Stella_octangula_number
|
In mathematics, a stiff equation is a differential equation for which certain numerical methods for solving the equation are numerically unstable, unless the step size is taken to be extremely small. It has proven difficult to formulate a precise definition of stiffness, but the main idea is that the equation includes some terms that can lead to rapid variation in the solution. When integrating a differential equation numerically, one would expect the requisite step size to be relatively small in a region where the solution curve displays much variation and to be relatively large where the solution curve straightens out to approach a line with slope nearly zero. For some problems this is not the case.
|
https://en.wikipedia.org/wiki/Stiff_equation
|
In order for a numerical method to give a reliable solution to the differential system, the step size is sometimes required to be at an unacceptably small level in a region where the solution curve is very smooth. The phenomenon is known as stiffness.
|
https://en.wikipedia.org/wiki/Stiff_equation
|
In some cases there may be two different problems with the same solution, yet one is not stiff and the other is. The phenomenon cannot therefore be a property of the exact solution, since this is the same for both problems, and must be a property of the differential system itself. Such systems are thus known as stiff systems.
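The classic illustration is the linear test equation y' = λy with λ strongly negative. A Python sketch (illustrative, not from the original text) shows forward Euler exploding at a step size where backward Euler remains stable:

```python
def explicit_euler(lam, h, steps, y0=1.0):
    """Forward Euler for y' = lam * y."""
    y = y0
    for _ in range(steps):
        y = y + h * lam * y
    return y

def implicit_euler(lam, h, steps, y0=1.0):
    """Backward Euler for y' = lam * y (the implicit update solved in closed form)."""
    y = y0
    for _ in range(steps):
        y = y / (1 - h * lam)
    return y

lam, h, steps = -50.0, 0.05, 40   # exact solution exp(lam * t) is ~0 at t = 2
# Forward Euler amplifies by |1 + h*lam| = 1.5 per step and explodes:
assert abs(explicit_euler(lam, h, steps)) > 1e3
# Backward Euler damps by 1/3.5 per step and stays near the true solution:
assert abs(implicit_euler(lam, h, steps)) < 1e-6
```

Shrinking the step to h = 0.001 restores stability for forward Euler, which is exactly the "unacceptably small step size" behaviour that characterizes stiffness.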
|
https://en.wikipedia.org/wiki/Stiff_equation
|
In mathematics, a stochastic matrix is a square matrix used to describe the transitions of a Markov chain. Each of its entries is a nonnegative real number representing a probability. It is also called a probability matrix, transition matrix, substitution matrix, or Markov matrix. The stochastic matrix was first developed by Andrey Markov at the beginning of the 20th century, and has found use throughout a wide variety of scientific fields, including probability theory, statistics, mathematical finance and linear algebra, as well as computer science and population genetics.
|
https://en.wikipedia.org/wiki/Right_stochastic_matrix
|
There are several different definitions and types of stochastic matrices: a right stochastic matrix is a real square matrix, with each row summing to 1; a left stochastic matrix is a real square matrix, with each column summing to 1; a doubly stochastic matrix is a square matrix of nonnegative real numbers with each row and column summing to 1. In the same vein, one may define a stochastic vector (also called a probability vector) as a vector whose elements are nonnegative real numbers which sum to 1.
|
https://en.wikipedia.org/wiki/Right_stochastic_matrix
|
Thus, each row of a right stochastic matrix (or column of a left stochastic matrix) is a stochastic vector. A common convention in English language mathematics literature is to use row vectors of probabilities and right stochastic matrices rather than column vectors of probabilities and left stochastic matrices; this article follows that convention. In addition, a substochastic matrix is a real square matrix whose row sums are all ≤ 1. {\displaystyle \leq 1.}
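The row-vector convention can be demonstrated with a small two-state chain (a Python sketch, not part of the original text; the matrix values are arbitrary examples):

```python
P = [
    [0.9, 0.1],
    [0.5, 0.5],
]
# Right stochastic: every entry nonnegative, every row sums to 1.
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)

def step(v, P):
    """One Markov step with a row probability vector: v' = v P."""
    n = len(v)
    return [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]

v = [1.0, 0.0]
for _ in range(100):
    v = step(v, P)
# Iteration converges to the stationary distribution pi with pi = pi P:
assert abs(v[0] - 5 / 6) < 1e-9 and abs(v[1] - 1 / 6) < 1e-9
```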
|
https://en.wikipedia.org/wiki/Right_stochastic_matrix
|
In mathematics, a strange nonchaotic attractor (SNA) is a form of attractor which, while converging to a limit, is strange, because it is not piecewise differentiable, and also non-chaotic, in that its Lyapunov exponents are non-positive. SNAs were introduced as a topic of study by Grebogi et al. in 1984. SNAs can be distinguished from periodic, quasiperiodic and chaotic attractors using the 0-1 test for chaos. Periodically driven damped nonlinear systems can exhibit complex dynamics characterized by strange chaotic attractors, where strange refers to the fractal geometry of the attractor and chaotic refers to the exponential sensitivity of orbits on the attractor. Quasiperiodically driven systems forced by incommensurate frequencies are natural extensions of periodically driven ones and are phenomenologically richer.
|
https://en.wikipedia.org/wiki/Strange_nonchaotic_attractor
|
In addition to periodic or quasiperiodic motion, they can exhibit chaotic or nonchaotic motion on strange attractors. Although quasiperiodic forcing is not necessary for strange nonchaotic dynamics (e.g., the period doubling accumulation point of a period doubling cascade), if quasiperiodic driving is not present, strange nonchaotic attractors are typically not robust and not expected to occur naturally because they exist only when the system is carefully tuned to a precise critical parameter value. On the other hand, it was shown in the paper of Grebogi et al. that SNAs can be robust when the system is quasiperiodically driven.
|
https://en.wikipedia.org/wiki/Strange_nonchaotic_attractor
|
The first experiment to demonstrate a robust strange nonchaotic attractor involved the buckling of a magnetoelastic ribbon driven quasiperiodically by two incommensurate frequencies in the golden ratio. Strange nonchaotic attractors have been robustly observed in laboratory experiments involving magnetoelastic ribbons, electrochemical cells, electronic circuits, and a neon glow discharge. In 2015, strange nonchaotic dynamics were identified for the pulsating RR Lyrae variable KIC 5520878, as well as three similar stars observed by the Kepler space telescope, which oscillate in two frequency modes that are nearly in the golden ratio.
|
https://en.wikipedia.org/wiki/Strange_nonchaotic_attractor
|
In mathematics, a strictly convex space is a normed vector space (X, || ||) for which the closed unit ball is a strictly convex set. Put another way, a strictly convex space is one for which, given any two distinct points x and y on the unit sphere ∂B (i.e. the boundary of the unit ball B of X), the segment joining x and y meets ∂B only at x and y. Strict convexity is somewhere between an inner product space (all inner product spaces being strictly convex) and a general normed space in terms of structure. It also guarantees the uniqueness of a best approximation to an element in X (strictly convex) out of a convex subspace Y, provided that such an approximation exists. If the normed space X is complete and satisfies the slightly stronger property of being uniformly convex (which implies strict convexity), then it is also reflexive by the Milman–Pettis theorem.
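The midpoint test makes this concrete: take two distinct unit vectors and check whether their midpoint stays on the unit sphere. A Python sketch (illustrative, not from the original text) contrasts the ℓ¹ norm (not strictly convex) with the Euclidean norm (strictly convex):

```python
import math

# Two distinct points on the unit sphere of R^2.
x, y = (1.0, 0.0), (0.0, 1.0)
mid = ((x[0] + y[0]) / 2, (x[1] + y[1]) / 2)

l1_norm = abs(mid[0]) + abs(mid[1])     # taxicab norm of the midpoint
l2_norm = math.hypot(mid[0], mid[1])    # Euclidean norm of the midpoint

# The l^1 unit ball is not strictly convex: the midpoint stays on the sphere.
assert l1_norm == 1.0
# The Euclidean ball is strictly convex: the midpoint moves strictly inside.
assert l2_norm < 1.0
```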
|
https://en.wikipedia.org/wiki/Strictly_convex_space
|
In mathematics, a strong prime is a prime number with certain special properties. The definitions of strong primes are different in cryptography and number theory.
|
https://en.wikipedia.org/wiki/Strong_prime
|
In mathematics, a strong topology is a topology which is stronger than some other "default" topology. This term is used to describe different topologies depending on context, and it may refer to: the final topology on the disjoint union; the topology arising from a norm; the strong operator topology; or the strong topology (polar topology), which subsumes all topologies above. A topology τ is stronger than a topology σ (is a finer topology) if τ contains all the open sets of σ. In algebraic geometry, it usually means the topology of an algebraic variety as complex manifold or subspace of complex projective space, as opposed to the Zariski topology (which is rarely even a Hausdorff space).
|
https://en.wikipedia.org/wiki/Strong_topology
|
In mathematics, a structure is a set endowed with some additional features on the set (e.g. an operation, relation, metric, or topology). Often, the additional features are attached or related to the set, so as to provide it with some additional meaning or significance. A partial list of possible structures are measures, algebraic structures (groups, fields, etc.), topologies, metric structures (geometries), orders, events, equivalence relations, differential structures, and categories.
|
https://en.wikipedia.org/wiki/Mathematical_structure
|
Sometimes, a set is endowed with more than one feature simultaneously, which allows mathematicians to study the interaction between the different structures more richly. For example, an ordering imposes a rigid form, shape, or topology on the set, and if a set has both a topology feature and a group feature, such that these two features are related in a certain way, then the structure becomes a topological group. Mappings between sets which preserve structures (i.e., structures in the domain are mapped to equivalent structures in the codomain) are of special interest in many fields of mathematics. Examples are homomorphisms, which preserve algebraic structures; homeomorphisms, which preserve topological structures; and diffeomorphisms, which preserve differential structures.
|
https://en.wikipedia.org/wiki/Mathematical_structure
|
In mathematics, a stuck unknot is a closed polygonal chain in three-dimensional space (a skew polygon) that is topologically equivalent to the unknot but cannot be deformed to a simple polygon when interpreted as a mechanical linkage, by rigid length-preserving and non-self-intersecting motions of its segments. Similarly, a stuck open chain is an open polygonal chain whose segments may not be aligned by moving them rigidly. Topologically such a chain can be unknotted, but the limitation of using only rigid motions of the segments can create nontrivial knots in such a chain. Consideration of such "stuck" configurations arises in the study of molecular chains in biochemistry.
|
https://en.wikipedia.org/wiki/Stuck_unknot
|
In mathematics, a stunted projective space is a construction on a projective space of importance in homotopy theory, introduced by James (1959). Part of a conventional projective space is collapsed down to a point. More concretely, in a real projective space, complex projective space or quaternionic projective space KPn, where K stands for the real numbers, complex numbers or quaternions, one can find (in many ways) copies of KPm, where m < n. The corresponding stunted projective space is then KPn,m = KPn/KPm, where the notation implies that the KPm has been identified to a point.
|
https://en.wikipedia.org/wiki/Stunted_projective_space
|
This makes a topological space that is no longer a manifold. The importance of this construction was realised when it was shown that real stunted projective spaces arose as Spanier–Whitehead duals of spaces of Ioan James, so-called quasi-projective spaces, constructed from Stiefel manifolds. Their properties were therefore linked to the construction of frame fields on spheres.
|
https://en.wikipedia.org/wiki/Stunted_projective_space
|
In this way the vector fields on spheres question was reduced to a question on stunted projective spaces: for RPn,m, is there a degree one mapping on the 'next cell up' (of the first dimension not collapsed in the 'stunting') that extends to the whole space? Frank Adams showed that this could not happen, completing the proof. In later developments spaces KP∞,m and stunted lens spaces have also been used.
|
https://en.wikipedia.org/wiki/Stunted_projective_space
|
In mathematics, a sub-Riemannian manifold is a certain type of generalization of a Riemannian manifold. Roughly speaking, to measure distances in a sub-Riemannian manifold, you are allowed to go only along curves tangent to so-called horizontal subspaces. Sub-Riemannian manifolds (and so, a fortiori, Riemannian manifolds) carry a natural intrinsic metric called the metric of Carnot–Carathéodory.
|
https://en.wikipedia.org/wiki/Sub-Riemannian_manifold
|
The Hausdorff dimension of such metric spaces is always an integer and larger than its topological dimension (unless it is actually a Riemannian manifold). Sub-Riemannian manifolds often occur in the study of constrained systems in classical mechanics, such as the motion of vehicles on a surface, the motion of robot arms, and the orbital dynamics of satellites. Geometric quantities such as the Berry phase may be understood in the language of sub-Riemannian geometry. The Heisenberg group, important to quantum mechanics, carries a natural sub-Riemannian structure.
|
https://en.wikipedia.org/wiki/Sub-Riemannian_manifold
|
In mathematics, a subadditive set function is a set function whose value, informally, has the property that the value of function on the union of two sets is at most the sum of values of the function on each of the sets. This is thematically related to the subadditivity property of real-valued functions.
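A small Python sketch (illustrative only; the example function is a hypothetical choice, not from the original text) verifying subadditivity of a budget-capped cardinality function over all pairs of subsets:

```python
def subsets(universe):
    """All subsets of a finite set, as frozensets."""
    out = [frozenset()]
    for x in universe:
        out += [s | {x} for s in out]
    return out

# Budget-capped cardinality: f(S) = min(|S|, 2). Functions of this
# coverage-style shape are submodular, hence subadditive.
f = lambda S: min(len(S), 2)

all_sets = subsets({1, 2, 3, 4})
assert len(all_sets) == 16
# f(A union B) <= f(A) + f(B) for every pair of subsets:
assert all(f(A | B) <= f(A) + f(B) for A in all_sets for B in all_sets)
```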
|
https://en.wikipedia.org/wiki/Subadditive_utility
|
In mathematics, a subalgebra is a subset of an algebra, closed under all its operations, and carrying the induced operations. "Algebra", when referring to a structure, often means a vector space or module equipped with an additional bilinear operation. Algebras in universal algebra are far more general: they are a common generalisation of all algebraic structures. "Subalgebra" can refer to either case.
|
https://en.wikipedia.org/wiki/Subalgebra
|