In mathematics, a subbundle U of a vector bundle V on a topological space X is a collection of linear subspaces U_x of the fibers V_x of V at each x in X that make up a vector bundle in their own right. In connection with foliation theory, a subbundle of the tangent bundle of a smooth manifold may be called a distribution (of tangent vectors). If a set of vector fields Y_k spans the vector space U, and all Lie commutators [Y_i, Y_j] are linear combinations of the Y_k, then one says that U is an involutive distribution.
https://en.wikipedia.org/wiki/Subbundle
In mathematics, a subcompact cardinal is a certain kind of large cardinal number. A cardinal number κ is subcompact if and only if for every A ⊂ H(κ+) there is a non-trivial elementary embedding j : (H(μ+), B) → (H(κ+), A) (where H(κ+) is the set of all sets hereditarily of cardinality less than κ+) with critical point μ and j(μ) = κ. Analogously, κ is a quasicompact cardinal if and only if for every A ⊂ H(κ+) there is a non-trivial elementary embedding j : (H(κ+), A) → (H(μ+), B) with critical point κ and j(κ) = μ. In general, H(λ) consists of all sets whose transitive closure has cardinality less than λ. Every quasicompact cardinal is subcompact. Quasicompactness is a strengthening of subcompactness in that it projects large cardinal properties upwards.
https://en.wikipedia.org/wiki/Subcompact_cardinal
The relationship is analogous to that of extendible versus supercompact cardinals. Quasicompactness may be viewed as a strengthened or "boldface" version of 1-extendibility. Existence of subcompact cardinals implies existence of many 1-extendible cardinals, and hence many superstrong cardinals.
https://en.wikipedia.org/wiki/Subcompact_cardinal
Existence of a 2κ-supercompact cardinal κ implies existence of many quasicompact cardinals. Subcompact cardinals are noteworthy as the least large cardinals implying a failure of the square principle. If κ is subcompact, then the square principle fails at κ. Canonical inner models at the level of subcompact cardinals satisfy the square principle at all but subcompact cardinals.
https://en.wikipedia.org/wiki/Subcompact_cardinal
(Existence of such models has not yet been proved, but in any case the square principle can be forced for weaker cardinals.) Quasicompactness is one of the strongest large cardinal properties that can be witnessed by current inner models that do not use long extenders. For current inner models, the elementary embeddings included are determined by their effect on P(κ) (as computed at the stage the embedding is included), where κ is the critical point. This prevents them from witnessing even a κ+ strongly compact cardinal κ. Subcompact and quasicompact cardinals were defined by Ronald Jensen.
https://en.wikipedia.org/wiki/Subcompact_cardinal
In mathematics, a submanifold of a manifold M is a subset S which itself has the structure of a manifold, and for which the inclusion map S → M satisfies certain properties. There are different types of submanifolds depending on exactly which properties are required. Different authors often have different definitions.
https://en.wikipedia.org/wiki/Slice_chart
In mathematics, a submodular set function (also known as a submodular function) is a set function that, informally, describes the relationship between a set of inputs and an output, where adding more of one input has a decreasing additional benefit (diminishing returns). This natural diminishing-returns property makes submodular functions suitable for many applications, including approximation algorithms, game theory (as functions modeling user preferences), and electrical networks. Recently, submodular functions have also found immense utility in several real-world problems in machine learning and artificial intelligence, including automatic summarization, multi-document summarization, feature selection, active learning, sensor placement, image collection summarization, and many other domains.
https://en.wikipedia.org/wiki/Submodular_valuation
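The diminishing-returns property defined above can be checked exhaustively on a small example. The sensor-coverage function below is a standard illustration of a submodular function; the sensor names and coverage sets are made up for this sketch:

```python
from itertools import chain, combinations

# Hypothetical sensor-placement data: each sensor covers some locations.
coverage = {
    "s1": {1, 2, 3},
    "s2": {3, 4},
    "s3": {4, 5, 6},
}

def f(S):
    """Coverage function: number of locations covered by the sensors in S."""
    covered = set()
    for s in S:
        covered |= coverage[s]
    return len(covered)

def powerset(s):
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def is_submodular(ground, f):
    """Diminishing returns: for A ⊆ B and x outside B,
    f(A ∪ {x}) - f(A) >= f(B ∪ {x}) - f(B)."""
    subsets = [set(s) for s in powerset(ground)]
    for A in subsets:
        for B in subsets:
            if A <= B:
                for x in ground - B:
                    if f(A | {x}) - f(A) < f(B | {x}) - f(B):
                        return False
    return True

print(is_submodular(set(coverage), f))  # True: coverage functions are submodular
```

Adding a sensor to a small deployment covers many new locations; adding it to a large deployment covers fewer, which is exactly the diminishing-returns inequality.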
In mathematics, a subpaving is a set of nonoverlapping boxes of Rⁿ. A subset X of Rⁿ can be approximated by two subpavings X⁻ and X⁺ such that X⁻ ⊂ X ⊂ X⁺. In R¹ the boxes are line segments, in R² rectangles, and in Rⁿ hyperrectangles. An R² subpaving can also be a "non-regular tiling by rectangles" when it has no holes.
https://en.wikipedia.org/wiki/Subpaving
Boxes present the advantage of being very easily manipulated by computers, as they form the heart of interval analysis. Many interval algorithms naturally provide solutions that are regular subpavings. In computation, a well-known application of subpavings in R² is the quadtree data structure. In image tracing contexts and other applications it is important to see X⁻ as a topological interior, as illustrated.
https://en.wikipedia.org/wiki/Subpaving
In mathematics, a subring of a ring R is a subset of R that is itself a ring when the binary operations of addition and multiplication on R are restricted to the subset, and which shares the same multiplicative identity as R. For those who define rings without requiring the existence of a multiplicative identity, a subring of R is just a subset of R that is a ring for the operations of R (this does imply it contains the additive identity of R). The latter gives a strictly weaker condition, even for rings that do have a multiplicative identity, so that for instance all ideals become subrings (and they may have a multiplicative identity that differs from the one of R). With the definition requiring a multiplicative identity (which is used in this article), the only ideal of R that is a subring of R is R itself.
https://en.wikipedia.org/wiki/Algebra_of_dual_numbers
In mathematics, a subsequence of a given sequence is a sequence that can be derived from the given sequence by deleting some or no elements without changing the order of the remaining elements. For example, the sequence ⟨A, B, D⟩ is a subsequence of ⟨A, B, C, D, E, F⟩ obtained after removal of the elements C, E, and F. The relation of one sequence being the subsequence of another is a preorder.
https://en.wikipedia.org/wiki/Subsequence
Subsequences can contain consecutive elements which were not consecutive in the original sequence. A subsequence which consists of a consecutive run of elements from the original sequence, such as ⟨B, C, D⟩ from ⟨A, B, C, D, E, F⟩, is a substring. The substring is a refinement of the subsequence. The list of all subsequences for the word "apple" would be "a", "ap", "al", "ae", "app", "apl", "ape", "ale", "appl", "appe", "aple", "apple", "p", "pp", "pl", "pe", "ppl", "ppe", "ple", "pple", "l", "le", "e", "" (empty string).
https://en.wikipedia.org/wiki/Subsequence
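The definitions above translate directly into code. This short sketch (not part of the article) tests the subsequence relation by scanning with a single iterator, and enumerates all distinct subsequences of a string:

```python
def is_subsequence(sub, seq):
    """True if `sub` can be derived from `seq` by deleting elements
    without changing the order of the rest."""
    it = iter(seq)
    return all(ch in it for ch in sub)  # `in` consumes the iterator in order

def distinct_subsequences(seq):
    """All distinct subsequences of a string, including the empty one."""
    subs = {""}
    for ch in seq:
        subs |= {s + ch for s in subs}
    return subs

print(is_subsequence("ABD", "ABCDEF"))      # True: delete C, E, F
print(is_subsequence("DB", "ABCDEF"))       # False: order must be preserved
print(len(distinct_subsequences("apple")))  # 24, the length of the list above
```

The count of 24 includes the empty string, matching the list of subsequences of "apple" given in the text.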
In mathematics, a subsequential limit of a sequence is the limit of some subsequence. Every subsequential limit is a cluster point, but not conversely. In first-countable spaces, the two concepts coincide.
https://en.wikipedia.org/wiki/Subsequential_limit
In a topological space, if every subsequence of a sequence has a subsequential limit at the same point, then the original sequence also converges to that limit. This need not hold for more generalized notions of convergence, such as almost-everywhere convergence. The supremum of the set of all subsequential limits of a sequence is called the limit superior, or limsup.
https://en.wikipedia.org/wiki/Subsequential_limit
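A quick numerical illustration (a toy example, not from the article): the sequence a_n = (−1)ⁿ(1 + 1/n) has exactly two subsequential limits, −1 and 1, and the supremum and infimum of tail segments approximate its limsup and liminf:

```python
def tail_sup_inf(a, start):
    """Max and min of the tail a[start:]; for large `start` these
    approximate the limit superior and limit inferior."""
    tail = a[start:]
    return max(tail), min(tail)

# a_n = (-1)^n (1 + 1/n): even-indexed terms tend to 1, odd-indexed to -1,
# so the set of subsequential limits is {-1, 1}.
N = 10000
a = [(-1) ** n * (1 + 1 / n) for n in range(1, N + 1)]
sup_tail, inf_tail = tail_sup_inf(a, 9000)
print(round(sup_tail, 3), round(inf_tail, 3))  # approximately 1.0 and -1.0
```

Here limsup = 1 is the supremum of the subsequential limits {−1, 1}, and liminf = −1 is their infimum.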
Similarly, the infimum of such a set is called the limit inferior, or liminf. See limit superior and limit inferior. If (X, d) is a metric space and there is a Cauchy sequence with a subsequence converging to some x, then the sequence also converges to x.
https://en.wikipedia.org/wiki/Subsequential_limit
In mathematics, a subset A of a Polish space X is universally measurable if it is measurable with respect to every complete probability measure on X that measures all Borel subsets of X. In particular, a universally measurable set of reals is necessarily Lebesgue measurable (see § Finiteness condition below). Every analytic set is universally measurable. It follows from projective determinacy, which in turn follows from sufficient large cardinals, that every projective set is universally measurable.
https://en.wikipedia.org/wiki/Universally_measurable_set
In mathematics, a subset A ⊆ X of a linear space X is radial at a given point a_0 ∈ A if for every x ∈ X there exists a real t_x > 0 such that for every t ∈ [0, t_x], a_0 + tx ∈ A. Geometrically, this means A is radial at a_0 if for every x ∈ X, there is some (non-degenerate) line segment (depending on x) emanating from a_0 in the direction of x that lies entirely in A. Every radial set is a star domain, although not conversely.
https://en.wikipedia.org/wiki/Radial_set
In mathematics, a subset B ⊆ A of a preordered set (A, ≤) is said to be cofinal or frequent in A if for every a ∈ A it is possible to find an element b in B that is "larger than a" (explicitly, "larger than a" means a ≤ b). Cofinal subsets are very important in the theory of directed sets and nets, where "cofinal subnet" is the appropriate generalization of "subsequence". They are also important in order theory, including the theory of cardinal numbers, where the minimum possible cardinality of a cofinal subset of A is referred to as the cofinality of A.
https://en.wikipedia.org/wiki/Cofinal_sequence
In mathematics, a subset C of a real or complex vector space is said to be absolutely convex or disked if it is convex and balanced (some people use the term "circled" instead of "balanced"), in which case it is called a disk. The disked hull or the absolute convex hull of a set is the intersection of all disks containing that set.
https://en.wikipedia.org/wiki/Absolutely_convex_set
In mathematics, a subset R of the integers is called a reduced residue system modulo n if: gcd(r, n) = 1 for each r in R; R contains φ(n) elements; and no two elements of R are congruent modulo n. Here φ denotes Euler's totient function. A reduced residue system modulo n can be formed from a complete residue system modulo n by removing all integers not relatively prime to n. For example, a complete residue system modulo 12 is {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11}. The so-called totatives 1, 5, 7 and 11 are the only integers in this set which are relatively prime to 12, and so the corresponding reduced residue system modulo 12 is {1, 5, 7, 11}. The cardinality of this set can be calculated with the totient function: φ(12) = 4. Some other reduced residue systems modulo 12 are: {13, 17, 19, 23}, {−11, −7, −5, −1}, {−7, −13, 13, 31}, {35, 43, 53, 61}.
https://en.wikipedia.org/wiki/Reduced_residue_system
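The construction described (start from a complete residue system and discard integers not coprime to n) can be sketched directly; the helper names below are illustrative:

```python
from math import gcd

def reduced_residue_system(n):
    """Reduced residue system mod n, formed from the complete residue
    system 0..n-1 by removing integers not coprime to n."""
    return [r for r in range(n) if gcd(r, n) == 1]

def is_reduced_residue_system(R, n):
    """Check the three defining conditions from the text."""
    phi_n = len(reduced_residue_system(n))            # φ(n)
    return (all(gcd(r, n) == 1 for r in R)            # each r coprime to n
            and len(R) == phi_n                       # φ(n) elements
            and len({r % n for r in R}) == len(R))    # pairwise incongruent

print(reduced_residue_system(12))                       # [1, 5, 7, 11]
print(is_reduced_residue_system([13, 17, 19, 23], 12))  # True
print(is_reduced_residue_system([-11, -7, -5, -1], 12)) # True
```

Note that `math.gcd` accepts negative arguments (returning a nonnegative gcd), and Python's `%` always yields a residue in 0..n−1, so the systems with negative members check out directly.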
In mathematics, a subset of a given set is closed under an operation of the larger set if performing that operation on members of the subset always produces a member of that subset. For example, the natural numbers are closed under addition, but not under subtraction: 1 − 2 is not a natural number, although both 1 and 2 are. Similarly, a subset is said to be closed under a collection of operations if it is closed under each of the operations individually.
https://en.wikipedia.org/wiki/Reflexive_transitive_symmetric_closure
The closure of a subset is the result of a closure operator applied to the subset. The closure of a subset under some operations is the smallest superset that is closed under these operations. It is often called the span (for example linear span) or the generated set.
https://en.wikipedia.org/wiki/Reflexive_transitive_symmetric_closure
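As a concrete instance of a closure under several operations at once, the sketch below computes the reflexive-transitive-symmetric closure of a binary relation, i.e. the smallest equivalence relation containing it (the function name is illustrative):

```python
def equivalence_closure(pairs, domain):
    """Smallest equivalence relation on `domain` containing `pairs`:
    the reflexive-transitive-symmetric closure."""
    R = set(pairs) | {(x, x) for x in domain}                      # reflexive
    while True:
        new = {(b, a) for (a, b) in R}                             # symmetric
        new |= {(a, d) for (a, b) in R for (c, d) in R if b == c}  # transitive
        if new <= R:
            return R            # closed under all three operations
        R |= new

R = equivalence_closure({(1, 2), (2, 3)}, {1, 2, 3, 4})
print(sorted(R))  # all 9 pairs among {1, 2, 3}, plus (4, 4)
```

The fixed-point loop mirrors the definition: keep applying the operations until the set stops growing; the result is the smallest superset closed under them.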
In mathematics, a subset of a topological space is called nowhere dense or rare if its closure has empty interior. In a very loose sense, it is a set whose elements are not tightly clustered (as defined by the topology on the space) anywhere. For example, the integers are nowhere dense among the reals, whereas the interval (0, 1) is not nowhere dense. A countable union of nowhere dense sets is called a meagre set. Meagre sets play an important role in the formulation of the Baire category theorem, which is used in the proof of several fundamental results of functional analysis.
https://en.wikipedia.org/wiki/Nowhere-dense_set
In mathematics, a sum-free sequence is an increasing sequence of positive integers a_1, a_2, a_3, … such that no term a_n can be represented as a sum of any subset of the preceding elements of the sequence. This differs from a sum-free set, where only pairwise sums must be avoided, but where those sums may come from the whole set rather than just the preceding terms.
https://en.wikipedia.org/wiki/Sum-free_sequence
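The defining property (no term is a subset sum of the preceding terms) can be checked with a running set of reachable subset sums; this is an illustrative sketch, not code from any source:

```python
def is_sum_free_sequence(seq):
    """True if no term of `seq` equals the sum of any nonempty subset
    of the preceding terms."""
    reachable = set()                    # sums of nonempty subsets seen so far
    for a in seq:
        if a in reachable:
            return False                 # a is a subset sum of earlier terms
        reachable |= {r + a for r in reachable} | {a}
    return True

print(is_sum_free_sequence([1, 2, 4, 8, 16]))  # True: powers of two
print(is_sum_free_sequence([1, 2, 3]))         # False: 3 = 1 + 2
```

Powers of two form a sum-free sequence because each term exceeds the sum of all preceding terms (2ᵏ > 2ᵏ − 1).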
In mathematics, a summability kernel is a family or sequence of periodic integrable functions satisfying a certain set of properties, listed below. Certain kernels, such as the Fejér kernel, are particularly useful in Fourier analysis. Summability kernels are related to approximation of the identity; definitions of an approximation of identity vary, but sometimes the definition of an approximation of the identity is taken to be the same as for a summability kernel.
https://en.wikipedia.org/wiki/Summability_kernel
In mathematics, a summation equation or discrete integral equation is an equation in which an unknown function appears under a summation sign. The theories of summation equations and integral equations can be unified as integral equations on time scales using time scale calculus. A summation equation compares to a difference equation as an integral equation compares to a differential equation. The Volterra summation equation is x(t) = f(t) + ∑_{s=m}^{n} k(t, s, x(s)), where x is the unknown function, s, m, n, and t are integers, and f and k are known functions.
https://en.wikipedia.org/wiki/Summation_equation
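Assuming the explicit (lower-triangular) form in which the sum involves only the earlier values x(m), …, x(t−1) — an assumption, since conventions for the summation bounds vary — such an equation can be solved by forward recursion, just as an explicit difference equation is stepped:

```python
def solve_volterra_summation(f, k, m, n):
    """Solve x(t) = f(t) + sum_{s=m}^{t-1} k(t, s, x(s)) for t = m..n
    by forward recursion: each x(t) depends only on earlier values,
    so the values can be computed in increasing order of t."""
    x = {}
    for t in range(m, n + 1):
        x[t] = f(t) + sum(k(t, s, x[s]) for s in range(m, t))
    return x

# Toy instance: f(t) = 1 and k(t, s, v) = v gives x(t) = 2^(t - m).
x = solve_volterra_summation(lambda t: 1, lambda t, s, v: v, m=0, n=5)
print([x[t] for t in range(6)])  # [1, 2, 4, 8, 16, 32]
```

This is the discrete analogue of marching a Volterra integral equation forward in its upper limit.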
In mathematics, a super vector space is a Z_2-graded vector space, that is, a vector space over a field K with a given decomposition into subspaces of grade 0 and grade 1. The study of super vector spaces and their generalizations is sometimes called super linear algebra. These objects find their principal application in theoretical physics, where they are used to describe the various algebraic aspects of supersymmetry.
https://en.wikipedia.org/wiki/Supervector_space
In mathematics, a superabundant number (sometimes abbreviated as SA) is a certain kind of natural number. A natural number n is called superabundant precisely when, for all m < n, σ(m)/m < σ(n)/n, where σ denotes the sum-of-divisors function (i.e., the sum of all positive divisors of n, including n itself). The first few superabundant numbers are 1, 2, 4, 6, 12, 24, 36, 48, 60, 120, ... (sequence A004394 in the OEIS). For example, 5 is not a superabundant number: for n = 1, 2, 3, 4, 5 the values of σ(n) are 1, 3, 4, 7, 6, and σ(4)/4 = 7/4 exceeds σ(5)/5 = 6/5.
https://en.wikipedia.org/wiki/Superabundant_number
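The defining inequality σ(m)/m < σ(n)/n for all m < n means superabundant numbers are exactly the record-breakers for the ratio σ(n)/n. This brute-force sketch reproduces the first ten terms quoted above (exact rationals avoid any floating-point tie issues):

```python
from fractions import Fraction

def sigma(n):
    """Sum of all positive divisors of n, including n itself."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def superabundant_up_to(limit):
    """n is superabundant iff sigma(m)/m < sigma(n)/n for every m < n,
    i.e. iff n sets a strictly new record for the ratio."""
    records, best = [], Fraction(0)
    for n in range(1, limit + 1):
        ratio = Fraction(sigma(n), n)
        if ratio > best:
            records.append(n)
            best = ratio
    return records

print(superabundant_up_to(130))  # [1, 2, 4, 6, 12, 24, 36, 48, 60, 120]
```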
Superabundant numbers were defined by Leonidas Alaoglu and Paul Erdős (1944). Unknown to Alaoglu and Erdős, about 30 pages of Ramanujan's 1915 paper "Highly Composite Numbers" were suppressed. Those pages were finally published in The Ramanujan Journal 1 (1997), 119–153. In section 59 of that paper, Ramanujan defines generalized highly composite numbers, which include the superabundant numbers.
https://en.wikipedia.org/wiki/Superabundant_number
In mathematics, a superadditive set function is a set function whose value when applied to the union of two disjoint sets is greater than or equal to the sum of values of the function applied to each of the sets separately. This definition is analogous to the notion of superadditivity for real-valued functions. It is contrasted to subadditive set function.
https://en.wikipedia.org/wiki/Superadditive_set_function
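A minimal sketch of the definition, using the toy set function f(S) = |S|², which is superadditive on disjoint sets since (a + b)² ≥ a² + b²:

```python
from itertools import chain, combinations

def powerset(s):
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def is_superadditive(f, ground):
    """Check f(A ∪ B) >= f(A) + f(B) for every pair of disjoint subsets."""
    subsets = [frozenset(s) for s in powerset(ground)]
    return all(f(A | B) >= f(A) + f(B)
               for A in subsets for B in subsets if not A & B)

f = lambda S: len(S) ** 2
print(is_superadditive(f, {1, 2, 3, 4}))                  # True
print(is_superadditive(lambda S: len(S) ** 0.5, {1, 2}))  # False: subadditive
```

The square-root counterexample fails already on singletons: √2 < √1 + √1, the opposite (subadditive) inequality.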
In mathematics, a supercommutative (associative) algebra is a superalgebra (i.e. a Z2-graded algebra) such that for any two homogeneous elements x, y we have yx = (−1)^{|x||y|} xy, where |x| denotes the grade of the element and is 0 or 1 (in Z2) according to whether the grade is even or odd, respectively. Equivalently, it is a superalgebra where the supercommutator [x, y] = xy − (−1)^{|x||y|} yx always vanishes. Algebraic structures which supercommute in the above sense are sometimes referred to as skew-commutative associative algebras to emphasize the anti-commutation, or, to emphasize the grading, graded-commutative or, if the supercommutativity is understood, simply commutative. Any commutative algebra is a supercommutative algebra if given the trivial gradation (i.e. all elements are even).
https://en.wikipedia.org/wiki/Supercommutative_algebra
Grassmann algebras (also known as exterior algebras) are the most common examples of nontrivial supercommutative algebras. The supercenter of any superalgebra is the set of elements that supercommute with all elements, and is a supercommutative algebra. The even subalgebra of a supercommutative algebra is always a commutative algebra.
https://en.wikipedia.org/wiki/Supercommutative_algebra
That is, even elements always commute. Odd elements, on the other hand, always anticommute.
https://en.wikipedia.org/wiki/Supercommutative_algebra
That is, xy + yx = 0 for odd x and y. In particular, the square of any odd element x vanishes whenever 2 is invertible: x² = 0. Thus a commutative superalgebra (with 2 invertible and nonzero degree one component) always contains nilpotent elements. A Z-graded anticommutative algebra with the property that x² = 0 for every element x of odd grade (irrespective of whether 2 is invertible) is called an alternating algebra.
https://en.wikipedia.org/wiki/Supercommutative_algebra
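The sign rule yx = (−1)^{|x||y|}xy can be exercised on a small Grassmann algebra. The encoding below (dictionaries keyed by tuples of strictly increasing generator indices) is one illustrative representation, not a standard library API:

```python
def wedge(a, b):
    """Product in a Grassmann (exterior) algebra.  Elements are dicts
    mapping tuples of strictly increasing generator indices to coefficients."""
    out = {}
    for I, x in a.items():
        for J, y in b.items():
            if set(I) & set(J):
                continue                      # repeated generator: term vanishes
            idx, sign = list(I + J), 1
            # bubble sort the indices; each swap flips the sign
            for i in range(len(idx)):
                for j in range(len(idx) - 1 - i):
                    if idx[j] > idx[j + 1]:
                        idx[j], idx[j + 1] = idx[j + 1], idx[j]
                        sign = -sign
            key = tuple(idx)
            out[key] = out.get(key, 0) + sign * x * y
    return {k: v for k, v in out.items() if v != 0}

e1, e2, e3 = {(1,): 1}, {(2,): 1}, {(3,): 1}
print(wedge(e1, e2))   # {(1, 2): 1}
print(wedge(e2, e1))   # {(1, 2): -1} -- odd elements anticommute
print(wedge(e1, e1))   # {}           -- odd squares vanish
e12 = wedge(e1, e2)
print(wedge(e12, e3) == wedge(e3, e12))  # True -- even elements commute
```

The three printed facts are exactly the supercommutativity sign rule: odd·odd picks up a minus sign, odd squares are nilpotent, and the even element e1∧e2 commutes with everything.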
In mathematics, a superellipsoid (or super-ellipsoid) is a solid whose horizontal sections are superellipses (Lamé curves) with the same squareness parameter ε2, and whose vertical sections through the center are superellipses with the squareness parameter ε1. It is a generalization of an ellipsoid, which is the special case ε1 = ε2 = 1. Superellipsoids as computer graphics primitives were popularized by Alan H. Barr (who used the name "superquadrics" to refer to both superellipsoids and supertoroids). In the modern computer vision and robotics literature, "superquadrics" and "superellipsoids" are used interchangeably, since superellipsoids are the most representative and widely utilized shapes among the superquadrics. Superellipsoids have a rich shape vocabulary, including cuboids, cylinders, ellipsoids, octahedra, and their intermediates, and have become an important geometric primitive widely used in computer vision, robotics, and physical simulation.
https://en.wikipedia.org/wiki/Superellipsoid
The main advantages of describing objects and environments with superellipsoids are their conciseness and expressiveness in shape. Furthermore, a closed-form expression of the Minkowski sum between two superellipsoids is available. This makes them a desirable geometric primitive for robot grasping, collision detection, and motion planning. Useful tools and algorithms for superquadric visualization, sampling, and recovery have been open-sourced.
https://en.wikipedia.org/wiki/Superellipsoid
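One common form of the superellipsoid's inside-outside function uses the two squareness parameters from the definition above; exponent conventions vary across the literature, so treat this as an illustrative sketch rather than the canonical formula:

```python
def inside_outside(x, y, z, A=1.0, B=1.0, C=1.0, eps1=1.0, eps2=1.0):
    """Inside-outside function for a superellipsoid with semi-axes A, B, C,
    horizontal squareness eps2, and vertical squareness eps1 (one common
    convention).  F < 1 inside, F = 1 on the surface, F > 1 outside."""
    fxy = abs(x / A) ** (2 / eps2) + abs(y / B) ** (2 / eps2)
    return fxy ** (eps2 / eps1) + abs(z / C) ** (2 / eps1)

# With eps1 = eps2 = 1 this reduces to the ordinary ellipsoid equation:
print(inside_outside(1, 0, 0))        # 1.0: on the unit sphere
print(inside_outside(0.5, 0.5, 0.5))  # 0.75: inside
print(inside_outside(2, 0, 0))        # 4.0: outside
```

Small exponents (eps1, eps2 → 0) flatten the level set toward a cuboid; values near 2 give an octahedron-like shape, which is the "shape vocabulary" mentioned above.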
In mathematics, a superelliptic curve is an algebraic curve defined by an equation of the form y^m = f(x), where m ≥ 2 is an integer and f is a polynomial of degree d ≥ 3 with coefficients in a field k; more precisely, it is the smooth projective curve whose function field is defined by this equation. The case m = 2 and d = 3 is an elliptic curve, the case m = 2 and d ≥ 5 is a hyperelliptic curve, and the case m = 3 and d ≥ 4 is an example of a trigonal curve. Some authors impose additional restrictions, for example that the integer m should not be divisible by the characteristic of k, that the polynomial f should be squarefree, that the integers m and d should be coprime, or some combination of these. The Diophantine problem of finding integer points on a superelliptic curve can be solved by a method similar to one used for the resolution of hyperelliptic equations: a Siegel identity is used to reduce to a Thue equation.
https://en.wikipedia.org/wiki/Superelliptic_curve
In mathematics, a superintegrable Hamiltonian system is a Hamiltonian system on a 2n-dimensional symplectic manifold Z for which the following conditions hold: (i) There exist k > n independent integrals of motion F_i. Their level surfaces (invariant submanifolds) form a fibered manifold F : Z → N = F(Z) over a connected open subset N ⊂ ℝᵏ. (ii) There exist smooth real functions s_ij on N such that the Poisson bracket of integrals of motion reads {F_i, F_j} = s_ij ∘ F.
https://en.wikipedia.org/wiki/Superintegrable_Hamiltonian_system
(iii) The matrix function s_ij is of constant corank m = 2n − k on N. If k = n, this is the case of a completely integrable Hamiltonian system. The Mishchenko–Fomenko theorem for superintegrable Hamiltonian systems generalizes the Liouville–Arnold theorem on action-angle coordinates of completely integrable Hamiltonian systems as follows.
https://en.wikipedia.org/wiki/Superintegrable_Hamiltonian_system
Let the invariant submanifolds of a superintegrable Hamiltonian system be connected, compact, and mutually diffeomorphic. Then the fibered manifold F is a fiber bundle in tori Tᵐ. There exists an open neighbourhood U of F which is a trivial fiber bundle provided with the bundle (generalized action-angle) coordinates (I_A, p_i, q^i, φ^A), A = 1, …, m, i = 1, …, n − m, such that (φ^A) are coordinates on Tᵐ.
https://en.wikipedia.org/wiki/Superintegrable_Hamiltonian_system
These coordinates are Darboux coordinates on the symplectic manifold U. A Hamiltonian of a superintegrable system depends only on the action variables I_A, which are the Casimir functions of the coinduced Poisson structure on F(U). The Liouville–Arnold theorem for completely integrable systems and the Mishchenko–Fomenko theorem for superintegrable ones have been generalized to the case of non-compact invariant submanifolds. These are diffeomorphic to a toroidal cylinder T^{m−r} × ℝʳ.
https://en.wikipedia.org/wiki/Superintegrable_Hamiltonian_system
In mathematics, a superior highly composite number is a natural number which, in a particular rigorous sense, has many divisors. More precisely, it is defined by a ratio between the number of divisors of an integer and that integer raised to some positive power. For any possible exponent, whichever integer has the highest ratio is a superior highly composite number. It is a stronger restriction than that of a highly composite number, which is defined as having more divisors than any smaller positive integer.
https://en.wikipedia.org/wiki/Superior_highly_composite_number
The first 10 superior highly composite numbers and their factorizations are listed. For a superior highly composite number n there exists a positive real number ε such that for all natural numbers k smaller than n we have d(n)/n^ε ≥ d(k)/k^ε and for all natural numbers k larger than n we have d(n)/n^ε > d(k)/k^ε, where d(n), the divisor function, denotes the number of divisors of n. The term was coined by Ramanujan (1915). For example, the number with the most divisors per square root of itself is 12; this can be demonstrated using some highly composite numbers near 12. 120 is another superior highly composite number because it has the highest ratio of divisors to itself raised to the 0.4 power. The first 15 superior highly composite numbers, 2, 6, 12, 60, 120, 360, 2520, 5040, 55440, 720720, 1441440, 4324320, 21621600, 367567200, 6983776800 (sequence A002201 in the OEIS), are also the first 15 colossally abundant numbers, which meet a similar condition based on the sum-of-divisors function rather than the number of divisors. Neither set, however, is a subset of the other.
https://en.wikipedia.org/wiki/Superior_highly_composite_number
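The criterion (maximize d(n)/n^ε for a fixed exponent ε) can be tested by brute force; this sketch reproduces the two examples in the text, ε = 0.5 (divisors per square root) and ε = 0.4:

```python
def d(n):
    """Number of divisors of n."""
    return sum(1 for k in range(1, n + 1) if n % k == 0)

def best_for_exponent(eps, limit):
    """The n <= limit maximizing the ratio d(n) / n**eps."""
    return max(range(1, limit + 1), key=lambda n: d(n) / n ** eps)

print(best_for_exponent(0.5, 1000))  # 12: most divisors per square root
print(best_for_exponent(0.4, 1000))  # 120
```

Smaller exponents penalize size less, so the maximizer grows as ε shrinks, which is why each superior highly composite number is the winner for its own range of exponents.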
In mathematics, a supermodule is a Z2-graded module over a superring or superalgebra. Supermodules arise in super linear algebra which is a mathematical framework for studying the concept supersymmetry in theoretical physics. Supermodules over a commutative superalgebra can be viewed as generalizations of super vector spaces over a (purely even) field K. Supermodules often play a more prominent role in super linear algebra than do super vector spaces.
https://en.wikipedia.org/wiki/Supermodule
The reason is that it is often necessary or useful to extend the field of scalars to include odd variables. In doing so one moves from fields to commutative superalgebras and from vector spaces to modules. In this article, all superalgebras are assumed to be associative and unital unless stated otherwise.
https://en.wikipedia.org/wiki/Supermodule
In mathematics, a superparticular ratio, also called a superparticular number or epimoric ratio, is the ratio of two consecutive integers. More particularly, the ratio takes the form (n + 1)/n = 1 + 1/n, where n is a positive integer. Thus: A superparticular number is when a great number contains a lesser number, to which it is compared, and at the same time one part of it. For example, when 3 and 2 are compared, they contain 2, plus the 3 has another 1, which is half of 2.
https://en.wikipedia.org/wiki/Superparticular_number
When 3 and 4 are compared, they each contain a 3, and the 4 has another 1, which is a third part of 3. Again, when 5, and 4 are compared, they contain the number 4, and the 5 has another 1, which is the fourth part of the number 4, etc. Superparticular ratios were written about by Nicomachus in his treatise Introduction to Arithmetic. Although these numbers have applications in modern pure mathematics, the areas of study that most frequently refer to the superparticular ratios by this name are music theory and the history of mathematics.
https://en.wikipedia.org/wiki/Superparticular_number
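The formula (n + 1)/n = 1 + 1/n is easy to check with exact rational arithmetic. The first few ratios are exactly the just-intonation intervals familiar from music theory: the octave 2/1, perfect fifth 3/2, perfect fourth 4/3, and major third 5/4:

```python
from fractions import Fraction

def superparticular(n):
    """The superparticular ratio (n + 1)/n = 1 + 1/n."""
    return Fraction(n + 1, n)

for n in range(1, 6):
    print(superparticular(n))  # 2, 3/2, 4/3, 5/4, 6/5
```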
In mathematics, a superpartient ratio, also called superpartient number or epimeric ratio, is a rational number that is greater than one and is not superparticular. The term has fallen out of use in modern pure mathematics, but continues to be used in music theory and in the historical study of mathematics. Superpartient ratios were written about by Nicomachus in his treatise Introduction to Arithmetic.
https://en.wikipedia.org/wiki/Superpartient_ratio
In mathematics, a superperfect number is a positive integer n that satisfies σ²(n) = σ(σ(n)) = 2n, where σ is the sum-of-divisors function. Superperfect numbers are not a generalization of perfect numbers, but have a common generalization. The term was coined by D. Suryanarayana (1969). The first few superperfect numbers are 2, 4, 16, 64, 4096, 65536, 262144, 1073741824, ... (sequence A019279 in the OEIS). To illustrate: 16 is a superperfect number, as σ(16) = 1 + 2 + 4 + 8 + 16 = 31 and σ(31) = 1 + 31 = 32, thus σ(σ(16)) = 32 = 2 × 16. If n is an even superperfect number, then n must be a power of 2, 2^k, such that 2^(k+1) − 1 is a Mersenne prime. It is not known whether there are any odd superperfect numbers. An odd superperfect number n would have to be a square number such that either n or σ(n) is divisible by at least three distinct primes. There are no odd superperfect numbers below 7×10²⁴.
https://en.wikipedia.org/wiki/Superperfect_number
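The worked example σ(σ(16)) = 32 generalizes to a brute-force search; with a divisor-pair sigma this sketch recovers the terms of A019279 below 5000:

```python
def sigma(n):
    """Sum of divisors of n, found in pairs (d, n // d) up to sqrt(n)."""
    total, d = 0, 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

def is_superperfect(n):
    """n is superperfect iff sigma(sigma(n)) == 2n."""
    return sigma(sigma(n)) == 2 * n

print(sigma(16), sigma(31))                               # 31 32, as in the text
print([n for n in range(1, 5000) if is_superperfect(n)])  # [2, 4, 16, 64, 4096]
```

Each even term found is a power of 2 with 2^(k+1) − 1 prime (3, 7, 31, 127, 8191), matching the Mersenne-prime characterization above.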
In mathematics, a supersingular variety is (usually) a smooth projective variety in nonzero characteristic such that for all n the slopes of the Newton polygon of the nth crystalline cohomology are all n/2 (de Jong 2014). For special classes of varieties such as elliptic curves it is common to use various ad hoc definitions of "supersingular", which are (usually) equivalent to the one given above. The term "singular elliptic curve" (or "singular j-invariant") was at one time used to refer to complex elliptic curves whose ring of endomorphisms has rank 2, the maximum possible. Helmut Hasse discovered that, in finite characteristic, elliptic curves can have larger rings of endomorphisms, of rank 4, and these were called "supersingular elliptic curves".
https://en.wikipedia.org/wiki/Supersingular_Abelian_variety
Supersingular elliptic curves can also be characterized by the slopes of their crystalline cohomology, and the term "supersingular" was later extended to other varieties whose cohomology has similar properties. The terms "supersingular" or "singular" do not mean that the variety has singularities. Examples include: Supersingular elliptic curve.
https://en.wikipedia.org/wiki/Supersingular_Abelian_variety
Elliptic curves in non-zero characteristic with an unusually large ring of endomorphisms of rank 4. Supersingular abelian variety: sometimes defined to be an abelian variety isogenous to a product of supersingular elliptic curves, and sometimes defined to be an abelian variety of dimension g whose endomorphism ring has rank (2g)². Supersingular K3 surface.
https://en.wikipedia.org/wiki/Supersingular_Abelian_variety
Certain K3 surfaces in non-zero characteristic. Supersingular Enriques surface. Certain Enriques surfaces in characteristic 2. A surface is called Shioda supersingular if the rank of its Néron–Severi group is equal to its second Betti number. A surface is called Artin supersingular if its formal Brauer group has infinite height.
https://en.wikipedia.org/wiki/Supersingular_Abelian_variety
In mathematics, a supersolvable arrangement is a hyperplane arrangement which has a maximal flag with only modular elements. Equivalently, the intersection semilattice of the arrangement is a supersolvable lattice, in the sense of Richard P. Stanley. As shown by Hiroaki Terao, a complex hyperplane arrangement is supersolvable if and only if its complement is fiber-type. Examples include arrangements associated with Coxeter groups of type A and B. It is known that the Orlik–Solomon algebra of a supersolvable arrangement is a Koszul algebra; whether the converse is true is an open problem.
https://en.wikipedia.org/wiki/Supersolvable_arrangement
In mathematics, a surface bundle is a bundle in which the fiber is a surface. When the base space is a circle the total space is three-dimensional and is often called a surface bundle over the circle.
https://en.wikipedia.org/wiki/Surface_bundle
In mathematics, a surface bundle over the circle is a fiber bundle with base space a circle, and with fiber space a surface. Therefore the total space has dimension 2 + 1 = 3. In general, fiber bundles over the circle are a special case of mapping tori.
https://en.wikipedia.org/wiki/Surface_bundle_over_the_circle
Here is the construction: take the Cartesian product of a surface with the unit interval. Glue the two copies of the surface, on the boundary, by some homeomorphism. This homeomorphism is called the monodromy of the surface bundle.
https://en.wikipedia.org/wiki/Surface_bundle_over_the_circle
It is possible to show that the homeomorphism type of the bundle obtained depends only on the conjugacy class, in the mapping class group, of the gluing homeomorphism chosen. This construction is an important source of examples both in the field of low-dimensional topology as well as in geometric group theory. In the former we find that the geometry of the three-manifold is determined by the dynamics of the homeomorphism.
https://en.wikipedia.org/wiki/Surface_bundle_over_the_circle
This is the fibered part of William Thurston's geometrization theorem for Haken manifolds, whose proof requires the Nielsen–Thurston classification for surface homeomorphisms as well as deep results in the theory of Kleinian groups. In geometric group theory the fundamental groups of such bundles give an important class of HNN-extensions: that is, extensions of the fundamental group of the fiber (a surface) by the integers. A simple special case of this construction (considered in Henri Poincaré's foundational paper) is that of a torus bundle.
https://en.wikipedia.org/wiki/Surface_bundle_over_the_circle
In mathematics, a surface is a geometrical shape that resembles a deformed plane. The most familiar examples arise as boundaries of solid objects in ordinary three-dimensional Euclidean space R3, such as spheres. The exact definition of a surface may depend on the context. Typically, in algebraic geometry, a surface may cross itself (and may have other singularities), while, in topology and differential geometry, it may not.
https://en.wikipedia.org/wiki/Topological_surface
A surface is a two-dimensional space; this means that a moving point on a surface may move in two directions (it has two degrees of freedom). In other words, around almost every point, there is a coordinate patch on which a two-dimensional coordinate system is defined. For example, the surface of the Earth resembles (ideally) a two-dimensional sphere, and latitude and longitude provide two-dimensional coordinates on it (except at the poles and along the 180th meridian). The concept of surface is widely used in physics, engineering, computer graphics, and many other disciplines, primarily in representing the surfaces of physical objects. For example, in analyzing the aerodynamic properties of an airplane, the central consideration is the flow of air along its surface.
https://en.wikipedia.org/wiki/Topological_surface
In mathematics, a surface is a mathematical model of the common concept of a surface. It is a generalization of a plane, but, unlike a plane, it may be curved; this is analogous to a curve generalizing a straight line. There are several more precise definitions, depending on the context and the mathematical tools that are used for the study. The simplest mathematical surfaces are planes and spheres in the Euclidean 3-space.
https://en.wikipedia.org/wiki/Surface_(geometry)
The exact definition of a surface may depend on the context. Typically, in algebraic geometry, a surface may cross itself (and may have other singularities), while, in topology and differential geometry, it may not. A surface is a topological space of dimension two; this means that a moving point on a surface may move in two directions (it has two degrees of freedom). In other words, around almost every point, there is a coordinate patch on which a two-dimensional coordinate system is defined. For example, the surface of the Earth resembles (ideally) a two-dimensional sphere, and latitude and longitude provide two-dimensional coordinates on it (except at the poles and along the 180th meridian).
https://en.wikipedia.org/wiki/Surface_(geometry)
In mathematics, a surjective function (also known as a surjection, or onto function) is a function f such that, for every element y of its codomain, there is at least one element x of its domain with f(x) = y. In other words, every element of the function's codomain is the image of at least one element of its domain. It is not required that x be unique; the function f may map one or more elements of X to the same element of Y. The term surjective and the related terms injective and bijective were introduced by Nicolas Bourbaki, a group of mainly French 20th-century mathematicians who, under this pseudonym, wrote a series of books presenting an exposition of modern advanced mathematics, beginning in 1935. The French word sur means over or above, and relates to the fact that the image of the domain of a surjective function completely covers the function's codomain. Any function induces a surjection by restricting its codomain to the image of its domain.
https://en.wikipedia.org/wiki/Surjective_map
Every surjective function has a right inverse assuming the axiom of choice, and every function with a right inverse is necessarily a surjection. The composition of surjective functions is always surjective. Any function can be decomposed into a surjection and an injection.
https://en.wikipedia.org/wiki/Surjective_map
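For finite sets, the definition above can be checked directly by comparing the image of the domain with the codomain. A minimal sketch (the functions and sets are illustrative, not from the text):

```python
def is_surjective(f, domain, codomain):
    """Return True if every element of the codomain is the image
    of at least one element of the domain."""
    return {f(x) for x in domain} == set(codomain)

# x -> x mod 3 maps {0,...,8} onto {0, 1, 2}: surjective.
print(is_surjective(lambda x: x % 3, range(9), {0, 1, 2}))   # True
# x -> 2x on {0, 1, 2} has image {0, 2, 4}, missing 1 and 3: not surjective.
print(is_surjective(lambda x: 2 * x, range(3), range(5)))    # False
```

Restricting the codomain to the computed image `{f(x) for x in domain}` always yields a surjection, matching the last sentence of the passage.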
In mathematics, a surjunctive group is a group such that every injective cellular automaton with the group elements as its cells is also surjective. Surjunctive groups were introduced by Gottschalk (1973). It is unknown whether every group is surjunctive.
https://en.wikipedia.org/wiki/Surjunctive_group
In mathematics, a symbolic language is a language that uses characters or symbols to represent concepts, such as mathematical operations, expressions, and statements, and the entities or operands on which the operations are performed.
https://en.wikipedia.org/wiki/Symbolic_language_(mathematics)
In mathematics, a symmetric Boolean function is a Boolean function whose value does not depend on the order of its input bits, i.e., it depends only on the number of ones (or zeros) in the input. For this reason they are also known as Boolean counting functions. There are 2^(n+1) symmetric n-ary Boolean functions. Instead of the truth table, traditionally used to represent Boolean functions, one may use a more compact representation for an n-variable symmetric Boolean function: the (n + 1)-vector, whose i-th entry (i = 0, ..., n) is the value of the function on an input vector with i ones. Mathematically, the symmetric Boolean functions correspond one-to-one with the functions that map n + 1 elements to two elements, f : { 0 , 1 , … , n } → { 0 , 1 } {\displaystyle f:\{0,1,\ldots ,n\}\rightarrow \{0,1\}} .
https://en.wikipedia.org/wiki/Symmetric_Boolean_function
Symmetric Boolean functions are used to classify Boolean satisfiability problems.
https://en.wikipedia.org/wiki/Symmetric_Boolean_function
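The (n + 1)-vector representation can be sketched by tabulating the function over all 2^n inputs and grouping by the number of ones (the helper below is a hypothetical illustration and assumes the function really is symmetric):

```python
from itertools import product

def value_vector(f, n):
    """Compact representation of a symmetric n-ary Boolean function:
    entry i is the value on any input with exactly i ones."""
    vec = [None] * (n + 1)
    for bits in product((0, 1), repeat=n):
        i = sum(bits)            # number of ones in the input
        v = f(bits)
        if vec[i] is None:
            vec[i] = v
        elif vec[i] != v:        # two inputs of equal weight disagree
            raise ValueError("function is not symmetric")
    return vec

# Majority on 3 bits: value 1 iff at least two inputs are 1.
maj3 = lambda bits: int(sum(bits) >= 2)
print(value_vector(maj3, 3))  # [0, 0, 1, 1]
```

The vector `[0, 0, 1, 1]` is exactly the i-th-entry encoding described above: majority is 0 on inputs with 0 or 1 ones, and 1 on inputs with 2 or 3 ones.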
In mathematics, a symmetric bilinear form on a vector space is a bilinear map from two copies of the vector space to the field of scalars such that the order of the two vectors does not affect the value of the map. In other words, it is a bilinear function B {\displaystyle B} that maps every pair ( u , v ) {\displaystyle (u,v)} of elements of the vector space V {\displaystyle V} to the underlying field such that B ( u , v ) = B ( v , u ) {\displaystyle B(u,v)=B(v,u)} for every u {\displaystyle u} and v {\displaystyle v} in V {\displaystyle V} . They are also referred to more briefly as just symmetric forms when "bilinear" is understood.
https://en.wikipedia.org/wiki/Symmetric_bilinear_form
Symmetric bilinear forms on finite-dimensional vector spaces precisely correspond to symmetric matrices given a basis for V. Among bilinear forms, the symmetric ones are important because they are the ones for which the vector space admits a particularly simple kind of basis known as an orthogonal basis (at least when the characteristic of the field is not 2). Given a symmetric bilinear form B, the function q(x) = B(x, x) is the associated quadratic form on the vector space. Moreover, if the characteristic of the field is not 2, B is the unique symmetric bilinear form associated with q.
https://en.wikipedia.org/wiki/Symmetric_bilinear_form
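When the characteristic is not 2, the symmetric bilinear form can be recovered from its quadratic form by polarization: B(u, v) = (q(u + v) − q(u) − q(v))/2. A small check on R², where the specific form B is a hypothetical choice:

```python
def B(u, v):
    """A sample symmetric bilinear form on R^2 (illustrative choice):
    B(u, v) = u1*v1 + 2*u2*v2."""
    return u[0] * v[0] + 2 * u[1] * v[1]

def q(x):
    """Associated quadratic form q(x) = B(x, x)."""
    return B(x, x)

def B_from_q(u, v):
    """Polarization identity: recover B from q (characteristic not 2)."""
    s = (u[0] + v[0], u[1] + v[1])
    return (q(s) - q(u) - q(v)) / 2

u, v = (1, 2), (3, -1)
print(B(u, v), B_from_q(u, v))  # -1 -1.0: polarization recovers B
```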
In mathematics, a symmetric matrix M {\displaystyle M} with real entries is positive-definite if the real number z T M z {\displaystyle z^{\textsf {T}}Mz} is positive for every nonzero real column vector z , {\displaystyle z,} where z T {\displaystyle z^{\textsf {T}}} is the transpose of z {\displaystyle z} . More generally, a Hermitian matrix (that is, a complex matrix equal to its conjugate transpose) is positive-definite if the real number z ∗ M z {\displaystyle z^{*}Mz} is positive for every nonzero complex column vector z , {\displaystyle z,} where z ∗ {\displaystyle z^{*}} denotes the conjugate transpose of z . {\displaystyle z.} Positive semi-definite matrices are defined similarly, except that the scalars z T M z {\displaystyle z^{\textsf {T}}Mz} and z ∗ M z {\displaystyle z^{*}Mz} are required to be positive or zero (that is, nonnegative).
https://en.wikipedia.org/wiki/Positive_semidefinite_matrices
Negative-definite and negative semi-definite matrices are defined analogously. A matrix that is not positive semi-definite and not negative semi-definite is sometimes called indefinite. A matrix is thus positive-definite if and only if it is the matrix of a positive-definite quadratic form or Hermitian form.
https://en.wikipedia.org/wiki/Positive_semidefinite_matrices
In other words, a matrix is positive-definite if and only if it defines an inner product. Positive-definite and positive-semidefinite matrices can be characterized in many ways, which may explain the importance of the concept in various parts of mathematics.
https://en.wikipedia.org/wiki/Positive_semidefinite_matrices
A matrix M is positive-definite if and only if it satisfies any of the following equivalent conditions. M is congruent with a diagonal matrix with positive real entries. M is symmetric or Hermitian, and all its eigenvalues are real and positive.
https://en.wikipedia.org/wiki/Positive_semidefinite_matrices
M is symmetric or Hermitian, and all its leading principal minors are positive. There exists an invertible matrix B {\displaystyle B} with conjugate transpose B ∗ {\displaystyle B^{*}} such that M = B ∗ B .
https://en.wikipedia.org/wiki/Positive_semidefinite_matrices
{\displaystyle M=B^{*}B.} A matrix is positive semi-definite if it satisfies similar equivalent conditions where "positive" is replaced by "nonnegative", "invertible matrix" is replaced by "matrix", and the word "leading" is removed. Positive-definite and positive-semidefinite real matrices underlie convex optimization: for a twice-differentiable function of several real variables, if its Hessian matrix (the matrix of its second partial derivatives) is positive-definite at a point p, then the function is convex near p; conversely, if the function is convex near p, then its Hessian matrix is positive-semidefinite at p. Some authors use more general definitions of definiteness, including some non-symmetric real matrices, or non-Hermitian complex ones.
https://en.wikipedia.org/wiki/Positive_semidefinite_matrices
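The leading-principal-minor condition (Sylvester's criterion) is straightforward to test for small symmetric real matrices. A pure-Python sketch (the matrices are sample data):

```python
def minor_det(M, k):
    """Determinant of the k x k leading principal submatrix,
    by cofactor expansion along the first row (fine for small k)."""
    if k == 1:
        return M[0][0]
    det = 0
    for j in range(k):
        sub = [[M[r][c] for c in range(k) if c != j] for r in range(1, k)]
        det += ((-1) ** j) * M[0][j] * minor_det(sub, k - 1)
    return det

def is_positive_definite(M):
    """Sylvester's criterion: a symmetric real matrix is positive-definite
    iff all leading principal minors are positive."""
    n = len(M)
    return all(minor_det(M, k) > 0 for k in range(1, n + 1))

A = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]   # minors 2, 3, 4: positive-definite
print(is_positive_definite(A))              # True
print(is_positive_definite([[1, 2], [2, 1]]))  # False: second minor is -3
```

In practice one would attempt a Cholesky factorization instead, which succeeds exactly when the matrix is positive-definite; the minor-based check above is only meant to mirror the condition stated in the text.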
In mathematics, a symmetric polynomial is a polynomial P(X1, X2, …, Xn) in n variables, such that if any of the variables are interchanged, one obtains the same polynomial. Formally, P is a symmetric polynomial if for any permutation σ of the subscripts 1, 2, ..., n one has P(Xσ(1), Xσ(2), …, Xσ(n)) = P(X1, X2, …, Xn). Symmetric polynomials arise naturally in the study of the relation between the roots of a polynomial in one variable and its coefficients, since the coefficients can be given by polynomial expressions in the roots, and all roots play a similar role in this setting. From this point of view the elementary symmetric polynomials are the most fundamental symmetric polynomials.
https://en.wikipedia.org/wiki/Monomial_symmetric_polynomial
Indeed, a theorem called the fundamental theorem of symmetric polynomials states that any symmetric polynomial can be expressed in terms of elementary symmetric polynomials. This implies that every symmetric polynomial expression in the roots of a monic polynomial can alternatively be given as a polynomial expression in the coefficients of the polynomial.
https://en.wikipedia.org/wiki/Monomial_symmetric_polynomial
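As a small numerical illustration of the fundamental theorem: the symmetric polynomial x² + y² + z² equals e1² − 2e2 in the elementary symmetric polynomials (a Newton identity), which can be spot-checked on sample points:

```python
def e1(x, y, z):
    """First elementary symmetric polynomial."""
    return x + y + z

def e2(x, y, z):
    """Second elementary symmetric polynomial."""
    return x * y + x * z + y * z

def power_sum_2(x, y, z):
    """The symmetric polynomial x^2 + y^2 + z^2."""
    return x * x + y * y + z * z

# Newton's identity p2 = e1^2 - 2*e2, checked on sample values.
for pt in [(1, 2, 3), (0, -1, 5), (2, 2, 2)]:
    assert power_sum_2(*pt) == e1(*pt) ** 2 - 2 * e2(*pt)
print("p2 = e1^2 - 2 e2 verified")
```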
Symmetric polynomials also form an interesting structure by themselves, independently of any relation to the roots of a polynomial. In this context other collections of specific symmetric polynomials, such as complete homogeneous, power sum, and Schur polynomials play important roles alongside the elementary ones. The resulting structures, and in particular the ring of symmetric functions, are of great importance in combinatorics and in representation theory.
https://en.wikipedia.org/wiki/Monomial_symmetric_polynomial
In mathematics, a symmetric space is a Riemannian manifold (or more generally, a pseudo-Riemannian manifold) whose group of symmetries contains an inversion symmetry about every point. This can be studied with the tools of Riemannian geometry, leading to consequences in the theory of holonomy; or algebraically through Lie theory, which allowed Cartan to give a complete classification. Symmetric spaces commonly occur in differential geometry, representation theory and harmonic analysis. In geometric terms, a complete, simply connected Riemannian manifold is a symmetric space if and only if its curvature tensor is invariant under parallel transport.
https://en.wikipedia.org/wiki/Riemann_symmetric_space
More generally, a Riemannian manifold (M, g) is said to be symmetric if and only if, for each point p of M, there exists an isometry of M fixing p and acting on the tangent space T p M {\displaystyle T_{p}M} as minus the identity (every symmetric space is complete, since any geodesic can be extended indefinitely via symmetries about the endpoints). Both descriptions can also naturally be extended to the setting of pseudo-Riemannian manifolds. From the point of view of Lie theory, a symmetric space is the quotient G/H of a connected Lie group G by a Lie subgroup H which is (a connected component of) the invariant group of an involution of G. This definition includes more than the Riemannian definition, and reduces to it when H is compact.
https://en.wikipedia.org/wiki/Riemann_symmetric_space
Riemannian symmetric spaces arise in a wide variety of situations in both mathematics and physics. Their central role in the theory of holonomy was discovered by Marcel Berger. They are important objects of study in representation theory and harmonic analysis as well as in differential geometry.
https://en.wikipedia.org/wiki/Riemann_symmetric_space
In mathematics, a symmetric tensor is a tensor that is invariant under a permutation of its vector arguments: T ( v 1 , v 2 , … , v r ) = T ( v σ 1 , v σ 2 , … , v σ r ) {\displaystyle T(v_{1},v_{2},\ldots ,v_{r})=T(v_{\sigma 1},v_{\sigma 2},\ldots ,v_{\sigma r})} for every permutation σ of the symbols {1, 2, ..., r}. Alternatively, a symmetric tensor of order r represented in coordinates as a quantity with r indices satisfies T i 1 i 2 ⋯ i r = T i σ 1 i σ 2 ⋯ i σ r . {\displaystyle T_{i_{1}i_{2}\cdots i_{r}}=T_{i_{\sigma 1}i_{\sigma 2}\cdots i_{\sigma r}}.} The space of symmetric tensors of order r on a finite-dimensional vector space V is naturally isomorphic to the dual of the space of homogeneous polynomials of degree r on V. Over fields of characteristic zero, the graded vector space of all symmetric tensors can be naturally identified with the symmetric algebra on V. A related concept is that of the antisymmetric tensor or alternating form. Symmetric tensors occur widely in engineering, physics and mathematics.
https://en.wikipedia.org/wiki/Symmetric_tensor
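Over a field of characteristic zero, the symmetric part of a tensor is obtained by averaging over all permutations of its indices. A sketch, with tensors stored as dicts from index tuples to values (a hypothetical storage choice, not a standard API):

```python
from itertools import permutations, product
from math import factorial

def symmetrize(T, r, dim):
    """Symmetric part of an order-r tensor T on a dim-dimensional space:
    S[i1...ir] = (1/r!) * sum over permutations s of T[i_{s(1)}...i_{s(r)}].
    Missing entries of T are treated as zero."""
    S = {}
    for idx in product(range(dim), repeat=r):
        S[idx] = sum(T.get(tuple(idx[p] for p in perm), 0)
                     for perm in permutations(range(r))) / factorial(r)
    return S

T = {(0, 1): 2.0, (1, 0): 0.0}     # an order-2 tensor on a 2-dim space
S = symmetrize(T, r=2, dim=2)
print(S[(0, 1)], S[(1, 0)])        # 1.0 1.0: the result is symmetric
```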
In mathematics, a symmetric tensor is a tensor that is invariant under a permutation of its vector arguments: T ( v 1 , v 2 , … , v r ) = T ( v σ 1 , v σ 2 , … , v σ r ) {\displaystyle T(v_{1},v_{2},\dots ,v_{r})=T(v_{\sigma 1},v_{\sigma 2},\dots ,v_{\sigma r})} for every permutation σ of the symbols {1,2,...,r}. Alternatively, an rth order symmetric tensor represented in coordinates as a quantity with r indices satisfies T i 1 i 2 … i r = T i σ 1 i σ 2 … i σ r . {\displaystyle T_{i_{1}i_{2}\dots i_{r}}=T_{i_{\sigma 1}i_{\sigma 2}\dots i_{\sigma r}}.} The space of symmetric tensors of rank r on a finite-dimensional vector space V is naturally isomorphic to the dual of the space of homogeneous polynomials of degree r on V. Over fields of characteristic zero, the graded vector space of all symmetric tensors can be naturally identified with the symmetric algebra on V. A related concept is that of the antisymmetric tensor or alternating form. Symmetric tensors occur widely in engineering, physics and mathematics.
https://en.wikipedia.org/wiki/Symmetry_in_mathematics
In mathematics, a symmetrizable compact operator is a compact operator on a Hilbert space that can be composed with a positive operator with trivial kernel to produce a self-adjoint operator. Such operators arose naturally in the work on integral operators of Hilbert, Korn, Lichtenstein and Marty required to solve elliptic boundary value problems on bounded domains in Euclidean space. Between the late 1940s and early 1960s the techniques, previously developed as part of classical potential theory, were abstracted within operator theory by various mathematicians, including M. G. Krein, William T. Reid, Peter Lax and Jean Dieudonné. Fredholm theory already implies that any element of the spectrum is an eigenvalue. The main results assert that the spectral theory of these operators is similar to that of compact self-adjoint operators: any spectral value is real; they form a sequence tending to zero; any generalized eigenvector is an eigenvector; and the eigenvectors span a dense subspace of the Hilbert space.
https://en.wikipedia.org/wiki/Symmetrizable_compact_operator
In mathematics, a symplectic integrator (SI) is a numerical integration scheme for Hamiltonian systems. Symplectic integrators form the subclass of geometric integrators which, by definition, are canonical transformations. They are widely used in nonlinear dynamics, molecular dynamics, discrete element methods, accelerator physics, plasma physics, quantum physics, and celestial mechanics.
https://en.wikipedia.org/wiki/Symplectic_integrator
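A minimal sketch of the idea: the semi-implicit (symplectic) Euler method applied to the harmonic oscillator H(q, p) = (p² + q²)/2 keeps the energy bounded over long runs, which is the hallmark of symplectic integrators (the step size and step count below are arbitrary illustrative choices):

```python
def symplectic_euler(q, p, dt, steps):
    """Semi-implicit (symplectic) Euler for H(q, p) = (p^2 + q^2)/2:
    update p with the old q, then q with the new p."""
    for _ in range(steps):
        p -= dt * q        # dp/dt = -dH/dq = -q
        q += dt * p        # dq/dt =  dH/dp =  p
    return q, p

q, p = symplectic_euler(1.0, 0.0, dt=0.01, steps=10_000)
energy = (p * p + q * q) / 2
print(abs(energy - 0.5) < 0.01)  # True: energy stays near its initial value 0.5
```

An explicit Euler step (updating q and p simultaneously from the old values) would instead inflate the energy by a factor (1 + dt²) per step, drifting without bound; the order-of-update asymmetry above is precisely what makes the map a canonical transformation.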
In mathematics, a symplectic matrix is a 2 n × 2 n {\displaystyle 2n\times 2n} matrix M {\displaystyle M} with real entries that satisfies the condition M T Ω M = Ω , {\displaystyle M^{\text{T}}\Omega M=\Omega ,} where M T {\displaystyle M^{\text{T}}} denotes the transpose of M {\displaystyle M} and Ω {\displaystyle \Omega } is a fixed 2 n × 2 n {\displaystyle 2n\times 2n} nonsingular, skew-symmetric matrix. This definition can be extended to 2 n × 2 n {\displaystyle 2n\times 2n} matrices with entries in other fields, such as the complex numbers, finite fields, p-adic numbers, and function fields. Typically Ω {\displaystyle \Omega } is chosen to be the block matrix Ω = [ 0 I n − I n 0 ] , {\displaystyle \Omega ={\begin{bmatrix}0&I_{n}\\-I_{n}&0\end{bmatrix}},} where I n {\displaystyle I_{n}} is the n × n {\displaystyle n\times n} identity matrix. The matrix Ω {\displaystyle \Omega } has determinant + 1 {\displaystyle +1} and its inverse is Ω − 1 = Ω T = − Ω {\displaystyle \Omega ^{-1}=\Omega ^{\text{T}}=-\Omega } .
https://en.wikipedia.org/wiki/Symplectic_matrix
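The defining condition MᵀΩM = Ω can be checked directly for a small case; for n = 1 any 2 × 2 matrix of determinant 1 is symplectic (the shear below is a sample choice):

```python
def matmul(A, B):
    """Plain-Python matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

Omega = [[0, 1], [-1, 0]]          # the standard block matrix for n = 1
M = [[1, 3], [0, 1]]               # a shear with determinant 1

# Verify the defining condition M^T Omega M = Omega.
print(matmul(matmul(transpose(M), Omega), M) == Omega)  # True
```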
In mathematics, a symplectic vector space is a vector space V over a field F (for example the real numbers R) equipped with a symplectic bilinear form. A symplectic bilinear form is a mapping ω: V × V → F that is: bilinear (linear in each argument separately); alternating (ω(v, v) = 0 holds for all v ∈ V); and non-degenerate (ω(u, v) = 0 for all v ∈ V implies that u = 0). If the underlying field has characteristic not 2, alternation is equivalent to skew-symmetry. If the characteristic is 2, skew-symmetry is implied by, but does not imply, alternation. In this case every symplectic form is a symmetric form, but not vice versa.
https://en.wikipedia.org/wiki/Lagrangian_subspace
Working in a fixed basis, ω can be represented by a matrix. The conditions above are equivalent to this matrix being skew-symmetric, nonsingular, and hollow (all diagonal entries are zero). This should not be confused with a symplectic matrix, which represents a symplectic transformation of the space.
https://en.wikipedia.org/wiki/Lagrangian_subspace
If V is finite-dimensional, then its dimension must necessarily be even since every skew-symmetric, hollow matrix of odd size has determinant zero. Notice that the condition that the matrix be hollow is not redundant if the characteristic of the field is 2. A symplectic form behaves quite differently from a symmetric form, for example, the scalar product on Euclidean vector spaces.
https://en.wikipedia.org/wiki/Lagrangian_subspace
In mathematics, a symplectomorphism or symplectic map is an isomorphism in the category of symplectic manifolds. In classical mechanics, a symplectomorphism represents a transformation of phase space that is volume-preserving and preserves the symplectic structure of phase space, and is called a canonical transformation.
https://en.wikipedia.org/wiki/Hamiltonian_isotopy
In mathematics, a syndetic set is a subset of the natural numbers having the property of "bounded gaps": the sizes of the gaps between consecutive elements of the set are bounded.
https://en.wikipedia.org/wiki/Syndetic_set
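On an initial segment the bounded-gaps property can be checked directly. This is only a finite proxy (true syndeticity concerns all of the natural numbers), and the helper name is illustrative:

```python
def is_syndetic_prefix(S, N, bound):
    """Check the bounded-gaps property on {0, ..., N}: every window of
    `bound` consecutive naturals meets S (a finite proxy for syndeticity)."""
    S = set(S)
    return all(any(n + k in S for k in range(bound)) for n in range(N))

# Multiples of 3 have gap size 3, so a window of length 3 always meets them.
print(is_syndetic_prefix(range(0, 1000, 3), 900, 3))          # True
# The squares have growing gaps, so no fixed bound works.
print(is_syndetic_prefix([k * k for k in range(40)], 900, 3))  # False
```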
In mathematics, a system of bilinear equations is a special sort of system of polynomial equations, where each equation equates a bilinear form with a constant (possibly zero). More precisely, given two sets of variables represented as coordinate vectors x and y, each equation of the system can be written as x T A i y = g i , {\displaystyle x^{\mathsf {T}}A_{i}y=g_{i},} where i is an integer whose value ranges from 1 to the number of equations, each A i {\displaystyle A_{i}} is a matrix, and each g i {\displaystyle g_{i}} is a real number. Systems of bilinear equations arise in many subjects including engineering, biology, and statistics.
https://en.wikipedia.org/wiki/System_of_bilinear_equations
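Each equation of such a system evaluates a bilinear form xᵀA y at the pair of coordinate vectors. A sketch of evaluating one equation (the matrix and vectors are sample data, not from the text):

```python
def bilinear(x, A, y):
    """Evaluate the bilinear form x^T A y."""
    return sum(x[i] * A[i][j] * y[j]
               for i in range(len(x)) for j in range(len(y)))

# One equation x^T A1 y = g1 of a bilinear system.
A1 = [[1, 0], [0, 2]]
x, y = [1, 2], [3, 4]
print(bilinear(x, A1, y))  # 1*1*3 + 2*2*4 = 19
```

Note the equation is linear in x for fixed y and linear in y for fixed x, but not jointly linear in (x, y): that is what makes such systems genuinely polynomial.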
In mathematics, a system of differential equations is a finite set of differential equations. Such a system can be either linear or non-linear. Also, such a system can be either a system of ordinary differential equations or a system of partial differential equations.
https://en.wikipedia.org/wiki/System_of_differential_equations
In mathematics, a system of equations is considered overdetermined if there are more equations than unknowns. An overdetermined system is almost always inconsistent (it has no solution) when constructed with random coefficients. However, an overdetermined system will have solutions in some cases, for example if some equation occurs several times in the system, or if some equations are linear combinations of the others. The terminology can be described in terms of the concept of constraint counting.
https://en.wikipedia.org/wiki/Over-determined_system
Each unknown can be seen as an available degree of freedom. Each equation introduced into the system can be viewed as a constraint that restricts one degree of freedom. Therefore, the critical case occurs when the number of equations and the number of free variables are equal.
https://en.wikipedia.org/wiki/Over-determined_system
For every variable giving a degree of freedom, there exists a corresponding constraint. The overdetermined case occurs when the system has been overconstrained — that is, when the equations outnumber the unknowns. In contrast, the underdetermined case occurs when the system has been underconstrained — that is, when the number of equations is fewer than the number of unknowns. Such systems usually have an infinite number of solutions.
https://en.wikipedia.org/wiki/Over-determined_system
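A sketch of the overdetermined case: three equations a·xᵢ + b = yᵢ in two unknowns (a, b), solved by least squares via the normal equations. The data points are a hypothetical, exactly consistent choice, illustrating the remark that an overdetermined system can still have a solution:

```python
# Three equations a*x_i + b = y_i in two unknowns (a, b): overdetermined.
xs = [0.0, 1.0, 2.0]
ys = [1.0, 3.0, 5.0]           # chosen to lie exactly on the line y = 2x + 1

# Normal equations for the 2-unknown least-squares problem, in pure Python.
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
det = n * sxx - sx * sx
a = (n * sxy - sx * sy) / det
b = (sxx * sy - sx * sxy) / det
print(a, b)  # 2.0 1.0: here the system happens to be consistent
```

With generic (noisy) right-hand sides the same formulas return the best-fit line even though no exact solution exists, which is the usual resolution of an overconstrained system.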
In mathematics, a system of linear equations (or linear system) is a collection of one or more linear equations involving the same variables. For example, { 3 x + 2 y − z = 1 2 x − 2 y + 4 z = − 2 − x + 1 2 y − z = 0 {\displaystyle {\begin{cases}3x+2y-z=1\\2x-2y+4z=-2\\-x+{\frac {1}{2}}y-z=0\end{cases}}} is a system of three equations in the three variables x, y, z. A solution to a linear system is an assignment of values to the variables such that all the equations are simultaneously satisfied. A solution to the system above is given by the ordered triple ( x , y , z ) = ( 1 , − 2 , − 2 ) , {\displaystyle (x,y,z)=(1,-2,-2),} since it makes all three equations valid. The word "system" indicates that the equations should be considered collectively, rather than individually.
https://en.wikipedia.org/wiki/Homogeneous_equation
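The stated solution of the example system can be verified by direct substitution into all three equations:

```python
# Substitute (x, y, z) = (1, -2, -2) into the three equations of the example.
x, y, z = 1, -2, -2
eqs = [
    (3 * x + 2 * y - z, 1),       # 3x + 2y - z  = 1
    (2 * x - 2 * y + 4 * z, -2),  # 2x - 2y + 4z = -2
    (-x + 0.5 * y - z, 0),        # -x + y/2 - z = 0
]
print(all(abs(lhs - rhs) < 1e-12 for lhs, rhs in eqs))  # True
```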
In mathematics, the theory of linear systems is the basis and a fundamental part of linear algebra, a subject used in most modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and play a prominent role in engineering, physics, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system.
https://en.wikipedia.org/wiki/Homogeneous_equation