Often, the term linear equation refers implicitly to the case of a single variable, in which the variable is sensibly called the unknown. In the case of two variables, each solution may be interpreted as the Cartesian coordinates of a point of the Euclidean plane. The solutions of a linear equation form a line in the Euclidean plane, and, conversely, every line can be viewed as the set of all solutions of a linear equation in two variables.
|
https://en.wikipedia.org/wiki/First_degree_equation
|
This is the origin of the term linear for describing this type of equation. More generally, the solutions of a linear equation in n variables form a hyperplane (a subspace of dimension n − 1) in the Euclidean space of dimension n. Linear equations occur frequently in all mathematics and their applications in physics and engineering, partly because non-linear systems are often well approximated by linear equations. This article considers the case of a single equation with coefficients from the field of real numbers, for which one studies the real solutions. All of its content applies to complex solutions and, more generally, to linear equations with coefficients and solutions in any field. For the case of several simultaneous linear equations, see system of linear equations.
|
https://en.wikipedia.org/wiki/First_degree_equation
|
In mathematics, a linear form (also known as a linear functional, a one-form, or a covector) is a linear map from a vector space to its field of scalars (often, the real numbers or the complex numbers). If V is a vector space over a field k, the set of all linear functionals from V to k is itself a vector space over k with addition and scalar multiplication defined pointwise. This space is called the dual space of V, or sometimes the algebraic dual space, when a topological dual space is also considered.
|
https://en.wikipedia.org/wiki/Dual_vector
|
It is often denoted Hom(V, k), or, when the field k is understood, V∗; other notations are also used, such as V′, V# or V∨. When vectors are represented by column vectors (as is common when a basis is fixed), then linear functionals are represented as row vectors, and their values on specific vectors are given by matrix products (with the row vector on the left).
|
https://en.wikipedia.org/wiki/Dual_vector
|
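The row-vector picture above can be sketched in a few lines of Python. This is a minimal illustration, not from the source: the functional phi and the vectors v, w are made-up values, and the point is simply that evaluation is a matrix product (row on the left) and is linear.

```python
# A linear functional on R^3 represented as a row vector, evaluated on a
# column vector via the matrix product (row vector on the left).
# phi, v, w are illustrative values, not taken from the text.

def apply_functional(row, col):
    """Evaluate a linear functional (row vector) on a vector (column vector)."""
    return sum(r * c for r, c in zip(row, col))

phi = [2.0, -1.0, 3.0]   # the row vector representing phi in the dual basis
v = [1.0, 4.0, 2.0]      # a vector in V
w = [0.0, 1.0, -1.0]     # another vector in V

# Linearity: phi(a*v + b*w) = a*phi(v) + b*phi(w)
a, b = 3.0, -2.0
lhs = apply_functional(phi, [a * x + b * y for x, y in zip(v, w)])
rhs = a * apply_functional(phi, v) + b * apply_functional(phi, w)
assert abs(lhs - rhs) < 1e-12
```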
In mathematics, a linear fractional transformation is, roughly speaking, an invertible transformation of the form z ↦ (az + b)/(cz + d). The precise definition depends on the nature of a, b, c, d, and z. In other words, a linear fractional transformation is a transformation that is represented by a fraction whose numerator and denominator are linear. In the most basic setting, a, b, c, d, and z are complex numbers (in which case the transformation is also called a Möbius transformation), or more generally elements of a field.
|
https://en.wikipedia.org/wiki/Linear_fractional_transformations
|
The invertibility condition is then ad − bc ≠ 0. Over a field, a linear fractional transformation is the restriction to the field of a projective transformation or homography of the projective line. When a, b, c, d are integers (or, more generally, belong to an integral domain), z is supposed to be a rational number (or to belong to the field of fractions of the integral domain).
|
https://en.wikipedia.org/wiki/Linear_fractional_transformations
|
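The connection to projective transformations can be sketched as follows: a linear fractional transformation corresponds to the 2×2 coefficient matrix [[a, b], [c, d]], and composing two transformations corresponds to multiplying their matrices. A minimal sketch with made-up coefficients, using exact rational arithmetic:

```python
from fractions import Fraction

def lft(a, b, c, d):
    """Return the linear fractional transformation z -> (a z + b)/(c z + d),
    enforcing the invertibility condition ad - bc != 0."""
    if a * d - b * c == 0:
        raise ValueError("ad - bc must be nonzero")
    return lambda z: (a * z + b) / (c * z + d)

def compose_coeffs(m1, m2):
    """Composing two LFTs corresponds to multiplying their 2x2 coefficient
    matrices: coeffs of f o g are the product M_f M_g."""
    (a1, b1, c1, d1), (a2, b2, c2, d2) = m1, m2
    return (a1 * a2 + b1 * c2, a1 * b2 + b1 * d2,
            c1 * a2 + d1 * c2, c1 * b2 + d1 * d2)

f = lft(1, 2, 3, 4)                       # illustrative coefficients
g = lft(2, 0, 1, 1)
h = lft(*compose_coeffs((1, 2, 3, 4), (2, 0, 1, 1)))

z = Fraction(5, 7)
assert f(g(z)) == h(z)                    # composition = matrix product
```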
In this case, the invertibility condition is that ad − bc must be a unit of the domain (that is, 1 or −1 in the case of integers). In the most general setting, a, b, c, d, and z are elements of a ring, such as square matrices. An example of such a linear fractional transformation is the Cayley transform, which was originally defined on the 3 × 3 real matrix ring. Linear fractional transformations are widely used in various areas of mathematics and its applications to engineering, such as classical geometry, number theory (they are used, for example, in Wiles's proof of Fermat's Last Theorem), group theory, and control theory.
|
https://en.wikipedia.org/wiki/Linear_fractional_transformations
|
In mathematics, a linear map (or linear function) f(x) is one which satisfies both of the following properties: Additivity or superposition principle: f(x + y) = f(x) + f(y); Homogeneity: f(αx) = αf(x). Additivity implies homogeneity for any rational α, and, for continuous functions, for any real α. For a complex α, homogeneity does not follow from additivity. For example, an antilinear map is additive but not homogeneous.
|
https://en.wikipedia.org/wiki/Nonlinear_science
|
The conditions of additivity and homogeneity are often combined in the superposition principle f(αx + βy) = αf(x) + βf(y). An equation written as f(x) = C is called linear if f(x) is a linear map (as defined above) and nonlinear otherwise. The equation is called homogeneous if C = 0 and f(x) is a homogeneous function. The definition f(x) = C is very general in that x can be any sensible mathematical object (number, vector, function, etc.), and the function f(x) can literally be any mapping, including integration or differentiation with associated constraints (such as boundary values). If f(x) contains differentiation with respect to x, the result will be a differential equation.
|
https://en.wikipedia.org/wiki/Nonlinear_science
|
In mathematics, a linear map or linear function f(x) is a function that satisfies the two properties: Additivity: f(x + y) = f(x) + f(y). Homogeneity of degree 1: f(αx) = αf(x) for all α. These properties are known as the superposition principle. In this definition, x is not necessarily a real number, but can in general be an element of any vector space. A more special definition of linear function, not coinciding with the definition of linear map, is used in elementary mathematics.
|
https://en.wikipedia.org/wiki/Linearity
|
Additivity alone implies homogeneity for rational α, since f(x + x) = f(x) + f(x) implies f(nx) = nf(x) for any natural number n by mathematical induction, and then nf(x) = f(nx) = f(m(n/m)x) = mf((n/m)x) implies f((n/m)x) = (n/m)f(x). The density of the rational numbers in the reals implies that any additive continuous function is homogeneous for any real number α, and is therefore linear. The concept of linearity can be extended to linear operators. Important examples of linear operators include the derivative considered as a differential operator, and other operators constructed from it, such as del and the Laplacian. When a differential equation can be expressed in linear form, it can generally be solved by breaking the equation up into smaller pieces, solving each of those pieces, and summing the solutions.
|
https://en.wikipedia.org/wiki/Linearity
|
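The solve-by-pieces idea can be sketched with a discrete analogue of the derivative, the forward difference operator. This is a minimal illustration (the sequences and coefficients are made-up values): superposition holds for the operator, and a right-hand side split as f = f1 + f2 can be solved piecewise and summed.

```python
def diff(seq):
    """Forward difference operator, a discrete analogue of the derivative."""
    return [b - a for a, b in zip(seq, seq[1:])]

def cumsum(seq):
    """A right inverse of diff with u[0] = 0: solves diff(u) = seq."""
    u, total = [0], 0
    for s in seq:
        total += s
        u.append(total)
    return u

x = [1, 4, 9, 16, 25]     # illustrative sequences
y = [2, 3, 5, 7, 11]
alpha, beta = 3, -2

# Superposition: diff(alpha*x + beta*y) == alpha*diff(x) + beta*diff(y)
lhs = diff([alpha * a + beta * b for a, b in zip(x, y)])
rhs = [alpha * a + beta * b for a, b in zip(diff(x), diff(y))]
assert lhs == rhs

# Solving a linear equation piecewise: split f = f1 + f2, solve each piece,
# and sum the solutions.
f1, f2 = [1, 1, 1], [0, 2, 4]
f = [a + b for a, b in zip(f1, f2)]
u = [a + b for a, b in zip(cumsum(f1), cumsum(f2))]
assert diff(u) == f
```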
In mathematics, a linear operator T on a vector space is semisimple if every T-invariant subspace has a complementary T-invariant subspace; in other words, the vector space is a semisimple representation of the operator T. Equivalently, a linear operator is semisimple if its minimal polynomial is a product of distinct irreducible polynomials. A linear operator on a finite-dimensional vector space over an algebraically closed field is semisimple if and only if it is diagonalizable. Over a perfect field, the Jordan–Chevalley decomposition expresses an endomorphism x : V → V as the sum of a semisimple endomorphism s and a nilpotent endomorphism n such that both s and n are polynomials in x.
|
https://en.wikipedia.org/wiki/Semi-simple_operator
|
In mathematics, a linear operator f : V → V is called locally finite if the space V is the union of a family of finite-dimensional f-invariant subspaces. In other words, there exists a family {V_i | i ∈ I} of linear subspaces of V such that we have the following: ⋃_{i ∈ I} V_i = V; f(V_i) ⊆ V_i for each i ∈ I; each V_i is finite-dimensional. An equivalent condition only requires V to be spanned by finite-dimensional f-invariant subspaces. If V is also a Hilbert space, sometimes an operator is called locally finite when the sum of the V_i is only dense in V.
|
https://en.wikipedia.org/wiki/Locally_finite_operator
|
In mathematics, a linearised polynomial (or q-polynomial) is a polynomial for which the exponents of all the constituent monomials are powers of q and the coefficients come from some extension field of the finite field of order q. We write a typical example as L(x) = Σ_i a_i x^(q^i), where each a_i is in F_(q^m) (= GF(q^m)) for some fixed positive integer m. This special class of polynomials is important from both a theoretical and an applications viewpoint. The highly structured nature of their roots makes these roots easy to determine.
|
https://en.wikipedia.org/wiki/Linearized_polynomial
|
In mathematics, a linked field is a field for which the quadratic forms attached to quaternion algebras have a common property.
|
https://en.wikipedia.org/wiki/Linked_field
|
In mathematics, a local language is a formal language for which membership of a word in the language can be determined by looking at the first and last symbol and each two-symbol substring of the word. Equivalently, it is a language recognised by a local automaton, a particular kind of deterministic finite automaton. Formally, a language L over an alphabet A is defined to be local if there are subsets R and S of A and a subset F of A×A such that a word w is in L if and only if the first letter of w is in R, the last letter of w is in S, and no factor of length 2 in w is in F. This corresponds to the regular expression (RA* ∩ A*S) \ A*FA*. More generally, a k-testable language L is one for which membership of a word w in L depends only on the prefix, suffix and the set of factors of w of length k; a language is locally testable if it is k-testable for some k. A local language is 2-testable.
|
https://en.wikipedia.org/wiki/Local_language_(formal_language)
|
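The membership test from the formal definition can be sketched directly. The particular alphabet and sets R, S, F below are made-up illustrative choices, not from the text:

```python
def in_local_language(word, R, S, F):
    """Membership test for a local language over alphabet A, given
    R, S (subsets of A) and F (forbidden factors, a subset of A x A):
    w is in L iff w starts in R, ends in S, and no length-2 factor is in F."""
    if not word:
        return False
    return (word[0] in R and word[-1] in S
            and all((a, b) not in F for a, b in zip(word, word[1:])))

# Illustrative example: words over {a, b} that start with 'a', end with 'b',
# and never contain the factor "bb".
R, S, F = {"a"}, {"b"}, {("b", "b")}
assert in_local_language("ab", R, S, F)
assert in_local_language("aab", R, S, F)
assert not in_local_language("abb", R, S, F)   # contains forbidden factor bb
assert not in_local_language("ba", R, S, F)    # wrong first and last letter
```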
In mathematics, a local martingale is a type of stochastic process, satisfying the localized version of the martingale property. Every martingale is a local martingale; every bounded local martingale is a martingale; in particular, every local martingale that is bounded from below is a supermartingale, and every local martingale that is bounded from above is a submartingale; however, in general a local martingale is not a martingale, because its expectation can be distorted by large values of small probability. In particular, a driftless diffusion process is a local martingale, but not necessarily a martingale. Local martingales are essential in stochastic analysis (see Itō calculus, semimartingale, and Girsanov theorem).
|
https://en.wikipedia.org/wiki/Local_martingale
|
In mathematics, a local system (or a system of local coefficients) on a topological space X is a tool from algebraic topology which interpolates between cohomology with coefficients in a fixed abelian group A, and general sheaf cohomology in which coefficients vary from point to point. Local coefficient systems were introduced by Norman Steenrod in 1943. The category of perverse sheaves on a manifold is equivalent to the category of local systems on the manifold.
|
https://en.wikipedia.org/wiki/Local_coefficients
|
In mathematics, a locally catenative sequence is a sequence of words in which each word can be constructed as the concatenation of previous words in the sequence. Formally, an infinite sequence of words w(n) is locally catenative if, for some positive integers k and i1, ..., ik: w(n) = w(n − i1) w(n − i2) … w(n − ik) for n ≥ max{i1, ..., ik}. Some authors use a slightly different definition in which encodings of previous words are allowed in the concatenation.
|
https://en.wikipedia.org/wiki/Locally_catenative_sequence
|
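A standard example of a locally catenative sequence is the Fibonacci word, with w(n) = w(n − 1) w(n − 2); the starting words below follow a common convention (w(0) = "a", w(1) = "ab"):

```python
def fibonacci_words(n):
    """First n terms of the Fibonacci word sequence, which is locally
    catenative with w(n) = w(n-1) w(n-2), starting from w(0) = 'a',
    w(1) = 'ab' (a common convention)."""
    words = ["a", "ab"]
    while len(words) < n:
        words.append(words[-1] + words[-2])
    return words[:n]

w = fibonacci_words(6)
assert w[2] == "ab" + "a"                           # w(2) = w(1) w(0)
assert all(w[k] == w[k-1] + w[k-2] for k in range(2, 6))
assert [len(s) for s in w] == [1, 2, 3, 5, 8, 13]   # Fibonacci lengths
```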
In mathematics, a locally compact group is a topological group G for which the underlying topology is locally compact and Hausdorff. Locally compact groups are important because many examples of groups that arise throughout mathematics are locally compact and such groups have a natural measure called the Haar measure. This allows one to define integrals of Borel measurable functions on G so that standard analysis notions such as the Fourier transform and L^p spaces can be generalized. Many of the results of finite group representation theory are proved by averaging over the group.
|
https://en.wikipedia.org/wiki/Locally_compact_group
|
For compact groups, modifications of these proofs yield similar results by averaging with respect to the normalized Haar integral. In the general locally compact setting, such techniques need not hold. The resulting theory is a central part of harmonic analysis. The representation theory for locally compact abelian groups is described by Pontryagin duality.
|
https://en.wikipedia.org/wiki/Locally_compact_group
|
In mathematics, a locally compact topological group G has property (T) if the trivial representation is an isolated point in its unitary dual equipped with the Fell topology. Informally, this means that if G acts unitarily on a Hilbert space and has "almost invariant vectors", then it has a nonzero invariant vector. The formal definition, introduced by David Kazhdan (1967), gives this a precise, quantitative meaning. Although originally defined in terms of irreducible representations, property (T) can often be checked even when there is little or no explicit knowledge of the unitary dual. Property (T) has important applications to group representation theory, lattices in algebraic groups over local fields, ergodic theory, geometric group theory, expanders, operator algebras and the theory of networks.
|
https://en.wikipedia.org/wiki/Kazhdan's_property_(T)
|
In mathematics, a locally constant function is a function from a topological space into a set with the property that around every point of its domain, there exists some neighborhood of that point on which it restricts to a constant function.
|
https://en.wikipedia.org/wiki/Locally_constant_function
|
In mathematics, a locally cyclic group is a group (G, *) in which every finitely generated subgroup is cyclic.
|
https://en.wikipedia.org/wiki/Locally_cyclic_group
|
In mathematics, a locally finite measure is a measure for which every point of the measure space has a neighbourhood of finite measure.
|
https://en.wikipedia.org/wiki/Locally_finite_measure
|
In mathematics, a locally finite poset is a partially ordered set P such that for all x, y ∈ P, the interval [x, y] consists of finitely many elements. Given a locally finite poset P we can define its incidence algebra. Elements of the incidence algebra are functions ƒ that assign to each interval [x, y] of P a real number ƒ(x, y). These functions form an associative algebra with a product defined by (f ∗ g)(x, y) := Σ_{x ≤ z ≤ y} f(x, z) g(z, y).
|
https://en.wikipedia.org/wiki/Locally_finite_partially_ordered_set
|
There is also a definition of incidence coalgebra. In theoretical physics a locally finite poset is also called a causal set and has been used as a model for spacetime.
|
https://en.wikipedia.org/wiki/Locally_finite_partially_ordered_set
|
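The incidence-algebra product can be sketched on a small concrete poset. As an illustration (the choice of poset is mine, not from the text), take the divisors of 12 ordered by divisibility, define the convolution from the formula above, and check the classical fact that the Möbius function is the convolution inverse of the zeta function:

```python
from itertools import product

# The divisibility poset on the divisors of 12; [x, y] is an interval
# exactly when x divides y.
elems = [1, 2, 3, 4, 6, 12]
leq = lambda x, y: y % x == 0           # the partial order: divisibility

def conv(f, g):
    """(f * g)(x, y) = sum over x <= z <= y of f(x, z) g(z, y)."""
    def h(x, y):
        return sum(f(x, z) * g(z, y) for z in elems if leq(x, z) and leq(z, y))
    return h

zeta = lambda x, y: 1 if leq(x, y) else 0     # zeta([x, y]) = 1
delta = lambda x, y: 1 if x == y else 0       # identity of the algebra

def mu(x, y):
    """Moebius function, defined so that mu * zeta = delta."""
    if x == y:
        return 1
    if not leq(x, y):
        return 0
    return -sum(mu(x, z) for z in elems if leq(x, z) and leq(z, y) and z != y)

mz = conv(mu, zeta)
assert all(mz(x, y) == delta(x, y) for x, y in product(elems, elems) if leq(x, y))
```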
In mathematics, a locally integrable function (sometimes also called locally summable function) is a function which is integrable (so its integral is finite) on every compact subset of its domain of definition. The importance of such functions lies in the fact that their function space is similar to Lp spaces, but its members are not required to satisfy any growth restriction on their behavior at the boundary of their domain (at infinity if the domain is unbounded): in other words, locally integrable functions can grow arbitrarily fast at the domain boundary, but are still manageable in a way similar to ordinary integrable functions.
|
https://en.wikipedia.org/wiki/Locally_integrable_function
|
In mathematics, a locally profinite group is a Hausdorff topological group in which every neighborhood of the identity element contains a compact open subgroup. Equivalently, a locally profinite group is a topological group that is Hausdorff, locally compact, and totally disconnected. Moreover, a locally profinite group is compact if and only if it is profinite; this explains the terminology. Basic examples of locally profinite groups are discrete groups and the p-adic Lie groups. Non-examples are real Lie groups, which have the no small subgroup property. In a locally profinite group, a closed subgroup is locally profinite, and every compact subgroup is contained in an open compact subgroup.
|
https://en.wikipedia.org/wiki/Locally_profinite_group
|
In mathematics, a locally simply connected space is a topological space that admits a basis of simply connected sets. Every locally simply connected space is also locally path-connected and locally connected. The circle is an example of a locally simply connected space which is not simply connected. The Hawaiian earring is a space which is neither locally simply connected nor simply connected.
|
https://en.wikipedia.org/wiki/Locally_simply_connected_space
|
The cone on the Hawaiian earring is contractible and therefore simply connected, but still not locally simply connected. All topological manifolds and CW complexes are locally simply connected. In fact, these satisfy the much stronger property of being locally contractible.
|
https://en.wikipedia.org/wiki/Locally_simply_connected_space
|
A strictly weaker condition is that of being semi-locally simply connected. Both locally simply connected spaces and simply connected spaces are semi-locally simply connected, but neither converse holds.
|
https://en.wikipedia.org/wiki/Locally_simply_connected_space
|
In mathematics, a logarithm of a matrix is another matrix such that the matrix exponential of the latter matrix equals the original matrix. It is thus a generalization of the scalar logarithm and in some sense an inverse function of the matrix exponential. Not all matrices have a logarithm, and those that do may have more than one. The study of logarithms of matrices leads to Lie theory, since when a matrix has a logarithm then it is an element of a Lie group and the logarithm is the corresponding element of the vector space of the Lie algebra.
|
https://en.wikipedia.org/wiki/Logarithm_of_a_matrix
|
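A minimal sketch of both points above, using only the standard library: a rotation matrix by θ has [[0, −θ], [θ, 0]] as one logarithm (replacing θ by θ + 2π gives another, illustrating non-uniqueness), and this can be checked against a truncated Taylor series for the matrix exponential:

```python
import math

def mat_mul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(A, terms=30):
    """exp(A) = sum of A^k / k!, truncated; adequate for small entries."""
    result = [[1.0, 0.0], [0.0, 1.0]]   # identity
    power = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for k in range(1, terms):
        power = mat_mul(power, A)
        fact *= k
        result = [[result[i][j] + power[i][j] / fact for j in range(2)]
                  for i in range(2)]
    return result

theta = 0.7
R = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta), math.cos(theta)]]
L = [[0.0, -theta], [theta, 0.0]]       # one logarithm of R
E = mat_exp(L)
assert all(abs(E[i][j] - R[i][j]) < 1e-9 for i in range(2) for j in range(2))
```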
In mathematics, a logical matrix may be described as d-disjunct and/or d-separable. These concepts play a pivotal role in the mathematical area of non-adaptive group testing. In the mathematical literature, d-disjunct matrices may also be called super-imposed codes or d-cover-free families. According to Chen and Hwang (2006), a matrix is said to be d-separable if no two sets of d columns have the same boolean sum, and d̄-separable (d with an overline) if no two sets of d-or-fewer columns have the same boolean sum.
|
https://en.wikipedia.org/wiki/Disjunct_matrix
|
A matrix is said to be d-disjunct if no set of d columns has a boolean sum which is a superset of any other single column. The following relationships are "well-known": Every (d+1)-overline-separable matrix is also d-disjunct. Every d-disjunct matrix is also d̄-separable. Every d̄-separable matrix is also d-separable (by definition).
|
https://en.wikipedia.org/wiki/Disjunct_matrix
|
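The two definitions translate directly into code, with boolean sums taken entrywise. The 0/1 matrix used here (the identity, columns as tuples) is an illustrative choice, not from the text:

```python
from itertools import combinations

def bool_sum(cols):
    """Entrywise boolean sum (OR) of a collection of 0/1 columns."""
    return tuple(max(bits) for bits in zip(*cols))

def is_d_separable(columns, d):
    """No two sets of exactly d columns share the same boolean sum."""
    sums = [bool_sum(s) for s in combinations(columns, d)]
    return len(sums) == len(set(sums))

def is_d_disjunct(columns, d):
    """No boolean sum of d columns covers (is a superset of) any other column."""
    for s in combinations(range(len(columns)), d):
        total = bool_sum([columns[i] for i in s])
        for j in range(len(columns)):
            if j not in s and all(t >= c for t, c in zip(total, columns[j])):
                return False
    return True

# The 3x3 identity matrix is 1-disjunct, hence also 1-separable.
identity_cols = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
assert is_d_disjunct(identity_cols, 1)
assert is_d_separable(identity_cols, 1)
```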
In mathematics, a loop group is a group of loops in a topological group G with multiplication defined pointwise.
|
https://en.wikipedia.org/wiki/Loop_group
|
In mathematics, a loop in a topological space X is a continuous function f from the unit interval I = [0, 1] to X such that f(0) = f(1). In other words, it is a path whose initial point is equal to its terminal point. A loop may also be seen as a continuous map f from the pointed unit circle S1 into X, because S1 may be regarded as a quotient of I under the identification of 0 with 1. The set of all loops in X forms a space called the loop space of X.
|
https://en.wikipedia.org/wiki/Loop_(topology)
|
In mathematics, a low-discrepancy sequence is a sequence with the property that for all values of N, its subsequence x1, ..., xN has a low discrepancy. Roughly speaking, the discrepancy of a sequence is low if the proportion of points in the sequence falling into an arbitrary set B is close to proportional to the measure of B, as would happen on average (but not for particular samples) in the case of an equidistributed sequence. Specific definitions of discrepancy differ regarding the choice of B (hyperspheres, hypercubes, etc.) and how the discrepancy for every B is computed (usually normalized) and combined (usually by taking the worst value). Low-discrepancy sequences are also called quasirandom sequences, due to their common use as a replacement of uniformly distributed random numbers. The "quasi" modifier is used to denote more clearly that the values of a low-discrepancy sequence are neither random nor pseudorandom, but such sequences share some properties of random variables and in certain applications such as the quasi-Monte Carlo method their lower discrepancy is an important advantage.
|
https://en.wikipedia.org/wiki/Quasi-random_sequence
|
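The classic one-dimensional example is the van der Corput sequence, obtained by reflecting the base-b digits of n about the radix point. A minimal sketch (base 2 by default):

```python
def van_der_corput(n, base=2):
    """n-th term of the base-b van der Corput sequence, a classic
    one-dimensional low-discrepancy sequence: reverse the base-b digits
    of n and place them after the radix point."""
    result, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        result += digit / denom
    return result

points = [van_der_corput(i) for i in range(1, 9)]
# First base-2 values: 1/2, 1/4, 3/4, 1/8, 5/8, 3/8, 7/8, 1/16
assert points[:3] == [0.5, 0.25, 0.75]

# Low discrepancy in action: the points fill [0, 1) evenly rather than
# clumping, so half of the first 8 points land in [0, 1/2).
in_first_half = sum(1 for p in points if p < 0.5)
assert in_first_half == 4
```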
In mathematics, a magic cube is the 3-dimensional equivalent of a magic square, that is, a collection of integers arranged in an n × n × n pattern such that the sums of the numbers on each row, on each column, on each pillar and on each of the four main space diagonals are equal, the so-called magic constant of the cube, denoted M3(n). It can be shown that if a magic cube consists of the numbers 1, 2, ..., n³, then it has magic constant (sequence A027441 in the OEIS) M3(n) = n(n³ + 1)/2. If, in addition, the numbers on every cross section diagonal also sum up to the cube's magic number, the cube is called a perfect magic cube; otherwise, it is called a semiperfect magic cube. The number n is called the order of the magic cube. If the sums of numbers on a magic cube's broken space diagonals also equal the cube's magic number, the cube is called a pandiagonal magic cube.
|
https://en.wikipedia.org/wiki/Magic_cube
|
In mathematics, a magic hypercube is the k-dimensional generalization of magic squares and magic cubes, that is, an n × n × n × ... × n array of integers such that the sums of the numbers on each pillar (along any axis) as well as on the main space diagonals are all the same. The common sum is called the magic constant of the hypercube, and is sometimes denoted Mk(n). If a magic hypercube consists of the numbers 1, 2, ..., n^k, then it has magic number Mk(n) = n(n^k + 1)/2. For k = 4, a magic hypercube may be called a magic tesseract, with sequence of magic numbers given by OEIS: A021003. The side-length n of the magic hypercube is called its order.
|
https://en.wikipedia.org/wiki/Nasik_magic_hypercube
|
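The magic-constant formula Mk(n) = n(n^k + 1)/2 follows from counting: the n^k entries sum to n^k(n^k + 1)/2, and each axis partitions them into n^(k−1) lines of equal sum. A quick sketch:

```python
def magic_constant(k, n):
    """M_k(n) = n (n^k + 1) / 2 for a k-dimensional magic hypercube of
    order n filled with the numbers 1, 2, ..., n^k."""
    return n * (n ** k + 1) // 2

# k = 2 recovers the magic-square constant, k = 3 the magic-cube constant.
assert magic_constant(2, 3) == 15      # the classic 3x3 magic square sums to 15
assert magic_constant(3, 3) == 42      # a 3x3x3 magic cube sums to 42

# Counting check: total of all entries, split over n^(k-1) parallel lines
# of n cells each, gives the same constant.
k, n = 4, 3
total = sum(range(1, n ** k + 1))
assert total // n ** (k - 1) == magic_constant(k, n)
```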
Four-, five-, six-, seven- and eight-dimensional magic hypercubes of order three have been constructed by J. R. Hendricks. Marian Trenkler proved the following theorem: A p-dimensional magic hypercube of order n exists if and only if p > 1 and n is different from 2, or p = 1. A construction of a magic hypercube follows from the proof. The R programming language includes a package, magic, that will create magic hypercubes of any dimension with n a multiple of 4.
|
https://en.wikipedia.org/wiki/Nasik_magic_hypercube
|
In mathematics, a manifold is a topological space that locally resembles Euclidean space near each point. More precisely, an n-dimensional manifold, or n-manifold for short, is a topological space with the property that each point has a neighborhood that is homeomorphic to an open subset of n-dimensional Euclidean space. One-dimensional manifolds include lines and circles, but not lemniscates. Two-dimensional manifolds are also called surfaces.
|
https://en.wikipedia.org/wiki/Manifold_theory
|
Examples include the plane, the sphere, and the torus, and also the Klein bottle and real projective plane. The concept of a manifold is central to many parts of geometry and modern mathematical physics because it allows complicated structures to be described in terms of well-understood topological properties of simpler spaces. Manifolds naturally arise as solution sets of systems of equations and as graphs of functions.
|
https://en.wikipedia.org/wiki/Manifold_theory
|
The concept has applications in computer graphics given the need to associate pictures with coordinates (e.g. CT scans). Manifolds can be equipped with additional structure. One important class of manifolds are differentiable manifolds; their differentiable structure allows calculus to be done.
|
https://en.wikipedia.org/wiki/Manifold_theory
|
A Riemannian metric on a manifold allows distances and angles to be measured. Symplectic manifolds serve as the phase spaces in the Hamiltonian formalism of classical mechanics, while four-dimensional Lorentzian manifolds model spacetime in general relativity. The study of manifolds requires working knowledge of calculus and topology.
|
https://en.wikipedia.org/wiki/Manifold_theory
|
In mathematics, a map or mapping is a function in its general sense. These terms may have originated from the process of making a geographical map: mapping the Earth's surface to a sheet of paper. The term map may be used to distinguish some special types of functions, such as homomorphisms. For example, a linear map is a homomorphism of vector spaces, while the term linear function may have this meaning or it may mean a linear polynomial.
|
https://en.wikipedia.org/wiki/Map_(mathematics)
|
In category theory, a map may refer to a morphism. The term transformation can be used interchangeably, but transformation often refers to a function from a set to itself. There are also a few less common uses in logic and graph theory.
|
https://en.wikipedia.org/wiki/Map_(mathematics)
|
In mathematics, a mathematical object is said to satisfy a property locally, if the property is satisfied on some limited, immediate portions of the object (e.g., on some sufficiently small or arbitrarily small neighborhoods of points).
|
https://en.wikipedia.org/wiki/P-local_subgroup
|
In mathematics, a matrix (plural matrices) is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns, which is used to represent a mathematical object or a property of such an object. For example, a matrix with two rows and three columns is often referred to as a "two by three matrix", a "2 × 3 matrix", or a matrix of dimension 2 × 3. Without further specifications, matrices represent linear maps, and allow explicit computations in linear algebra.
|
https://en.wikipedia.org/wiki/Principal_submatrix
|
Therefore, the study of matrices is a large part of linear algebra, and most properties and operations of abstract linear algebra can be expressed in terms of matrices. For example, matrix multiplication represents the composition of linear maps. Not all matrices are related to linear algebra.
|
https://en.wikipedia.org/wiki/Principal_submatrix
|
This is the case, in particular, for the incidence matrices and adjacency matrices of graph theory. This article focuses on matrices related to linear algebra, and, unless otherwise specified, all matrices represent linear maps or may be viewed as such. Square matrices, matrices with the same number of rows and columns, play a major role in matrix theory.
|
https://en.wikipedia.org/wiki/Principal_submatrix
|
Square matrices of a given dimension form one of the most common examples of a noncommutative ring. The determinant of a square matrix is a number associated to the matrix, which is fundamental for the study of a square matrix; for example, a square matrix is invertible if and only if it has a nonzero determinant, and the eigenvalues of a square matrix are the roots of its characteristic polynomial, which is itself defined by a determinant. In geometry, matrices are widely used for specifying and representing geometric transformations (for example rotations) and coordinate changes.
|
https://en.wikipedia.org/wiki/Principal_submatrix
|
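Both determinant facts can be sketched concretely for 2×2 matrices, where the characteristic polynomial det(A − tI) = t² − trace(A)t + det(A) can be solved by the quadratic formula. The matrices below are illustrative values:

```python
import math

def det2(A):
    """Determinant of a 2x2 matrix."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def eigenvalues2(A):
    """Eigenvalues of a 2x2 matrix as roots of the characteristic
    polynomial t^2 - trace(A) t + det(A); assumes real eigenvalues."""
    tr, d = A[0][0] + A[1][1], det2(A)
    disc = tr * tr - 4 * d
    root = math.sqrt(disc)
    return sorted([(tr - root) / 2, (tr + root) / 2])

A = [[2.0, 1.0], [1.0, 2.0]]
assert det2(A) != 0                     # nonzero determinant: A is invertible
assert eigenvalues2(A) == [1.0, 3.0]    # roots of t^2 - 4t + 3

B = [[1.0, 2.0], [2.0, 4.0]]
assert det2(B) == 0                     # zero determinant: B is singular
```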
In numerical analysis, many computational problems are solved by reducing them to a matrix computation, and this often involves computing with matrices of huge dimension. Matrices are used in most areas of mathematics and most scientific fields, either directly, or through their use in geometry and numerical analysis. Matrix theory is the branch of mathematics that focuses on the study of matrices. It was initially a sub-branch of linear algebra, but soon grew to include subjects related to graph theory, algebra, combinatorics and statistics.
|
https://en.wikipedia.org/wiki/Principal_submatrix
|
In mathematics, a matrix coefficient (or matrix element) is a function on a group of a special form, which depends on a linear representation of the group and additional data. Precisely, it is a function on a compact topological group G obtained by composing a representation of G on a vector space V with a linear map from the endomorphisms of V into V's underlying field. It is also called a representative function.
|
https://en.wikipedia.org/wiki/Matrix_coefficient
|
They arise naturally from finite-dimensional representations of G as the matrix-entry functions of the corresponding matrix representations. The Peter–Weyl theorem says that the matrix coefficients on G are dense in the Hilbert space of square-integrable functions on G. Matrix coefficients of representations of Lie groups turned out to be intimately related with the theory of special functions, providing a unifying approach to large parts of this theory. Growth properties of matrix coefficients play a key role in the classification of irreducible representations of locally compact groups, in particular, reductive real and p-adic groups. The formalism of matrix coefficients leads to a generalization of the notion of a modular form. In a different direction, mixing properties of certain dynamical systems are controlled by the properties of suitable matrix coefficients.
|
https://en.wikipedia.org/wiki/Matrix_coefficient
|
In mathematics, a matrix factorization of a polynomial is a technique for factoring irreducible polynomials with matrices. David Eisenbud proved that every multivariate real-valued polynomial p without linear terms can be written as AB = pI, where A and B are square matrices and I is the identity matrix. Given the polynomial p, the matrices A and B can be found by elementary methods. Example: the polynomial x² + y² is irreducible over R, but can be written as [x y; −y x][x −y; y x] = (x² + y²)I.
|
https://en.wikipedia.org/wiki/Matrix_factorization_of_a_polynomial
|
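The factorization above can be checked numerically. The sketch below assumes the standard matrices A = [[x, y], [−y, x]] and B = [[x, −y], [y, x]] for p = x² + y²; the helper names are illustrative.

```python
def matmul2(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def check_factorization(x, y):
    """Verify A @ B == (x^2 + y^2) * I for p = x^2 + y^2."""
    A = [[x, y], [-y, x]]
    B = [[x, -y], [y, x]]
    p = x * x + y * y
    return matmul2(A, B) == [[p, 0], [0, p]]

# The identity AB = pI holds for every choice of x and y.
assert all(check_factorization(x, y) for x in range(-3, 4) for y in range(-3, 4))
```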
In mathematics, a matrix group is a group G consisting of invertible matrices over a specified field K, with the operation of matrix multiplication. A linear group is a group that is isomorphic to a matrix group (that is, admitting a faithful, finite-dimensional representation over K). Any finite group is linear, because it can be realized by permutation matrices using Cayley's theorem. Among infinite groups, linear groups form an interesting and tractable class. Examples of groups that are not linear include groups which are "too big" (for example, the group of permutations of an infinite set), or which exhibit some pathological behavior (for example, finitely generated infinite torsion groups).
|
https://en.wikipedia.org/wiki/Matrix_group
|
In mathematics, a matrix is a rectangular array of numbers or other data. In physics, a matrix model is a particular kind of physical theory whose mathematical formulation involves the notion of a matrix in an important way. A matrix model describes the behavior of a set of matrices within the framework of quantum mechanics. One important example of a matrix model is the BFSS matrix model proposed by Tom Banks, Willy Fischler, Stephen Shenker, and Leonard Susskind in 1997. This theory describes the behavior of a set of nine large matrices.
|
https://en.wikipedia.org/wiki/M-theory
|
In their original paper, these authors showed, among other things, that the low energy limit of this matrix model is described by eleven-dimensional supergravity. These calculations led them to propose that the BFSS matrix model is exactly equivalent to M-theory. The BFSS matrix model can therefore be used as a prototype for a correct formulation of M-theory and a tool for investigating the properties of M-theory in a relatively simple setting.
|
https://en.wikipedia.org/wiki/M-theory
|
In mathematics, a matrix is a rectangular array of numbers or other data. In physics, a matrix model is a particular kind of physical theory whose mathematical formulation involves the notion of a matrix in an important way. A matrix model describes the behavior of a set of matrices within the framework of quantum mechanics. One important example of a matrix model is the BFSS matrix model proposed by Tom Banks, Willy Fischler, Stephen Shenker, and Leonard Susskind in 1997. This theory describes the behavior of a set of nine large matrices.
|
https://en.wikipedia.org/wiki/Gauge–gravity_duality
|
In their original paper, these authors showed, among other things, that the low energy limit of this matrix model is described by eleven-dimensional supergravity. These calculations led them to propose that the BFSS matrix model is exactly equivalent to M-theory. The BFSS matrix model can therefore be used as a prototype for a correct formulation of M-theory and a tool for investigating the properties of M-theory in a relatively simple setting. The development of the matrix model formulation of M-theory has led physicists to consider various connections between string theory and a branch of mathematics called noncommutative geometry.
|
https://en.wikipedia.org/wiki/Gauge–gravity_duality
|
This subject is a generalization of ordinary geometry in which mathematicians define new geometric notions using tools from noncommutative algebra. In a paper from 1998, Alain Connes, Michael R. Douglas, and Albert Schwarz showed that some aspects of matrix models and M-theory are described by a noncommutative quantum field theory, a special kind of physical theory in which spacetime is described mathematically using noncommutative geometry. This established a link between matrix models and M-theory on the one hand, and noncommutative geometry on the other hand. It quickly led to the discovery of other important links between noncommutative geometry and various physical theories.
|
https://en.wikipedia.org/wiki/Gauge–gravity_duality
|
In mathematics, a matrix is conformable if its dimensions are suitable for defining some operation (e.g. addition, multiplication, etc.).
|
https://en.wikipedia.org/wiki/Conformable_matrix
|
In mathematics, a matrix norm is a vector norm in a vector space whose elements (vectors) are matrices (of given dimensions).
|
https://en.wikipedia.org/wiki/Spectral_norm
|
In mathematics, a matrix of ones or all-ones matrix is a matrix where every entry is equal to one. Examples of standard notation are given below: J 2 = ( 1 1 1 1 ) ; J 3 = ( 1 1 1 1 1 1 1 1 1 ) ; J 2 , 5 = ( 1 1 1 1 1 1 1 1 1 1 ) ; J 1 , 2 = ( 1 1 ) . {\displaystyle J_{2}={\begin{pmatrix}1&1\\1&1\end{pmatrix}};\quad J_{3}={\begin{pmatrix}1&1&1\\1&1&1\\1&1&1\end{pmatrix}};\quad J_{2,5}={\begin{pmatrix}1&1&1&1&1\\1&1&1&1&1\end{pmatrix}};\quad J_{1,2}={\begin{pmatrix}1&1\end{pmatrix}}.} Some sources call the all-ones matrix the unit matrix, but that term may also refer to the identity matrix, a different type of matrix. A vector of ones or all-ones vector is a matrix of ones having row or column form; it should not be confused with unit vectors.
|
https://en.wikipedia.org/wiki/Matrix_of_ones
|
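All-ones matrices satisfy the simple product rule J_{m,n} J_{n,p} = n · J_{m,p}, since each entry of the product is a sum of n ones. A minimal sketch (function names are illustrative):

```python
def ones_matrix(m, n):
    """The m-by-n all-ones matrix J_{m,n} as a nested list."""
    return [[1] * n for _ in range(m)]

def mat_mul(A, B):
    """Multiply two conformable matrices given as nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# J_{2,3} J_{3,4} = 3 * J_{2,4}: every entry of the product equals 3.
assert mat_mul(ones_matrix(2, 3), ones_matrix(3, 4)) == [[3] * 4 for _ in range(2)]
```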
In mathematics, a matrix polynomial is a polynomial with square matrices as variables. Given an ordinary, scalar-valued polynomial P ( x ) = ∑ i = 0 n a i x i = a 0 + a 1 x + a 2 x 2 + ⋯ + a n x n , {\displaystyle P(x)=\sum _{i=0}^{n}{a_{i}x^{i}}=a_{0}+a_{1}x+a_{2}x^{2}+\cdots +a_{n}x^{n},} this polynomial evaluated at a matrix A is P ( A ) = ∑ i = 0 n a i A i = a 0 I + a 1 A + a 2 A 2 + ⋯ + a n A n , {\displaystyle P(A)=\sum _{i=0}^{n}{a_{i}A^{i}}=a_{0}I+a_{1}A+a_{2}A^{2}+\cdots +a_{n}A^{n},} where I is the identity matrix.A matrix polynomial equation is an equality between two matrix polynomials, which holds for the specific matrices in question. A matrix polynomial identity is a matrix polynomial equation which holds for all matrices A in a specified matrix ring Mn(R).
|
https://en.wikipedia.org/wiki/Matrix_geometrical_series
|
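Evaluating a scalar polynomial at a square matrix follows the formula above directly; Horner's scheme avoids forming each power of A separately. A minimal sketch (function names are illustrative), checked against the Cayley–Hamilton theorem:

```python
def mat_mul(A, B):
    """Multiply two square matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_poly(coeffs, A):
    """Evaluate P(A) = a_0 I + a_1 A + ... + a_n A^n by Horner's scheme.

    coeffs is [a_0, a_1, ..., a_n]."""
    n = len(A)
    # Start from the leading coefficient times the identity.
    result = [[coeffs[-1] if i == j else 0 for j in range(n)] for i in range(n)]
    for a in reversed(coeffs[:-1]):
        result = mat_mul(result, A)
        for i in range(n):
            result[i][i] += a
    return result

# Cayley-Hamilton check: A = [[1, 2], [3, 4]] satisfies its characteristic
# polynomial x^2 - 5x - 2, so P(A) is the zero matrix.
A = [[1, 2], [3, 4]]
assert mat_poly([-2, -5, 1], A) == [[0, 0], [0, 0]]
```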
In mathematics, a matroid polytope, also called a matroid basis polytope (or basis matroid polytope) to distinguish it from other polytopes derived from a matroid, is a polytope constructed via the bases of a matroid. Given a matroid M {\displaystyle M} , the matroid polytope P M {\displaystyle P_{M}} is the convex hull of the indicator vectors of the bases of M {\displaystyle M} .
|
https://en.wikipedia.org/wiki/Matroid_polytope
|
In mathematics, a maximal compact subgroup K of a topological group G is a subgroup K that is a compact space, in the subspace topology, and maximal amongst such subgroups. Maximal compact subgroups play an important role in the classification of Lie groups and especially semi-simple Lie groups. Maximal compact subgroups of Lie groups are not in general unique, but are unique up to conjugation – they are essentially unique.
|
https://en.wikipedia.org/wiki/Maximal_compact_subgroup
|
In mathematics, a meander or closed meander is a self-avoiding closed curve which crosses a given line a number of times, meaning that it intersects the line while passing from one side to the other. Intuitively, a meander can be viewed as a meandering river with a straight road crossing the river over a number of bridges. The points where the line and the curve cross are therefore referred to as "bridges".
|
https://en.wikipedia.org/wiki/Meander_(mathematics)
|
In mathematics, a measurable acting group is a special group that acts on some space in a way that is compatible with structures of measure theory. Measurable acting groups are found in the intersection of measure theory and group theory, two sub-disciplines of mathematics. Measurable acting groups are the basis for the study of invariant measures in abstract settings, most famously the Haar measure, and the study of stationary random measures.
|
https://en.wikipedia.org/wiki/Measurable_group_action
|
In mathematics, a measurable cardinal is a certain kind of large cardinal number. In order to define the concept, one introduces a two-valued measure on a cardinal κ, or more generally on any set. For a cardinal κ, it can be described as a subdivision of all of its subsets into large and small sets such that κ itself is large, ∅ and all singletons {α}, α ∈ κ are small, complements of small sets are large and vice versa. The intersection of fewer than κ large sets is again large. It turns out that uncountable cardinals endowed with a two-valued measure are large cardinals whose existence cannot be proved from ZFC. The concept of a measurable cardinal was introduced by Stanislaw Ulam in 1930.
|
https://en.wikipedia.org/wiki/Measurable_cardinal
|
In mathematics, a measurable group is a special type of group in the intersection between group theory and measure theory. Measurable groups are used to study measures in an abstract setting and are often closely related to topological groups.
|
https://en.wikipedia.org/wiki/Measurable_group
|
In mathematics, a measurable space or Borel space is a basic object in measure theory. It consists of a set and a σ-algebra, which defines the subsets that will be measured.
|
https://en.wikipedia.org/wiki/Measurable_space
|
In mathematics, a measure algebra is a Boolean algebra with a countably additive positive measure. A probability measure on a measure space gives a measure algebra on the Boolean algebra of measurable sets modulo null sets.
|
https://en.wikipedia.org/wiki/Measure_algebra
|
In mathematics, a measure is said to be saturated if every locally measurable set is also measurable. A set E {\displaystyle E} , not necessarily measurable, is said to be a locally measurable set if for every measurable set A {\displaystyle A} of finite measure, E ∩ A {\displaystyle E\cap A} is measurable. σ {\displaystyle \sigma } -finite measures and measures arising as the restriction of outer measures are saturated.
|
https://en.wikipedia.org/wiki/Locally_measurable_set
|
In mathematics, a measure on a real vector space is said to be transverse to a given set if it assigns measure zero to every translate of that set, while assigning finite and positive (i.e. non-zero) measure to some compact set.
|
https://en.wikipedia.org/wiki/Transverse_measure
|
In mathematics, a measure-preserving dynamical system is an object of study in the abstract formulation of dynamical systems, and ergodic theory in particular. Measure-preserving systems obey the Poincaré recurrence theorem, and are a special case of conservative systems. They provide the formal, mathematical basis for a broad range of physical systems, and, in particular, many systems from classical mechanics (in particular, most non-dissipative systems) as well as systems in thermodynamic equilibrium.
|
https://en.wikipedia.org/wiki/Measure_preserving_dynamical_system
|
In mathematics, a metabelian group is a group whose commutator subgroup is abelian. Equivalently, a group G is metabelian if and only if there is an abelian normal subgroup A such that the quotient group G/A is abelian. Subgroups of metabelian groups are metabelian, as are images of metabelian groups under group homomorphisms. Metabelian groups are solvable. In fact, they are precisely the solvable groups of derived length at most 2.
|
https://en.wikipedia.org/wiki/Metabelian_group
|
In mathematics, a metasymplectic space, introduced by Freudenthal (1959) and Tits (1974, 10.13), is a Tits building of type F4 (a specific generalized incidence structure). The four types of vertices are called points, lines, planes, and symplecta.
|
https://en.wikipedia.org/wiki/Metasymplectic_space
|
In mathematics, a metric connection is a connection in a vector bundle E equipped with a bundle metric; that is, a metric for which the inner product of any two vectors will remain the same when those vectors are parallel transported along any curve. This is equivalent to: A connection for which the covariant derivatives of the metric on E vanish. A principal connection on the bundle of orthonormal frames of E.A special case of a metric connection is a Riemannian connection; there is a unique such which is torsion free, the Levi-Civita connection.
|
https://en.wikipedia.org/wiki/Riemannian_connection
|
In this case, the bundle E is the tangent bundle TM of a manifold, and the metric on E is induced by a Riemannian metric on M. Another special case of a metric connection is a Yang–Mills connection, which satisfies the Yang–Mills equations of motion. Most of the machinery of defining a connection and its curvature can go through without requiring any compatibility with the bundle metric. However, once one does require compatibility, this metric connection defines an inner product, Hodge star (which additionally needs a choice of orientation), and Laplacian, which are required to formulate the Yang–Mills equations.
|
https://en.wikipedia.org/wiki/Riemannian_connection
|
In mathematics, a metric outer measure is an outer measure μ defined on the subsets of a given metric space (X, d) such that μ ( A ∪ B ) = μ ( A ) + μ ( B ) {\displaystyle \mu (A\cup B)=\mu (A)+\mu (B)} for every pair of positively separated subsets A and B of X.
|
https://en.wikipedia.org/wiki/Metric_outer_measure
|
In mathematics, a metric space X with metric d is said to be doubling if there is some doubling constant M > 0 such that for any x ∈ X and r > 0, it is possible to cover the ball B(x, r) = {y | d(x, y) < r} with the union of at most M balls of radius r/2. The base-2 logarithm of M is called the doubling dimension of X. Euclidean spaces R d {\displaystyle \mathbb {R} ^{d}} equipped with the usual Euclidean metric are examples of doubling spaces where the doubling constant M depends on the dimension d. For example, in one dimension, M = 3; and in two dimensions, M = 7. In general, Euclidean space R d {\displaystyle \mathbb {R} ^{d}} has doubling dimension Θ ( d ) {\displaystyle \Theta (d)} .
|
https://en.wikipedia.org/wiki/Doubling_dimension
|
In mathematics, a metric space aimed at its subspace is a categorical construction that has a direct geometric meaning. It is also a useful step toward the construction of the metric envelope, or tight span, which are basic (injective) objects of the category of metric spaces. Following (Holsztyński 1966), a notion of a metric space Y aimed at its subspace X is defined.
|
https://en.wikipedia.org/wiki/Metric_space_aimed_at_its_subspace
|
In mathematics, a metric space is a set together with a notion of distance between its elements, usually called points. The distance is measured by a function called a metric or distance function. Metric spaces are the most general setting for studying many of the concepts of mathematical analysis and geometry. The most familiar example of a metric space is 3-dimensional Euclidean space with its usual notion of distance.
|
https://en.wikipedia.org/wiki/Homogeneous_metric
|
Other well-known examples are a sphere equipped with the angular distance and the hyperbolic plane. A metric may correspond to a metaphorical, rather than physical, notion of distance: for example, the set of 100-character Unicode strings can be equipped with the Hamming distance, which measures the number of characters that need to be changed to get from one string to another.
|
https://en.wikipedia.org/wiki/Homogeneous_metric
|
Since they are very general, metric spaces are a tool used in many different branches of mathematics. Many types of mathematical objects have a natural notion of distance and therefore admit the structure of a metric space, including Riemannian manifolds, normed vector spaces, and graphs. In abstract algebra, the p-adic numbers arise as elements of the completion of a metric structure on the rational numbers. Metric spaces are also studied in their own right in metric geometry and analysis on metric spaces. Many of the basic notions of mathematical analysis, including balls, completeness, as well as uniform, Lipschitz, and Hölder continuity, can be defined in the setting of metric spaces. Other notions, such as continuity, compactness, and open and closed sets, can be defined for metric spaces, but also in the even more general setting of topological spaces.
|
https://en.wikipedia.org/wiki/Homogeneous_metric
|
In mathematics, a metric space is a set where a notion of distance (called a metric) between elements of the set is defined. Much of analysis happens in some metric space; the most commonly used are the real line, the complex plane, Euclidean space, other vector spaces, and the integers. Examples of analysis without a metric include measure theory (which describes size rather than distance) and functional analysis (which studies topological vector spaces that need not have any sense of distance). Formally, a metric space is an ordered pair ( M , d ) {\displaystyle (M,d)} where M {\displaystyle M} is a set and d {\displaystyle d} is a metric on M {\displaystyle M} , i.e., a function d: M × M → R {\displaystyle d\colon M\times M\rightarrow \mathbb {R} } such that for any x , y , z ∈ M {\displaystyle x,y,z\in M} , the following holds: d ( x , y ) ≥ 0 {\displaystyle d(x,y)\geq 0} , with equality if and only if x = y {\displaystyle x=y} (identity of indiscernibles), d ( x , y ) = d ( y , x ) {\displaystyle d(x,y)=d(y,x)} (symmetry), and d ( x , z ) ≤ d ( x , y ) + d ( y , z ) {\displaystyle d(x,z)\leq d(x,y)+d(y,z)} (triangle inequality).By taking the third property and letting z = x {\displaystyle z=x} , it can be shown that d ( x , y ) ≥ 0 {\displaystyle d(x,y)\geq 0} (non-negative).
|
https://en.wikipedia.org/wiki/Hard_analysis
|
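The three axioms above can be verified exhaustively for a concrete metric, here the Hamming distance on strings mentioned earlier. A minimal sketch (the function name is illustrative):

```python
from itertools import product

def hamming(s, t):
    """Hamming distance between two equal-length strings."""
    if len(s) != len(t):
        raise ValueError("strings must have equal length")
    return sum(a != b for a, b in zip(s, t))

# Exhaustively check the metric axioms on all length-3 binary strings.
strings = ["".join(bits) for bits in product("01", repeat=3)]
for x, y, z in product(strings, repeat=3):
    d = hamming
    assert d(x, y) >= 0
    assert (d(x, y) == 0) == (x == y)    # identity of indiscernibles
    assert d(x, y) == d(y, x)            # symmetry
    assert d(x, z) <= d(x, y) + d(y, z)  # triangle inequality
```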
In mathematics, a microbundle is a generalization of the concept of vector bundle, introduced by the American mathematician John Milnor in 1964. It allows the creation of bundle-like objects in situations where they would not ordinarily be thought to exist. For example, the tangent bundle is defined for a smooth manifold but not a topological manifold; use of microbundles allows the definition of a topological tangent bundle.
|
https://en.wikipedia.org/wiki/Microbundle
|
In mathematics, a minimal K-type is a representation of a maximal compact subgroup K of a semisimple Lie group G that is in some sense the smallest representation of K occurring in a Harish-Chandra module of G. Minimal K-types were introduced by Vogan (1979) as part of an algebraic description of the Langlands classification.
|
https://en.wikipedia.org/wiki/Minimal_K-type
|
In mathematics, a minimal counterexample is the smallest example which falsifies a claim, and a proof by minimal counterexample is a method of proof which combines the use of a minimal counterexample with the ideas of proof by induction and proof by contradiction. More specifically, in trying to prove a proposition P, one first assumes by contradiction that it is false, and that therefore there must be at least one counterexample. With respect to some idea of size (which may need to be chosen carefully), one then concludes that there is such a counterexample C that is minimal. In regard to the argument, C is generally something quite hypothetical (since the truth of P excludes the possibility of C), but it may be possible to argue that if C existed, then it would have some definite properties which, after applying some reasoning similar to that in an inductive proof, would lead to a contradiction, thereby showing that the proposition P is indeed true.If the form of the contradiction is that we can derive a further counterexample D, that is smaller than C in the sense of the working hypothesis of minimality, then this technique is traditionally called proof by infinite descent.
|
https://en.wikipedia.org/wiki/Minimal_counterexample
|
In that case, there may be multiple and more complex ways to structure the argument of the proof. The assumption that if there is a counterexample, there is a minimal counterexample, is based on a well-ordering of some kind. The usual ordering on the natural numbers is clearly possible, by the most usual formulation of mathematical induction; but the scope of the method can include well-ordered induction of any kind.
|
https://en.wikipedia.org/wiki/Minimal_counterexample
|
In mathematics, a minimal surface is a surface that locally minimizes its area. This is equivalent to having zero mean curvature (see definitions below). The term "minimal surface" is used because these surfaces originally arose as surfaces that minimized total surface area subject to some constraint. Physical models of area-minimizing minimal surfaces can be made by dipping a wire frame into a soap solution, forming a soap film, which is a minimal surface whose boundary is the wire frame. However, the term is used for more general surfaces that may self-intersect or do not have constraints. For a given constraint there may also exist several minimal surfaces with different areas (for example, see minimal surface of revolution): the standard definitions only relate to a local optimum, not a global optimum.
|
https://en.wikipedia.org/wiki/Minimal_surfaces
|
In mathematics, a minimal surface of revolution or minimum surface of revolution is a surface of revolution defined from two points in a half-plane, whose boundary is the axis of revolution of the surface. It is generated by a curve that lies in the half-plane and connects the two points; among all the surfaces that can be generated in this way, it is the one that minimizes the surface area. A basic problem in the calculus of variations is finding the curve between two points that produces this minimal surface of revolution.
|
https://en.wikipedia.org/wiki/Minimal_surfaces_of_revolution
|
In mathematics, a minimum bottleneck spanning tree (MBST) in an undirected graph is a spanning tree in which the most expensive edge is as cheap as possible. A bottleneck edge is the highest weighted edge in a spanning tree. A spanning tree is a minimum bottleneck spanning tree if the graph does not contain a spanning tree with a smaller bottleneck edge weight. For a directed graph, a similar problem is known as Minimum Bottleneck Spanning Arborescence (MBSA).
|
https://en.wikipedia.org/wiki/Minimum_bottleneck_spanning_tree
|
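One standard way to find the minimum bottleneck is via Kruskal's algorithm, since any minimum spanning tree is also a minimum bottleneck spanning tree. A minimal sketch, assuming vertices labelled 0..n−1 (the function name is illustrative):

```python
def mbst_bottleneck(n, edges):
    """Return the minimum possible bottleneck (largest edge weight) over all
    spanning trees, via Kruskal's algorithm: any MST is also an MBST.

    n: number of vertices (labelled 0..n-1)
    edges: list of (weight, u, v) tuples
    """
    parent = list(range(n))

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    bottleneck, used = 0, 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            bottleneck = max(bottleneck, w)
            used += 1
            if used == n - 1:
                return bottleneck
    raise ValueError("graph is not connected")

# Path 0-1-2-3 has bottleneck 3; the heavy edges (weight 10) are avoided,
# and every spanning tree must use an edge of weight >= 3 to reach vertex 3.
edges = [(1, 0, 1), (2, 1, 2), (3, 2, 3), (10, 0, 3), (10, 0, 2)]
assert mbst_bottleneck(4, edges) == 3
```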
In mathematics, a mixed boundary condition for a partial differential equation defines a boundary value problem in which the solution of the given equation is required to satisfy different boundary conditions on disjoint parts of the boundary of the domain where the condition is stated. Precisely, in a mixed boundary value problem, the solution is required to satisfy a Dirichlet or a Neumann boundary condition in a mutually exclusive way on disjoint parts of the boundary. For example, given a solution u to a partial differential equation on a domain Ω with boundary ∂Ω, it is said to satisfy a mixed boundary condition if ∂Ω consists of two disjoint parts, Γ1 and Γ2, such that ∂Ω = Γ1 ∪ Γ2, and u satisfies the following equations: u | Γ 1 = u 0 {\displaystyle \left.u\right|_{\Gamma _{1}}=u_{0}} and ∂ u ∂ n | Γ 2 = g , {\displaystyle \left. {\frac {\partial u}{\partial n}}\right|_{\Gamma _{2}}=g,} where u0 and g are given functions defined on those portions of the boundary. The mixed boundary condition differs from the Robin boundary condition in that the latter requires a linear combination, possibly with pointwise variable coefficients, of the Dirichlet and the Neumann boundary value conditions to be satisfied on the whole boundary of a given domain.
|
https://en.wikipedia.org/wiki/Mixed_boundary_condition
|
In mathematics, a mock modular form is the holomorphic part of a harmonic weak Maass form, and a mock theta function is essentially a mock modular form of weight 1/2. The first examples of mock theta functions were described by Srinivasa Ramanujan in his last 1920 letter to G. H. Hardy and in his lost notebook. Sander Zwegers discovered that adding certain non-holomorphic functions to them turns them into harmonic weak Maass forms.
|
https://en.wikipedia.org/wiki/Mock_theta_functions
|
In mathematics, a modular Lie algebra is a Lie algebra over a field of positive characteristic. The theory of modular Lie algebras is significantly different from the theory of real and complex Lie algebras. This difference can be traced to the properties of the Frobenius automorphism and to the failure of the exponential map to establish a tight connection between properties of a modular Lie algebra and the corresponding algebraic group. Although serious study of modular Lie algebras was initiated by Nathan Jacobson in the 1950s, their representation theory in the semisimple case was advanced only recently due to the influential Lusztig conjectures, which as of 2007 have been partially proved.
|
https://en.wikipedia.org/wiki/Modular_Lie_algebra
|
In mathematics, a modular equation is an algebraic equation satisfied by moduli, in the sense of moduli problems. That is, given a number of functions on a moduli space, a modular equation is an equation holding between them, or in other words an identity for moduli. The most frequent use of the term modular equation is in relation to the moduli problem for elliptic curves. In that case the moduli space itself is of dimension one.
|
https://en.wikipedia.org/wiki/Modular_equation
|
That implies that any two rational functions F and G, in the function field of the modular curve, will satisfy a modular equation P(F,G) = 0 with P a non-zero polynomial of two variables over the complex numbers. For suitable non-degenerate choice of F and G, the equation P(X,Y) = 0 will actually define the modular curve. This can be qualified by saying that P, in the worst case, will be of high degree and the plane curve it defines will have singular points; and the coefficients of P may be very large numbers. Further, the 'cusps' of the moduli problem, which are the points of the modular curve not corresponding to honest elliptic curves but degenerate cases, may be difficult to read off from knowledge of P. In that sense a modular equation becomes the equation of a modular curve. Such equations first arose in the theory of multiplication of elliptic functions (geometrically, the n2-fold covering map from a 2-torus to itself given by the mapping x → n·x on the underlying group) expressed in terms of complex analysis.
|
https://en.wikipedia.org/wiki/Modular_equation
|