The natural logarithm has the number e ≈ 2.718 as its base; its use is widespread in mathematics and physics, because of its very simple derivative. The binary logarithm uses base 2 and is frequently used in computer science.
|
https://en.wikipedia.org/wiki/Logarithm_function
|
Logarithms were introduced by John Napier in 1614 as a means of simplifying calculations. They were rapidly adopted by navigators, scientists, engineers, surveyors and others to perform high-accuracy computations more easily. Using logarithm tables, tedious multi-digit multiplication steps can be replaced by table look-ups and simpler addition.
|
https://en.wikipedia.org/wiki/Logarithm_function
|
This is possible because the logarithm of a product is the sum of the logarithms of the factors: log_b(xy) = log_b(x) + log_b(y), provided that b, x and y are all positive and b ≠ 1. The slide rule, also based on logarithms, allows quick calculations without tables, but at lower precision. The present-day notion of logarithms comes from Leonhard Euler, who connected them to the exponential function in the 18th century, and who also introduced the letter e as the base of natural logarithms. Logarithmic scales reduce wide-ranging quantities to smaller scopes.
|
https://en.wikipedia.org/wiki/Logarithm_function
|
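The product rule above is exactly what made logarithm tables useful: a multiplication becomes a table look-up, an addition, and a look-up back. A minimal Python sketch of that workflow (the function name `multiply_via_logs` is illustrative, not from the source):

```python
import math

def multiply_via_logs(x, y):
    """Multiply two positive numbers using only log, addition, and exp,
    mimicking how logarithm tables turned multiplication into addition."""
    return math.exp(math.log(x) + math.log(y))

product = multiply_via_logs(123.4, 56.78)
direct = 123.4 * 56.78
# The two agree up to floating-point rounding.
```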
For example, the decibel (dB) is a unit used to express ratios as logarithms, mostly for signal power and amplitude (of which sound pressure is a common example). In chemistry, pH is a logarithmic measure for the acidity of an aqueous solution. Logarithms are commonplace in scientific formulae, and in measurements of the complexity of algorithms and of geometric objects called fractals.
|
https://en.wikipedia.org/wiki/Logarithm_function
|
They help to describe frequency ratios of musical intervals, appear in formulas counting prime numbers or approximating factorials, inform some models in psychophysics, and can aid in forensic accounting. The concept of logarithm as the inverse of exponentiation extends to other mathematical structures as well. However, in general settings, the logarithm tends to be a multi-valued function. For example, the complex logarithm is the multi-valued inverse of the complex exponential function. Similarly, the discrete logarithm is the multi-valued inverse of the exponential function in finite groups; it has uses in public-key cryptography.
|
https://en.wikipedia.org/wiki/Logarithm_function
|
In mathematics, the logarithmic integral function or integral logarithm li(x) is a special function. It is relevant in problems of physics and has number theoretic significance. In particular, according to the prime number theorem, it is a very good approximation to the prime-counting function, which is defined as the number of prime numbers less than or equal to a given value x.
|
https://en.wikipedia.org/wiki/Offset_logarithmic_integral
|
In mathematics, the logarithmic mean is a function of two non-negative numbers which is equal to their difference divided by the logarithm of their quotient. This calculation is applicable in engineering problems involving heat and mass transfer.
|
https://en.wikipedia.org/wiki/Logarithmic_average
|
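The logarithmic mean defined above is straightforward to compute; the only subtlety is the removable singularity at x = y, where the formula is extended by its limit. A small sketch (assuming strictly positive arguments, which the quotient's logarithm requires):

```python
import math

def logarithmic_mean(x, y):
    """Logarithmic mean: (y - x) / (ln y - ln x), extended by continuity at x == y."""
    if x <= 0 or y <= 0:
        raise ValueError("arguments must be positive")
    if x == y:
        return float(x)  # limit of the formula as y -> x
    return (y - x) / (math.log(y) - math.log(x))
```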
In mathematics, the logarithmic norm is a real-valued functional on operators, and is derived from either an inner product, a vector norm, or its induced operator norm. The logarithmic norm was independently introduced by Germund Dahlquist and Sergei Lozinskiĭ in 1958, for square matrices. It has since been extended to nonlinear operators and unbounded operators as well. The logarithmic norm has a wide range of applications, in particular in matrix theory, differential equations and numerical analysis. In the finite-dimensional setting, it is also referred to as the matrix measure or the Lozinskiĭ measure.
|
https://en.wikipedia.org/wiki/Logarithmic_norm
|
In mathematics, the longest element of a Coxeter group is the unique element of maximal length in a finite Coxeter group with respect to the chosen generating set consisting of simple reflections. It is often denoted by w0. See (Humphreys 1992, Section 1.8: Simple transitivity and the longest element, pp. 15–16) and (Davis 2007, Section 4.6, pp. 51–53).
|
https://en.wikipedia.org/wiki/Longest_element_of_a_Coxeter_group
|
In mathematics, the look-and-say sequence is the sequence of integers beginning as follows: 1, 11, 21, 1211, 111221, 312211, 13112221, 1113213211, 31131211131221, ... (sequence A005150 in the OEIS). To generate a member of the sequence from the previous member, read off the digits of the previous member, counting the number of digits in groups of the same digit. For example: 1 is read off as "one 1" or 11. 11 is read off as "two 1s" or 21. 21 is read off as "one 2, one 1" or 1211.
|
https://en.wikipedia.org/wiki/Look-and-say_sequence
|
1211 is read off as "one 1, one 2, two 1s" or 111221. 111221 is read off as "three 1s, two 2s, one 1" or 312211. The look-and-say sequence was analyzed by John Conway after he was introduced to it by one of his students at a party. The idea of the look-and-say sequence is similar to that of run-length encoding.
|
https://en.wikipedia.org/wiki/Look-and-say_sequence
|
If started with any digit d from 0 to 9 then d will remain indefinitely as the last digit of the sequence. For any d other than 1, the sequence starts as follows: d, 1d, 111d, 311d, 13211d, 111312211d, 31131122211d, … Ilan Vardi has called this sequence, starting with d = 3, the Conway sequence (sequence A006715 in the OEIS). (For d = 2, see OEIS: A006751.)
|
https://en.wikipedia.org/wiki/Look-and-say_sequence
|
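The "read off runs of equal digits" rule described above maps directly onto run-length grouping; a short generator (function names are illustrative):

```python
from itertools import groupby

def next_term(s):
    """Read off runs of equal digits: '1211' -> 'one 1, one 2, two 1s' -> '111221'."""
    return ''.join(str(len(list(run))) + digit for digit, run in groupby(s))

def look_and_say(seed='1', n=8):
    """Return the first n terms of the look-and-say sequence from the given seed."""
    terms = [seed]
    for _ in range(n - 1):
        terms.append(next_term(terms[-1]))
    return terms
```

Starting from a digit d other than 1 reproduces the pattern d, 1d, 111d, 311d, … quoted above.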
In mathematics, the lower convex envelope f̆ of a function f defined on an interval [a, b] is defined at each point of the interval as the supremum of all convex functions that lie under that function, i.e. f̆(x) = sup{ g(x) : g is convex and g ≤ f over [a, b] }.
|
https://en.wikipedia.org/wiki/Lower_convex_envelope
|
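On a finite sample of the graph of f, the lower convex envelope can be approximated by the lower convex hull of the sampled points (a standard observation; the monotone-chain style code below is a sketch under that assumption):

```python
def cross(o, a, b):
    """Cross product of vectors o->a and o->b; positive for a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def lower_convex_envelope(xs, fs):
    """Approximate the lower convex envelope of f via the lower convex hull
    of the sampled graph points (xs must be strictly increasing)."""
    hull = []
    for p in zip(xs, fs):
        # pop points that would lie above the chord to the new point
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
            hull.pop()
        hull.append(p)
    return hull  # hull vertices; the envelope is piecewise linear between them
```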
In mathematics, the lower envelope or pointwise minimum of a finite set of functions is the function whose value at every point is the minimum of the values of the functions in the given set. The concept of a lower envelope can also be extended to partial functions by taking the minimum only among functions that have values at the point. The upper envelope or pointwise maximum is defined symmetrically. For an infinite set of functions, the same notions may be defined using the infimum in place of the minimum, and the supremum in place of the maximum. For continuous functions from a given class, the lower or upper envelope is a piecewise function whose pieces are from the same class.
|
https://en.wikipedia.org/wiki/Pointwise_maximum
|
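The pointwise definitions translate directly into code; a minimal sketch for a finite set of total functions:

```python
def lower_envelope(functions):
    """Pointwise minimum of a finite set of functions."""
    return lambda x: min(f(x) for f in functions)

def upper_envelope(functions):
    """Pointwise maximum of a finite set of functions."""
    return lambda x: max(f(x) for f in functions)

# Lower envelope of two lines crossing at x = 1:
env = lower_envelope([lambda x: x, lambda x: 2 - x])
# env(0) == 0, env(1) == 1, env(2) == 0 -- piecewise linear, as expected.
```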
For functions of a single real variable whose graphs have a bounded number of intersection points, the complexity of the lower or upper envelope can be bounded using Davenport–Schinzel sequences, and these envelopes can be computed efficiently by a divide-and-conquer algorithm that computes and then merges the envelopes of subsets of the functions. For convex functions or quasiconvex functions, the upper envelope is again convex or quasiconvex. The lower envelope is not, but can be replaced by the lower convex envelope to obtain an operation analogous to the lower envelope that maintains convexity.
|
https://en.wikipedia.org/wiki/Pointwise_maximum
|
The upper and lower envelopes of Lipschitz functions preserve the property of being Lipschitz. However, the lower and upper envelope operations do not necessarily preserve the property of being a continuous function.
|
https://en.wikipedia.org/wiki/Pointwise_maximum
|
In mathematics, the lower limit topology or right half-open interval topology is a topology defined on the set ℝ of real numbers; it is different from the standard topology on ℝ (generated by the open intervals) and has a number of interesting properties. It is the topology generated by the basis of all half-open intervals [a, b), where a and b are real numbers. The resulting topological space is called the Sorgenfrey line after Robert Sorgenfrey, or the arrow, and is sometimes written ℝ_l.
|
https://en.wikipedia.org/wiki/Sorgenfrey_line
|
Like the Cantor set and the long line, the Sorgenfrey line often serves as a useful counterexample to many otherwise plausible-sounding conjectures in general topology. The product of ℝ_l with itself is also a useful counterexample, known as the Sorgenfrey plane. In complete analogy, one can also define the upper limit topology, or left half-open interval topology.
|
https://en.wikipedia.org/wiki/Sorgenfrey_line
|
In mathematics, the lowest common denominator or least common denominator (abbreviated LCD) is the lowest common multiple of the denominators of a set of fractions. It simplifies adding, subtracting, and comparing fractions.
|
https://en.wikipedia.org/wiki/Common_denominator
|
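The LCD is just the least common multiple of the (reduced) denominators, and once every fraction is rewritten over it, addition reduces to adding numerators. A sketch with the standard library:

```python
from math import lcm
from fractions import Fraction

def lowest_common_denominator(fracs):
    """LCD = least common multiple of the reduced denominators."""
    return lcm(*(f.denominator for f in fracs))

fracs = [Fraction(1, 4), Fraction(1, 6)]
lcd = lowest_common_denominator(fracs)  # lcm(4, 6) == 12
# Over the LCD: 1/4 = 3/12 and 1/6 = 2/12, so the sum is 5/12.
total = sum(fracs)
```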
In mathematics, the magnitude or size of a mathematical object is a property which determines whether the object is larger or smaller than other objects of the same kind. More formally, an object's magnitude is the displayed result of an ordering (or ranking) of the class of objects to which it belongs. In physics, magnitude can be defined as quantity or distance.
|
https://en.wikipedia.org/wiki/Magnitude_(mathematics)
|
In mathematics, the main conjecture of Iwasawa theory is a deep relationship between p-adic L-functions and ideal class groups of cyclotomic fields, proved by Kenkichi Iwasawa for primes satisfying the Kummer–Vandiver conjecture and proved for all primes by Mazur and Wiles (1984). The Herbrand–Ribet theorem and the Gras conjecture are both easy consequences of the main conjecture. There are several generalizations of the main conjecture, to totally real fields, CM fields, elliptic curves, and so on.
|
https://en.wikipedia.org/wiki/Iwasawa_main_conjecture
|
In mathematics, the main results concerning irreducible unitary representations of the Lie group SL(2,R) are due to Gelfand and Naimark (1946), V. Bargmann (1947), and Harish-Chandra (1952).
|
https://en.wikipedia.org/wiki/Representation_theory_of_SL2(R)
|
In mathematics, the mandelbox is a fractal with a boxlike shape found by Tom Lowe in 2010. It is defined in a similar way to the famous Mandelbrot set as the values of a parameter such that the origin does not escape to infinity under iteration of certain geometrical transformations. The mandelbox is defined as a map of continuous Julia sets, but, unlike the Mandelbrot set, can be defined in any number of dimensions. It is typically drawn in three dimensions for illustrative purposes.
|
https://en.wikipedia.org/wiki/Mandelbox
|
In mathematics, the map segmentation problem is a kind of optimization problem. It involves a certain geographic region that has to be partitioned into smaller sub-regions in order to achieve a certain goal. Typical optimization objectives include: minimizing the workload of a fleet of vehicles assigned to the sub-regions; balancing the consumption of a resource, as in fair cake-cutting; determining the optimal locations of supply depots; maximizing the surveillance coverage. Fair division of land has been an important issue since ancient times, e.g. in ancient Greece.
|
https://en.wikipedia.org/wiki/Map_segmentation
|
In mathematics, the mapping torus in topology of a homeomorphism f of some topological space X to itself is a particular geometric construction with f. Take the cartesian product of X with a closed interval I, and glue the boundary components together by identifying (1, x) with (0, f(x)): M_f = (I × X) / ((1, x) ∼ (0, f(x))). The result is a fiber bundle whose base is a circle and whose fiber is the original space X. If X is a manifold, M_f will be a manifold of dimension one higher, and it is said to "fiber over the circle". As a simple example, let X be the circle and f be the inversion e^{ix} ↦ e^{−ix}; then the mapping torus is the Klein bottle. Mapping tori of surface homeomorphisms play a key role in the theory of 3-manifolds and have been intensely studied. If S is a closed surface of genus g ≥ 2 and if f is a self-homeomorphism of S, the mapping torus M_f is a closed 3-manifold that fibers over the circle with fiber S. A deep result of Thurston states that in this case the 3-manifold M_f is hyperbolic if and only if f is a pseudo-Anosov homeomorphism of S.
|
https://en.wikipedia.org/wiki/Mapping_torus
|
In mathematics, the master stability function is a tool used to analyse the stability of the synchronous state in a dynamical system consisting of many identical oscillators which are coupled together, such as the Kuramoto model. The setting is as follows. Consider a system with N identical oscillators. Without the coupling, they evolve according to the same differential equation, say ẋ_i = f(x_i), where x_i denotes the state of oscillator i.
|
https://en.wikipedia.org/wiki/Master_stability_function
|
A synchronous state of the system of oscillators is one in which all the oscillators are in the same state. The coupling is defined by a coupling strength σ, a matrix A_ij which describes how the oscillators are coupled together, and a function g of the state of a single oscillator. Including the coupling leads to the following equation: ẋ_i = f(x_i) + σ Σ_{j=1}^{N} A_ij g(x_j).
|
https://en.wikipedia.org/wiki/Master_stability_function
|
It is assumed that the row sums Σ_j A_ij vanish, so that the manifold of synchronous states is neutrally stable. The master stability function is now defined as the function which maps the complex number γ to the greatest Lyapunov exponent of the equation ẏ = (Df + γ Dg) y. The synchronous state of the system of coupled oscillators is stable if the master stability function is negative at σλ_k, where λ_k ranges over the eigenvalues of the coupling matrix A.
|
https://en.wikipedia.org/wiki/Master_stability_function
|
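In the simplest illustrative case, scalar linear dynamics f(x) = a·x and coupling g(x) = b·x, the variational equation ẏ = (Df + γDg)y is linear and the master stability function reduces to MSF(γ) = Re(a + γb). The sketch below (a toy example, not from the source, with a hypothetical ring of four diffusively coupled oscillators) applies the stability criterion MSF(σλ_k) < 0 over the transverse eigenvalues:

```python
import numpy as np

# Toy scalar linear case: f(x) = a*x, g(x) = b*x, so Df = a, Dg = b and the
# greatest Lyapunov exponent of ydot = (Df + gamma*Dg) y is Re(a + gamma*b).
a, b = 1.0, 1.0

def msf(gamma):
    """Master stability function for the scalar linear toy model."""
    return (a + gamma * b).real

# Diffusive coupling on a ring of 4 oscillators: rows sum to zero as required.
A = np.array([[-2, 1, 0, 1],
              [1, -2, 1, 0],
              [0, 1, -2, 1],
              [1, 0, 1, -2]], dtype=float)
sigma = 2.0
eigenvalues = np.linalg.eigvals(A)  # 0, -2, -2, -4 for this ring
# Synchrony is stable if MSF(sigma * lambda_k) < 0 for every eigenvalue
# transverse to the synchronous manifold (i.e. excluding the zero mode).
stable = all(msf(sigma * lam) < 0 for lam in eigenvalues if abs(lam) > 1e-9)
```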
In mathematics, the matching distance is a metric on the space of size functions. The core of the definition of matching distance is the observation that the information contained in a size function can be combinatorially stored in a formal series of lines and points of the plane, called respectively cornerlines and cornerpoints. Given two size functions ℓ1 and ℓ2, let C1 (resp. C2) be the multiset of all cornerpoints and cornerlines for ℓ1 (resp.
|
https://en.wikipedia.org/wiki/Matching_distance
|
ℓ2) counted with their multiplicities, augmented by adding a countable infinity of points of the diagonal {(x, y) ∈ ℝ² : x = y}. The matching distance between ℓ1 and ℓ2 is given by d_match(ℓ1, ℓ2) = min_σ max_{p ∈ C1} δ(p, σ(p)), where σ varies among all the bijections between C1 and C2 and
|
https://en.wikipedia.org/wiki/Matching_distance
|
δ((x, y), (x′, y′)) = min{ max{|x − x′|, |y − y′|}, max{(y − x)/2, (y′ − x′)/2} }. Roughly speaking, the matching distance d_match between two size functions is the minimum, over all the matchings between the cornerpoints of the two size functions, of the maximum of the L∞-distances between two matched cornerpoints. Since two size functions can have a different number of cornerpoints, cornerpoints can also be matched to points of the diagonal Δ. Moreover, the definition of δ implies that matching two points of the diagonal has no cost.
|
https://en.wikipedia.org/wiki/Matching_distance
|
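For small finite multisets of cornerpoints (ignoring cornerlines), the definition can be checked by brute force: pad each multiset with the diagonal projections of the other's points, then minimize the bottleneck cost over all bijections. A sketch under those assumptions (exponential in the input size, for illustration only):

```python
from itertools import permutations

def delta(p, q):
    """Cost of matching cornerpoints p=(x,y), q=(x',y'): either move one onto
    the other (L-infinity distance) or push both onto the diagonal."""
    (x, y), (xp, yp) = p, q
    return min(max(abs(x - xp), abs(y - yp)),
               max((y - x) / 2, (yp - xp) / 2))

def matching_distance(c1, c2):
    """Brute-force bottleneck matching between two small cornerpoint multisets."""
    proj = lambda pts: [((x + y) / 2, (x + y) / 2) for x, y in pts]
    a = list(c1) + proj(c2)  # pad with diagonal projections of the other set
    b = list(c2) + proj(c1)
    return min(max(delta(p, q) for p, q in zip(a, perm))
               for perm in permutations(b))
```

Matching a cornerpoint (x, y) to the diagonal costs (y − x)/2, and matching two diagonal points costs nothing, consistent with the definition of δ above.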
In mathematics, the mathematician Sophus Lie ( LEE) initiated lines of study involving integration of differential equations, transformation groups, and contact of spheres that have come to be called Lie theory. For instance, the latter subject is Lie sphere geometry. This article addresses his approach to transformation groups, an area of mathematics worked out by Wilhelm Killing and Élie Cartan. The foundation of Lie theory is the exponential map relating Lie algebras to Lie groups, which is called the Lie group–Lie algebra correspondence.
|
https://en.wikipedia.org/wiki/Lie_theory
|
The subject is part of differential geometry since Lie groups are differentiable manifolds. Lie groups evolve out of the identity (1) and the tangent vectors to one-parameter subgroups generate the Lie algebra. The structure of a Lie group is implicit in its algebra, and the structure of the Lie algebra is expressed by root systems and root data. Lie theory has been particularly useful in mathematical physics since it describes the standard transformation groups: the Galilean group, the Lorentz group, the Poincaré group and the conformal group of spacetime.
|
https://en.wikipedia.org/wiki/Lie_theory
|
In mathematics, the matrix (1 −1; 1 1), with rows (1, −1) and (1, 1), is sometimes called the quincunx matrix. It is a 2×2 Hadamard matrix, and its rows form the basis of a diagonal square lattice consisting of the integer points whose coordinates both have the same parity; this lattice is a two-dimensional analogue of the three-dimensional body-centered cubic lattice.
|
https://en.wikipedia.org/wiki/Quincunx_matrix
|
In mathematics, the matrix exponential is a matrix function on square matrices analogous to the ordinary exponential function. It is used to solve systems of linear differential equations. In the theory of Lie groups, the matrix exponential gives the exponential map between a matrix Lie algebra and the corresponding Lie group.
|
https://en.wikipedia.org/wiki/Exponential_of_a_matrix
|
Let X be an n×n real or complex matrix. The exponential of X, denoted by e^X or exp(X), is the n×n matrix given by the power series e^X = Σ_{k=0}^{∞} X^k / k!, where X^0 is defined to be the identity matrix I with the same dimensions as X. The series always converges, so the exponential of X is well-defined. Equivalently, e^X = lim_{k→∞} (I + X/k)^k, where I is the n×n identity matrix. If X is a 1×1 matrix the matrix exponential of X is a 1×1 matrix whose single element is the ordinary exponential of the single element of X.
|
https://en.wikipedia.org/wiki/Exponential_of_a_matrix
|
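The defining power series can be summed directly for small, well-scaled matrices; a sketch (production code should prefer scaling-and-squaring, e.g. scipy.linalg.expm):

```python
import numpy as np

def expm_series(X, terms=60):
    """Matrix exponential via the power series sum_k X^k / k!."""
    n = X.shape[0]
    result = np.eye(n)       # the k = 0 term, X^0 = I
    term = np.eye(n)
    for k in range(1, terms):
        term = term @ X / k  # running term X^k / k!
        result = result + term
    return result

# For a diagonal matrix the exponential acts entrywise: exp(diag(a, b)) = diag(e^a, e^b).
E = expm_series(np.diag([1.0, 2.0]))
```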
In mathematics, the matrix representation of conic sections permits the tools of linear algebra to be used in the study of conic sections. It provides easy ways to calculate a conic section's axis, vertices, tangents and the pole and polar relationship between points and lines of the plane determined by the conic. The technique does not require putting the equation of a conic section into a standard form, thus making it easier to investigate those conic sections whose axes are not parallel to the coordinate system. Conic sections (including degenerate ones) are the sets of points whose coordinates satisfy a second-degree polynomial equation in two variables, Ax² + Bxy + Cy² + Dx + Ey + F = 0. By an abuse of notation, this conic section will also be called Q when no confusion can arise.
|
https://en.wikipedia.org/wiki/Matrix_representation_of_conic_sections
|
This equation can be written in matrix notation, in terms of a symmetric matrix to simplify some subsequent formulae, as (x y) A_33 (x y)ᵀ + (D E) (x y)ᵀ + F = 0. The sum of the first three terms of this equation, namely Ax² + Bxy + Cy², is the quadratic form associated with the equation, and the matrix A_33 = (A B/2; B/2 C) is called the matrix of the quadratic form. The trace and determinant of A_33 are both invariant with respect to rotation of axes and translation of the plane (movement of the origin). The quadratic equation can also be written as xᵀ A_Q x = 0, where x is the homogeneous coordinate vector in three variables restricted so that the last variable is 1, i.e., x = (x, y, 1)ᵀ, and where A_Q is the matrix (A B/2 D/2; B/2 C E/2; D/2 E/2 F). The matrix A_Q is called the matrix of the quadratic equation. Like that of A_33, its determinant is invariant with respect to both rotation and translation. The 2 × 2 upper left submatrix (a matrix of order 2) of A_Q, obtained by removing the third (last) row and third (last) column from A_Q, is the matrix of the quadratic form. The above notation A_33 is used in this article to emphasize this relationship.
|
https://en.wikipedia.org/wiki/Matrix_representation_of_conic_sections
|
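The two matrices and the standard discriminant test (sign of det A_33, with det A_Q distinguishing degenerate conics) can be sketched directly; the classification thresholds below assume exact coefficients up to a small numerical tolerance:

```python
import numpy as np

def conic_matrices(A, B, C, D, E, F):
    """Matrices A_Q and A_33 for the conic Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0."""
    AQ = np.array([[A, B / 2, D / 2],
                   [B / 2, C, E / 2],
                   [D / 2, E / 2, F]])
    return AQ, AQ[:2, :2]  # A_33 is the upper-left 2x2 block

def classify(A, B, C, D, E, F):
    """Non-degenerate conic type from the sign of det(A_33)."""
    AQ, A33 = conic_matrices(A, B, C, D, E, F)
    if abs(np.linalg.det(AQ)) < 1e-12:
        return "degenerate"
    d = np.linalg.det(A33)
    if d > 1e-12:
        return "ellipse"
    if d < -1e-12:
        return "hyperbola"
    return "parabola"

# x^2 + y^2 - 1 = 0 is a circle, hence classified as an ellipse by this test.
```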
In mathematics, the matrix sign function is a matrix function on square matrices analogous to the complex sign function. It was introduced by J. D. Roberts in 1971 as a tool for model reduction and for solving Lyapunov and algebraic Riccati equations in a technical report of Cambridge University, which was later published in a journal in 1980.
|
https://en.wikipedia.org/wiki/Matrix_sign_function
|
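A standard way to compute the matrix sign function is the Newton iteration S ← (S + S⁻¹)/2, which converges when the matrix has no eigenvalues on the imaginary axis (this is the classical method, sketched here without the scaling refinements used in practice):

```python
import numpy as np

def matrix_sign(X, iters=50):
    """Matrix sign via the Newton iteration S <- (S + inv(S)) / 2.
    Assumes X has no eigenvalues on the imaginary axis."""
    S = np.array(X, dtype=float)
    for _ in range(iters):
        S = 0.5 * (S + np.linalg.inv(S))
    return S

# Eigenvalues 2 and -3 map to +1 and -1 respectively.
S = matrix_sign(np.diag([2.0, -3.0]))
```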
In mathematics, the maximum modulus principle in complex analysis states that if f is a holomorphic function, then the modulus |f| cannot exhibit a strict local maximum that is properly within the domain of f. In other words, either f is locally a constant function, or, for any point z0 inside the domain of f, there exist other points arbitrarily close to z0 at which |f| takes larger values.
|
https://en.wikipedia.org/wiki/Maximum_modulus_principle
|
In mathematics, the maximum-minimums identity is a relation between the maximum element of a set S of n numbers and the minima of the 2^n − 1 non-empty subsets of S. Let S = {x1, x2, ..., xn}. The identity states that max{x1, x2, …, xn} = Σ_{i=1}^{n} x_i − Σ_{i<j} min{x_i, x_j} + Σ_{i<j<k} min{x_i, x_j, x_k} − ⋯ + (−1)^{n+1} min{x1, x2, …, xn}.
|
https://en.wikipedia.org/wiki/Maximum-minimums_identity
|
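The alternating sum over all non-empty subsets can be checked by brute force with itertools; each size-k layer contributes sign (−1)^{k+1}:

```python
from itertools import combinations

def max_via_minima(xs):
    """Recover max(xs) from the alternating sums of minima over all
    non-empty subsets (the maximum-minimums identity)."""
    total = 0.0
    for k in range(1, len(xs) + 1):
        sign = (-1) ** (k + 1)
        total += sign * sum(min(sub) for sub in combinations(xs, k))
    return total

max_via_minima([3, 1, 4, 1, 5])  # -> 5.0, the maximum of the list
```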
In mathematics, the max–min inequality is as follows: for any function f : Z × W → ℝ, sup_{z∈Z} inf_{w∈W} f(z, w) ≤ inf_{w∈W} sup_{z∈Z} f(z, w). When equality holds, one says that f, W, and Z satisfy a strong max–min property (or a saddle-point property). The example function f(z, w) = sin(z + w) illustrates that the equality does not hold for every function. A theorem giving conditions on f, W, and Z which guarantee the saddle-point property is called a minimax theorem.
|
https://en.wikipedia.org/wiki/Max–min_inequality
|
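The inequality, and the strict gap for sin(z + w), can be observed numerically on a grid (a discretized illustration, not a proof; the grid replaces sup/inf by max/min over sample points):

```python
import math

def grid(n, lo, hi):
    """n evenly spaced sample points on [lo, hi]."""
    return [lo + (hi - lo) * i / (n - 1) for i in range(n)]

def maxmin_vs_minmax(f, zs, ws):
    """Discrete sup-inf and inf-sup of f over the sample grids."""
    sup_inf = max(min(f(z, w) for w in ws) for z in zs)
    inf_sup = min(max(f(z, w) for z in zs) for w in ws)
    return sup_inf, inf_sup

zs = ws = grid(50, 0.0, 2 * math.pi)
lhs, rhs = maxmin_vs_minmax(lambda z, w: math.sin(z + w), zs, ws)
# lhs <= rhs always; for sin(z + w) the gap is large (about -1 versus +1),
# so this f has no saddle-point property.
```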
In mathematics, the mean (topological) dimension of a topological dynamical system is a non-negative extended real number that is a measure of the complexity of the system. Mean dimension was first introduced in 1999 by Gromov. Shortly afterwards it was developed and studied systematically by Lindenstrauss and Weiss. In particular they proved the following key fact: a system with finite topological entropy has zero mean dimension.
|
https://en.wikipedia.org/wiki/Mean_dimension
|
For various topological dynamical systems with infinite topological entropy, the mean dimension can be calculated or at least bounded from below and above. This allows mean dimension to be used to distinguish between systems with infinite topological entropy. Mean dimension is also related to the problem of embedding topological dynamical systems in shift spaces (over Euclidean cubes).
|
https://en.wikipedia.org/wiki/Mean_dimension
|
In mathematics, the mean curvature H {\displaystyle H} of a surface S {\displaystyle S} is an extrinsic measure of curvature that comes from differential geometry and that locally describes the curvature of an embedded surface in some ambient space such as Euclidean space. The concept was used by Sophie Germain in her work on elasticity theory. Jean Baptiste Marie Meusnier used it in 1776, in his studies of minimal surfaces. It is important in the analysis of minimal surfaces, which have mean curvature zero, and in the analysis of physical interfaces between fluids (such as soap films) which, for example, have constant mean curvature in static flows, by the Young–Laplace equation.
|
https://en.wikipedia.org/wiki/Mean_curvature
|
In mathematics, the mean value problem was posed by Stephen Smale in 1981. This problem is still open in full generality. The problem asks: for a given complex polynomial f of degree d ≥ 2 and a complex number z, is there a critical point c of f (i.e. f′(c) = 0) such that |(f(z) − f(c)) / (z − c)| ≤ K |f′(z)| for K = 1? It was proved for K = 4. For a polynomial of degree d the constant K has to be at least (d − 1)/d, as shown by the example f(z) = z^d − dz; therefore no bound better than K = 1 can exist.
|
https://en.wikipedia.org/wiki/Mean_value_problem
|
In mathematics, the mean value theorem (or Lagrange theorem) states, roughly, that for a given planar arc between two endpoints, there is at least one point at which the tangent to the arc is parallel to the secant through its endpoints. It is one of the most important results in real analysis. This theorem is used to prove statements about a function on an interval starting from local hypotheses about derivatives at points of the interval. More precisely, the theorem states that if f is a continuous function on the closed interval [a, b] and differentiable on the open interval (a, b), then there exists a point c in (a, b) such that the tangent at c is parallel to the secant line through the endpoints (a, f(a)) and (b, f(b)), that is, f′(c) = (f(b) − f(a)) / (b − a).
|
https://en.wikipedia.org/wiki/Mean-value_theorem
|
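When f′ is available, a point c guaranteed by the theorem can be found numerically by solving f′(c) = (f(b) − f(a))/(b − a). A bisection sketch (assuming, for simplicity, that f′ minus the secant slope changes sign on (a, b)):

```python
def mvt_point(f, df, a, b, tol=1e-10):
    """Find c in (a, b) with df(c) equal to the secant slope of f over [a, b],
    by bisection on g(c) = df(c) - slope. Assumes g changes sign on (a, b)."""
    slope = (f(b) - f(a)) / (b - a)
    g = lambda c: df(c) - slope
    lo, hi = a, b
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# f(x) = x^3 on [0, 2]: secant slope is (8 - 0)/2 = 4, so 3c^2 = 4
# and c = 2/sqrt(3), which indeed lies in (0, 2).
c = mvt_point(lambda x: x ** 3, lambda x: 3 * x ** 2, 0.0, 2.0)
```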
In mathematics, the measurable Riemann mapping theorem is a theorem proved in 1960 by Lars Ahlfors and Lipman Bers in complex analysis and geometric function theory. Contrary to its name, it is not a direct generalization of the Riemann mapping theorem, but instead a result concerning quasiconformal mappings and solutions of the Beltrami equation. The result was prefigured by earlier results of Charles Morrey from 1938 on quasi-linear elliptic partial differential equations. The theorem of Ahlfors and Bers states that if μ is a bounded measurable function on C with ‖μ‖∞ < 1, then there is a unique solution f of the Beltrami equation ∂f/∂z̄ = μ(z) ∂f/∂z for which f is a quasiconformal homeomorphism of C fixing the points 0, 1 and ∞. A similar result is true with C replaced by the unit disk D. Their proof used the Beurling transform, a singular integral operator.
|
https://en.wikipedia.org/wiki/Measurable_Riemann_mapping_theorem
|
In mathematics, the mediant of two fractions, generally made up of four positive integers, a/c and b/d, is defined as (a + b)/(c + d). That is to say, the numerator and denominator of the mediant are the sums of the numerators and denominators of the given fractions, respectively.
|
https://en.wikipedia.org/wiki/Mediant_(mathematics)
|
It is sometimes called the freshman sum, as it is a common mistake in the early stages of learning about addition of fractions. Technically, this is a binary operation on valid fractions (nonzero denominator), considered as ordered pairs of appropriate integers, a priori disregarding the perspective on rational numbers as equivalence classes of fractions. For example, the mediant of the fractions 1/1 and 1/2 is 2/3.
|
https://en.wikipedia.org/wiki/Mediant_(mathematics)
|
However, if the fraction 1/1 is replaced by the fraction 2/2, which is an equivalent fraction denoting the same rational number 1, the mediant of the fractions 2/2 and 1/2 is 3/4. For a stronger connection to rational numbers the fractions may be required to be reduced to lowest terms, thereby selecting unique representatives from the respective equivalence classes. The Stern–Brocot tree provides an enumeration of all positive rational numbers via mediants in lowest terms, obtained purely by iterative computation of the mediant according to a simple algorithm.
|
https://en.wikipedia.org/wiki/Mediant_(mathematics)
|
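The Stern–Brocot enumeration mentioned above can be sketched directly: keep two bounding fractions, repeatedly take their mediant, and descend left or right depending on the comparison with the target (fractions are kept as numerator/denominator pairs so the mediant is the plain "freshman sum"; the path labels 'L'/'R' are an illustrative convention):

```python
from fractions import Fraction

def mediant(p, q):
    """Freshman sum: add numerators and add denominators."""
    return (p[0] + q[0], p[1] + q[1])

def stern_brocot(target, max_steps=64):
    """Locate a positive rational in the Stern-Brocot tree by repeated
    mediants, starting from the formal bounds 0/1 and 1/0."""
    lo, hi = (0, 1), (1, 0)
    path = []
    for _ in range(max_steps):
        m = mediant(lo, hi)
        value = Fraction(m[0], m[1])
        if value == target:
            return m, path
        if value < target:
            lo, path = m, path + ['R']   # go right, toward larger values
        else:
            hi, path = m, path + ['L']   # go left, toward smaller values
    raise ValueError("not found within step limit")

# 3/4 is reached via the mediants 1/1 -> 1/2 -> 2/3 -> 3/4.
frac, path = stern_brocot(Fraction(3, 4))
```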
In mathematics, the membership function of a fuzzy set is a generalization of the indicator function for classical sets. In fuzzy logic, it represents the degree of truth as an extension of valuation. Degrees of truth are often confused with probabilities, although they are conceptually distinct, because fuzzy truth represents membership in vaguely defined sets, not likelihood of some event or condition. Membership functions were introduced by Lotfi A. Zadeh in the first paper on fuzzy sets (1965). Zadeh, in his theory of fuzzy sets, proposed using a membership function (with a range covering the interval [0, 1]) operating on the domain of all possible values.
|
https://en.wikipedia.org/wiki/Fuzzy_membership
|
In mathematics, the metaplectic group Mp2n is a double cover of the symplectic group Sp2n. It can be defined over either real or p-adic numbers. The construction covers more generally the case of an arbitrary local or finite field, and even the ring of adeles. The metaplectic group has a particularly significant infinite-dimensional linear representation, the Weil representation. It was used by André Weil to give a representation-theoretic interpretation of theta functions, and is important in the theory of modular forms of half-integral weight and the theta correspondence.
|
https://en.wikipedia.org/wiki/Metaplectic_group
|
In mathematics, the method of Frobenius, named after Ferdinand Georg Frobenius, is a way to find an infinite series solution for a second-order ordinary differential equation of the form z² u″ + p(z) z u′ + q(z) u = 0, with u′ ≡ du/dz and u″ ≡ d²u/dz², in the vicinity of the regular singular point z = 0. One can divide by z² to obtain a differential equation of the form u″ + (p(z)/z) u′ + (q(z)/z²) u = 0, which will not be solvable with regular power series methods if either p(z)/z or q(z)/z² is not analytic at z = 0. The Frobenius method enables one to create a power series solution to such a differential equation, provided that p(z) and q(z) are themselves analytic at 0 or, being analytic elsewhere, both their limits at 0 exist (and are finite).
|
https://en.wikipedia.org/wiki/Indicial_equation
|
In mathematics, the method of characteristics is a technique for solving partial differential equations. Typically, it applies to first-order equations, although more generally the method of characteristics is valid for any hyperbolic partial differential equation. The method is to reduce a partial differential equation to a family of ordinary differential equations along which the solution can be integrated from some initial data given on a suitable hypersurface.
|
https://en.wikipedia.org/wiki/Charpit_method
|
In mathematics, the method of clearing denominators, also called clearing fractions, is a technique for simplifying an equation that equates two expressions, each of which is a sum of rational expressions – including simple fractions.
|
https://en.wikipedia.org/wiki/Clearing_denominators
|
In mathematics, the method of descent is the term coined by the French mathematician Jacques Hadamard as a method for solving a partial differential equation in several real or complex variables, by regarding it as the specialisation of an equation in more variables, constant in the extra parameters. This method has been used to solve the wave equation, the heat equation and other versions of the Cauchy initial value problem. As Hadamard (1923) wrote: We thus have a first example of what I shall call a 'method of descent'. Creating a phrase for an idea which is merely childish and has been used since the first steps of the theory is, I must confess, rather ambitious; but we shall come across it rather frequently, so that it will be convenient to have a word to denote it. It consists in noticing that he who can do more can do less: if we can integrate equations with m variables, we can do the same for equations with (m – 1) variables.
|
https://en.wikipedia.org/wiki/Hadamard's_method_of_descent
|
In mathematics, the method of dominant balance is used to determine the asymptotic behavior of solutions to an ordinary differential equation without fully solving the equation. The process is iterative, in that the result obtained by performing the method once can be used as input when the method is repeated, to obtain as many terms in the asymptotic expansion as desired. The process goes as follows: Assume that the asymptotic behavior has the form y ( x ) ∼ e S ( x ) . {\displaystyle y(x)\sim e^{S(x)}.} Make an informed guess as to which terms in the ODE might be negligible in the limit of interest.
|
https://en.wikipedia.org/wiki/Method_of_dominant_balance
|
Drop these terms and solve the resulting simpler ODE. Check that the solution is consistent with step 2. If this is the case, then one has the controlling factor of the asymptotic behavior; otherwise, one needs to try dropping different terms in step 2, instead. Repeat the process to higher orders, relying on the above result as the leading term in the solution.
|
https://en.wikipedia.org/wiki/Method_of_dominant_balance
|
In mathematics, the method of equating the coefficients is a way of solving a functional equation of two expressions such as polynomials for a number of unknown parameters. It relies on the fact that two expressions are identical precisely when corresponding coefficients are equal for each different type of term. The method is used to bring formulas into a desired form.
|
https://en.wikipedia.org/wiki/Equating_coefficients
|
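As a small illustration (the specific decomposition is my own, not from the excerpt): the partial-fraction split 1/(x(x+1)) = A/x + B/(x+1) is found by clearing denominators and equating coefficients of like powers of x, which yields a tiny linear system.

```python
from fractions import Fraction

# Decompose 1/(x(x+1)) = A/x + B/(x+1).
# Clearing denominators: 1 = A(x+1) + Bx = (A+B)x + A.
# Equating coefficients of x^1 and x^0 gives:
#   A + B = 0   (coefficient of x)
#   A     = 1   (constant term)
A = Fraction(1)
B = -A  # from A + B = 0

# Sanity check: the two sides agree at several sample points.
for x in [Fraction(1), Fraction(2), Fraction(7, 3)]:
    lhs = Fraction(1) / (x * (x + 1))
    rhs = A / x + B / (x + 1)
    assert lhs == rhs
```

Exact rational arithmetic (`Fraction`) avoids any floating-point tolerance in the check.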
In mathematics, the method of matched asymptotic expansions is a common approach to finding an accurate approximation to the solution to an equation, or system of equations. It is particularly used when solving singularly perturbed differential equations. It involves finding several different approximate solutions, each of which is valid (i.e. accurate) for part of the range of the independent variable, and then combining these different solutions together to give a single approximate solution that is valid for the whole range of values of the independent variable. In the Russian literature, these methods were known under the name of "intermediate asymptotics" and were introduced in the work of Yakov Zeldovich and Grigory Barenblatt.
|
https://en.wikipedia.org/wiki/Method_of_matched_asymptotic_expansions
|
In mathematics, the method of steepest descent or saddle-point method is an extension of Laplace's method for approximating an integral, where one deforms a contour integral in the complex plane to pass near a stationary point (saddle point), in roughly the direction of steepest descent or stationary phase. The saddle-point approximation is used with integrals in the complex plane, whereas Laplace’s method is used with real integrals. The integral to be estimated is often of the form ∫ C f ( z ) e λ g ( z ) d z , {\displaystyle \int _{C}f(z)e^{\lambda g(z)}\,dz,} where C is a contour, and λ is large.
|
https://en.wikipedia.org/wiki/Saddle-point_method
|
One version of the method of steepest descent deforms the contour of integration C into a new path of integration C′ so that the following conditions hold: C′ passes through one or more zeros of the derivative g′(z), and the imaginary part of g(z) is constant on C′. The method of steepest descent was first published by Debye (1909), who used it to estimate Bessel functions and pointed out that it occurred in the unpublished note by Riemann (1863) about hypergeometric functions. The contour of steepest descent has a minimax property, see Fedoryuk (2001). Siegel (1932) described some other unpublished notes of Riemann, where he used this method to derive the Riemann–Siegel formula.
|
https://en.wikipedia.org/wiki/Saddle-point_method
|
In mathematics, the method of undetermined coefficients is an approach to finding a particular solution to certain nonhomogeneous ordinary differential equations and recurrence relations. It is closely related to the annihilator method, but instead of using a particular kind of differential operator (the annihilator) in order to find the best possible form of the particular solution, an ansatz or 'guess' is made as to the appropriate form, which is then tested by differentiating the resulting equation. For complex equations, the annihilator method or variation of parameters is less time-consuming to perform. Undetermined coefficients is not as general a method as variation of parameters, since it only works for differential equations that follow certain forms.
|
https://en.wikipedia.org/wiki/Method_of_undetermined_coefficients
|
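A hedged sketch of undetermined coefficients (the specific ODE is my own illustration): for y″ − y = e^{2x}, the ansatz y_p = A·e^{2x} gives (4A − A)e^{2x} = e^{2x}, so A = 1/3. The code verifies the resulting particular solution numerically with a central-difference second derivative.

```python
import math

def y_p(x):
    # Undetermined-coefficients ansatz y_p = A*e^(2x) with A = 1/3,
    # found by substituting into y'' - y = e^(2x): 4A - A = 1.
    return math.exp(2 * x) / 3

def second_derivative(f, x, h=1e-4):
    # Central finite difference, O(h^2) accurate.
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

# The residual y_p'' - y_p - e^(2x) should vanish up to discretization error.
for x in [-1.0, 0.0, 0.5, 1.0]:
    residual = second_derivative(y_p, x) - y_p(x) - math.exp(2 * x)
    assert abs(residual) < 1e-4, residual
```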
In mathematics, the metric derivative is a notion of derivative appropriate to parametrized paths in metric spaces. It generalizes the notion of "speed" or "absolute velocity" to spaces which have a notion of distance (i.e. metric spaces) but not direction (such as vector spaces).
|
https://en.wikipedia.org/wiki/Metric_derivative
|
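For a smooth path in Euclidean space the metric derivative reduces to the norm of the velocity; a small numerical sketch (path and step size are my own choices):

```python
import math

def metric_derivative(gamma, t, h=1e-6):
    """Forward-difference approximation of lim d(gamma(t+h), gamma(t)) / h
    for a path gamma into the Euclidean plane."""
    x1, y1 = gamma(t)
    x2, y2 = gamma(t + h)
    return math.hypot(x2 - x1, y2 - y1) / h

# Unit-speed circle: the metric derivative ("speed") is 1 everywhere.
circle = lambda t: (math.cos(t), math.sin(t))
for t in [0.0, 1.0, 2.5]:
    assert abs(metric_derivative(circle, t) - 1.0) < 1e-5
```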
In mathematics, the mex ("minimum excluded value") of a subset of a well-ordered set is the smallest value from the whole set that does not belong to the subset. That is, it is the minimum value of the complement set. Beyond sets, subclasses of well-ordered classes have minimum excluded values.
|
https://en.wikipedia.org/wiki/Mex_(mathematics)
|
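A minimal sketch of mex over the natural numbers (taking 0 as the least element):

```python
def mex(s):
    """Smallest non-negative integer not in the finite set s."""
    s = set(s)
    i = 0
    while i in s:
        i += 1
    return i

assert mex(set()) == 0
assert mex({0, 1, 2}) == 3
assert mex({1, 2, 5}) == 0
assert mex({0, 2, 3}) == 1
```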
Minimum excluded values of subclasses of the ordinal numbers are used in combinatorial game theory to assign nim-values to impartial games. According to the Sprague–Grundy theorem, the nim-value of a game position is the minimum excluded value of the class of values of the positions that can be reached in a single move from the given position. Minimum excluded values are also used in graph theory, in greedy coloring algorithms. These algorithms typically choose an ordering of the vertices of a graph and choose a numbering of the available vertex colors. They then consider the vertices in order, for each vertex choosing its color to be the minimum excluded value of the set of colors already assigned to its neighbors.
|
https://en.wikipedia.org/wiki/Mex_(mathematics)
|
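The greedy-coloring use described above can be sketched as follows (the example graph, ordering, and helper names are my own):

```python
def greedy_coloring(adjacency, order):
    """Color vertices in the given order; each vertex receives the mex of
    the colors already assigned to its neighbors."""
    color = {}
    for v in order:
        used = {color[u] for u in adjacency[v] if u in color}
        c = 0
        while c in used:  # minimum excluded value of the neighbor colors
            c += 1
        color[v] = c
    return color

# A 4-cycle a-b-c-d needs only two colors under this ordering.
cycle = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "a"]}
coloring = greedy_coloring(cycle, ["a", "b", "c", "d"])
assert max(coloring.values()) == 1  # colors 0 and 1 suffice
for v, nbrs in cycle.items():
    assert all(coloring[v] != coloring[u] for u in nbrs)
```

Note that greedy coloring is order-dependent; a bad ordering can use more colors than the chromatic number.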
In mathematics, the mice problem is a continuous pursuit–evasion problem in which a number of mice (or insects, dogs, missiles, etc.) are considered to be placed at the corners of a regular polygon. In the classic setup, each then begins to move towards its immediate neighbour (clockwise or anticlockwise). The goal is often to find out at what time the mice meet.
|
https://en.wikipedia.org/wiki/Mice_problem
|
The most common version has the mice starting at the corners of a unit square, moving at unit speed. In this case they meet after a time of one unit, because the distance between two neighboring mice always decreases at a speed of one unit. More generally, for a regular polygon of n {\displaystyle n} unit-length sides, the distance between neighboring mice decreases at a speed of 1 − cos ( 2 π / n ) {\displaystyle 1-\cos(2\pi /n)} , so they meet after a time of 1 / ( 1 − cos ( 2 π / n ) ) {\displaystyle 1/{\bigl (}1-\cos(2\pi /n){\bigr )}} .
|
https://en.wikipedia.org/wiki/Mice_problem
|
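The meeting-time formula quoted above, 1/(1 − cos(2π/n)), can be checked directly; for the unit square (n = 4) it gives 1, matching the classic result.

```python
import math

def meeting_time(n):
    # Time for n mice on a regular n-gon with unit sides and unit speed.
    return 1.0 / (1.0 - math.cos(2.0 * math.pi / n))

assert abs(meeting_time(4) - 1.0) < 1e-12          # unit square
assert abs(meeting_time(3) - 2.0 / 3.0) < 1e-12    # equilateral triangle
```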
In mathematics, the minimum k-cut is a combinatorial optimization problem that requires finding a set of edges whose removal would partition the graph into at least k connected components. These edges are referred to as a k-cut. The goal is to find the minimum-weight k-cut. This partitioning can have applications in VLSI design, data-mining, finite elements and communication in parallel computing.
|
https://en.wikipedia.org/wiki/Minimum_k-cut
|
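Minimum k-cut is NP-hard for general k, but on a tiny graph it can be brute-forced; the sketch below (graph and weights are my own) enumerates every assignment of vertices to k parts.

```python
from itertools import product

def min_k_cut(vertices, weighted_edges, k):
    """Brute force: try every assignment of vertices to k labels and
    return the minimum total weight of edges crossing between parts."""
    best = float("inf")
    for labels in product(range(k), repeat=len(vertices)):
        if len(set(labels)) < k:      # all k parts must be nonempty
            continue
        part = dict(zip(vertices, labels))
        cost = sum(w for u, v, w in weighted_edges if part[u] != part[v])
        best = min(best, cost)
    return best

# Two triangles joined by one light edge: the best 2-cut severs that edge.
edges = [("a", "b", 3), ("b", "c", 3), ("a", "c", 3),
         ("d", "e", 3), ("e", "f", 3), ("d", "f", 3),
         ("c", "d", 1)]
assert min_k_cut("abcdef", edges, 2) == 1
```

The enumeration is exponential in the number of vertices, so this is only for illustration; practical algorithms (e.g. Karger–Stein style contractions) scale far better.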
In mathematics, the minimum rank is a graph parameter mr ( G ) {\displaystyle \operatorname {mr} (G)} for a graph G. It was motivated by the Colin de Verdière graph invariant.
|
https://en.wikipedia.org/wiki/Minimum_rank_of_a_graph
|
In mathematics, the modern component-free approach to the theory of a tensor views a tensor as an abstract object, expressing some definite type of multilinear concept. Tensors' properties can be derived from their definitions, as linear maps or, more generally, multilinear maps; and the rules for manipulations of tensors arise as an extension of linear algebra to multilinear algebra. In differential geometry, an intrinsic geometric statement may be described by a tensor field on a manifold, without needing to make reference to coordinates at all.
|
https://en.wikipedia.org/wiki/Tensor_(intrinsic_definition)
|
The same is true in general relativity, of tensor fields describing a physical property. The component-free approach is also used extensively in abstract algebra and homological algebra, where tensors arise naturally. Note: This article assumes an understanding of the tensor product of vector spaces without chosen bases. An overview of the subject can be found in the main tensor article.
|
https://en.wikipedia.org/wiki/Tensor_(intrinsic_definition)
|
In mathematics, the modular group is the projective special linear group PSL ( 2 , Z ) {\textstyle \operatorname {PSL} (2,\mathbb {Z} )} of 2 × 2 matrices with integer coefficients and determinant 1. The matrices A and −A are identified. The modular group acts on the upper half of the complex plane by fractional linear transformations, and the name "modular group" comes from the relation to moduli spaces and not from modular arithmetic.
|
https://en.wikipedia.org/wiki/Hecke_group
|
In mathematics, the modular lambda function λ(τ) is a highly symmetric holomorphic function on the complex upper half-plane. It is invariant under the fractional linear action of the congruence group Γ(2), and generates the function field of the corresponding quotient, i.e., it is a Hauptmodul for the modular curve X(2). Over any point τ, its value can be described as a cross ratio of the branch points of a ramified double cover of the projective line by the elliptic curve C / ⟨ 1 , τ ⟩ {\displaystyle \mathbb {C} /\langle 1,\tau \rangle } , where the map is defined as the quotient by the involution. The q-expansion, where q = e π i τ {\displaystyle q=e^{\pi i\tau }} is the nome, is given by: λ ( τ ) = 16 q − 128 q 2 + 704 q 3 − 3072 q 4 + 11488 q 5 − 38400 q 6 + … {\displaystyle \lambda (\tau )=16q-128q^{2}+704q^{3}-3072q^{4}+11488q^{5}-38400q^{6}+\dots } . OEIS: A115977 By symmetrizing the lambda function under the canonical action of the symmetric group S3 on X(2), and then normalizing suitably, one obtains a function on the upper half-plane that is invariant under the full modular group SL 2 ( Z ) {\displaystyle \operatorname {SL} _{2}(\mathbb {Z} )} , and it is in fact Klein's modular j-invariant.
|
https://en.wikipedia.org/wiki/Elliptic_modulus
|
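The q-expansion can be checked numerically against the classical theta-quotient formula λ = θ₂⁴/θ₃⁴ in the nome q = e^{πiτ}; this formula is standard but is not stated in the excerpt, so treat the sketch below as a cross-check under that assumption.

```python
# Check lambda(tau) = theta2^4 / theta3^4 against the q-expansion,
# at a small real nome q = e^{pi*i*tau} (tau purely imaginary, so q real).
q = 0.01

# Jacobi theta null values as rapidly converging series in the nome:
# theta2 = 2 * sum q^((n+1/2)^2),  theta3 = 1 + 2 * sum q^(n^2).
theta2 = 2 * sum(q ** ((n + 0.5) ** 2) for n in range(20))
theta3 = 1 + 2 * sum(q ** (n * n) for n in range(1, 20))
lam = (theta2 / theta3) ** 4

# Truncated q-expansion from the text; the omitted tail is O(q^7).
series = 16*q - 128*q**2 + 704*q**3 - 3072*q**4 + 11488*q**5 - 38400*q**6
assert abs(lam - series) < 1e-6
```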
In mathematics, the moduli stack of elliptic curves, denoted as M 1 , 1 {\displaystyle {\mathcal {M}}_{1,1}} or M ell {\displaystyle {\mathcal {M}}_{\textrm {ell}}} , is an algebraic stack over Spec ( Z ) {\displaystyle {\text{Spec}}(\mathbb {Z} )} classifying elliptic curves. Note that it is a special case of the moduli stack of algebraic curves M g , n {\displaystyle {\mathcal {M}}_{g,n}} . In particular its points with values in some field correspond to elliptic curves over the field, and more generally morphisms from a scheme S {\displaystyle S} to it correspond to elliptic curves over S {\displaystyle S} . The construction of this space spans over a century because of the various generalizations of elliptic curves as the field has developed. All of these generalizations are contained in M 1 , 1 {\displaystyle {\mathcal {M}}_{1,1}} .
|
https://en.wikipedia.org/wiki/Moduli_stack_of_elliptic_curves
|
In mathematics, the modulus of convexity and the characteristic of convexity are measures of "how convex" the unit ball in a Banach space is. In some sense, the modulus of convexity has the same relationship to the ε-δ definition of uniform convexity as the modulus of continuity does to the ε-δ definition of continuity.
|
https://en.wikipedia.org/wiki/Modulus_and_characteristic_of_convexity
|
In mathematics, the moments of a function are certain quantitative measures related to the shape of the function's graph. If the function represents mass density, then the zeroth moment is the total mass, the first moment (normalized by total mass) is the center of mass, and the second moment is the moment of inertia. If the function is a probability distribution, then the first moment is the expected value, the second central moment is the variance, the third standardized moment is the skewness, and the fourth standardized moment is the kurtosis. The mathematical concept is closely related to the concept of moment in physics.
|
https://en.wikipedia.org/wiki/Moment_(statistics)
|
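For a discrete sample, the moment-based statistics named above can be computed directly; this sketch uses population (not sample-corrected) normalizations.

```python
import math

def moments(data):
    n = len(data)
    mean = sum(data) / n                                  # first raw moment
    var = sum((x - mean) ** 2 for x in data) / n          # second central moment
    sd = math.sqrt(var)
    skew = sum(((x - mean) / sd) ** 3 for x in data) / n  # third standardized
    kurt = sum(((x - mean) / sd) ** 4 for x in data) / n  # fourth standardized
    return mean, var, skew, kurt

mean, var, skew, kurt = moments([1.0, 2.0, 3.0, 4.0, 5.0])
assert mean == 3.0 and var == 2.0
assert abs(skew) < 1e-12   # symmetric data has zero skewness
```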
For a distribution of mass or probability on a bounded interval, the collection of all the moments (of all orders, from 0 to ∞) uniquely determines the distribution (Hausdorff moment problem). The same is not true on unbounded intervals (Hamburger moment problem). In the mid-nineteenth century, Pafnuty Chebyshev became the first person to think systematically in terms of the moments of random variables.
|
https://en.wikipedia.org/wiki/Moment_(statistics)
|
In mathematics, the monkey saddle is the surface defined by the equation z = x 3 − 3 x y 2 , {\displaystyle z=x^{3}-3xy^{2},\,} or in cylindrical coordinates z = ρ 3 cos ( 3 φ ) . {\displaystyle z=\rho ^{3}\cos(3\varphi ).} It belongs to the class of saddle surfaces, and its name derives from the observation that a saddle for a monkey would require two depressions for the legs and one for the tail.
|
https://en.wikipedia.org/wiki/Monkey_saddle
|
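The equivalence of the rectangular and cylindrical forms is easy to spot-check numerically:

```python
import math

def monkey_saddle_xy(x, y):
    return x**3 - 3 * x * y**2

def monkey_saddle_cyl(rho, phi):
    return rho**3 * math.cos(3 * phi)

# The two parameterizations agree wherever (x, y) = (rho cos phi, rho sin phi).
for rho in [0.5, 1.0, 2.0]:
    for phi in [0.0, 0.7, 2.1, 4.0]:
        x, y = rho * math.cos(phi), rho * math.sin(phi)
        assert abs(monkey_saddle_xy(x, y) - monkey_saddle_cyl(rho, phi)) < 1e-9
```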
The point ( 0 , 0 , 0 ) {\displaystyle (0,0,0)} on the monkey saddle corresponds to a degenerate critical point of the function z ( x , y ) {\displaystyle z(x,y)} at ( 0 , 0 ) {\displaystyle (0,0)} . The monkey saddle has an isolated umbilical point with zero Gaussian curvature at the origin, while the curvature is strictly negative at all other points. One can relate the rectangular and cylindrical equations using complex numbers x + i y = r e i φ: {\displaystyle x+iy=re^{i\varphi }:} z = x 3 − 3 x y 2 = Re ( ( x + i y ) 3 ) = Re ( r 3 e 3 i φ ) = r 3 cos ( 3 φ ) .
|
https://en.wikipedia.org/wiki/Monkey_saddle
|
{\displaystyle z=x^{3}-3xy^{2}=\operatorname {Re} {\bigl (}(x+iy)^{3}{\bigr )}=\operatorname {Re} {\bigl (}r^{3}e^{3i\varphi }{\bigr )}=r^{3}\cos(3\varphi ).} By replacing 3 in the cylindrical equation with any integer k ≥ 1 , {\displaystyle k\geq 1,} one can create a saddle with k {\displaystyle k} depressions. Another orientation of the monkey saddle is the Smelt petal defined by x + y + z + x y z = 0 , {\displaystyle x+y+z+xyz=0,} so that the z-axis of the monkey saddle corresponds to the direction ( 1 , 1 , 1 ) {\displaystyle (1,1,1)} in the Smelt petal.
|
https://en.wikipedia.org/wiki/Monkey_saddle
|
In mathematics, the monopole moduli space is a space parametrizing monopoles (solutions of the Bogomolny equations). Atiyah and Hitchin (1988) studied the moduli space for 2 monopoles in detail and used it to describe the scattering of monopoles.
|
https://en.wikipedia.org/wiki/Monopole_moduli_space
|
In mathematics, the monster Lie algebra is an infinite-dimensional generalized Kac–Moody algebra acted on by the monster group, which was used to prove the monstrous moonshine conjectures.
|
https://en.wikipedia.org/wiki/Monster_Lie_algebra
|
In mathematics, the mountain climbing problem is a mathematical problem that considers a two-dimensional mountain range (represented as a continuous function), and asks whether it is possible for two mountain climbers starting at sea level on the left and right sides of the mountain to meet at the summit, while maintaining equal altitudes at all times. This problem was named and posed in this form by James V. Whittaker (1966), but its history goes back to Tatsuo Homma (1952), who solved a version of it. The problem has been repeatedly rediscovered and solved independently in different contexts by a number of people (see references below). Since the 1990s, the problem was shown to be connected to the weak Fréchet distance of curves in the plane, various planar motion planning problems in computational geometry, the inscribed square problem, semigroup of polynomials, etc. The problem was popularized in the article by Goodman, Pach & Yap (1989), which received the Mathematical Association of America's Lester R. Ford Award in 1990.
|
https://en.wikipedia.org/wiki/Mountain_climbing_problem
|
In mathematics, the moving sofa problem or sofa problem is a two-dimensional idealisation of real-life furniture-moving problems and asks for the rigid two-dimensional shape of largest area that can be maneuvered through an L-shaped planar region with legs of unit width. The area thus obtained is referred to as the sofa constant. The exact value of the sofa constant is an open problem. The currently leading solution, by Joseph L. Gerver, has a value of approximately 2.2195 and is thought to be close to the optimal, based upon subsequent study and theoretical bounds.
|
https://en.wikipedia.org/wiki/Moving_sofa_problem
|
In mathematics, the multicomplex number systems C n {\displaystyle \mathbb {C} _{n}} are defined inductively as follows: Let C0 be the real number system. For every n > 0 let i n {\displaystyle i_{n}} be a square root of −1, that is, an imaginary unit. Then C n + 1 = { z = x + y i n + 1: x , y ∈ C n } {\displaystyle \mathbb {C} _{n+1}=\lbrace z=x+yi_{n+1}:x,y\in \mathbb {C} _{n}\rbrace } .
|
https://en.wikipedia.org/wiki/Multicomplex_number
|
In the multicomplex number systems one also requires that i n i m = i m i n {\displaystyle i_{n}i_{m}=i_{m}i_{n}} (commutativity). Then C 1 {\displaystyle \mathbb {C} _{1}} is the complex number system, C 2 {\displaystyle \mathbb {C} _{2}} is the bicomplex number system, C 3 {\displaystyle \mathbb {C} _{3}} is the tricomplex number system of Corrado Segre, and C n {\displaystyle \mathbb {C} _{n}} is the multicomplex number system of order n. Each C n {\displaystyle \mathbb {C} _{n}} forms a Banach algebra. G. Baley Price has written about the function theory of multicomplex systems, providing details for the bicomplex system C 2 .
|
https://en.wikipedia.org/wiki/Multicomplex_number
|
{\displaystyle \mathbb {C} _{2}.} The multicomplex number systems are not to be confused with Clifford numbers (elements of a Clifford algebra), since Clifford's square roots of −1 anti-commute ( i n i m + i m i n = 0 {\displaystyle i_{n}i_{m}+i_{m}i_{n}=0} when m ≠ n for Clifford). Because the multicomplex numbers have several square roots of –1 that commute, they also have zero divisors: ( i n − i m ) ( i n + i m ) = i n 2 − i m 2 = 0 {\displaystyle (i_{n}-i_{m})(i_{n}+i_{m})=i_{n}^{2}-i_{m}^{2}=0} despite i n − i m ≠ 0 {\displaystyle i_{n}-i_{m}\neq 0} and i n + i m ≠ 0 {\displaystyle i_{n}+i_{m}\neq 0} , and ( i n i m − 1 ) ( i n i m + 1 ) = i n 2 i m 2 − 1 = 0 {\displaystyle (i_{n}i_{m}-1)(i_{n}i_{m}+1)=i_{n}^{2}i_{m}^{2}-1=0} despite i n i m ≠ 1 {\displaystyle i_{n}i_{m}\neq 1} and i n i m ≠ − 1 {\displaystyle i_{n}i_{m}\neq -1} .
|
https://en.wikipedia.org/wiki/Multicomplex_number
|
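The zero-divisor identity can be verified concretely in the bicomplex numbers C₂, represented here (my own minimal sketch) as pairs a + b·i₂ with a, b complex, where i₂ is a second, commuting square root of −1 alongside i₁ = i:

```python
# Bicomplex number a + b*i2, where a, b are Python complex numbers
# (carrying i1 = 1j) and i2 is a second, commuting square root of -1.
class Bicomplex:
    def __init__(self, a, b):
        self.a, self.b = complex(a), complex(b)

    def __mul__(self, other):
        # (a + b i2)(c + d i2) = (ac - bd) + (ad + bc) i2, since i2^2 = -1.
        return Bicomplex(self.a * other.a - self.b * other.b,
                         self.a * other.b + self.b * other.a)

    def is_zero(self):
        return self.a == 0 and self.b == 0

i2 = Bicomplex(0, 1)
assert (i2 * i2).a == -1 and (i2 * i2).b == 0  # i2 really squares to -1

minus = Bicomplex(1j, -1)   # i1 - i2
plus = Bicomplex(1j, 1)     # i1 + i2

# Nonzero factors with zero product: (i1 - i2)(i1 + i2) = i1^2 - i2^2 = 0.
assert not minus.is_zero() and not plus.is_zero()
assert (minus * plus).is_zero()
```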
Any product i n i m {\displaystyle i_{n}i_{m}} of two distinct multicomplex units behaves as the j {\displaystyle j} of the split-complex numbers, and therefore the multicomplex numbers contain a number of copies of the split-complex number plane. With respect to subalgebra C k {\displaystyle \mathbb {C} _{k}} , k = 0, 1, ..., n − 1, the multicomplex system C n {\displaystyle \mathbb {C} _{n}} is of dimension 2n − k over C k . {\displaystyle \mathbb {C} _{k}.}
|
https://en.wikipedia.org/wiki/Multicomplex_number
|
In mathematics, the multinomial theorem describes how to expand a power of a sum in terms of powers of the terms in that sum. It is the generalization of the binomial theorem from binomials to multinomials.
|
https://en.wikipedia.org/wiki/Multinomial_formula
|
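The theorem can be illustrated by expanding a power of a sum term-by-term with multinomial coefficients and checking against direct evaluation (a small sketch with my own example values):

```python
from itertools import product
from math import factorial

def multinomial(n, ks):
    """Multinomial coefficient n! / (k1! k2! ... km!)."""
    num = factorial(n)
    for k in ks:
        num //= factorial(k)
    return num

def power_of_sum(values, n):
    """Evaluate (v1 + ... + vm)^n via the multinomial theorem:
    sum over all (k1,...,km) with k1+...+km = n of C(n; ks) * prod v_i^k_i."""
    m = len(values)
    total = 0
    for ks in product(range(n + 1), repeat=m):
        if sum(ks) != n:
            continue
        term = multinomial(n, ks)
        for v, k in zip(values, ks):
            term *= v ** k
        total += term
    return total

vals = (2, 3, 5)
assert power_of_sum(vals, 4) == sum(vals) ** 4  # both equal 10^4
```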
In mathematics, the multiple gamma function Γ N {\displaystyle \Gamma _{N}} is a generalization of the Euler gamma function and the Barnes G-function. The double gamma function was studied by Barnes (1901). At the end of this paper he mentioned the existence of multiple gamma functions generalizing it, and studied these further in Barnes (1904). Double gamma functions Γ 2 {\displaystyle \Gamma _{2}} are closely related to the q-gamma function, and triple gamma functions Γ 3 {\displaystyle \Gamma _{3}} are related to the elliptic gamma function.
|
https://en.wikipedia.org/wiki/Multiple_gamma_function
|
In mathematics, the multiple orthogonal polynomials (MOPs) are orthogonal polynomials in one variable that are orthogonal with respect to a finite family of measures. The polynomials are divided into two classes named type 1 and type 2. In the literature, MOPs are also called d {\displaystyle d} -orthogonal polynomials, Hermite-Padé polynomials or polyorthogonal polynomials. MOPs should not be confused with multivariate orthogonal polynomials.
|
https://en.wikipedia.org/wiki/Multiple_orthogonal_polynomials
|
In mathematics, the multiple zeta functions are generalizations of the Riemann zeta function, defined by ζ ( s 1 , … , s k ) = ∑ n 1 > n 2 > ⋯ > n k > 0 1 n 1 s 1 ⋯ n k s k = ∑ n 1 > n 2 > ⋯ > n k > 0 ∏ i = 1 k 1 n i s i , {\displaystyle \zeta (s_{1},\ldots ,s_{k})=\sum _{n_{1}>n_{2}>\cdots >n_{k}>0}\ {\frac {1}{n_{1}^{s_{1}}\cdots n_{k}^{s_{k}}}}=\sum _{n_{1}>n_{2}>\cdots >n_{k}>0}\ \prod _{i=1}^{k}{\frac {1}{n_{i}^{s_{i}}}},\!} and converge when Re(s1) + ... + Re(si) > i for all i. Like the Riemann zeta function, the multiple zeta functions can be analytically continued to be meromorphic functions (see, for example, Zhao (1999)). When s1, ..., sk are all positive integers (with s1 > 1) these sums are often called multiple zeta values (MZVs) or Euler sums. These values can also be regarded as special values of the multiple polylogarithms. The k in the above definition is named the "depth" of a MZV, and the sum s1 + ... + sk is known as the "weight". The standard shorthand for writing multiple zeta functions is to place repeating strings of the argument within braces and use a superscript to indicate the number of repetitions. For example, ζ ( 2 , 1 , 2 , 1 , 3 ) = ζ ( { 2 , 1 } 2 , 3 ) . {\displaystyle \zeta (2,1,2,1,3)=\zeta (\{2,1\}^{2},3).}
|
https://en.wikipedia.org/wiki/Multiple_zeta_values
|
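Euler's identity ζ(2,1) = ζ(3) gives a quick numerical check of the definition (the truncation depth and tolerance are my own choices):

```python
def zeta_2_1(N):
    """Truncated double sum over n1 > n2 > 0 of 1/(n1^2 * n2),
    computed in O(N) by carrying a running harmonic number H_{n1-1}."""
    total, harmonic = 0.0, 0.0
    for n1 in range(2, N + 1):
        harmonic += 1.0 / (n1 - 1)   # now equals sum over n2 < n1 of 1/n2
        total += harmonic / n1**2
    return total

APERY = 1.2020569031595943  # zeta(3), Apery's constant
# The truncation tail decays roughly like log(N)/N, so N = 4000 is ample
# for a 1e-2 tolerance.
assert abs(zeta_2_1(4000) - APERY) < 0.01
```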
In mathematics, the multiplication theorem is a certain type of identity obeyed by many special functions related to the gamma function. For the explicit case of the gamma function, the identity is a product of values; thus the name. The various relations all stem from the same underlying principle; that is, the relation for one special function can be derived from that for the others, and is simply a manifestation of the same identity in different guises.
|
https://en.wikipedia.org/wiki/Multiplication_theorem
|
In mathematics, the multiplicative ergodic theorem, or Oseledets theorem provides the theoretical background for computation of Lyapunov exponents of a nonlinear dynamical system. It was proved by Valery Oseledets (also spelled "Oseledec") in 1965 and reported at the International Mathematical Congress in Moscow in 1966. A conceptually different proof of the multiplicative ergodic theorem was found by M. S. Raghunathan. The theorem has been extended to semisimple Lie groups by V. A. Kaimanovich and further generalized in the works of David Ruelle, Grigory Margulis, Anders Karlsson, and François Ledrappier.
|
https://en.wikipedia.org/wiki/Oseledets_theorem
|
In mathematics, the multiplicative semigroup, denoted by W0, generated by the set { 3 n + 2 2 n + 1: n ≥ 0 } {\displaystyle \left\{{\frac {3n+2}{2n+1}}:n\geq 0\right\}} is called the Wooley semigroup in honour of the British mathematician Trevor D. Wooley. The multiplicative semigroup, denoted by W, generated by the set { 1 2 } ∪ { 3 n + 2 2 n + 1: n ≥ 0 } {\displaystyle \left\{{\frac {1}{2}}\right\}\cup \left\{{\frac {3n+2}{2n+1}}:n\geq 0\right\}} is called the wild semigroup. The set of integers in W0 is itself a multiplicative semigroup.
|
https://en.wikipedia.org/wiki/Wild_number
|
It is called the Wooley integer semigroup and members of this semigroup are called Wooley integers. Similarly, the set of integers in W is itself a multiplicative semigroup. It is called the wild integer semigroup and members of this semigroup are called wild numbers.
|
https://en.wikipedia.org/wiki/Wild_number
|
In mathematics, the multiplicity of a member of a multiset is the number of times it appears in the multiset. For example, the number of times a given polynomial has a root at a given point is the multiplicity of that root. The notion of multiplicity is important to be able to count correctly without specifying exceptions (for example, double roots counted twice).
|
https://en.wikipedia.org/wiki/Multiple_roots_of_a_polynomial
|
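Root multiplicity can be counted by repeatedly dividing out the root; the polynomial, helper, and coefficient convention (highest degree first) below are my own sketch.

```python
from fractions import Fraction

def multiplicity(coeffs, root):
    """Number of times `root` is a root of the polynomial given by
    `coeffs` (highest-degree first), via repeated synthetic division."""
    coeffs = [Fraction(c) for c in coeffs]
    root = Fraction(root)
    count = 0
    while len(coeffs) > 1:
        # Synthetic division of coeffs by (x - root).
        quotient = [coeffs[0]]
        for c in coeffs[1:]:
            quotient.append(c + root * quotient[-1])
        remainder = quotient.pop()
        if remainder != 0:
            break          # root no longer divides the polynomial
        coeffs = quotient  # continue with the deflated polynomial
        count += 1
    return count

# p(x) = (x - 1)^2 (x + 2) = x^3 - 3x + 2
assert multiplicity([1, 0, -3, 2], 1) == 2    # double root
assert multiplicity([1, 0, -3, 2], -2) == 1   # simple root
assert multiplicity([1, 0, -3, 2], 5) == 0    # not a root
```

Counting with multiplicity, p has 2 + 1 = 3 roots, matching its degree.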
Hence the expression, "counted with multiplicity". If multiplicity is ignored, this may be emphasized by counting the number of distinct elements, as in "the number of distinct roots". However, whenever a set (as opposed to multiset) is formed, multiplicity is automatically ignored, without requiring use of the term "distinct".
|
https://en.wikipedia.org/wiki/Multiple_roots_of_a_polynomial
|