title stringlengths 1 113 | text stringlengths 9 3.55k | source stringclasses 1 value |
|---|---|---|
formula | In all cases, however, formulas form the basis for calculations. Expressions are distinct from formulas in that they cannot contain an equals sign (=). Expressions can be likened to phrases the same way formulas can be likened to grammatical sentences. | wikipedia |
fractal geometry | In mathematics, a fractal is a geometric shape containing detailed structure at arbitrarily small scales, usually having a fractal dimension strictly exceeding the topological dimension. Many fractals appear similar at various scales, as illustrated in successive magnifications of the Mandelbrot set. This exhibition of similar patterns at increasingly smaller scales is called self-similarity, also known as expanding symmetry or unfolding symmetry; if this replication is exactly the same at every scale, as in the Menger sponge, the shape is called affine self-similar. Fractal geometry lies within the mathematical branch of measure theory. | wikipedia |
fractal geometry | One way that fractals are different from finite geometric figures is how they scale. Doubling the edge lengths of a filled polygon multiplies its area by four, which is two (the ratio of the new to the old side length) raised to the power of two (the conventional dimension of the filled polygon). Likewise, if the radius of a filled sphere is doubled, its volume scales by eight, which is two (the ratio of the new to the old radius) to the power of three (the conventional dimension of the filled sphere). | wikipedia |
fractal geometry | However, if a fractal's one-dimensional lengths are all doubled, the spatial content of the fractal scales by a power that is not necessarily an integer and is in general greater than its conventional dimension. This power is called the fractal dimension of the geometric object, to distinguish it from the conventional dimension (which is formally called the topological dimension). Analytically, many fractals are nowhere differentiable. An infinite fractal curve can be conceived of as winding through space differently from an ordinary line – although it is still topologically 1-dimensional, its fractal dimension indicates that it locally fills space more efficiently than an ordinary line. | wikipedia |
fractal geometry | Starting in the 17th century with notions of recursion, fractals have moved through increasingly rigorous mathematical treatment to the study of continuous but not differentiable functions in the 19th century by the seminal work of Bernard Bolzano, Bernhard Riemann, and Karl Weierstrass, and on to the coining of the word fractal in the 20th century, with a subsequent burgeoning of interest in fractals and computer-based modelling. There is some disagreement among mathematicians about how the concept of a fractal should be formally defined. Mandelbrot himself summarized it as "beautiful, damn hard, increasingly useful. | wikipedia |
fractal geometry | That's fractals." More formally, in 1982 Mandelbrot defined fractal as follows: "A fractal is by definition a set for which the Hausdorff–Besicovitch dimension strictly exceeds the topological dimension." Later, seeing this as too restrictive, he simplified and expanded the definition to this: "A fractal is a rough or fragmented geometric shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole." | wikipedia |
fractal geometry | Still later, Mandelbrot proposed "to use fractal without a pedantic definition, to use fractal dimension as a generic term applicable to all the variants". The consensus among mathematicians is that theoretical fractals are infinitely self-similar, iterated, and detailed mathematical constructs, of which many examples have been formulated and studied. Fractals are not limited to geometric patterns, but can also describe processes in time. Fractal patterns with various degrees of self-similarity have been rendered or studied in visual, physical, and aural media and found in nature, technology, art, architecture and law. Fractals are of particular relevance in the field of chaos theory because they show up in the geometric depictions of most chaotic processes (typically either as attractors or as boundaries between basins of attraction). | wikipedia |
free boolean algebra | In mathematics, a free Boolean algebra is a Boolean algebra with a distinguished set of elements, called generators, such that: Each element of the Boolean algebra can be expressed as a finite combination of generators, using the Boolean operations, and The generators are as independent as possible, in the sense that there are no relationships among them (again in terms of finite expressions using the Boolean operations) that do not hold in every Boolean algebra no matter which elements are chosen. | wikipedia |
free lie algebra | In mathematics, a free Lie algebra over a field K is a Lie algebra generated by a set X, without any imposed relations other than the defining relations of alternating K-bilinearity and the Jacobi identity. | wikipedia |
bounded type (mathematics) | In mathematics, a function defined on a region of the complex plane is said to be of bounded type if it is equal to the ratio of two analytic functions bounded in that region. More generally, a function f is of bounded type in a region Ω if and only if f is analytic on Ω and log⁺|f(z)| has a harmonic majorant on Ω, where log⁺(x) = max(log x, 0). Being the ratio of two bounded analytic functions is a sufficient condition for a function to be of bounded type (defined in terms of a harmonic majorant), and if Ω is simply connected the condition is also necessary. The class of all such f on Ω is commonly denoted N(Ω) and is sometimes called the Nevanlinna class for Ω. | wikipedia |
bounded sequences | In mathematics, a function f defined on some set X with real or complex values is called bounded if the set of its values is bounded. In other words, there exists a real number M such that |f(x)| ≤ M for all x in X. A function that is not bounded is said to be unbounded. If f is real-valued and f(x) ≤ A for all x in X, then the function is said to be bounded (from) above by A. If f(x) ≥ B for all x in X, then the function is said to be bounded (from) below by B. A real-valued function is bounded if and only if it is bounded from above and below. An important special case is a bounded sequence, where X is taken to be the set N of natural numbers. Thus a sequence f = (a0, a1, a2, ...) is bounded if there exists a real number M such that |aₙ| ≤ M for every natural number n. The set of all bounded sequences forms the sequence space l^∞. The definition of boundedness can be generalized to functions f: X → Y taking values in a more general space Y by requiring that the image f(X) is a bounded set in Y. | wikipedia |
cofunction | In mathematics, a function f is cofunction of a function g if f(A) = g(B) whenever A and B are complementary angles. This definition typically applies to trigonometric functions. The prefix "co-" can be found already in Edmund Gunter's Canon triangulorum (1620). For example, sine (Latin: sinus) and cosine (Latin: cosinus, sinus complementi) are cofunctions of each other (hence the "co" in "cosine"): sin(A) = cos(B). The same is true of secant (Latin: secans) and cosecant (Latin: cosecans, secans complementi) as well as of tangent (Latin: tangens) and cotangent (Latin: cotangens, tangens complementi): sec(A) = csc(B) and tan(A) = cot(B). These equations are also known as the cofunction identities. This also holds true for the versine (versed sine, ver) and coversine (coversed sine, cvs), the vercosine (versed cosine, vcs) and covercosine (coversed cosine, cvc), the haversine (half-versed sine, hav) and hacoversine (half-coversed sine, hcv), the havercosine (half-versed cosine, hvc) and hacovercosine (half-coversed cosine, hcc), as well as the exsecant (external secant, exs) and excosecant (external cosecant, exc). | wikipedia |
logarithmically convex function | In mathematics, a function f is logarithmically convex or superconvex if log ∘ f, the composition of the logarithm with f, is itself a convex function. | wikipedia |
supermodular function | In mathematics, a function f: ℝᵏ → ℝ is supermodular if f(x ↑ y) + f(x ↓ y) ≥ f(x) + f(y) for all x, y ∈ ℝᵏ, where x ↑ y denotes the componentwise maximum and x ↓ y the componentwise minimum of x and y. If −f is supermodular then f is called submodular, and if the inequality is changed to an equality the function is modular. If f is twice continuously differentiable, then supermodularity is equivalent to the condition ∂²f/∂zᵢ∂zⱼ ≥ 0 for all i ≠ j. | wikipedia |
closed convex function | In mathematics, a function f: ℝⁿ → ℝ is said to be closed if for each α ∈ ℝ, the sublevel set { x ∈ dom f | f(x) ≤ α } is a closed set. Equivalently, if the epigraph defined by epi f = { (x, t) ∈ ℝⁿ⁺¹ | x ∈ dom f, f(x) ≤ t } is closed, then the function f is closed. This definition is valid for any function, but is most often used for convex functions. A proper convex function is closed if and only if it is lower semi-continuous. For a convex function which is not proper there is disagreement as to the definition of the closure of the function. | wikipedia |
symmetrically continuous function | In mathematics, a function f: ℝ → ℝ is symmetrically continuous at a point x if lim_{h→0} [f(x + h) − f(x − h)] = 0. The usual definition of continuity implies symmetric continuity, but the converse is not true. | wikipedia |
symmetrically continuous function | For example, the function x⁻² is symmetrically continuous at x = 0, but not continuous there. Also, symmetric differentiability implies symmetric continuity, but the converse is not true, just as usual continuity does not imply differentiability. The set of symmetrically continuous functions, with the usual scalar multiplication, can easily be shown to have the structure of a vector space over ℝ, similarly to the usually continuous functions, which form a linear subspace within it. | wikipedia |
function evaluation | In mathematics, a function from a set X to a set Y assigns to each element of X exactly one element of Y. The set X is called the domain of the function and the set Y is called the codomain of the function. Functions were originally the idealization of how a varying quantity depends on another quantity. For example, the position of a planet is a function of time. Historically, the concept was elaborated with the infinitesimal calculus at the end of the 17th century, and, until the 19th century, the functions that were considered were differentiable (that is, they had a high degree of regularity). The concept of a function was formalized at the end of the 19th century in terms of set theory, and this greatly enlarged the domains of application of the concept. | wikipedia |
function evaluation | A function is most often denoted by letters such as f, g and h, and the value of a function f at an element x of its domain is denoted by f(x); the numerical value resulting from the function evaluation at a particular input value is denoted by replacing x with this value; for example, the value of f at x = 4 is denoted by f(4). When the function is not named and is represented by an expression E, the value of the function at, say, x = 4 may be denoted by E|x=4. For example, the value at 4 of the function that maps x to (x + 1)² may be denoted by (x + 1)²|x=4 (which results in 25). | wikipedia |
function evaluation | A function is uniquely represented by the set of all pairs (x, f(x)), called the graph of the function, a popular means of illustrating the function. When the domain and the codomain are sets of real numbers, each such pair may be thought of as the Cartesian coordinates of a point in the plane. Functions are widely used in science, engineering, and in most fields of mathematics. It has been said that functions are "the central objects of investigation" in most fields of mathematics. | wikipedia |
response variable | In mathematics, a function is a rule for taking an input (in the simplest case, a number or set of numbers) and providing an output (which may also be a number). A symbol that stands for an arbitrary input is called an independent variable, while a symbol that stands for an arbitrary output is called a dependent variable. The most common symbol for the input is x, and the most common symbol for the output is y; the function itself is commonly written y = f(x). It is possible to have multiple independent variables or multiple dependent variables. For instance, in multivariable calculus, one often encounters functions of the form z = f(x,y), where z is a dependent variable and x and y are independent variables. Functions with multiple outputs are often referred to as vector-valued functions. | wikipedia |
rapidly decreasing function | In mathematics, a function is said to vanish at infinity if its values approach 0 as the input grows without bounds. There are two different ways to define this with one definition applying to functions defined on normed vector spaces and the other applying to functions defined on locally compact spaces. Aside from this difference, both of these notions correspond to the intuitive notion of adding a point at infinity, and requiring the values of the function to get arbitrarily close to zero as one approaches it. This definition can be formalized in many cases by adding an (actual) point at infinity. | wikipedia |
complete symmetric function | In mathematics, a function of n variables is symmetric if its value is the same no matter the order of its arguments. For example, a function f(x₁, x₂) of two arguments is a symmetric function if and only if f(x₁, x₂) = f(x₂, x₁) for all x₁ and x₂ such that (x₁, x₂) and (x₂, x₁) are in the domain of f. The most commonly encountered symmetric functions are polynomial functions, which are given by the symmetric polynomials. | wikipedia |
complete symmetric function | A related notion is alternating polynomials, which change sign under an interchange of variables. Aside from polynomial functions, tensors that act as functions of several vectors can be symmetric, and in fact the space of symmetric k-tensors on a vector space V is isomorphic to the space of homogeneous polynomials of degree k on V. Symmetric functions should not be confused with even and odd functions, which have a different sort of symmetry. | wikipedia |
step function | In mathematics, a function on the real numbers is called a step function if it can be written as a finite linear combination of indicator functions of intervals. Informally speaking, a step function is a piecewise constant function having only finitely many pieces. | wikipedia |
function space | In mathematics, a function space is a set of functions between two fixed sets. Often, the domain and/or codomain will have additional structure which is inherited by the function space. For example, the set of functions from any set X into a vector space has a natural vector space structure given by pointwise addition and scalar multiplication. In other scenarios, the function space might inherit a topological or metric structure, hence the name function space. | wikipedia |
functional (mathematics) | In mathematics, a functional (as a noun) is a certain type of function. The exact definition of the term varies depending on the subfield (and sometimes even the author). In linear algebra, it is synonymous with linear forms, which are linear mappings from a vector space V into its field of scalars (that is, they are elements of the dual space V*). In functional analysis and related fields, it refers more generally to a mapping from a space X into the field of real or complex numbers. In functional analysis, the term linear functional is a synonym of linear form; that is, it is a scalar-valued linear map. | wikipedia |
functional (mathematics) | Depending on the author, such mappings may or may not be assumed to be linear, or to be defined on the whole space X. In computer science, it is synonymous with higher-order functions, that is, functions that take functions as arguments or return them. This article is mainly concerned with the second concept, which arose in the early 18th century as part of the calculus of variations. | wikipedia |
functional (mathematics) | The first concept, which is more modern and abstract, is discussed in detail in a separate article, under the name linear form. The third concept is detailed in the computer science article on higher-order functions. In the case where the space X is a space of functions, the functional is a "function of a function", and some older authors actually define the term "functional" to mean "function of a function". However, the fact that X is a space of functions is not mathematically essential, so this older definition is no longer prevalent. The term originates from the calculus of variations, where one searches for a function that minimizes (or maximizes) a given functional. A particularly important application in physics is the search for a state of a system that minimizes (or maximizes) the action, or in other words the time integral of the Lagrangian. | wikipedia |
functional calculus | In mathematics, a functional calculus is a theory allowing one to apply mathematical functions to mathematical operators. It is now a branch (more accurately, several related areas) of the field of functional analysis, connected with spectral theory. (Historically, the term was also used synonymously with calculus of variations; this usage is obsolete, except for functional derivative. Sometimes it is used in relation to types of functional equations, or in logic for systems of predicate calculus.) | wikipedia |
functional calculus | If f is a function, say a numerical function of a real number, and M is an operator, there is no particular reason why the expression f(M) should make sense. If it does, then we are no longer using f on its original function domain. In the tradition of operational calculus, algebraic expressions in operators are handled irrespective of their meaning. | wikipedia |
functional calculus | This passes nearly unnoticed if we talk about 'squaring a matrix', though, which is the case of f(x) = x² and M an n × n matrix. The idea of a functional calculus is to create a principled approach to this kind of overloading of the notation. The most immediate case is to apply polynomial functions to a square matrix, extending what has just been discussed. | wikipedia |
functional calculus | In the finite-dimensional case, the polynomial functional calculus yields quite a bit of information about the operator. For example, consider the family of polynomials which annihilates an operator T. This family is an ideal in the ring of polynomials. | wikipedia |
functional calculus | Furthermore, it is a nontrivial ideal: let n be the finite dimension of the algebra of matrices, then {I, T, T², …, Tⁿ} is linearly dependent. So α₀I + α₁T + ⋯ + αₙTⁿ = 0 for some scalars αᵢ, not all equal to 0. This implies that the polynomial α₀ + α₁x + ⋯ + αₙxⁿ lies in the ideal. | wikipedia |
functional calculus | Since the ring of polynomials is a principal ideal domain, this ideal is generated by some polynomial m. Multiplying by a unit if necessary, we can choose m to be monic. When this is done, the polynomial m is precisely the minimal polynomial of T. | wikipedia |
functional calculus | This polynomial gives deep information about T. For instance, a scalar α is an eigenvalue of T if and only if α is a root of m. | wikipedia |
functional calculus | Also, sometimes m can be used to calculate the exponential of T efficiently. The polynomial calculus is not as informative in the infinite-dimensional case. | wikipedia |
functional calculus | Consider the unilateral shift with the polynomial calculus; the ideal defined above is now trivial. Thus one is interested in functional calculi more general than polynomials. The subject is closely linked to spectral theory, since for a diagonal matrix or multiplication operator, it is rather clear what the definitions should be. | wikipedia |
abel's functional equation | In mathematics, a functional equation is, in the broadest meaning, an equation in which one or several functions appear as unknowns. So, differential equations and integral equations are functional equations. However, a more restricted meaning is often used, where a functional equation is an equation that relates several values of the same function. For example, the logarithm functions are essentially characterized by the logarithmic functional equation log(xy) = log(x) + log(y). | wikipedia |
abel's functional equation | If the domain of the unknown function is supposed to be the natural numbers, the function is generally viewed as a sequence, and, in this case, a functional equation (in the narrower meaning) is called a recurrence relation. | wikipedia |
abel's functional equation | Thus the term functional equation is used mainly for real functions and complex functions. Moreover, a smoothness condition is often assumed for the solutions, since without such a condition, most functional equations have very irregular solutions. For example, the gamma function is a function that satisfies the functional equation f(x + 1) = x f(x) and the initial value f(1) = 1. There are many functions that satisfy these conditions, but the gamma function is the unique one that is meromorphic in the whole complex plane, and logarithmically convex for x real and positive (Bohr–Mollerup theorem). | wikipedia |
functional square root | In mathematics, a functional square root (sometimes called a half iterate) is a square root of a function with respect to the operation of function composition. In other words, a functional square root of a function g is a function f satisfying f(f(x)) = g(x) for all x. | wikipedia |
general hypergeometric function | In mathematics, a general hypergeometric function or Aomoto–Gelfand hypergeometric function is a generalization of the hypergeometric function that was introduced by Gelfand (1986). The general hypergeometric function is a function that is (more or less) defined on a Grassmannian, and depends on a choice of some complex numbers and signs. | wikipedia |
borcherds algebra | In mathematics, a generalized Kac–Moody algebra is a Lie algebra that is similar to a Kac–Moody algebra, except that it is allowed to have imaginary simple roots. Generalized Kac–Moody algebras are also sometimes called GKM algebras, Borcherds–Kac–Moody algebras, BKM algebras, or Borcherds algebras. The best known example is the monster Lie algebra. | wikipedia |
exponential generating series | In mathematics, a generating function is a way of encoding an infinite sequence of numbers (an) by treating them as the coefficients of a formal power series. This series is called the generating function of the sequence. Unlike an ordinary series, the formal power series is not required to converge: in fact, the generating function is not actually regarded as a function, and the "variable" remains an indeterminate. Generating functions were first introduced by Abraham de Moivre in 1730, in order to solve the general linear recurrence problem. | wikipedia |
exponential generating series | One can generalize to formal power series in more than one indeterminate, to encode information about infinite multi-dimensional arrays of numbers. There are various types of generating functions, including ordinary generating functions, exponential generating functions, Lambert series, Bell series, and Dirichlet series; definitions and examples are given below. Every sequence in principle has a generating function of each type (except that Lambert and Dirichlet series require indices to start at 1 rather than 0), but the ease with which they can be handled may differ considerably. | wikipedia |
exponential generating series | The particular generating function, if any, that is most useful in a given context will depend upon the nature of the sequence and the details of the problem being addressed. Generating functions are often expressed in closed form (rather than as a series), by some expression involving operations defined for formal series. These expressions in terms of the indeterminate x may involve arithmetic operations, differentiation with respect to x and composition with (i.e., substitution into) other generating functions; since these operations are also defined for functions, the result looks like a function of x. Indeed, the closed form expression can often be interpreted as a function that can be evaluated at (sufficiently small) concrete values of x, and which has the formal series as its series expansion; this explains the designation "generating functions". | wikipedia |
exponential generating series | However such interpretation is not required to be possible, because formal series are not required to give a convergent series when a nonzero numeric value is substituted for x. Also, not all expressions that are meaningful as functions of x are meaningful as expressions designating formal series; for example, negative and fractional powers of x are examples of functions that do not have a corresponding formal power series. Generating functions are not functions in the formal sense of a mapping from a domain to a codomain. Generating functions are sometimes called generating series, in that a series of terms can be said to be the generator of its sequence of term coefficients. | wikipedia |
geometric algebra | In mathematics, a geometric algebra (also known as a real Clifford algebra) is an extension of elementary algebra to work with geometrical objects such as vectors. Geometric algebra is built out of two fundamental operations, addition and the geometric product. Multiplication of vectors results in higher-dimensional objects called multivectors. Compared to other formalisms for manipulating geometric objects, geometric algebra is noteworthy for supporting vector division and addition of objects of different dimensions. | wikipedia |
geometric algebra | The geometric product was first briefly mentioned by Hermann Grassmann, who was chiefly interested in developing the closely related exterior algebra. In 1878, William Kingdon Clifford greatly expanded on Grassmann's work to form what are now usually called Clifford algebras in his honor (although Clifford himself chose to call them "geometric algebras"). Clifford defined the Clifford algebra and its product as a unification of the Grassmann algebra and Hamilton's quaternion algebra. | wikipedia |
geometric algebra | Adding the dual of the Grassmann exterior product (the "meet") allows the use of the Grassmann–Cayley algebra, and a conformal version of the latter together with a conformal Clifford algebra yields a conformal geometric algebra (CGA) providing a framework for classical geometries. In practice, these and several derived operations allow a correspondence of elements, subspaces and operations of the algebra with geometric interpretations. For several decades, geometric algebras went somewhat ignored, greatly eclipsed by the vector calculus then newly developed to describe electromagnetism. | wikipedia |
geometric algebra | The term "geometric algebra" was repopularized in the 1960s by Hestenes, who advocated its importance to relativistic physics. The scalars and vectors have their usual interpretation, and make up distinct subspaces of a geometric algebra. Bivectors provide a more natural representation of the pseudovector quantities in vector algebra such as oriented area, oriented angle of rotation, torque, angular momentum and the electromagnetic field. | wikipedia |
geometric algebra | A trivector can represent an oriented volume, and so on. An element called a blade may be used to represent a subspace of V and orthogonal projections onto that subspace. | wikipedia |
geometric algebra | Rotations and reflections are represented as elements. Unlike a vector algebra, a geometric algebra naturally accommodates any number of dimensions and any quadratic form such as in relativity. Examples of geometric algebras applied in physics include the spacetime algebra (and the less common algebra of physical space) and the conformal geometric algebra. | wikipedia |
geometric algebra | Geometric calculus, an extension of GA that incorporates differentiation and integration, can be used to formulate other theories such as complex analysis and differential geometry, e.g. by using the Clifford algebra instead of differential forms. Geometric algebra has been advocated, most notably by David Hestenes and Chris Doran, as the preferred mathematical framework for physics. Proponents claim that it provides compact and intuitive descriptions in many areas including classical and quantum mechanics, electromagnetic theory and relativity. GA has also found use as a computational tool in computer graphics and robotics. | wikipedia |
geometrical progression | In mathematics, a geometric progression, also known as a geometric sequence, is a sequence of non-zero numbers where each term after the first is found by multiplying the previous one by a fixed, non-zero number called the common ratio. For example, the sequence 2, 6, 18, 54, ... is a geometric progression with common ratio 3. Similarly 10, 5, 2.5, 1.25, ... is a geometric sequence with common ratio 1/2. | wikipedia |
geometrical progression | Examples of a geometric sequence are powers r k {\displaystyle r^{k}} of a fixed non-zero number r, such as 2 k {\displaystyle 2^{k}} and 3 k {\displaystyle 3^{k}}. The general form of a geometric sequence is a , a r , a r 2 , a r 3 , a r 4 , … {\displaystyle a,\ ar,\ ar^{2},\ ar^{3},\ ar^{4},\ \ldots } where r ≠ 0 is the common ratio and a ≠ 0 is a scale factor, equal to the sequence's start value. The sum of a geometric progression's terms is called a geometric series. | wikipedia |
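The general form a, ar, ar², ar³, ... can be generated directly; a small sketch (the function name is illustrative) reproducing the two example sequences from the text:

```python
def geometric_progression(a, r, n):
    """First n terms a, a*r, a*r**2, ... of a geometric progression
    with scale factor a != 0 and common ratio r != 0."""
    return [a * r**k for k in range(n)]

print(geometric_progression(2, 3, 4))     # [2, 6, 18, 54]
print(geometric_progression(10, 0.5, 4))  # [10.0, 5.0, 2.5, 1.25]
```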
geometric series | In mathematics, a geometric series is the sum of an infinite number of terms that have a constant ratio between successive terms. For example, the series 1 2 + 1 4 + 1 8 + 1 16 + ⋯ {\displaystyle {\frac {1}{2}}\,+\,{\frac {1}{4}}\,+\,{\frac {1}{8}}\,+\,{\frac {1}{16}}\,+\,\cdots } is geometric, because each successive term can be obtained by multiplying the previous term by 1 / 2 {\displaystyle 1/2} . In general, a geometric series is written as a + a r + a r 2 + a r 3 + . . | wikipedia |
geometric series | . {\displaystyle a+ar+ar^{2}+ar^{3}+...} , where a {\displaystyle a} is the coefficient of each term and r {\displaystyle r} is the common ratio between adjacent terms. The geometric series played an important role in the early development of calculus, is used throughout mathematics, and can serve as an introduction to frequently used mathematical tools such as the Taylor series, the complex Fourier series, and the matrix exponential. The name geometric series indicates each term is the geometric mean of its two neighboring terms, similar to how the name arithmetic series indicates each term is the arithmetic mean of its two neighboring terms. | wikipedia |
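The partial sums of a geometric series have the closed form a(1 − rⁿ)/(1 − r) for r ≠ 1, and for |r| < 1 they approach a/(1 − r). A small numeric sketch (function name illustrative) using the example 1/2 + 1/4 + 1/8 + ... from the text:

```python
def geometric_partial_sum(a, r, n):
    """Sum of the first n terms a + a*r + ... + a*r**(n - 1),
    via the closed form a*(1 - r**n)/(1 - r), valid for r != 1."""
    return a * (1 - r**n) / (1 - r)

# With a = 1/2 and r = 1/2 the series converges to a/(1 - r) = 1:
print(geometric_partial_sum(0.5, 0.5, 50))  # very close to 1.0
```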
transformation (geometry) | In mathematics, a geometric transformation is any bijection of a set to itself (or to another such set) with some salient geometrical underpinning. More specifically, it is a function whose domain and range are sets of points — most often both R 2 {\displaystyle \mathbb {R} ^{2}} or both R 3 {\displaystyle \mathbb {R} ^{3}} — such that the function is bijective so that its inverse exists. The study of geometry may be approached by the study of these transformations. | wikipedia |
graded lie algebra | In mathematics, a graded Lie algebra is a Lie algebra endowed with a gradation which is compatible with the Lie bracket. In other words, a graded Lie algebra is a Lie algebra which is also a nonassociative graded algebra under the bracket operation. A choice of Cartan decomposition endows any semisimple Lie algebra with the structure of a graded Lie algebra. Any parabolic Lie algebra is also a graded Lie algebra. | wikipedia |
graded lie algebra | A graded Lie superalgebra extends the notion of a graded Lie algebra in such a way that the Lie bracket is no longer assumed to be necessarily anticommutative. These arise in the study of derivations on graded algebras, in the deformation theory of Murray Gerstenhaber, Kunihiko Kodaira, and Donald C. Spencer, and in the theory of Lie derivatives. | wikipedia |
graded lie algebra | A supergraded Lie superalgebra is a further generalization of this notion to the category of superalgebras in which a graded Lie superalgebra is endowed with an additional super Z / 2 Z {\displaystyle \mathbb {Z} /2\mathbb {Z} } -gradation. These arise when one forms a graded Lie superalgebra in a classical (non-supersymmetric) setting, and then tensorizes to obtain the supersymmetric analog. Still greater generalizations are possible to Lie algebras over a class of braided monoidal categories equipped with a coproduct and some notion of a gradation compatible with the braiding in the category. For hints in this direction, see Lie superalgebra#Category-theoretic definition. | wikipedia |
graph c*-algebra | In mathematics, a graph C*-algebra is a universal C*-algebra constructed from a directed graph. Graph C*-algebras are direct generalizations of the Cuntz algebras and Cuntz-Krieger algebras, but the class of graph C*-algebras has been shown to also include several other widely studied classes of C*-algebras. As a result, graph C*-algebras provide a common framework for investigating many well-known classes of C*-algebras that were previously studied independently. Among other benefits, this provides a context in which one can formulate theorems that apply simultaneously to all of these subclasses and contain specific results for each subclass as special cases. | wikipedia |
graph c*-algebra | Although graph C*-algebras include numerous examples, they provide a class of C*-algebras that are surprisingly amenable to study and much more manageable than general C*-algebras. The graph not only determines the associated C*-algebra by specifying relations for generators, it also provides a useful tool for describing and visualizing properties of the C*-algebra. This visual quality has led to graph C*-algebras being referred to as "operator algebras we can see." Another advantage of graph C*-algebras is that much of their structure and many of their invariants can be readily computed. Using data coming from the graph, one can determine whether the associated C*-algebra has particular properties, describe the lattice of ideals, and compute K-theoretic invariants. | wikipedia |
extension (algebra) | In mathematics, a group extension is a general means of describing a group in terms of a particular normal subgroup and quotient group. If Q {\displaystyle Q} and N {\displaystyle N} are two groups, then G {\displaystyle G} is an extension of Q {\displaystyle Q} by N {\displaystyle N} if there is a short exact sequence 1 → N → ι G → π Q → 1. {\displaystyle 1\to N\;{\overset {\iota }{\to }}\;G\;{\overset {\pi }{\to }}\;Q\to 1.} If G {\displaystyle G} is an extension of Q {\displaystyle Q} by N {\displaystyle N} , then G {\displaystyle G} is a group, ι ( N ) {\displaystyle \iota (N)} is a normal subgroup of G {\displaystyle G} and the quotient group G / ι ( N ) {\displaystyle G/\iota (N)} is isomorphic to the group Q {\displaystyle Q} . | wikipedia |
extension (algebra) | Group extensions arise in the context of the extension problem, where the groups Q {\displaystyle Q} and N {\displaystyle N} are known and the properties of G {\displaystyle G} are to be determined. Note that the phrasing " G {\displaystyle G} is an extension of N {\displaystyle N} by Q {\displaystyle Q} " is also used by some. Since any finite group G {\displaystyle G} possesses a maximal normal subgroup N {\displaystyle N} with simple factor group G / N {\displaystyle G/N} , all finite groups may be constructed as a series of extensions with finite simple groups. This fact was a motivation for completing the classification of finite simple groups. An extension is called a central extension if the subgroup N {\displaystyle N} lies in the center of G {\displaystyle G} . | wikipedia |
half-exponential function | In mathematics, a half-exponential function is a functional square root of an exponential function. That is, a function f {\displaystyle f} such that f {\displaystyle f} composed with itself results in an exponential function: f ( f ( x ) ) = a b x {\displaystyle f(f(x))=ab^{x}} for some constants a {\displaystyle a} and b {\displaystyle b} . | wikipedia |
harmonic progression (mathematics) | In mathematics, a harmonic progression (or harmonic sequence) is a progression formed by taking the reciprocals of an arithmetic progression. Equivalently, a sequence is a harmonic progression when each term is the harmonic mean of the neighboring terms. As a third equivalent characterization, it is an infinite sequence of the form 1 a , 1 a + d , 1 a + 2 d , 1 a + 3 d , ⋯ , {\displaystyle {\frac {1}{a}},\ {\frac {1}{a+d}},\ {\frac {1}{a+2d}},\ {\frac {1}{a+3d}},\cdots ,} where a is not zero and −a/d is not a natural number, or a finite sequence of the form 1 a , 1 a + d , 1 a + 2 d , 1 a + 3 d , ⋯ , 1 a + k d , {\displaystyle {\frac {1}{a}},\ {\frac {1}{a+d}},\ {\frac {1}{a+2d}},\ {\frac {1}{a+3d}},\cdots ,\ {\frac {1}{a+kd}},} where a is not zero, k is a natural number and −a/d is not a natural number or is greater than k. | wikipedia |
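The definition above can be checked numerically: the terms are reciprocals of an arithmetic progression, and each interior term equals the harmonic mean 2/(1/x + 1/z) of its neighbors. A small sketch with illustrative names, using exact rational arithmetic:

```python
from fractions import Fraction

def harmonic_progression(a, d, n):
    """First n terms 1/a, 1/(a+d), 1/(a+2d), ...: reciprocals of an
    arithmetic progression with first term a and common difference d."""
    return [Fraction(1, a + k * d) for k in range(n)]

terms = harmonic_progression(1, 1, 4)
print(terms)  # [Fraction(1, 1), Fraction(1, 2), Fraction(1, 3), Fraction(1, 4)]

# Each interior term is the harmonic mean of its two neighbors:
x, y, z = terms[0], terms[1], terms[2]
print(y == 2 / (1 / x + 1 / z))  # True
```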
helix | In mathematics, a helix is a curve in 3-dimensional space. The following parametrisation in Cartesian coordinates defines a particular helix; perhaps the simplest equations for one are x ( t ) = cos ( t ) , {\displaystyle x(t)=\cos(t),\,} y ( t ) = sin ( t ) , {\displaystyle y(t)=\sin(t),\,} z ( t ) = t . {\displaystyle z(t)=t.\,} As the parameter t increases, the point (x(t),y(t),z(t)) traces a right-handed helix of pitch 2π (or slope 1) and radius 1 about the z-axis, in a right-handed coordinate system. In cylindrical coordinates (r, θ, h), the same helix is parametrised by: r ( t ) = 1 , {\displaystyle r(t)=1,\,} θ ( t ) = t , {\displaystyle \theta (t)=t,\,} h ( t ) = t . | wikipedia |
helix | {\displaystyle h(t)=t.\,} A circular helix of radius a and slope b/a (or pitch 2πb) is described by the following parametrisation: x ( t ) = a cos ( t ) , {\displaystyle x(t)=a\cos(t),\,} y ( t ) = a sin ( t ) , {\displaystyle y(t)=a\sin(t),\,} z ( t ) = b t . {\displaystyle z(t)=bt.\,} Another way of mathematically constructing a helix is to plot the complex-valued function e x i {\displaystyle e^{xi}} as a function of the real number x (see Euler's formula). The value of x and the real and imaginary parts of the function value give this plot three real dimensions. Except for rotations, translations, and changes of scale, all right-handed helices are equivalent to the helix defined above. The equivalent left-handed helix can be constructed in a number of ways, the simplest being to negate any one of the x, y or z components. | wikipedia |
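The circular-helix parametrisation above is easy to probe numerically: after one full turn in t, the point returns to the same (x, y) but has risen by the pitch 2πb. A small sketch (function name illustrative):

```python
import math

def helix_point(t, a=1.0, b=1.0):
    """Point on the circular helix x = a cos t, y = a sin t, z = b t,
    which has radius a and pitch 2*pi*b."""
    return (a * math.cos(t), a * math.sin(t), b * t)

# One full turn: same (x, y), height increased by the pitch 2*pi*b.
x0, y0, z0 = helix_point(0.0)
x1, y1, z1 = helix_point(2 * math.pi)
print(round(z1 - z0, 6))  # 6.283185, i.e. the pitch 2*pi for b = 1
```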
heteroclinic network | In mathematics, a heteroclinic network is an invariant set in the phase space of a dynamical system. It can be thought of loosely as the union of more than one heteroclinic cycle. Heteroclinic networks arise naturally in a number of different types of applications, including fluid dynamics and population dynamics. The dynamics of trajectories near to heteroclinic networks is intermittent: trajectories spend a long time performing one type of behaviour (often, close to equilibrium), before switching rapidly to another type of behaviour. This type of intermittent switching behaviour has led to several different groups of researchers using them as a way to model and understand various types of neural dynamics. | wikipedia |
binary relations | In mathematics, a heterogeneous relation is a binary relation, a subset of a Cartesian product A × B , {\displaystyle A\times B,} where A and B are possibly distinct sets. The prefix hetero is from the Greek ἕτερος (heteros, "other, another, different"). A heterogeneous relation has been called a rectangular relation, suggesting that it does not have the square-symmetry of a homogeneous relation on a set where A = B . {\displaystyle A=B.} Commenting on the development of binary relations beyond homogeneous relations, researchers wrote, "...a variant of the theory has evolved that treats relations from the very beginning as heterogeneous or rectangular, i.e. as relations where the normal case is that they are relations between different sets." | wikipedia |
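A heterogeneous relation is just a subset of a Cartesian product A × B with A and B possibly distinct. A minimal sketch, with a purely illustrative relation (relating a number n to a word whose length n divides):

```python
# Two distinct sets A and B, and a relation R as a subset of A x B:
A = {1, 2, 3}
B = {"x", "xy", "xyz", "xyzw"}

# Relate n in A to w in B whenever n divides the length of w:
R = {(n, w) for n in A for w in B if len(w) % n == 0}

print((2, "xy") in R)   # True:  2 divides len("xy") = 2
print((3, "xy") in R)   # False: 3 does not divide 2
```

Since A ≠ B, R is "rectangular" in the sense of the text: questions like symmetry or reflexivity, which presuppose A = B, do not even apply.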
hierarchy (mathematics) | In mathematics, a hierarchy is a set-theoretical object, consisting of a preorder defined on a set. This is often referred to as an ordered set, though that is an ambiguous term that many authors reserve for partially ordered sets or totally ordered sets. The term pre-ordered set is unambiguous, and is always synonymous with a mathematical hierarchy. The term hierarchy is used to stress a hierarchical relation among the elements. | wikipedia |
hierarchy (mathematics) | Sometimes, a set comes equipped with a natural hierarchical structure. For example, the set of natural numbers N is equipped with a natural pre-order structure, where n ≤ n ′ {\displaystyle n\leq n'} whenever we can find some other number m {\displaystyle m} so that n + m = n ′ {\displaystyle n+m=n'} . That is, n ′ {\displaystyle n'} is bigger than n {\displaystyle n} only because we can get to n ′ {\displaystyle n'} from n {\displaystyle n} using m {\displaystyle m} . | wikipedia |
hierarchy (mathematics) | This idea can be applied to any commutative monoid. On the other hand, the set of integers Z requires a more sophisticated argument for its hierarchical structure, since we can always solve the equation n + m = n ′ {\displaystyle n+m=n'} by writing m = ( n ′ − n ) {\displaystyle m=(n'-n)} .A mathematical hierarchy (a pre-ordered set) should not be confused with the more general concept of a hierarchy in the social realm, particularly when one is constructing computational models that are used to describe real-world social, economic or political systems. These hierarchies, or complex networks, are much too rich to be described in the category Set of sets. | wikipedia |
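The natural-number preorder described above (n ≤ n′ exactly when some natural m solves n + m = n′) can be sketched directly; the bounded search and the function name are illustrative only:

```python
def leq_via_monoid(n, n_prime, bound=1000):
    """n <= n' in the natural-number preorder iff some natural m
    (searched up to `bound`) solves n + m == n'."""
    return any(n + m == n_prime for m in range(bound))

print(leq_via_monoid(3, 7))  # True: m = 4 works
print(leq_via_monoid(7, 3))  # False: no natural m with 7 + m == 3
```

Over the integers the same search would always succeed (take m = n′ − n), which is precisely why Z needs a different argument for a nontrivial hierarchical structure.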
hierarchy (mathematics) | This is not just a pedantic claim; there are also mathematical hierarchies, in the general sense, that are not describable using set theory. Other natural hierarchies arise in computer science, where the word refers to partially ordered sets whose elements are classes of objects of increasing complexity. In that case, the preorder defining the hierarchy is the class-containment relation. Containment hierarchies are thus special cases of hierarchies. | wikipedia |
holomorphic map | In mathematics, a holomorphic function is a complex-valued function of one or more complex variables that is complex differentiable in a neighbourhood of each point in a domain in complex coordinate space C n {\displaystyle \mathbb {C} ^{n}} . The existence of a complex derivative in a neighbourhood is a very strong condition: it implies that a holomorphic function is infinitely differentiable and locally equal to its own Taylor series (analytic). Holomorphic functions are the central objects of study in complex analysis. Though the term analytic function is often used interchangeably with "holomorphic function", the word "analytic" is defined in a broader sense to denote any function (real, complex, or of more general type) that can be written as a convergent power series in a neighbourhood of each point in its domain. | wikipedia |
holomorphic map | That all holomorphic functions are complex analytic functions, and vice versa, is a major theorem in complex analysis.Holomorphic functions are also sometimes referred to as regular functions. A holomorphic function whose domain is the whole complex plane is called an entire function. The phrase "holomorphic at a point z0" means not just differentiable at z0, but differentiable everywhere within some neighbourhood of z0 in the complex plane. | wikipedia |
homogenous function | In mathematics, a homogeneous function is a function of several variables such that, if all its arguments are multiplied by a scalar, then its value is multiplied by some power of this scalar, called the degree of homogeneity, or simply the degree; that is, if k is an integer, a function f of n variables is homogeneous of degree k if f ( s x 1 , … , s x n ) = s k f ( x 1 , … , x n ) {\displaystyle f(sx_{1},\ldots ,sx_{n})=s^{k}f(x_{1},\ldots ,x_{n})} for every x 1 , … , x n , {\displaystyle x_{1},\ldots ,x_{n},} and s ≠ 0. {\displaystyle s\neq 0.} For example, a homogeneous polynomial of degree k defines a homogeneous function of degree k. The above definition extends to functions whose domain and codomain are vector spaces over a field F: a function f: V → W {\displaystyle f:V\to W} between two F-vector spaces is homogeneous of degree k {\displaystyle k} if f ( s v ) = s k f ( v ) {\displaystyle f(s\mathbf {v} )=s^{k}f(\mathbf {v} )} for all nonzero s ∈ F {\displaystyle s\in F} and v ∈ V . {\displaystyle v\in V.} | wikipedia |
homogenous function | This definition is often further generalized to functions whose domain is not V, but a cone in V, that is, a subset C of V such that v ∈ C {\displaystyle \mathbf {v} \in C} implies s v ∈ C {\displaystyle s\mathbf {v} \in C} for every nonzero scalar s. In the case of functions of several real variables and real vector spaces, a slightly more general form of homogeneity called positive homogeneity is often considered, by requiring only that the above identities hold for s > 0 , {\displaystyle s>0,} and allowing any real number k as a degree of homogeneity. Every homogeneous real function is positively homogeneous. The converse is not true, but is locally true in the sense that (for integer degrees) the two kinds of homogeneity cannot be distinguished by considering the behavior of a function near a given point. | wikipedia |
homogenous function | A norm over a real vector space is an example of a positively homogeneous function that is not homogeneous. A special case is the absolute value of real numbers. The quotient of two homogeneous polynomials of the same degree gives an example of a homogeneous function of degree zero. This example is fundamental in the definition of projective schemes. | wikipedia |
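Both notions from the rows above are easy to check numerically: a degree-3 homogeneous polynomial satisfies f(sx, sy) = s³f(x, y) for every scalar s, while the Euclidean norm is only positively homogeneous (scaling by a negative s does not flip its sign). A small sketch with illustrative names:

```python
def f(x, y):
    """Homogeneous polynomial of degree 3: f(sx, sy) = s**3 * f(x, y)."""
    return x**3 + 5 * x * y**2

s, x, y = 2.0, 3.0, 4.0
print(f(s * x, s * y) == s**3 * f(x, y))  # True

def norm2(x, y):
    """Euclidean norm: positively homogeneous of degree 1, but not
    homogeneous, since norm2(-x, -y) == norm2(x, y), not -norm2(x, y)."""
    return (x**2 + y**2) ** 0.5

print(norm2(-3.0, -4.0))  # 5.0, not -5.0
```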
inhomogeneous polynomial | In mathematics, a homogeneous polynomial, sometimes called quantic in older texts, is a polynomial whose nonzero terms all have the same degree. For example, x 5 + 2 x 3 y 2 + 9 x y 4 {\displaystyle x^{5}+2x^{3}y^{2}+9xy^{4}} is a homogeneous polynomial of degree 5, in two variables; the sum of the exponents in each term is always 5. The polynomial x 3 + 3 x 2 y + z 7 {\displaystyle x^{3}+3x^{2}y+z^{7}} is not homogeneous, because the sum of exponents does not match from term to term. The function defined by a homogeneous polynomial is always a homogeneous function. | wikipedia |
inhomogeneous polynomial | An algebraic form, or simply form, is a function defined by a homogeneous polynomial. A binary form is a form in two variables. A form is also a function defined on a vector space, which may be expressed as a homogeneous function of the coordinates over any basis. | wikipedia |
inhomogeneous polynomial | A polynomial of degree 0 is always homogeneous; it is simply an element of the field or ring of the coefficients, usually called a constant or a scalar. A form of degree 1 is a linear form. A form of degree 2 is a quadratic form. | wikipedia |
inhomogeneous polynomial | In geometry, the Euclidean distance is the square root of a quadratic form. Homogeneous polynomials are ubiquitous in mathematics and physics. They play a fundamental role in algebraic geometry, as a projective algebraic variety is defined as the set of the common zeros of a set of homogeneous polynomials. | wikipedia |
mathematical limit | In mathematics, a limit is the value that a function (or sequence) approaches as the input (or index) approaches some value. Limits are essential to calculus and mathematical analysis, and are used to define continuity, derivatives, and integrals. The concept of a limit of a sequence is further generalized to the concept of a limit of a topological net, and is closely related to limit and direct limit in category theory. In formulas, a limit of a function is usually written as lim x → c f ( x ) = L , {\displaystyle \lim _{x\to c}f(x)=L,} (although a few authors use "Lt" instead of "lim") and is read as "the limit of f of x as x approaches c equals L". The fact that a function f approaches the limit L as x approaches c is sometimes denoted by a right arrow (→ or → {\displaystyle \rightarrow } ), as in f ( x ) → L as x → c , {\displaystyle f(x)\to L{\text{ as }}x\to c,} which reads " f {\displaystyle f} of x {\displaystyle x} tends to L {\displaystyle L} as x {\displaystyle x} tends to c {\displaystyle c} ". | wikipedia |
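The notion lim_{x→c} f(x) = L can be illustrated numerically with the classic limit sin(x)/x → 1 as x → 0 (the function is undefined at 0 itself, yet the limit exists). A minimal sketch:

```python
import math

# The ratio sin(x)/x approaches 1 as x approaches 0:
for x in [0.1, 0.01, 0.001]:
    print(x, math.sin(x) / x)  # values creep toward 1.0

print(abs(math.sin(1e-6) / 1e-6 - 1.0) < 1e-9)  # True
```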
limiting case (mathematics) | In mathematics, a limiting case of a mathematical object is a special case that arises when one or more components of the object take on their most extreme possible values. For example: In statistics, the limiting case of the binomial distribution is the Poisson distribution. As the number of trials tends to infinity while the expected number of successes is held fixed, the binomial distribution converges to the Poisson distribution. A circle is a limiting case of various other figures, including the Cartesian oval, the ellipse, the superellipse, and the Cassini oval. | wikipedia |
limiting case (mathematics) | Each type of figure is a circle for certain values of the defining parameters, and the generic figure appears more like a circle as the limiting values are approached. Archimedes calculated an approximate value of π by treating the circle as the limiting case of a regular polygon with 3 × 2n sides, as n gets large. In electricity and magnetism, the long wavelength limit is the limiting case when the wavelength is much larger than the system size. | wikipedia |
limiting case (mathematics) | In economics, two limiting cases of a demand curve or supply curve are those in which the elasticity is zero (the totally inelastic case) or infinity (the infinitely elastic case). In finance, continuous compounding is the limiting case of compound interest in which the compounding period becomes infinitesimally small, achieved by taking the limit as the number of compounding periods per year goes to infinity. A limiting case is sometimes a degenerate case in which some qualitative properties differ from the corresponding properties of the generic case. For example: A point is a degenerate circle, namely one with radius 0. | wikipedia |
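The continuous-compounding example can be verified numerically: (1 + r/n)^n approaches e^r as the number of compounding periods n per year grows. A small sketch (function name illustrative):

```python
import math

def compound(principal, rate, years, periods_per_year):
    """Compound interest with a finite number of compounding periods."""
    n = periods_per_year * years
    return principal * (1 + rate / periods_per_year) ** n

# As periods_per_year grows, the value approaches the continuous limit
# principal * exp(rate * years):
for n in [1, 12, 365, 100000]:
    print(n, compound(100.0, 0.05, 1, n))
print(100.0 * math.exp(0.05))  # continuous limit, roughly 105.13
```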
limiting case (mathematics) | A parabola can degenerate into two distinct or coinciding parallel lines. An ellipse can degenerate into a single point or a line segment. A hyperbola can degenerate into two intersecting lines. | wikipedia |
linear algebraic group action | In mathematics, a linear algebraic group is a subgroup of the group of invertible n × n {\displaystyle n\times n} matrices (under matrix multiplication) that is defined by polynomial equations. An example is the orthogonal group, defined by the relation M T M = I n {\displaystyle M^{T}M=I_{n}} where M T {\displaystyle M^{T}} is the transpose of M {\displaystyle M} . Many Lie groups can be viewed as linear algebraic groups over the field of real or complex numbers. (For example, every compact Lie group can be regarded as a linear algebraic group over R (necessarily R-anisotropic and reductive), as can many noncompact groups such as the simple Lie group SL(n,R).) | wikipedia |
linear algebraic group action | The simple Lie groups were classified by Wilhelm Killing and Élie Cartan in the 1880s and 1890s. At that time, no special use was made of the fact that the group structure can be defined by polynomials, that is, that these are algebraic groups. The founders of the theory of algebraic groups include Maurer, Chevalley, and Kolchin (1948). In the 1950s, Armand Borel constructed much of the theory of algebraic groups as it exists today. One of the first uses for the theory was to define the Chevalley groups. | wikipedia |
linearity | In mathematics, a linear map or linear function f(x) is a function that satisfies the two properties: Additivity: f(x + y) = f(x) + f(y). Homogeneity of degree 1: f(αx) = α f(x) for all α.These properties are known as the superposition principle. In this definition, x is not necessarily a real number, but can in general be an element of any vector space. A more special definition of linear function, not coinciding with the definition of linear map, is used in elementary mathematics (see below). | wikipedia |
linearity | Additivity alone implies homogeneity for rational α, since f ( x + x ) = f ( x ) + f ( x ) {\displaystyle f(x+x)=f(x)+f(x)} implies f ( n x ) = n f ( x ) {\displaystyle f(nx)=nf(x)} for any natural number n by mathematical induction, and then n f ( x ) = f ( n x ) = f ( m n m x ) = m f ( n m x ) {\displaystyle nf(x)=f(nx)=f(m{\tfrac {n}{m}}x)=mf({\tfrac {n}{m}}x)} implies f ( n m x ) = n m f ( x ) {\displaystyle f({\tfrac {n}{m}}x)={\tfrac {n}{m}}f(x)} . The density of the rational numbers in the reals implies that any additive continuous function is homogeneous for any real number α, and is therefore linear. The concept of linearity can be extended to linear operators. Important examples of linear operators include the derivative considered as a differential operator, and other operators constructed from it, such as del and the Laplacian. When a differential equation can be expressed in linear form, it can generally be solved by breaking the equation up into smaller pieces, solving each of those pieces, and summing the solutions. | wikipedia |
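The two defining properties can be checked directly on a concrete function; a minimal sketch contrasting a linear map on R with an affine map that fails additivity (both functions are invented examples):

```python
def f(x):
    """A linear map on R: satisfies additivity and degree-1 homogeneity."""
    return 3.0 * x

x, y, alpha = 1.5, -2.0, 4.0
print(f(x + y) == f(x) + f(y))       # True: additivity
print(f(alpha * x) == alpha * f(x))  # True: homogeneity of degree 1

def g(x):
    """Affine but not linear: g(x) = 3x + 1 fails additivity."""
    return 3.0 * x + 1.0

print(g(x + y) == g(x) + g(y))  # False: the constant term breaks it
```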
locally constant function | In mathematics, a locally constant function is a function from a topological space into a set with the property that around every point of its domain, there exists some neighborhood of that point on which it restricts to a constant function. | wikipedia |
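A standard example of a locally constant function is the sign function on R \ {0}: it is not constant globally, but it is constant on a neighbourhood of every point of its domain. A minimal sketch:

```python
def sign(x):
    """Locally constant on R \\ {0}: constant on each of the two
    connected components (-inf, 0) and (0, inf), but not globally."""
    return 1 if x > 0 else -1

# Around any nonzero point there is a neighbourhood on which sign is constant:
print(sign(0.5), sign(0.50001))  # 1 1
print(sign(-2.0), sign(-1.999))  # -1 -1
```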
locally integrable function | In mathematics, a locally integrable function (sometimes also called locally summable function) is a function which is integrable (so its integral is finite) on every compact subset of its domain of definition. The importance of such functions lies in the fact that their function space is similar to Lp spaces, but its members are not required to satisfy any growth restriction on their behavior at the boundary of their domain (at infinity if the domain is unbounded): in other words, locally integrable functions can grow arbitrarily fast at the domain boundary, but are still manageable in a way similar to ordinary integrable functions. | wikipedia |
map (mathematics) | In mathematics, a map or mapping is a function in its general sense. These terms may have originated from the process of making a geographical map: mapping the Earth's surface to a sheet of paper. The term map may be used to distinguish some special types of functions, such as homomorphisms. For example, a linear map is a homomorphism of vector spaces, while the term linear function may have this meaning or it may mean a linear polynomial. | wikipedia |
map (mathematics) | In category theory, a map may refer to a morphism. The term transformation can be used interchangeably, but transformation often refers to a function from a set to itself. There are also a few less common uses in logic and graph theory. | wikipedia |
matrix coefficient | In mathematics, a matrix coefficient (or matrix element) is a function on a group of a special form, which depends on a linear representation of the group and additional data. Precisely, it is a function on a compact topological group G obtained by composing a representation of G on a vector space V with a linear map from the endomorphisms of V into V's underlying field. It is also called a representative function. | wikipedia |