In mathematics, a fractal is a geometric shape containing detailed structure at arbitrarily small scales, usually having a fractal dimension strictly exceeding the topological dimension. Many fractals appear similar at various scales, as illustrated in successive magnifications of the Mandelbrot set. This exhibition of similar patterns at increasingly smaller scales is called self-similarity, also known as expanding symmetry or unfolding symmetry; if this replication is exactly the same at every scale, as in the Menger sponge, the shape is called affine self-similar. Fractal geometry lies within the mathematical branch of measure theory.
https://en.wikipedia.org/wiki/Fractal_geometry
One way that fractals are different from finite geometric figures is how they scale. Doubling the edge lengths of a filled polygon multiplies its area by four, which is two (the ratio of the new to the old side length) raised to the power of two (the conventional dimension of the filled polygon). Likewise, if the radius of a filled sphere is doubled, its volume scales by eight, which is two (the ratio of the new to the old radius) to the power of three (the conventional dimension of the filled sphere).
https://en.wikipedia.org/wiki/Fractal_geometry
However, if a fractal's one-dimensional lengths are all doubled, the spatial content of the fractal scales by a power that is not necessarily an integer and is in general greater than its conventional dimension. This power is called the fractal dimension of the geometric object, to distinguish it from the conventional dimension (which is formally called the topological dimension). Analytically, many fractals are nowhere differentiable. An infinite fractal curve can be conceived of as winding through space differently from an ordinary line – although it is still topologically 1-dimensional, its fractal dimension indicates that it locally fills space more efficiently than an ordinary line.
https://en.wikipedia.org/wiki/Fractal_geometry
Starting in the 17th century with notions of recursion, fractals have moved through increasingly rigorous mathematical treatment to the study of continuous but not differentiable functions in the 19th century through the seminal work of Bernard Bolzano, Bernhard Riemann, and Karl Weierstrass, and on to the coining of the word fractal in the 20th century, with a subsequent burgeoning of interest in fractals and computer-based modelling. There is some disagreement among mathematicians about how the concept of a fractal should be formally defined. Mandelbrot himself summarized it as "beautiful, damn hard, increasingly useful.
https://en.wikipedia.org/wiki/Fractal_geometry
That's fractals." More formally, in 1982 Mandelbrot defined fractal as follows: "A fractal is by definition a set for which the Hausdorff–Besicovitch dimension strictly exceeds the topological dimension." Later, seeing this as too restrictive, he simplified and expanded the definition to this: "A fractal is a rough or fragmented geometric shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole."
https://en.wikipedia.org/wiki/Fractal_geometry
Still later, Mandelbrot proposed "to use fractal without a pedantic definition, to use fractal dimension as a generic term applicable to all the variants". The consensus among mathematicians is that theoretical fractals are infinitely self-similar iterated and detailed mathematical constructs, of which many examples have been formulated and studied. Fractals are not limited to geometric patterns, but can also describe processes in time. Fractal patterns with various degrees of self-similarity have been rendered or studied in visual, physical, and aural media and found in nature, technology, art, architecture and law. Fractals are of particular relevance in the field of chaos theory because they show up in the geometric depictions of most chaotic processes (typically either as attractors or as boundaries between basins of attraction).
https://en.wikipedia.org/wiki/Fractal_geometry
In mathematics, a fractal is a geometrical shape that exhibits invariance under scaling. A piece of the whole, if enlarged, has the same geometrical features as the entire object itself. A fractal ambigram is a sort of space-filling ambigram in which the tiled word branches from itself and then shrinks in a self-similar manner, forming a fractal. In general, only a few letters are constrained in a fractal ambigram. The other letters need not resemble any others, and thus can be shaped freely.
https://en.wikipedia.org/wiki/Ambigram
In mathematics, a fractal sequence is one that contains itself as a proper subsequence. An example is 1, 1, 2, 1, 2, 3, 1, 2, 3, 4, 1, 2, 3, 4, 5, 1, 2, 3, 4, 5, 6, ... If the first occurrence of each n is deleted, the remaining sequence is identical to the original. The process can be repeated indefinitely, so that the original sequence in fact contains not just one copy of itself but infinitely many.
https://en.wikipedia.org/wiki/Fractal_sequence
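The thinning property described above — delete the first occurrence of each value and recover the original sequence — can be checked directly. A small Python sketch (illustrative code, not from the source):

```python
def fractal_sequence(n_blocks):
    """Concatenate the blocks 1; 1,2; 1,2,3; ... (the example sequence)."""
    seq = []
    for k in range(1, n_blocks + 1):
        seq.extend(range(1, k + 1))
    return seq

def delete_first_occurrences(seq):
    """Remove the first occurrence of each distinct value."""
    seen = set()
    out = []
    for x in seq:
        if x in seen:
            out.append(x)
        else:
            seen.add(x)
    return out

seq = fractal_sequence(50)
thinned = delete_first_occurrences(seq)
# The thinned sequence agrees with the original, term by term.
assert thinned == seq[:len(thinned)]
```

The first occurrence of each n sits at the end of the n-th block, so deleting it turns block n into block n−1 and the whole sequence shifts onto a copy of itself.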
In mathematics, a frame bundle is a principal fiber bundle F(E) associated to any vector bundle E. The fiber of F(E) over a point x is the set of all ordered bases, or frames, for Ex. The general linear group acts naturally on F(E) via a change of basis, giving the frame bundle the structure of a principal GL(k, R)-bundle (where k is the rank of E). The frame bundle of a smooth manifold is the one associated to its tangent bundle. For this reason it is sometimes called the tangent frame bundle.
https://en.wikipedia.org/wiki/Unitary_frame_bundle
In mathematics, a free Boolean algebra is a Boolean algebra with a distinguished set of elements, called generators, such that: (1) each element of the Boolean algebra can be expressed as a finite combination of generators, using the Boolean operations; and (2) the generators are as independent as possible, in the sense that there are no relationships among them (again in terms of finite expressions using the Boolean operations) that do not hold in every Boolean algebra no matter which elements are chosen.
https://en.wikipedia.org/wiki/Free_Boolean_algebra
In mathematics, a free Lie algebra over a field K is a Lie algebra generated by a set X, without any imposed relations other than the defining relations of alternating K-bilinearity and the Jacobi identity.
https://en.wikipedia.org/wiki/Free_Lie_algebra
In mathematics, a free abelian group is an abelian group with a basis. Being an abelian group means that it is a set with an addition operation that is associative, commutative, and invertible. A basis, also called an integral basis, is a subset such that every element of the group can be uniquely expressed as an integer combination of finitely many basis elements. For instance the two-dimensional integer lattice forms a free abelian group, with coordinatewise addition as its operation, and with the two points (1,0) and (0,1) as its basis.
https://en.wikipedia.org/wiki/Free_abelian_group
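The integer-lattice example can be made concrete: every point of ℤ² is a unique integer combination of the basis points (1,0) and (0,1), and the group axioms hold coordinatewise. A small illustrative Python sketch:

```python
# Z^2 under coordinatewise addition, with basis e1 = (1, 0), e2 = (0, 1).
def add(p, q):
    return (p[0] + q[0], p[1] + q[1])

def scale(n, p):
    return (n * p[0], n * p[1])

e1, e2 = (1, 0), (0, 1)

# Express an arbitrary lattice point in the basis: the (unique)
# coefficients are just the coordinates.
p = (3, -2)
a, b = p
assert add(scale(a, e1), scale(b, e2)) == p

# Abelian group axioms hold coordinatewise: commutativity and inverses.
q = (-5, 7)
assert add(p, q) == add(q, p)
assert add(p, scale(-1, p)) == (0, 0)
```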
Free abelian groups have properties which make them similar to vector spaces, and may equivalently be called free ℤ-modules, the free modules over the integers. Lattice theory studies free abelian subgroups of real vector spaces.
https://en.wikipedia.org/wiki/Free_abelian_group
In algebraic topology, free abelian groups are used to define chain groups, and in algebraic geometry they are used to define divisors. The elements of a free abelian group with basis B may be described in several equivalent ways. These include formal sums over B, which are expressions of the form ∑ aᵢbᵢ where each aᵢ is a nonzero integer, each bᵢ is a distinct basis element, and the sum has finitely many terms.
https://en.wikipedia.org/wiki/Free_abelian_group
Alternatively, the elements of a free abelian group may be thought of as signed multisets containing finitely many elements of B, with the multiplicity of an element in the multiset equal to its coefficient in the formal sum. Another way to represent an element of a free abelian group is as a function from B to the integers with finitely many nonzero values; for this functional representation, the group operation is the pointwise addition of functions. Every set B has a free abelian group with B as its basis.
https://en.wikipedia.org/wiki/Free_abelian_group
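The functional representation is easy to implement: model an element as a dict from B to nonzero integers, with pointwise addition as the group operation. A small sketch (illustrative, not from the source):

```python
# Elements of the free abelian group on a set B, as functions B -> Z
# with finitely many nonzero values (dicts that omit zero entries).
def f_add(f, g):
    out = dict(f)
    for b, n in g.items():
        m = out.get(b, 0) + n
        if m:
            out[b] = m
        else:
            out.pop(b, None)  # drop zeros to keep the support canonical
    return out

def f_neg(f):
    return {b: -n for b, n in f.items()}

# Two formal sums over B = {'x', 'y', 'z'}: 2x - y  and  y + 3z.
u = {'x': 2, 'y': -1}
v = {'y': 1, 'z': 3}
assert f_add(u, v) == {'x': 2, 'z': 3}   # the y terms cancel
assert f_add(u, f_neg(u)) == {}          # identity = empty formal sum
```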
This group is unique in the sense that every two free abelian groups with the same basis are isomorphic. Instead of constructing it by describing its individual elements, a free abelian group with basis B may be constructed as a direct sum of copies of the additive group of the integers, with one copy per member of B. Alternatively, the free abelian group with basis B may be described by a presentation with the elements of B as its generators and with the commutators of pairs of members as its relators.
https://en.wikipedia.org/wiki/Free_abelian_group
The rank of a free abelian group is the cardinality of a basis; every two bases for the same group give the same rank, and every two free abelian groups with the same rank are isomorphic. Every subgroup of a free abelian group is itself free abelian; this fact allows a general abelian group to be understood as a quotient of a free abelian group by "relations", or as a cokernel of an injective homomorphism between free abelian groups. The only free abelian groups that are free groups are the trivial group and the infinite cyclic group.
https://en.wikipedia.org/wiki/Free_abelian_group
In mathematics, a free boundary problem (FB problem) is a partial differential equation to be solved for both an unknown function u and an unknown domain Ω. The segment Γ of the boundary of Ω which is not known at the outset of the problem is the free boundary. FBs arise in various mathematical models encompassing applications ranging from physical to economic, financial and biological phenomena, where there is an extra effect of the medium. This effect is in general a qualitative change of the medium and hence an appearance of a phase transition: ice to water, liquid to crystal, buying to selling (assets), active to inactive (biology), blue to red (coloring games), disorganized to organized (self-organizing criticality).
https://en.wikipedia.org/wiki/Free_boundary_problem
An interesting aspect of such a criticality is the so-called sandpile dynamic (or Internal DLA). The most classical example is the melting of ice: Given a block of ice, one can solve the heat equation given appropriate initial and boundary conditions to determine its temperature. But, if in any region the temperature is greater than the melting point of ice, this domain will be occupied by liquid water instead. The boundary formed from the ice/liquid interface is controlled dynamically by the solution of the PDE.
https://en.wikipedia.org/wiki/Free_boundary_problem
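The melting-ice mechanism can be sketched numerically with the enthalpy method: evolve an enthalpy field by the heat equation, recover temperature through a phase relation, and read off the ice/water interface as wherever the enthalpy crosses the latent-heat band. The grid size, latent heat, and boundary temperature below are illustrative assumptions, not values from the source:

```python
N = 50                    # grid cells on [0, 1]
L = 1.0                   # latent heat of the phase change
dx = 1.0 / N
dt = 0.4 * dx * dx        # explicit scheme needs dt <= 0.5 * dx**2

H = [-0.5] * N            # ice everywhere (T = -0.5)
H[0] = 1.0 + L            # left cell held as warm water (T = 1)

def temperature(h):
    # T = H in ice (H < 0), 0 during the phase change, H - L in water.
    return min(h, 0.0) + max(h - L, 0.0)

def front(H):
    # index of the first cell that has not fully melted
    return next(i for i, h in enumerate(H) if h < L)

f0 = front(H)
for _ in range(2500):
    T = [temperature(h) for h in H]
    new = H[:]
    for i in range(1, N - 1):
        new[i] = H[i] + dt / dx**2 * (T[i - 1] - 2 * T[i] + T[i + 1])
    new[0] = 1.0 + L      # re-impose the warm boundary condition
    H = new
f1 = front(H)
assert f1 > f0            # the free boundary (melt front) has advanced
```

The free boundary is not prescribed anywhere in the code: it emerges from the solution, exactly as the excerpt describes.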
In mathematics, a free module is a module that has a basis, that is, a generating set consisting of linearly independent elements. Every vector space is a free module, but, if the ring of the coefficients is not a division ring (not a field in the commutative case), then there exist non-free modules. Given any set S and ring R, there is a free R-module with basis S, which is called the free module on S or module of formal R-linear combinations of the elements of S. A free abelian group is precisely a free module over the ring Z of integers.
https://en.wikipedia.org/wiki/Free_module
In mathematics, a frieze or frieze pattern is a two-dimensional design that repeats in one direction. Such patterns occur frequently in architecture and decorative art. Frieze patterns can be classified into seven types according to their symmetries.
https://en.wikipedia.org/wiki/Frieze_pattern
The set of symmetries of a frieze pattern is called a frieze group. Frieze groups are two-dimensional line groups, having repetition in only one direction. They are related to the more complex wallpaper groups, which classify patterns that are repetitive in two directions, and crystallographic groups, which classify patterns that are repetitive in three directions.
https://en.wikipedia.org/wiki/Frieze_pattern
In mathematics, a full subcategory A of a category B is said to be reflective in B when the inclusion functor from A to B has a left adjoint. This adjoint is sometimes called a reflector, or localization. Dually, A is said to be coreflective in B when the inclusion functor has a right adjoint. Informally, a reflector acts as a kind of completion operation. It adds in any "missing" pieces of the structure in such a way that reflecting it again has no further effect.
https://en.wikipedia.org/wiki/Reflective_subcategory
In mathematics, a function between topological spaces is called proper if inverse images of compact subsets are compact. In algebraic geometry, the analogous concept is called a proper morphism.
https://en.wikipedia.org/wiki/Proper_map
In mathematics, a function defined on a region of the complex plane is said to be of bounded type if it is equal to the ratio of two analytic functions bounded in that region. More generally, a function f is of bounded type in a region Ω if and only if f is analytic on Ω and log⁺|f(z)| has a harmonic majorant on Ω, where log⁺(x) = max(0, log x). Being the ratio of two bounded analytic functions is a sufficient condition for a function to be of bounded type (defined in terms of a harmonic majorant), and if Ω is simply connected the condition is also necessary. The class of all such f on Ω is commonly denoted N(Ω) and is sometimes called the Nevanlinna class for Ω.
https://en.wikipedia.org/wiki/Bounded_type_(mathematics)
In mathematics, a function defined on an inner product space is said to have rotational invariance if its value does not change when arbitrary rotations are applied to its argument.
https://en.wikipedia.org/wiki/Rotational_invariance
In mathematics, a function f defined on some set X with real or complex values is called bounded if the set of its values is bounded. In other words, there exists a real number M such that |f(x)| ≤ M for all x in X. A function that is not bounded is said to be unbounded. If f is real-valued and f(x) ≤ A for all x in X, then the function is said to be bounded (from) above by A. If f(x) ≥ B for all x in X, then the function is said to be bounded (from) below by B. A real-valued function is bounded if and only if it is bounded from above and below. An important special case is a bounded sequence, where X is taken to be the set N of natural numbers. Thus a sequence f = (a0, a1, a2, ...) is bounded if there exists a real number M such that |aₙ| ≤ M for every natural number n. The set of all bounded sequences forms the sequence space l∞. The definition of boundedness can be generalized to functions f: X → Y taking values in a more general space Y by requiring that the image f(X) is a bounded set in Y.
https://en.wikipedia.org/wiki/Bounded_sequences
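A concrete bounded sequence: aₙ = (−1)ⁿ n/(n+1) satisfies |aₙ| ≤ 1 for every n (the bound follows from n/(n+1) < 1; the finite sampling below merely illustrates it, it is not a proof). An illustrative Python sketch:

```python
# A bounded sequence: |a_n| <= M = 1 for all n, since n/(n+1) < 1.
def a(n):
    return (-1) ** n * n / (n + 1)

M = 1.0
assert all(abs(a(n)) <= M for n in range(10000))

# An unbounded sequence for contrast: b_n = n eventually exceeds any M.
assert any(n > M for n in range(10000))
```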
In mathematics, a function f is a cofunction of a function g if f(A) = g(B) whenever A and B are complementary angles. This definition typically applies to trigonometric functions. The prefix "co-" can be found already in Edmund Gunter's Canon triangulorum (1620). For example, sine (Latin: sinus) and cosine (Latin: cosinus, sinus complementi) are cofunctions of each other (hence the "co" in "cosine"): sin A = cos B. The same is true of secant (Latin: secans) and cosecant (Latin: cosecans, secans complementi) as well as of tangent (Latin: tangens) and cotangent (Latin: cotangens, tangens complementi): sec A = csc B and tan A = cot B. These equations are also known as the cofunction identities. This also holds true for the versine (versed sine, ver) and coversine (coversed sine, cvs), the vercosine (versed cosine, vcs) and covercosine (coversed cosine, cvc), the haversine (half-versed sine, hav) and hacoversine (half-coversed sine, hcv), the havercosine (half-versed cosine, hvc) and hacovercosine (half-coversed cosine, hcc), as well as the exsecant (external secant, exs) and excosecant (external cosecant, exc).
https://en.wikipedia.org/wiki/Cofunction
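The cofunction identities for complementary angles A + B = 90° can be spot-checked numerically. A short Python sketch (illustrative only):

```python
import math

# For A + B = 90 degrees: sin A = cos B, tan A = cot B, sec A = csc B.
def close(p, q):
    return math.isclose(p, q, rel_tol=1e-12)

for deg in (10, 37, 61):
    A = math.radians(deg)
    B = math.radians(90 - deg)
    assert close(math.sin(A), math.cos(B))
    assert close(math.tan(A), 1 / math.tan(B))       # cot B
    assert close(1 / math.cos(A), 1 / math.sin(B))   # sec A = csc B
```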
In mathematics, a function f is logarithmically convex or superconvex if log ∘ f, the composition of the logarithm with f, is itself a convex function.
https://en.wikipedia.org/wiki/Logarithmically_convex_function
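For a concrete instance, f(x) = exp(x²) is logarithmically convex because log f(x) = x² is convex, so the midpoint inequality log f((x+y)/2) ≤ (log f(x) + log f(y))/2 holds. A quick numeric check (illustrative):

```python
import math

# f(x) = exp(x**2): log f(x) = x**2 is convex, so the midpoint
# inequality for log f holds for every pair x, y.
f = lambda x: math.exp(x * x)

for x, y in [(-1.0, 2.0), (0.5, 3.0), (-2.0, -0.25)]:
    mid = math.log(f((x + y) / 2))
    avg = (math.log(f(x)) + math.log(f(y))) / 2
    assert mid <= avg + 1e-12
```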
In mathematics, a function f of n variables x1, ..., xn leads to a Chisini mean M if, for every vector ⟨x1, ..., xn⟩, there exists a unique M such that f(M, M, ..., M) = f(x1, x2, ..., xn). The arithmetic, harmonic, geometric, generalised, Heronian and quadratic means are all Chisini means, as are their weighted variants. While Oscar Chisini was arguably the first to deal with "substitution means" in some depth in 1929, the idea of defining a mean as above is quite old, appearing (for example) in early works of Augustus De Morgan.
https://en.wikipedia.org/wiki/Chisini_mean
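The defining equation f(M, ..., M) = f(x1, ..., xn) can be solved numerically when t ↦ f(t, ..., t) is monotone, e.g. by bisection on [min(x), max(x)]. The sketch below (illustrative; a robust version would verify monotonicity) recovers the arithmetic mean from f = sum and the quadratic mean from f = sum of squares:

```python
# Solve f(M, ..., M) = f(x1, ..., xn) for the Chisini mean M by bisection,
# assuming t -> f(t, ..., t) is increasing on [min(xs), max(xs)].
def chisini(f, xs, tol=1e-12):
    target = f(xs)
    lo, hi = min(xs), max(xs)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f([mid] * len(xs)) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

xs = [1.0, 2.0, 4.0]
assert abs(chisini(sum, xs) - 7 / 3) < 1e-9          # arithmetic mean
quad = lambda v: sum(t * t for t in v)
assert abs(chisini(quad, xs) - (21 / 3) ** 0.5) < 1e-9  # quadratic mean
```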
In mathematics, a function f on the interval has the Luzin N property, named after Nikolai Luzin (also called the Luzin property or N property), if for every subset N of the interval with λ(N) = 0 it holds that λ(f(N)) = 0, where λ stands for the Lebesgue measure. Note that the image of such a set N is not necessarily measurable, but since the Lebesgue measure is complete, it follows that if the Lebesgue outer measure of that set is zero, then it is measurable and its Lebesgue measure is zero as well.
https://en.wikipedia.org/wiki/Luzin_N_property
In mathematics, a function f is superadditive if f(x + y) ≥ f(x) + f(y) for all x and y in the domain of f. Similarly, a sequence a1, a2, … is called superadditive if it satisfies the inequality aₙ₊ₘ ≥ aₙ + aₘ for all m and n. The term "superadditive" is also applied to functions from a Boolean algebra to the real numbers where P(X ∨ Y) ≥ P(X) + P(Y), such as lower probabilities.
https://en.wikipedia.org/wiki/Superadditivity
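A standard example is f(x) = x² on the nonnegative reals: (x + y)² = x² + 2xy + y² ≥ x² + y² whenever x, y ≥ 0. An illustrative Python check:

```python
# f(x) = x**2 is superadditive on the nonnegative reals.
f = lambda x: x * x

pairs = [(0.0, 5.0), (1.5, 2.5), (3.0, 3.0)]
assert all(f(x + y) >= f(x) + f(y) for x, y in pairs)

# The sequence a_n = n**2 is superadditive in the same sense:
# a_{n+m} >= a_n + a_m for all positive m, n.
a = lambda n: n * n
assert all(a(n + m) >= a(n) + a(m)
           for n in range(1, 20) for m in range(1, 20))
```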
In mathematics, a function f is weakly harmonic in a domain D if ∫_D f Δg = 0 for all g with compact support in D and continuous second derivatives, where Δ is the Laplacian. This is the same notion as a weak derivative; however, a function can have a weak derivative and not be differentiable. In this case, we have the somewhat surprising result that a function is weakly harmonic if and only if it is harmonic. Thus weakly harmonic is actually equivalent to the seemingly stronger harmonic condition.
https://en.wikipedia.org/wiki/Weakly_harmonic
In mathematics, a function f: ℝᵏ → ℝ is supermodular if f(x ↑ y) + f(x ↓ y) ≥ f(x) + f(y) for all x, y ∈ ℝᵏ, where x ↑ y denotes the componentwise maximum and x ↓ y the componentwise minimum of x and y. If −f is supermodular then f is called submodular, and if the inequality is changed to an equality the function is modular. If f is twice continuously differentiable, then supermodularity is equivalent to the condition ∂²f/∂zᵢ∂zⱼ ≥ 0 for all i ≠ j.
https://en.wikipedia.org/wiki/Supermodular_function
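A simple supermodular function is f(x) = x₁x₂ on ℝ²: its mixed second partial is 1 ≥ 0, and the defining inequality with componentwise max/min can be checked directly. An illustrative sketch:

```python
# f(x) = x1 * x2 is supermodular on R^2.
def f(v):
    return v[0] * v[1]

def cmax(x, y):  # componentwise maximum, x "up" y
    return tuple(max(a, b) for a, b in zip(x, y))

def cmin(x, y):  # componentwise minimum, x "down" y
    return tuple(min(a, b) for a, b in zip(x, y))

samples = [((1.0, 4.0), (3.0, 2.0)), ((-1.0, 0.5), (2.0, -2.0))]
for x, y in samples:
    assert f(cmax(x, y)) + f(cmin(x, y)) >= f(x) + f(y) - 1e-12
```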
In mathematics, a function f: ℝⁿ → ℝ is said to be closed if for each α ∈ ℝ, the sublevel set {x ∈ dom f | f(x) ≤ α} is a closed set. Equivalently, the function f is closed if its epigraph epi f = {(x, t) ∈ ℝⁿ⁺¹ | x ∈ dom f, f(x) ≤ t} is closed. This definition is valid for any function, but is mostly used for convex functions. A proper convex function is closed if and only if it is lower semi-continuous. For a convex function which is not proper there is disagreement as to the definition of the closure of the function.
https://en.wikipedia.org/wiki/Closed_convex_function
In mathematics, a function f: ℝ → ℝ is symmetrically continuous at a point x if lim_{h→0} [f(x + h) − f(x − h)] = 0. The usual definition of continuity implies symmetric continuity, but the converse is not true.
https://en.wikipedia.org/wiki/Symmetrically_continuous_function
For example, the function x⁻² is symmetrically continuous at x = 0, but not continuous there. Also, symmetric differentiability implies symmetric continuity, but the converse is not true, just as usual continuity does not imply differentiability. The set of symmetrically continuous functions, with the usual scalar multiplication, can easily be shown to have the structure of a vector space over ℝ, similarly to the usually continuous functions, which form a linear subspace within it.
https://en.wikipedia.org/wiki/Symmetrically_continuous_function
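The x⁻² example can be demonstrated numerically: extending it by f(0) = 0 (an illustrative choice; any value works for symmetric continuity), the symmetric difference f(h) − f(−h) vanishes identically because the function is even, while ordinary continuity at 0 fails badly:

```python
# f(x) = x**(-2) for x != 0, with f(0) = 0 (value at 0 chosen for
# illustration): symmetrically continuous at 0, but not continuous.
def f(x):
    return 0.0 if x == 0 else x ** -2

for h in (1e-1, 1e-3, 1e-6):
    # Even function: the symmetric difference is exactly zero.
    assert f(0 + h) - f(0 - h) == 0.0

# Ordinary continuity fails: f(h) blows up as h -> 0 instead of
# approaching f(0) = 0.
assert f(1e-6) > 1e11
```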
In mathematics, a function f: V → W between two complex vector spaces is said to be antilinear or conjugate-linear if f(x + y) = f(x) + f(y) and f(sx) = s̄ f(x) hold for all vectors x, y ∈ V and every complex number s, where s̄ denotes the complex conjugate of s. Antilinear maps stand in contrast to linear maps, which are additive maps that are homogeneous rather than conjugate homogeneous.
https://en.wikipedia.org/wiki/Antidual_space
If the vector spaces are real then antilinearity is the same as linearity. Antilinear maps occur in quantum mechanics in the study of time reversal and in spinor calculus, where it is customary to replace the bars over the basis vectors and the components of geometric objects by dots put above the indices. Scalar-valued antilinear maps often arise when dealing with complex inner products and Hilbert spaces.
https://en.wikipedia.org/wiki/Antidual_space
In mathematics, a function from a set X to a set Y assigns to each element of X exactly one element of Y. The set X is called the domain of the function and the set Y is called the codomain of the function. Functions were originally the idealization of how a varying quantity depends on another quantity. For example, the position of a planet is a function of time. Historically, the concept was elaborated with the infinitesimal calculus at the end of the 17th century, and, until the 19th century, the functions that were considered were differentiable (that is, they had a high degree of regularity). The concept of a function was formalized at the end of the 19th century in terms of set theory, and this greatly enlarged the domains of application of the concept.
https://en.wikipedia.org/wiki/Function_evaluation
A function is most often denoted by letters such as f, g and h, and the value of a function f at an element x of its domain is denoted by f(x); the numerical value resulting from the function evaluation at a particular input value is denoted by replacing x with this value; for example, the value of f at x = 4 is denoted by f(4). When the function is not named and is represented by an expression E, the value of the function at, say, x = 4 may be denoted by E|x=4. For example, the value at 4 of the function that maps x to (x + 1)² may be denoted by (x + 1)²|x=4 (which results in 25). A function is uniquely represented by the set of all pairs (x, f(x)), called the graph of the function, a popular means of illustrating the function. When the domain and the codomain are sets of real numbers, each such pair may be thought of as the Cartesian coordinates of a point in the plane. Functions are widely used in science, engineering, and in most fields of mathematics. It has been said that functions are "the central objects of investigation" in most fields of mathematics.
https://en.wikipedia.org/wiki/Function_evaluation
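The evaluation notation and the graph-as-set-of-pairs idea both translate directly into code. A tiny illustrative sketch:

```python
# Evaluating (x + 1)**2 at x = 4, i.e. the notation (x+1)^2 |_{x=4}:
f = lambda x: (x + 1) ** 2
assert f(4) == 25

# The graph of f restricted to a finite domain: the set of pairs (x, f(x)).
graph = {(x, f(x)) for x in range(-2, 3)}
assert (1, 4) in graph
```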
In mathematics, a function is a rule for taking an input (in the simplest case, a number or set of numbers) and providing an output (which may also be a number). A symbol that stands for an arbitrary input is called an independent variable, while a symbol that stands for an arbitrary output is called a dependent variable. The most common symbol for the input is x, and the most common symbol for the output is y; the function itself is commonly written y = f(x). It is possible to have multiple independent variables or multiple dependent variables. For instance, in multivariable calculus, one often encounters functions of the form z = f(x,y), where z is a dependent variable and x and y are independent variables. Functions with multiple outputs are often referred to as vector-valued functions.
https://en.wikipedia.org/wiki/Response_variable
In mathematics, a function is locally bounded if it is bounded around every point. A family of functions is locally bounded if for any point in their domain all the functions are bounded around that point and by the same number.
https://en.wikipedia.org/wiki/Locally_bounded
In mathematics, a function is said to vanish at infinity if its values approach 0 as the input grows without bounds. There are two different ways to define this with one definition applying to functions defined on normed vector spaces and the other applying to functions defined on locally compact spaces. Aside from this difference, both of these notions correspond to the intuitive notion of adding a point at infinity, and requiring the values of the function to get arbitrarily close to zero as one approaches it. This definition can be formalized in many cases by adding an (actual) point at infinity.
https://en.wikipedia.org/wiki/Rapidly_decreasing_function
In mathematics, a function of a motor variable is a function with arguments and values in the split-complex number plane, much as functions of a complex variable involve ordinary complex numbers. William Kingdon Clifford coined the term motor for a kinematic operator in his "Preliminary Sketch of Biquaternions" (1873). He used split-complex numbers for scalars in his split-biquaternions. Motor variable is used here in place of split-complex variable for euphony and tradition.
https://en.wikipedia.org/wiki/Motor_variable
For example, f(z) = u(z) + j v(z), where z = x + jy, x, y ∈ ℝ, j² = +1, and u(z), v(z) ∈ ℝ.
https://en.wikipedia.org/wiki/Motor_variable
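Split-complex arithmetic differs from ordinary complex arithmetic only in the sign of the j·j term, which makes functions of a motor variable easy to experiment with. An illustrative sketch representing z = x + jy as a pair (x, y):

```python
# Split-complex ("motor variable") multiplication: j*j = +1, so
# (x + jy)(a + jb) = (xa + yb) + j(xb + ya).
def mul(z, w):
    (x, y), (a, b) = z, w
    return (x * a + y * b,    # the j*j = +1 term contributes +y*b
            x * b + y * a)

z = (2.0, 3.0)                # z = 2 + 3j
u, v = mul(z, z)
assert (u, v) == (13.0, 12.0) # z^2 = (x^2 + y^2) + j(2xy)

# Unlike the complex numbers, the split-complex plane has zero divisors:
# (1 + j)(1 - j) = 1 - j^2 = 0.
assert mul((1.0, 1.0), (1.0, -1.0)) == (0.0, 0.0)
```

The zero divisors along the lines y = ±x are one reason the function theory falls short of ordinary complex analysis, as the excerpt notes.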
Functions of a motor variable provide a context to extend real analysis and provide compact representation of mappings of the plane. However, the theory falls well short of function theory on the ordinary complex plane. Nevertheless, some of the aspects of conventional complex analysis have an interpretation given with motor variables, and more generally in hypercomplex analysis.
https://en.wikipedia.org/wiki/Motor_variable
In mathematics, a function of bounded deformation is a function whose distributional derivatives are not quite well-behaved enough to qualify as functions of bounded variation, although the symmetric part of the derivative matrix does meet that condition. Thought of as deformations of elasto-plastic bodies, functions of bounded deformation play a major role in the mathematical study of materials, e.g. the Francfort–Marigo model of brittle crack evolution. More precisely, given an open subset Ω of ℝⁿ, a function u: Ω → ℝⁿ is said to be of bounded deformation if the symmetrized gradient ε(u) of u, ε(u) = (∇u + ∇uᵀ)/2, is a bounded, symmetric n × n matrix-valued Radon measure. The collection of all functions of bounded deformation is denoted BD(Ω; ℝⁿ), or simply BD, introduced essentially by P.-M. Suquet in 1978. BD is a strictly larger space than the space BV of functions of bounded variation.
https://en.wikipedia.org/wiki/Bounded_deformation
One can show that if u is of bounded deformation then the measure ε(u) can be decomposed into three parts: one absolutely continuous with respect to Lebesgue measure, denoted e(u) dx; a jump part, supported on a rectifiable (n − 1)-dimensional set Ju of points where u has two different approximate limits u₊ and u₋, together with a normal vector νu; and a "Cantor part", which vanishes on Borel sets of finite Hⁿ⁻¹-measure (where Hᵏ denotes k-dimensional Hausdorff measure). A function u is said to be of special bounded deformation if the Cantor part of ε(u) vanishes, so that the measure can be written as ε(u) = e(u) dx + (u₊(x) − u₋(x)) ⊙ νu(x) Hⁿ⁻¹|Ju, where Hⁿ⁻¹|Ju denotes Hⁿ⁻¹ restricted to the jump set Ju and ⊙ denotes the symmetrized dyadic product: a ⊙ b = (a ⊗ b + b ⊗ a)/2. The collection of all functions of special bounded deformation is denoted SBD(Ω; ℝⁿ), or simply SBD.
https://en.wikipedia.org/wiki/Bounded_deformation
In mathematics, a function of n variables is symmetric if its value is the same no matter the order of its arguments. For example, a function f(x₁, x₂) of two arguments is a symmetric function if and only if f(x₁, x₂) = f(x₂, x₁) for all x₁ and x₂ such that (x₁, x₂) and (x₂, x₁) are in the domain of f. The most commonly encountered symmetric functions are polynomial functions, which are given by the symmetric polynomials.
https://en.wikipedia.org/wiki/Complete_symmetric_function
A related notion is alternating polynomials, which change sign under an interchange of variables. Aside from polynomial functions, tensors that act as functions of several vectors can be symmetric, and in fact the space of symmetric k-tensors on a vector space V is isomorphic to the space of homogeneous polynomials of degree k on V. Symmetric functions should not be confused with even and odd functions, which have a different sort of symmetry.
https://en.wikipedia.org/wiki/Complete_symmetric_function
In mathematics, a function on the real numbers is called a step function if it can be written as a finite linear combination of indicator functions of intervals. Informally speaking, a step function is a piecewise constant function having only finitely many pieces.
https://en.wikipedia.org/wiki/Step_function
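A finite linear combination of interval indicators can be built directly; a small Python sketch using the half-open-interval convention (the function names are illustrative):

```python
def indicator(a, b):
    """Indicator function of the half-open interval [a, b)."""
    return lambda x: 1 if a <= x < b else 0

def step_function(terms):
    """Finite linear combination sum_i c_i * 1_[a_i, b_i)(x)."""
    return lambda x: sum(c * indicator(a, b)(x) for c, (a, b) in terms)

# A step function with finitely many pieces: 2 on [0,1), -1 on [1,3), 0 elsewhere.
f = step_function([(2, (0, 1)), (-1, (1, 3))])
```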
In mathematics, a function or sequence is said to exhibit quadratic growth when its values are proportional to the square of the function argument or sequence position. "Quadratic growth" often means more generally "quadratic growth in the limit", as the argument or sequence position goes to infinity – in big Theta notation, f(x) = Θ(x²). This can be defined either continuously (for a real-valued function of a real variable) or discretely (for a sequence of real numbers, i.e., a real-valued function of an integer or natural number variable).
https://en.wikipedia.org/wiki/Quadratic_growth
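The Θ(x²) behaviour shows up numerically as the ratio f(2n)/f(n) approaching 2² = 4: doubling the argument asymptotically quadruples the value. A quick Python illustration:

```python
def f(n):
    # f(n) = 3n^2 + 5n is Theta(n^2): the quadratic term dominates for large n.
    return 3*n*n + 5*n

# The doubling ratio f(2n)/f(n) = (12n + 10)/(3n + 5) increases toward 4.
ratios = [f(2*n) / f(n) for n in (10, 100, 1000, 10000)]
```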
In mathematics, a function space is a set of functions between two fixed sets. Often, the domain and/or codomain will have additional structure which is inherited by the function space. For example, the set of functions from any set X into a vector space has a natural vector space structure given by pointwise addition and scalar multiplication. In other scenarios, the function space might inherit a topological or metric structure, hence the name function space.
https://en.wikipedia.org/wiki/Function_space
In mathematics, a functional (as a noun) is a certain type of function. The exact definition of the term varies depending on the subfield (and sometimes even the author). In linear algebra, it is synonymous with linear forms, which are linear mappings from a vector space V into its field of scalars (that is, they are elements of the dual space V*). In functional analysis and related fields, it refers more generally to a mapping from a space X into the field of real or complex numbers. In functional analysis, the term linear functional is a synonym of linear form; that is, it is a scalar-valued linear map.
https://en.wikipedia.org/wiki/Functional_(mathematics)
Depending on the author, such mappings may or may not be assumed to be linear, or to be defined on the whole space X. In computer science, it is synonymous with higher-order functions, that is, functions that take functions as arguments or return them. This article is mainly concerned with the second concept, which arose in the early 18th century as part of the calculus of variations.
https://en.wikipedia.org/wiki/Functional_(mathematics)
The first concept, which is more modern and abstract, is discussed in detail in a separate article, under the name linear form. The third concept is detailed in the computer science article on higher-order functions. In the case where the space X is a space of functions, the functional is a "function of a function", and some older authors actually define the term "functional" to mean "function of a function". However, the fact that X is a space of functions is not mathematically essential, so this older definition is no longer prevalent. The term originates from the calculus of variations, where one searches for a function that minimizes (or maximizes) a given functional. A particularly important application in physics is the search for a state of a system that minimizes (or maximizes) the action, or in other words the time integral of the Lagrangian.
https://en.wikipedia.org/wiki/Functional_(mathematics)
In mathematics, a functional calculus is a theory allowing one to apply mathematical functions to mathematical operators. It is now a branch (more accurately, several related areas) of the field of functional analysis, connected with spectral theory. (Historically, the term was also used synonymously with calculus of variations; this usage is obsolete, except for functional derivative. Sometimes it is used in relation to types of functional equations, or in logic for systems of predicate calculus.)
https://en.wikipedia.org/wiki/Functional_calculus
If f is a function, say a numerical function of a real number, and M is an operator, there is no particular reason why the expression f(M) should make sense. If it does, then we are no longer using f on its original function domain. In the tradition of operational calculus, algebraic expressions in operators are handled irrespective of their meaning.
https://en.wikipedia.org/wiki/Functional_calculus
This passes nearly unnoticed if we talk about 'squaring a matrix', though, which is the case of f(x) = x² and M an n × n matrix. The idea of a functional calculus is to create a principled approach to this kind of overloading of the notation. The most immediate case is to apply polynomial functions to a square matrix, extending what has just been discussed.
https://en.wikipedia.org/wiki/Functional_calculus
In the finite-dimensional case, the polynomial functional calculus yields quite a bit of information about the operator. For example, consider the family of polynomials which annihilate an operator T. This family is an ideal in the ring of polynomials.
https://en.wikipedia.org/wiki/Functional_calculus
Furthermore, it is a nontrivial ideal: let n be the finite dimension of the algebra of matrices; then {I, T, T², …, Tⁿ} is linearly dependent. So α₀I + α₁T + ⋯ + αₙTⁿ = 0 for some scalars αᵢ, not all equal to 0. This implies that the polynomial α₀ + α₁x + ⋯ + αₙxⁿ lies in the ideal.
https://en.wikipedia.org/wiki/Functional_calculus
Since the ring of polynomials is a principal ideal domain, this ideal is generated by some polynomial m {\displaystyle m} . Multiplying by a unit if necessary, we can choose m {\displaystyle m} to be monic. When this is done, the polynomial m {\displaystyle m} is precisely the minimal polynomial of T {\displaystyle T} .
https://en.wikipedia.org/wiki/Functional_calculus
This polynomial gives deep information about T. For instance, a scalar α is an eigenvalue of T if and only if α is a root of m.
https://en.wikipedia.org/wiki/Functional_calculus
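The finite-dimensional discussion above can be checked numerically: by the Cayley–Hamilton theorem the characteristic polynomial annihilates T, so the ideal of annihilating polynomials is nontrivial, and every eigenvalue of T is a root of such a polynomial. A sketch assuming NumPy is available (the evaluation helper is illustrative):

```python
import numpy as np

# A concrete 2x2 operator; its characteristic polynomial annihilates it.
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])

coeffs = np.poly(T)  # characteristic polynomial coefficients, highest degree first

def eval_poly_at_matrix(coeffs, M):
    """Polynomial functional calculus: p(M) = sum_i c_i M^(deg - i)."""
    n = M.shape[0]
    result = np.zeros_like(M)
    power = np.eye(n)
    for c in reversed(coeffs):  # iterate from the constant term upward
        result = result + c * power
        power = power @ M
    return result

p_of_T = eval_poly_at_matrix(coeffs, T)  # the zero matrix, by Cayley-Hamilton
eigenvalues = np.linalg.eigvals(T)       # each eigenvalue is a root of the polynomial
```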
Also, sometimes m can be used to calculate the exponential of T efficiently. The polynomial calculus is not as informative in the infinite-dimensional case.
https://en.wikipedia.org/wiki/Functional_calculus
Consider the unilateral shift with the polynomial calculus; the ideal defined above is now trivial. Thus one is interested in functional calculi more general than polynomials. The subject is closely linked to spectral theory, since for a diagonal matrix or multiplication operator, it is rather clear what the definitions should be.
https://en.wikipedia.org/wiki/Functional_calculus
In mathematics, a functional equation is, in the broadest meaning, an equation in which one or several functions appear as unknowns. So, differential equations and integral equations are functional equations. However, a more restricted meaning is often used, where a functional equation is an equation that relates several values of the same function. For example, the logarithm functions are essentially characterized by the logarithmic functional equation log(xy) = log(x) + log(y).
https://en.wikipedia.org/wiki/Abel's_functional_equation
If the domain of the unknown function is supposed to be the natural numbers, the function is generally viewed as a sequence, and, in this case, a functional equation (in the narrower meaning) is called a recurrence relation.
https://en.wikipedia.org/wiki/Abel's_functional_equation
Thus the term functional equation is used mainly for real functions and complex functions. Moreover, a smoothness condition is often assumed for the solutions, since without such a condition, most functional equations have very irregular solutions. For example, the gamma function is a function that satisfies the functional equation f(x + 1) = x f(x) and the initial value f(1) = 1. There are many functions that satisfy these conditions, but the gamma function is the unique one that is meromorphic in the whole complex plane, and logarithmically convex for x real and positive (Bohr–Mollerup theorem).
https://en.wikipedia.org/wiki/Abel's_functional_equation
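The functional equation f(x + 1) = x f(x) with f(1) = 1 can be verified numerically for the gamma function using Python's standard library:

```python
import math

def satisfies_recurrence(f, x, tol=1e-9):
    """Check the functional equation f(x + 1) = x * f(x) at a point x."""
    return abs(f(x + 1) - x * f(x)) <= tol * abs(f(x + 1))

# math.gamma satisfies the recurrence at each sample point, and gamma(1) = 1.
checks = [satisfies_recurrence(math.gamma, x) for x in (0.5, 1.0, 2.5, 7.0)]
```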
In mathematics, a functional square root (sometimes called a half iterate) is a square root of a function with respect to the operation of function composition. In other words, a functional square root of a function g is a function f satisfying f(f(x)) = g(x) for all x.
https://en.wikipedia.org/wiki/Functional_square_root
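A concrete functional square root: f(x) = x² satisfies f(f(x)) = (x²)² = x⁴, so f is a half iterate of g(x) = x⁴. A short Python check (note that functional square roots need not exist or be unique in general):

```python
# f(x) = x^2 is a functional square root of g(x) = x^4: f(f(x)) = g(x).
f = lambda x: x**2
g = lambda x: x**4

# Composing f with itself reproduces g at every sample point.
half_iterate_works = all(f(f(x)) == g(x) for x in range(-5, 6))
```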
In mathematics, a fundamental discriminant D is an integer invariant in the theory of integral binary quadratic forms. If Q(x, y) = ax² + bxy + cy² is a quadratic form with integer coefficients, then D = b² − 4ac is the discriminant of Q(x, y). Conversely, every integer D with D ≡ 0, 1 (mod 4) is the discriminant of some binary quadratic form with integer coefficients.
https://en.wikipedia.org/wiki/Fundamental_discriminant
Thus, all such integers are referred to as discriminants in this theory. There are explicit congruence conditions that give the set of fundamental discriminants. Specifically, D is a fundamental discriminant if and only if one of the following holds: D ≡ 1 (mod 4) and D is square-free; or D = 4m, where m ≡ 2 or 3 (mod 4) and m is square-free. The first eleven positive fundamental discriminants are: 1, 5, 8, 12, 13, 17, 21, 24, 28, 29, 33 (sequence A003658 in the OEIS). The first eleven negative fundamental discriminants are: −3, −4, −7, −8, −11, −15, −19, −20, −23, −24, −31 (sequence A003657 in the OEIS).
https://en.wikipedia.org/wiki/Fundamental_discriminant
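The congruence conditions translate directly into code; a minimal Python classifier (it relies on the fact that Python's % operator returns nonnegative residues, which matches the mathematical convention D ≡ 1 (mod 4) for negative D):

```python
def is_squarefree(n):
    """True if no square of a prime divides n (checked via trial division)."""
    n = abs(n)
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return n != 0

def is_fundamental_discriminant(D):
    """D is fundamental iff D = 1 (mod 4) and square-free,
    or D = 4m with m = 2 or 3 (mod 4) and m square-free."""
    if D % 4 == 1:          # Python's % gives residues in {0,1,2,3} even for D < 0
        return is_squarefree(D)
    if D % 4 == 0:
        m = D // 4
        return m % 4 in (2, 3) and is_squarefree(m)
    return False
```

Filtering the integers with this predicate reproduces the positive and negative sequences listed above.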
In mathematics, a fundamental matrix of a system of n homogeneous linear ordinary differential equations is a matrix-valued function Ψ(t) whose columns are linearly independent solutions of the system. Then every solution to the system can be written as x(t) = Ψ(t)c, for some constant vector c (written as a column vector of height n). One can show that a matrix-valued function Ψ is a fundamental matrix of x′(t) = A(t)x(t) if and only if Ψ′(t) = A(t)Ψ(t) and Ψ is a non-singular matrix for all t.
https://en.wikipedia.org/wiki/Fundamental_matrix_(linear_differential_equation)
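For the constant-coefficient system x′(t) = Ax(t) with A = [[0, 1], [−1, 0]], the rotation matrix Ψ(t) = [[cos t, sin t], [−sin t, cos t]] has columns (cos t, −sin t) and (sin t, cos t), which are linearly independent solutions. A plain-Python check of the characterization Ψ′(t) = AΨ(t) with Ψ(t) non-singular:

```python
import math

def Psi(t):
    """Candidate fundamental matrix: a rotation matrix."""
    return [[math.cos(t), math.sin(t)],
            [-math.sin(t), math.cos(t)]]

def Psi_dot(t):
    """Entrywise derivative of Psi."""
    return [[-math.sin(t), math.cos(t)],
            [-math.cos(t), -math.sin(t)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0.0, 1.0], [-1.0, 0.0]]

def is_fundamental_at(t, tol=1e-12):
    """Check Psi'(t) = A Psi(t) and det Psi(t) != 0 at a single time t."""
    lhs, rhs = Psi_dot(t), matmul(A, Psi(t))
    ok = all(abs(lhs[i][j] - rhs[i][j]) <= tol
             for i in range(2) for j in range(2))
    P = Psi(t)
    det = P[0][0] * P[1][1] - P[0][1] * P[1][0]  # = cos^2 t + sin^2 t = 1
    return ok and abs(det) > tol
```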
In mathematics, a fundamental pair of periods is an ordered pair of complex numbers that defines a lattice in the complex plane. This type of lattice is the underlying object with which elliptic functions and modular forms are defined.
https://en.wikipedia.org/wiki/Lattice_basis
In mathematics, a fundamental polygon can be defined for every compact Riemann surface of genus greater than 0. It encodes not only the topology of the surface through its fundamental group but also determines the Riemann surface up to conformal equivalence. By the uniformization theorem, every compact Riemann surface has a simply connected universal covering surface given by exactly one of the following: the Riemann sphere, the complex plane, or the unit disk D (equivalently, the upper half-plane H). In the first case of genus zero, the surface is conformally equivalent to the Riemann sphere. In the second case of genus one, the surface is conformally equivalent to a torus C/Λ for some lattice Λ in C. The fundamental polygon of Λ, if assumed convex, may be taken to be either a period parallelogram or a centrally symmetric hexagon, a result first proved by Fedorov in 1891.
https://en.wikipedia.org/wiki/Fundamental_polygon
In the last case of genus g > 1, the Riemann surface is conformally equivalent to H/Γ where Γ is a Fuchsian group of Möbius transformations. A fundamental domain for Γ is given by a convex polygon for the hyperbolic metric on H. These can be defined by Dirichlet polygons and have an even number of sides. The structure of the fundamental group Γ can be read off from such a polygon.
https://en.wikipedia.org/wiki/Fundamental_polygon
Using the theory of quasiconformal mappings and the Beltrami equation, it can be shown there is a canonical convex Dirichlet polygon with 4g sides, first defined by Fricke, which corresponds to the standard presentation of Γ as the group with 2g generators a₁, b₁, a₂, b₂, ..., a_g, b_g and the single relation [a₁, b₁][a₂, b₂]⋅⋅⋅[a_g, b_g] = 1, where [a, b] = aba⁻¹b⁻¹. Any Riemannian metric on an oriented closed 2-manifold M defines a complex structure on M, making M a compact Riemann surface. Through the use of fundamental polygons, it follows that two oriented closed 2-manifolds are classified by their genus, that is, half the rank of the Abelian group Γ/[Γ, Γ], where Γ = π₁(M).
https://en.wikipedia.org/wiki/Fundamental_polygon
Moreover, it also follows from the theory of quasiconformal mappings that two compact Riemann surfaces are diffeomorphic if and only if they are homeomorphic. Consequently, two closed oriented 2-manifolds are homeomorphic if and only if they are diffeomorphic. Such a result can also be proved using the methods of differential topology.
https://en.wikipedia.org/wiki/Fundamental_polygon
In mathematics, a fundamental solution for a linear partial differential operator L is a formulation in the language of distribution theory of the older idea of a Green's function (although unlike Green's functions, fundamental solutions do not address boundary conditions). In terms of the Dirac delta "function" δ(x), a fundamental solution F is a solution of the inhomogeneous equation LF = δ(x). Here F is a priori only assumed to be a distribution. This concept has long been utilized for the Laplacian in two and three dimensions.
https://en.wikipedia.org/wiki/Fundamental_solution
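For the Laplacian in three dimensions, the fundamental solution is F(x) = −1/(4π|x|): away from the origin ΔF = 0, since the delta on the right-hand side is supported at 0. A finite-difference sanity check in plain Python (the sample point, step size, and tolerance are illustrative):

```python
import math

def F(x, y, z):
    """Fundamental solution of the 3D Laplacian: F = -1/(4*pi*r)."""
    return -1.0 / (4.0 * math.pi * math.sqrt(x*x + y*y + z*z))

def laplacian(f, x, y, z, h=1e-3):
    """Second-order central-difference approximation of the Laplacian."""
    return (f(x + h, y, z) + f(x - h, y, z)
          + f(x, y + h, z) + f(x, y - h, z)
          + f(x, y, z + h) + f(x, y, z - h)
          - 6.0 * f(x, y, z)) / (h * h)

# At a point away from the origin the numerical Laplacian is nearly zero.
residual = laplacian(F, 1.0, 0.5, -0.7)
```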
It was investigated for all dimensions for the Laplacian by Marcel Riesz. The existence of a fundamental solution for any operator with constant coefficients — the most important case, directly linked to the possibility of using convolution to solve an arbitrary right hand side — was shown by Bernard Malgrange and Leon Ehrenpreis. In the context of functional analysis, fundamental solutions are usually developed via the Fredholm alternative and explored in Fredholm theory.
https://en.wikipedia.org/wiki/Fundamental_solution
In mathematics, a fundamental theorem is a theorem which is considered to be central and conceptually important for some topic. For example, the fundamental theorem of calculus gives the relationship between differential calculus and integral calculus. The names are mostly traditional, so that for example the fundamental theorem of arithmetic is basic to what would now be called number theory. Some of these are classification theorems of objects which are mainly dealt with in the field.
https://en.wikipedia.org/wiki/Fundamental_lemma
For instance, the fundamental theorem of curves describes the classification of regular curves in space up to translation and rotation. Likewise, the mathematical literature sometimes refers to the fundamental lemma of a field. The term lemma is conventionally used to denote a proven proposition which is used as a stepping stone to a larger result, rather than as a useful statement in-and-of itself.
https://en.wikipedia.org/wiki/Fundamental_lemma
In mathematics, a fusion category is a category that is rigid, semisimple, k-linear, monoidal and has only finitely many isomorphism classes of simple objects, such that the monoidal unit is simple. If the ground field k is algebraically closed, then the latter condition is equivalent to Hom(1, 1) ≅ k by Schur's lemma.
https://en.wikipedia.org/wiki/Fusion_category
In mathematics, a fusion frame of a vector space is a natural extension of a frame. It is an additive construct of several, potentially "overlapping" frames. The motivation for this concept comes from situations in which a signal cannot be acquired by a single sensor alone (a constraint imposed by hardware or data-throughput limitations); rather, the partial components of the signal must be collected via a network of sensors, and the partial signal representations are then fused into the complete signal. By construction, fusion frames easily lend themselves to parallel or distributed processing of sensor networks consisting of arbitrary overlapping sensor fields.
https://en.wikipedia.org/wiki/Fusion_frame
In mathematics, a general hypergeometric function or Aomoto–Gelfand hypergeometric function is a generalization of the hypergeometric function that was introduced by Gelfand (1986). The general hypergeometric function is a function that is (more or less) defined on a Grassmannian, and depends on a choice of some complex numbers and signs.
https://en.wikipedia.org/wiki/General_hypergeometric_function
In mathematics, a generalized Kac–Moody algebra is a Lie algebra that is similar to a Kac–Moody algebra, except that it is allowed to have imaginary simple roots. Generalized Kac–Moody algebras are also sometimes called GKM algebras, Borcherds–Kac–Moody algebras, BKM algebras, or Borcherds algebras. The best known example is the monster Lie algebra.
https://en.wikipedia.org/wiki/Borcherds_algebra
In mathematics, a generalized Korteweg–De Vries equation (Masayoshi Tsutsumi, Toshio Mukasa & Riichi Iino 1970) is the nonlinear partial differential equation ∂ₜu + ∂ₓ³u + ∂ₓf(u) = 0. The function f is sometimes taken to be f(u) = u^(k+1)/(k+1) + u for some positive integer k (where the extra u is a "drift term" that makes the analysis a little easier). The case f(u) = 3u² is the original Korteweg–De Vries equation.
https://en.wikipedia.org/wiki/Generalized_Korteweg–De_Vries_equation
In mathematics, a generalized arithmetic progression (or multiple arithmetic progression) is a generalization of an arithmetic progression equipped with multiple common differences – whereas an arithmetic progression is generated by a single common difference, a generalized arithmetic progression can be generated by multiple common differences. For example, the sequence 17, 20, 22, 23, 25, 26, 27, 28, 29, … is not an arithmetic progression, but is instead generated by starting with 17 and adding either 3 or 5, thus allowing multiple common differences to generate it. A semilinear set generalizes this idea to multiple dimensions: it is a set of vectors of integers, rather than a set of integers.
https://en.wikipedia.org/wiki/Multi-dimensional_arithmetic_progression
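The example sequence is exactly the set {17 + 3a + 5b : a, b ≥ 0} truncated below 30; a short Python check:

```python
# Generate the generalized arithmetic progression starting at 17
# with the two common differences 3 and 5, keeping values below 30.
gap = sorted({17 + 3*a + 5*b
              for a in range(5) for b in range(3)
              if 17 + 3*a + 5*b < 30})
```

This reproduces 17, 20, 22, 23, 25, 26, 27, 28, 29 directly from the definition.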
In mathematics, a generalized conic is a geometrical object defined by a property which is a generalization of some defining property of the classical conic. For example, in elementary geometry, an ellipse can be defined as the locus of a point which moves in a plane such that the sum of its distances from two fixed points – the foci – in the plane is a constant. The curve obtained when the set of two fixed points is replaced by an arbitrary, but fixed, finite set of points in the plane is called an n–ellipse and can be thought of as a generalized ellipse. Since an ellipse is the equidistant set of two circles, the equidistant set of two arbitrary sets of points in a plane can be viewed as a generalized conic.
https://en.wikipedia.org/wiki/Generalized_conic
In rectangular Cartesian coordinates, the equation y = x² represents a parabola. The generalized equation y = xʳ, for r ≠ 0 and r ≠ 1, can be treated as defining a generalized parabola. The idea of a generalized conic has found applications in approximation theory and optimization theory. Among the several possible ways in which the concept of a conic can be generalized, the most widely used approach is to define it as a generalization of the ellipse.
https://en.wikipedia.org/wiki/Generalized_conic
The starting point for this approach is to look upon an ellipse as a curve satisfying the 'two-focus property': an ellipse is a curve that is the locus of points the sum of whose distances from two given points is constant. The two points are the foci of the ellipse. The curve obtained by replacing the set of two fixed points by an arbitrary, but fixed, finite set of points in the plane can be thought of as a generalized ellipse.
https://en.wikipedia.org/wiki/Generalized_conic
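The two-focus property can be checked numerically for an ordinary ellipse, and the same distance-sum function extends verbatim to a generalized ellipse by supplying more foci. A Python sketch using the ellipse x²/25 + y²/16 = 1, whose foci are (±3, 0) and whose constant distance sum is 2a = 10:

```python
import math

def distance_sum(p, foci):
    """Sum of Euclidean distances from point p to each focus."""
    return sum(math.dist(p, f) for f in foci)

# Ordinary ellipse x^2/25 + y^2/16 = 1: a = 5, b = 4, so c = 3 and 2a = 10.
foci = [(-3.0, 0.0), (3.0, 0.0)]
points_on_ellipse = [(5 * math.cos(t), 4 * math.sin(t))
                     for t in (0.0, 0.7, 1.9, 3.1)]
sums = [distance_sum(p, foci) for p in points_on_ellipse]
# For an n-ellipse, the list `foci` would simply contain n points.
```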
Generalized conics with three foci are called trifocal ellipses. This can be further generalized to curves which are obtained as the loci of points such that some weighted sum of the distances from a finite set of points is a constant. A still further generalization is possible by assuming that the weights attached to the distances can be of arbitrary sign, namely, plus or minus.
https://en.wikipedia.org/wiki/Generalized_conic
Finally, the restriction that the set of fixed points, called the set of foci of the generalized conic, be finite may also be removed. The set may be assumed to be finite or infinite.
https://en.wikipedia.org/wiki/Generalized_conic
In the infinite case, the weighted arithmetic mean has to be replaced by an appropriate integral. Generalized conics in this sense are also called polyellipses, egglipses, or generalized ellipses. Since such curves were considered by the German mathematician Ehrenfried Walther von Tschirnhaus (1651–1708), they are also known as Tschirnhaus'sche Eikurve. Such generalizations have also been discussed by René Descartes and by James Clerk Maxwell.
https://en.wikipedia.org/wiki/Generalized_conic
In mathematics, a generalized cwatset (GC-set) is an algebraic structure generalizing the notion of closure with a twist, the defining characteristic of the cwatset.
https://en.wikipedia.org/wiki/Closure_with_a_twist
In mathematics, a generalized flag variety (or simply flag variety) is a homogeneous space whose points are flags in a finite-dimensional vector space V over a field F. When F is the real or complex numbers, a generalized flag variety is a smooth or complex manifold, called a real or complex flag manifold. Flag varieties are naturally projective varieties. Flag varieties can be defined in various degrees of generality. A prototype is the variety of complete flags in a vector space V over a field F, which is a flag variety for the special linear group over F. Other flag varieties arise by considering partial flags, or by restriction from the special linear group to subgroups such as the symplectic group.
https://en.wikipedia.org/wiki/Generalized_flag_manifold
For partial flags, one needs to specify the sequence of dimensions of the flags under consideration. For subgroups of the linear group, additional conditions must be imposed on the flags. In the most general sense, a generalized flag variety is defined to mean a projective homogeneous variety, that is, a smooth projective variety X over a field F with a transitive action of a reductive group G (and smooth stabilizer subgroup; this is no restriction for F of characteristic zero).
https://en.wikipedia.org/wiki/Generalized_flag_manifold
If X has an F-rational point, then it is isomorphic to G/P for some parabolic subgroup P of G. A projective homogeneous variety may also be realised as the orbit of a highest weight vector in a projectivized representation of G. The complex projective homogeneous varieties are the compact flat model spaces for Cartan geometries of parabolic type. They are homogeneous Riemannian manifolds under any maximal compact subgroup of G, and they are precisely the coadjoint orbits of compact Lie groups. Flag manifolds can be symmetric spaces. Over the complex numbers, the corresponding flag manifolds are the Hermitian symmetric spaces. Over the real numbers, an R-space is a synonym for a real flag manifold and the corresponding symmetric spaces are called symmetric R-spaces.
https://en.wikipedia.org/wiki/Generalized_flag_manifold