haar's theorem
In mathematical statistics, Haar measures are used as prior measures, that is, prior probabilities for compact groups of transformations. These priors are used to construct admissible procedures, by appeal to Wald's characterization of admissible procedures as Bayesian procedures (or limits of Bayesian procedures). For example, a right Haar measure for a family of distributions with a location parameter results in the Pitman estimator, which is the best equivariant estimator. When the left and right Haar measures differ, the right measure is usually preferred as a prior distribution.
wikipedia
haar's theorem
For the group of affine transformations on the parameter space of the normal distribution, the right Haar measure is the Jeffreys prior. Unfortunately, even right Haar measures sometimes yield useless priors that cannot be recommended for practical use, a failing shared by other methods of constructing prior measures that avoid subjective information. Another use of Haar measure in statistics is in conditional inference, in which the sampling distribution of a statistic is conditioned on another statistic of the data. In invariant-theoretic conditional inference, the sampling distribution is conditioned on an invariant of the group of transformations (with respect to which the Haar measure is defined). The result of conditioning sometimes depends on the order in which invariants are used and on the choice of a maximal invariant, so that by itself the statistical principle of invariance fails to select a unique best conditional statistic (if any exists); at least one further principle is needed. For non-compact groups, statisticians have extended Haar-measure results using amenable groups.
wikipedia
standardized variable
In mathematical statistics, a random variable X is standardized by subtracting its expected value $\operatorname{E}[X]$ and dividing the difference by its standard deviation $\sigma(X) = \sqrt{\operatorname{Var}(X)}$:
$$Z = \frac{X - \operatorname{E}[X]}{\sigma(X)}.$$
If the random variable under consideration is the sample mean $\bar{X} = \frac{1}{n}\sum_{i=1}^{n} X_i$ of a random sample $X_1, \dots, X_n$ of X, then the standardized version is
$$Z = \frac{\bar{X} - \operatorname{E}[X]}{\sigma(X)/\sqrt{n}}.$$
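As an illustrative sketch (plain Python, with made-up data), standardizing a sample by subtracting its mean and dividing by its standard deviation:

```python
import math

def standardize(xs):
    """Return the z-scores of a sample: subtract the mean, divide by the standard deviation."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n  # population variance
    sd = math.sqrt(var)
    return [(x - mean) / sd for x in xs]

zs = standardize([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
# The sample has mean 5 and standard deviation 2, so the first z-score is (2 - 5) / 2.
print(zs[0])  # -1.5
```

The standardized values always have mean 0 and standard deviation 1, which is the point of the transformation.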
wikipedia
growth curve (statistics)
In mathematical statistics, growth curves such as those used in biology are often modeled as being continuous stochastic processes, e.g. as being sample paths that almost surely solve stochastic differential equations. Growth curves have also been applied in forecasting market development. When variables are measured with error, a latent growth modeling SEM can be used.
wikipedia
singular statistical model
In mathematical statistics, the Fisher information (sometimes simply called information) is a way of measuring the amount of information that an observable random variable X carries about an unknown parameter θ of a distribution that models X. Formally, it is the variance of the score, or the expected value of the observed information. The role of the Fisher information in the asymptotic theory of maximum-likelihood estimation was emphasized by the statistician Ronald Fisher (following some initial results by Francis Ysidro Edgeworth). The Fisher information matrix is used to calculate the covariance matrices associated with maximum-likelihood estimates.
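As an illustrative sketch of "variance of the score" (plain Python; the Bernoulli(p) model is my own choice of example, not from the article): for Bernoulli(p), the score is x/p − (1−x)/(1−p), and its variance, the Fisher information, equals 1/(p(1−p)). Enumerating the two outcomes exactly:

```python
def bernoulli_score(x, p):
    """Score: derivative of the log-likelihood log(p^x (1-p)^(1-x)) with respect to p."""
    return x / p - (1 - x) / (1 - p)

def fisher_information(p):
    """Variance of the score under Bernoulli(p); the score has mean zero."""
    mean = p * bernoulli_score(1, p) + (1 - p) * bernoulli_score(0, p)
    return (p * bernoulli_score(1, p) ** 2
            + (1 - p) * bernoulli_score(0, p) ** 2
            - mean ** 2)

p = 0.3
print(fisher_information(p))   # matches the closed form 1 / (p * (1 - p))
print(1 / (p * (1 - p)))
```

The two printed values agree, confirming the closed-form information for this model.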
wikipedia
singular statistical model
It can also be used in the formulation of test statistics, such as the Wald test. In Bayesian statistics, the Fisher information plays a role in the derivation of non-informative prior distributions according to Jeffreys' rule. It also appears as the large-sample covariance of the posterior distribution, provided that the prior is sufficiently smooth (a result known as the Bernstein–von Mises theorem, which was anticipated by Laplace for exponential families). The same result is used when approximating the posterior with Laplace's approximation, where the Fisher information appears as the covariance of the fitted Gaussian. Statistical systems of a scientific nature (physical, biological, etc.) whose likelihood functions obey shift invariance have been shown to obey maximum Fisher information. The level of the maximum depends upon the nature of the system constraints.
wikipedia
eddy covariance
In mathematical terms, "eddy flux" is computed as a covariance between the instantaneous deviation in vertical wind speed ($w'$) from its mean value ($\bar{w}$) and the instantaneous deviation in gas concentration, mixing ratio ($s'$), from its mean value ($\bar{s}$), multiplied by the mean air density ($\rho_a$). Several mathematical operations and assumptions, including Reynolds decomposition, are involved in getting from the physically complete equations of turbulent flow to practical equations for computing "eddy flux," as shown below.
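A minimal sketch of the covariance computation described above (plain Python; the wind and mixing-ratio samples and the air density are hypothetical illustrative values): flux = ρₐ · mean(w′s′), with fluctuations defined as deviations from each series' own mean (Reynolds decomposition):

```python
def eddy_flux(w, s, rho_a):
    """Eddy flux as rho_a * covariance(w, s), with fluctuations taken as
    deviations from each series' own mean (Reynolds decomposition)."""
    n = len(w)
    w_mean = sum(w) / n
    s_mean = sum(s) / n
    cov = sum((wi - w_mean) * (si - s_mean) for wi, si in zip(w, s)) / n
    return rho_a * cov

# Hypothetical instantaneous vertical wind (m/s) and gas mixing ratio samples.
w = [0.1, -0.2, 0.3, 0.0, -0.2]
s = [400.1, 399.8, 400.4, 400.0, 399.7]
print(eddy_flux(w, s, rho_a=1.2))
```

Real eddy-covariance processing involves many more corrections (coordinate rotation, detrending, spectral corrections); this only shows the core covariance step.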
wikipedia
alfvén's theorem
In mathematical terms, Alfvén's theorem states that, in an electrically conducting fluid in the limit of a large magnetic Reynolds number, the magnetic flux $\Phi_B$ through an orientable, open material surface advected by a macroscopic, space- and time-dependent velocity field $\mathbf{v}$ is constant:
$$\frac{D\Phi_B}{Dt} = 0,$$
where $D/Dt = \partial/\partial t + (\mathbf{v}\cdot\nabla)$ is the advective derivative.
wikipedia
multi-objective linear programming
In mathematical terms, a MOLP can be written as:
$$\min_x \; Px \quad \text{s.t.}\quad a \le Bx \le b,\; \ell \le x \le u$$
where $B$ is an $(m \times n)$ matrix, $P$ is a $(q \times n)$ matrix, $a$ is an $m$-dimensional vector with components in $\mathbb{R}\cup\{-\infty\}$, $b$ is an $m$-dimensional vector with components in $\mathbb{R}\cup\{+\infty\}$, $\ell$ is an $n$-dimensional vector with components in $\mathbb{R}\cup\{-\infty\}$, and $u$ is an $n$-dimensional vector with components in $\mathbb{R}\cup\{+\infty\}$.
wikipedia
statistical modeling
In mathematical terms, a statistical model is usually thought of as a pair $(S, \mathcal{P})$, where $S$ is the set of possible observations, i.e. the sample space, and $\mathcal{P}$ is a set of probability distributions on $S$. The intuition behind this definition is as follows. It is assumed that there is a "true" probability distribution induced by the process that generates the observed data. We choose $\mathcal{P}$ to represent a set (of distributions) which contains a distribution that adequately approximates the true distribution. Note that we do not require that $\mathcal{P}$ contain the true distribution, and in practice that is rarely the case.
wikipedia
statistical modeling
Indeed, as Burnham & Anderson state, "A model is a simplification or approximation of reality and hence will not reflect all of reality", hence the saying "all models are wrong". The set $\mathcal{P}$ is almost always parameterized: $\mathcal{P} = \{F_\theta : \theta \in \Theta\}$.
wikipedia
statistical modeling
The set $\Theta$ defines the parameters of the model. A parameterization is generally required to have distinct parameter values give rise to distinct distributions, i.e. $F_{\theta_1} = F_{\theta_2} \Rightarrow \theta_1 = \theta_2$ must hold (in other words, the map $\theta \mapsto F_\theta$ must be injective). A parameterization that meets this requirement is said to be identifiable.
wikipedia
point transformation
In mathematical terms, canonical coordinates are any coordinates on the phase space (cotangent bundle) of the system that allow the canonical one-form to be written as $\sum_i p_i\,dq^i$ up to a total differential (exact form). The change of variable between one set of canonical coordinates and another is a canonical transformation. The index of the generalized coordinates q is written here as a superscript ($q^i$), not as a subscript as done above ($q_i$). The superscript conveys the contravariant transformation properties of the generalized coordinates; it does not mean that the coordinate is being raised to a power. Further details may be found at the symplectomorphism article.
wikipedia
inverse demand function
In mathematical terms, if the demand function is $\mathrm{demand} = f(\mathrm{price})$, then the inverse demand function is $\mathrm{price} = f^{-1}(\mathrm{demand})$. The value of the inverse demand function is the highest price that could be charged and still generate the quantity demanded. This is useful because economists typically place price (P) on the vertical axis and quantity (demand, Q) on the horizontal axis in supply-and-demand diagrams, so it is the inverse demand function that depicts the graphed demand curve in the way the reader expects to see.
wikipedia
inverse demand function
The inverse demand function is the same as the average revenue function, since P = AR. To compute the inverse demand function, simply solve for P from the demand function. For example, if the demand function has the form $Q = 240 - 2P$, then the inverse demand function would be $P = 120 - 0.5Q$. Note that although price is the dependent variable in the inverse demand function, it is still the case that the equation represents how the price determines the quantity demanded, not the reverse.
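A small sketch (plain Python, using the example Q = 240 − 2P from the text) checking that the inverse demand function really inverts the demand function:

```python
def demand(price):
    """Quantity demanded at a given price: Q = 240 - 2P."""
    return 240 - 2 * price

def inverse_demand(quantity):
    """Highest price at which the given quantity is demanded: P = 120 - 0.5Q."""
    return 120 - 0.5 * quantity

p = 30.0
q = demand(p)             # 180.0 units demanded at price 30
print(inverse_demand(q))  # recovers the original price, 30.0
```

Composing the two functions returns the starting price, which is exactly what $f^{-1}$ means here.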
wikipedia
quantum discord
In mathematical terms, quantum discord is defined in terms of the quantum mutual information. More specifically, quantum discord is the difference between two expressions which each, in the classical limit, represent the mutual information. These two expressions are:
$$I(A;B) = H(A) + H(B) - H(A,B)$$
$$J(A;B) = H(A) - H(A|B)$$
where, in the classical case, H(A) is the information entropy, H(A, B) the joint entropy and H(A|B) the conditional entropy, and the two expressions yield identical results. In the nonclassical case, the quantum analogues of the three terms are used: $S(\rho_A)$ the von Neumann entropy, $S(\rho)$ the joint quantum entropy and $S(\rho_B|\rho_A)$ a quantum generalization of conditional entropy (not to be confused with conditional quantum entropy), for the density matrix $\rho$;
$$I(\rho) = S(\rho_A) + S(\rho_B) - S(\rho)$$
$$J_A(\rho) = S(\rho_B) - S(\rho_B|\rho_A)$$
The difference between the two expressions defines the basis-dependent quantum discord
$$\mathcal{D}_A(\rho) = I(\rho) - J_A(\rho),$$
which is asymmetric in the sense that $\mathcal{D}_A(\rho)$ can differ from $\mathcal{D}_B(\rho)$.
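As the text notes, the two expressions coincide in the classical case. A small sketch (plain Python, with a hypothetical joint distribution of my own choosing) verifying that H(A) + H(B) − H(A,B) equals H(A) − H(A|B) for classical random variables, via the chain rule H(A|B) = H(A,B) − H(B):

```python
import math

# Hypothetical joint distribution p(a, b) over two binary variables.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

def H(probs):
    """Shannon entropy in bits of a collection of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

pa = {a: sum(p for (x, _), p in joint.items() if x == a) for a in (0, 1)}
pb = {b: sum(p for (_, y), p in joint.items() if y == b) for b in (0, 1)}

I = H(pa.values()) + H(pb.values()) - H(joint.values())
# Chain rule: H(A|B) = H(A,B) - H(B), so J = H(A) - H(A|B).
J = H(pa.values()) - (H(joint.values()) - H(pb.values()))
print(abs(I - J) < 1e-12)  # True: the classical expressions agree
```

It is precisely the quantum versions of these two quantities that stop agreeing, and their gap is the discord.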
wikipedia
quantum discord
The notation J represents the part of the correlations that can be attributed to classical correlations and varies depending on the chosen eigenbasis; therefore, in order for the quantum discord to reflect the purely nonclassical correlations independently of basis, it is necessary that J first be maximized over the set of all possible projective measurements onto the eigenbasis:
$$\mathcal{D}_A(\rho) = I(\rho) - \max_{\{\Pi_j^A\}} J_{\{\Pi_j^A\}}(\rho) = S(\rho_A) - S(\rho) + \min_{\{\Pi_j^A\}} S(\rho_{B|\{\Pi_j^A\}})$$
Nonzero quantum discord indicates the presence of correlations that are due to noncommutativity of quantum operators. For pure states, quantum discord becomes a measure of quantum entanglement; more specifically, in that case it equals the entropy of entanglement. Vanishing quantum discord is a criterion for the pointer states, which constitute preferred effectively classical states of a system. It can be shown that quantum discord must be non-negative and that states with vanishing quantum discord can in fact be identified with pointer states.
wikipedia
quantum discord
Other conditions have been identified which can be seen in analogy to the Peres–Horodecki criterion and in relation to the strong subadditivity of the von Neumann entropy. Efforts have been made to extend the definition of quantum discord to continuous-variable systems, in particular to bipartite systems described by Gaussian states. Recent work has demonstrated that the upper bound of Gaussian discord coincides with the actual quantum discord of a Gaussian state when the latter belongs to a suitably large family of Gaussian states. Computing quantum discord is NP-complete and hence difficult to compute in the general case. For certain classes of two-qubit states, quantum discord can be calculated analytically.
wikipedia
jahn–teller distortion
In mathematical terms, the APESs characterising the JT distortion arise as the eigenvalues of the potential energy matrix. Generally, the APESs take the characteristic appearance of a double cone, circular or elliptic, where the point of contact, i.e. degeneracy, denotes the high-symmetry configuration for which the JT theorem applies. For the above case of the linear E ⊗ e JT effect the situation is illustrated by the APES
$$V = \frac{\mu\omega^2}{2}\left(Q_\theta^2 + Q_\epsilon^2\right) \pm k\sqrt{Q_\theta^2 + Q_\epsilon^2}$$
displayed in the figure, with part cut away to reveal its shape, which is known as a Mexican hat potential. Here, $\omega$ is the frequency of the vibrational e mode, $\mu$ is its mass and $k$ is a measure of the strength of the JT coupling.
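A quick sketch of the lower branch of this Mexican hat potential (plain Python, with arbitrary illustrative parameter values): writing $r = \sqrt{Q_\theta^2 + Q_\epsilon^2}$, the lower sheet $V(r) = \mu\omega^2 r^2/2 - kr$ is minimized on a circle of radius $r_0 = k/(\mu\omega^2)$, at depth $-k^2/(2\mu\omega^2)$ (the JT stabilization energy):

```python
def lower_sheet(r, mu, omega, k):
    """Lower branch of the Mexican hat APES as a function of the radial coordinate r."""
    return 0.5 * mu * omega**2 * r**2 - k * r

# Hypothetical parameters in arbitrary units.
mu, omega, k = 1.0, 2.0, 3.0
r0 = k / (mu * omega**2)              # radius of the trough of minima
print(r0)                             # 0.75
print(lower_sheet(r0, mu, omega, k))  # -k**2 / (2 * mu * omega**2) = -1.125
```

The whole circle r = r₀ is isoenergetic, which is exactly the continuum of equivalent JT distortions the text describes.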
wikipedia
jahn–teller distortion
The conical shape near the degeneracy at the origin makes it immediately clear that this point cannot be stationary, that is, the system is unstable against asymmetric distortions, which leads to a symmetry lowering. In this particular case there are infinitely many isoenergetic JT distortions.
wikipedia
jahn–teller distortion
The $Q_i$ giving these distortions are arranged in a circle, as shown by the red curve in the figure. Quadratic coupling or cubic elastic terms lead to a warping along this "minimum energy path", replacing this infinite manifold by three equivalent potential minima and three equivalent saddle points. In other JT systems, linear coupling results in discrete minima.
wikipedia
binary golay code
In mathematical terms, the extended binary Golay code G24 consists of a 12-dimensional linear subspace W of the space $V = \mathbb{F}_2^{24}$ of 24-bit words such that any two distinct elements of W differ in at least 8 coordinates. W is called a linear code because it is a vector space. In all, W comprises $4096 = 2^{12}$ elements. The elements of W are called code words.
wikipedia
binary golay code
They can also be described as subsets of a set of 24 elements, where addition is defined as taking the symmetric difference of the subsets. In the extended binary Golay code, all code words have Hamming weights of 0, 8, 12, 16, or 24. Code words of weight 8 are called octads and code words of weight 12 are called dodecads.
wikipedia
binary golay code
Octads of the code G24 are elements of the S(5,8,24) Steiner system. There are $759 = 3 \times 11 \times 23$ octads and 759 complements thereof. It follows that there are $2576 = 2^4 \times 7 \times 23$ dodecads.
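A small arithmetic sketch (plain Python) checking that the stated counts fill out the whole code: the all-zero word, the 759 octads (weight 8), the 2576 dodecads (weight 12), the 759 weight-16 words (complements of octads), and the all-one word together account for all $2^{12} = 4096$ code words:

```python
# Weight distribution of the extended binary Golay code G24,
# using the counts stated in the text.
weight_counts = {0: 1, 8: 759, 12: 2576, 16: 759, 24: 1}

assert 759 == 3 * 11 * 23
assert 2576 == 2**4 * 7 * 23
print(sum(weight_counts.values()))  # 4096 = 2**12 code words in total
```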
wikipedia
binary golay code
Two octads intersect (have 1's in common) in 0, 2, or 4 coordinates in the binary vector representation (these are the possible intersection sizes in the subset representation). An octad and a dodecad intersect in 2, 4, or 6 coordinates. Up to relabeling of coordinates, W is unique. The binary Golay code, G23, is a perfect code.
wikipedia
binary golay code
That is, the spheres of radius three around code words form a partition of the vector space. G23 is a 12-dimensional subspace of the space $\mathbb{F}_2^{23}$. The automorphism group of the perfect binary Golay code G23 (meaning the subgroup of the group $S_{23}$ of permutations of the coordinates of $\mathbb{F}_2^{23}$ which leave G23 invariant) is the Mathieu group $M_{23}$.
wikipedia
binary golay code
The automorphism group of the extended binary Golay code is the Mathieu group $M_{24}$, of order $2^{10} \times 3^3 \times 5 \times 7 \times 11 \times 23$. $M_{24}$ is transitive on octads and on dodecads. The other Mathieu groups occur as stabilizers of one or several elements of W.
wikipedia
euler top
In mathematical terms, the spatial configuration of the body is described by a point on the Lie group $SO(3)$, the three-dimensional rotation group, which is the rotation matrix from the lab frame to the body frame. The full configuration space or phase space is the cotangent bundle $T^*SO(3)$, with the fibers $T_R^*SO(3)$ parametrizing the angular momentum at spatial configuration $R$. The Hamiltonian is a function on this phase space.
wikipedia
faddeev–leverrier algorithm
In mathematics (linear algebra), the Faddeev–LeVerrier algorithm is a recursive method to calculate the coefficients of the characteristic polynomial $p_A(\lambda) = \det(\lambda I_n - A)$ of a square matrix A, named after Dmitry Konstantinovich Faddeev and Urbain Le Verrier. Calculation of this polynomial yields the eigenvalues of A as its roots; as a matrix polynomial in the matrix A itself, it vanishes by the Cayley–Hamilton theorem. Computing the characteristic polynomial directly from the definition of the determinant is computationally cumbersome insofar as it introduces a new symbolic quantity $\lambda$; by contrast, the Faddeev–LeVerrier algorithm works directly with the coefficients of the matrix $A$. The algorithm has been independently rediscovered several times in different forms.
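A minimal sketch (plain Python, one common form of the recursion): starting from M₁ = I, the algorithm alternates c_{n-k} = -tr(A M_k)/k and M_{k+1} = A M_k + c_{n-k} I, producing the coefficients of p_A(λ) = λ^n + c_{n-1} λ^{n-1} + … + c_0:

```python
def matmul(A, B):
    """Product of two n-by-n matrices given as nested lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def faddeev_leverrier(A):
    """Coefficients [1, c_{n-1}, ..., c_0] of det(lambda*I - A) via the recursion
    M_1 = I, c_{n-k} = -tr(A M_k)/k, M_{k+1} = A M_k + c_{n-k} I."""
    n = len(A)
    M = [[float(i == j) for j in range(n)] for i in range(n)]  # identity matrix
    coeffs = [1.0]
    for k in range(1, n + 1):
        AM = matmul(A, M)
        c = -sum(AM[i][i] for i in range(n)) / k   # c_{n-k} from the trace
        coeffs.append(c)
        M = [[AM[i][j] + (c if i == j else 0.0) for j in range(n)] for i in range(n)]
    return coeffs

A = [[2.0, 1.0], [1.0, 2.0]]
print(faddeev_leverrier(A))  # [1.0, -4.0, 3.0]: lambda^2 - 4*lambda + 3, eigenvalues 1 and 3
```

Note how only traces and matrix products of A appear; no symbolic λ is ever manipulated, which is the advantage mentioned above.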
wikipedia
faddeev–leverrier algorithm
It was first published in 1840 by Urbain Le Verrier, subsequently redeveloped by P. Horst and Jean-Marie Souriau, presented in its current form here by Faddeev and Sominsky, and developed further by J. S. Frame, among others. (For historical points, see Householder.
wikipedia
faddeev–leverrier algorithm
An elegant shortcut to the proof, bypassing Newton polynomials, was introduced by Hou. The bulk of the presentation here follows Gantmacher, p. 88.)
wikipedia
lie coalgebra
In mathematics a Lie coalgebra is the dual structure to a Lie algebra. In finite dimensions, these are dual objects: the dual vector space to a Lie algebra naturally has the structure of a Lie coalgebra, and conversely.
wikipedia
partial differential algebraic equation
In mathematics a partial differential algebraic equation (PDAE) set is an incomplete system of partial differential equations that is closed with a set of algebraic equations.
wikipedia
radial basis function
In mathematics a radial basis function (RBF) is a real-valued function $\varphi$ whose value depends only on the distance between the input and some fixed point: either the origin, so that $\varphi(\mathbf{x}) = \hat{\varphi}(\|\mathbf{x}\|)$, or some other fixed point $\mathbf{c}$, called a center, so that $\varphi(\mathbf{x}) = \hat{\varphi}(\|\mathbf{x} - \mathbf{c}\|)$. Any function $\varphi$ that satisfies the property $\varphi(\mathbf{x}) = \hat{\varphi}(\|\mathbf{x}\|)$ is a radial function. The distance is usually the Euclidean distance, although other metrics are sometimes used.
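An illustrative sketch (plain Python; the Gaussian is one common choice of RBF, my own choice rather than anything the text prescribes) showing that the value depends only on the distance to the center:

```python
import math

def gaussian_rbf(x, c, eps=1.0):
    """Gaussian radial basis function phi(x) = exp(-(eps * ||x - c||)^2)."""
    dist = math.sqrt(sum((xi - ci) ** 2 for xi, ci in zip(x, c)))
    return math.exp(-(eps * dist) ** 2)

center = (0.0, 0.0)
# Two inputs at the same Euclidean distance from the center give the same value.
print(gaussian_rbf((3.0, 4.0), center))  # distance 5
print(gaussian_rbf((5.0, 0.0), center))  # distance 5, identical value
```

The two printed values coincide, which is the defining "radial" property.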
wikipedia
radial basis function
They are often used as a collection { φ k } k {\displaystyle \{\varphi _{k}\}_{k}} which forms a basis for some function space of interest, hence the name. Sums of radial basis functions are typically used to approximate given functions. This approximation process can also be interpreted as a simple kind of neural network; this was the context in which they were originally applied to machine learning, in work by David Broomhead and David Lowe in 1988, which stemmed from Michael J. D. Powell's seminal research from 1977. RBFs are also used as a kernel in support vector classification. The technique has proven effective and flexible enough that radial basis functions are now applied in a variety of engineering applications.
wikipedia
stack (mathematics)
In mathematics a stack or 2-sheaf is, roughly speaking, a sheaf that takes values in categories rather than sets. Stacks are used to formalise some of the main constructions of descent theory, and to construct fine moduli stacks when fine moduli spaces do not exist. Descent theory is concerned with generalisations of situations where isomorphic, compatible geometrical objects (such as vector bundles on topological spaces) can be "glued together" within a restriction of the topological basis. In a more general set-up the restrictions are replaced with pullbacks; fibred categories then make a good framework to discuss the possibility of such gluing.
wikipedia
stack (mathematics)
The intuitive meaning of a stack is that it is a fibred category such that "all possible gluings work". The specification of gluings requires a definition of coverings with regard to which the gluings can be considered. It turns out that the general language for describing these coverings is that of a Grothendieck topology. Thus a stack is formally given as a fibred category over another base category, where the base has a Grothendieck topology and where the fibred category satisfies a few axioms that ensure existence and uniqueness of certain gluings with respect to the Grothendieck topology.
wikipedia
narrowing of algebraic value sets
In mathematics an expression represents a single value. A function maps one or more values to one unique value. Inverses of functions are not always well defined as functions. Sometimes extra conditions are required to make an inverse of a function fit the definition of a function.
wikipedia
narrowing of algebraic value sets
Some Boolean operations do not have inverses that may be defined as functions. In particular, the disjunction "or" has inverses that allow two values. In natural language, "or" represents alternate possibilities.
wikipedia
narrowing of algebraic value sets
Narrowing is based on value sets, which allow multiple values to be packaged and considered as a single value. This allows the inverses of functions always to be considered as functions. To achieve this, value sets must record the context to which each value belongs.
wikipedia
narrowing of algebraic value sets
A variable may only take on a single value in each possible world. The value sets tag each value in the value set with the world to which it belongs. Possible worlds belong to world sets.
wikipedia
narrowing of algebraic value sets
A world set is a set of all mutually exclusive worlds. Combining values from different possible worlds is impossible, because that would mean combining mutually exclusive possible worlds. The application of functions to value sets creates combinations of value sets from different worlds.
wikipedia
narrowing of algebraic value sets
Narrowing reduces those worlds by eliminating combinations of different worlds from the same world set. Narrowing rules also detect situations where some combinations of worlds are shown to be impossible. No backtracking is required in the use of narrowing. By packaging the possible values in a value set, all combinations of values may be considered at the same time. Evaluation proceeds as for a functional language, combining values in value sets, with narrowing rules eliminating impossible values from the sets.
wikipedia
relation algebra
In mathematics and abstract algebra, a relation algebra is a residuated Boolean algebra expanded with an involution called converse, a unary operation. The motivating example of a relation algebra is the algebra $2^{X^2}$ of all binary relations on a set X, that is, subsets of the cartesian square $X^2$, with R • S interpreted as the usual composition of binary relations R and S, and with the converse of R as the converse relation. Relation algebra emerged in the 19th-century work of Augustus De Morgan and Charles Peirce, which culminated in the algebraic logic of Ernst Schröder. The equational form of relation algebra treated here was developed by Alfred Tarski and his students, starting in the 1940s. Tarski and Givant (1987) applied relation algebra to a variable-free treatment of axiomatic set theory, with the implication that mathematics founded on set theory could itself be conducted without variables.
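A small sketch of the motivating example (plain Python, with hypothetical relations on {1, 2, 3}): binary relations as sets of pairs, with the composition and converse operations mentioned above:

```python
def compose(R, S):
    """Composition R;S = {(x, z) : exists y with (x, y) in R and (y, z) in S}."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

def converse(R):
    """Converse of R: reverse every pair."""
    return {(y, x) for (x, y) in R}

R = {(1, 2), (2, 3)}
S = {(2, 3), (3, 1)}
print(sorted(compose(R, S)))  # [(1, 3), (2, 1)]
print(sorted(converse(R)))    # [(2, 1), (3, 2)]
```

The Boolean structure of the algebra is just the usual set operations (union, intersection, complement) on these sets of pairs.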
wikipedia
list of group theory topics
In mathematics and abstract algebra, group theory studies the algebraic structures known as groups. The concept of a group is central to abstract algebra: other well-known algebraic structures, such as rings, fields, and vector spaces, can all be seen as groups endowed with additional operations and axioms. Groups recur throughout mathematics, and the methods of group theory have influenced many parts of algebra. Linear algebraic groups and Lie groups are two branches of group theory that have experienced advances and have become subject areas in their own right.
wikipedia
list of group theory topics
Various physical systems, such as crystals and the hydrogen atom, may be modelled by symmetry groups. Thus group theory and the closely related representation theory have many important applications in physics, chemistry, and materials science. Group theory is also central to public key cryptography.
wikipedia
two-element boolean algebra
In mathematics and abstract algebra, the two-element Boolean algebra is the Boolean algebra whose underlying set (or universe or carrier) B is the Boolean domain. The elements of the Boolean domain are 1 and 0 by convention, so that B = {0, 1}. Paul Halmos's name for this algebra, "2", has some following in the literature, and will be employed here.
wikipedia
computational differentiation
In mathematics and computer algebra, automatic differentiation (auto-differentiation, autodiff, or AD), also called algorithmic differentiation or computational differentiation, is a set of techniques to evaluate the partial derivative of a function specified by a computer program. Automatic differentiation exploits the fact that every computer calculation, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, partial derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.
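A minimal sketch (plain Python) of forward-mode automatic differentiation using dual numbers, one standard way of applying the chain rule operation by operation; the function differentiated here is an arbitrary illustrative choice:

```python
import math

class Dual:
    """Dual number a + b*eps with eps^2 = 0; the 'dot' part carries the derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        return Dual(self.val + other.val, self.dot + other.dot)
    def __mul__(self, other):
        # Product rule: (a + a'eps)(b + b'eps) = ab + (a'b + ab')eps
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

def sin_d(x):
    # Chain rule for sin: d sin(u) = cos(u) * du
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def f(x):
    return x * x + sin_d(x)   # f(x) = x^2 + sin(x)

x = Dual(2.0, 1.0)            # seed the derivative dx/dx = 1
y = f(x)
print(y.val)                  # f(2)  = 4 + sin(2)
print(y.dot)                  # f'(2) = 4 + cos(2)
```

Each elementary operation propagates its exact local derivative, so the result is accurate to working precision, unlike finite differences.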
wikipedia
precedence rule
In mathematics and computer programming, the order of operations is a collection of rules that reflect conventions about which operations to perform first in order to evaluate a given mathematical expression. These rules are formalized with a ranking of the operators. The rank of an operator is called its precedence, and an operation with a higher precedence is performed before operations with lower precedence. Calculators generally perform operations with the same precedence from left to right, but some programming languages and calculators adopt different conventions.
wikipedia
precedence rule
For example, multiplication is granted a higher precedence than addition, and it has been this way since the introduction of modern algebraic notation. Thus, in the expression 1 + 2 × 3, the multiplication is performed before addition, and the expression has the value 1 + (2 × 3) = 7, and not (1 + 2) × 3 = 9. When exponents were introduced in the 16th and 17th centuries, they were given precedence over both addition and multiplication and placed as a superscript to the right of their base.
wikipedia
precedence rule
Thus 3 + 5² = 28 and 3 × 5² = 75. These conventions exist to avoid notational ambiguity while allowing notation to remain brief. Where it is desired to override the precedence conventions, or even simply to emphasize them, parentheses ( ) can be used.
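The same conventions are built into most programming languages. A quick sketch in Python, where `**` (exponentiation) binds tighter than `*`, which binds tighter than `+`:

```python
# Multiplication before addition: 1 + 2 * 3 is 1 + (2 * 3), not (1 + 2) * 3.
print(1 + 2 * 3)    # 7
# Exponentiation before both addition and multiplication.
print(3 + 5 ** 2)   # 28
print(3 * 5 ** 2)   # 75
# Parentheses override the default precedence.
print((1 + 2) * 3)  # 9
```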
wikipedia
precedence rule
For example, (2 + 3) × 4 = 20 forces addition to precede multiplication, while (3 + 5)² = 64 forces addition to precede exponentiation. If multiple pairs of parentheses are required in a mathematical expression (such as in the case of nested parentheses), the parentheses may be replaced by brackets or braces to avoid confusion, as in [2 × (3 + 4)] − 5 = 9.
wikipedia
precedence rule
These rules are meaningful only when the usual notation (called infix notation) is used. When functional or Polish notation are used for all operations, the order of operations results from the notation itself. Internet memes sometimes present ambiguous infix expressions that cause disputes and increase web traffic. Most of these ambiguous expressions involve mixed division and multiplication, where there is no general agreement about the order of operations.
wikipedia
balanced boolean function
In mathematics and computer science, a balanced Boolean function is a Boolean function whose output yields as many 0s as 1s over its input set. This means that for a uniformly random input string of bits, the probability of getting a 1 is 1/2. Examples of balanced Boolean functions are the function that copies the first bit of its input to the output, and the function that produces the exclusive or of the input bits.
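A small sketch (plain Python) checking balance by enumerating all inputs of a fixed width; the two examples from the text (first-bit projection and XOR) are balanced, while AND, added here for contrast, is not:

```python
from itertools import product

def is_balanced(f, n_bits):
    """True if f outputs as many 1s as 0s over all 2**n_bits inputs."""
    outputs = [f(bits) for bits in product((0, 1), repeat=n_bits)]
    return outputs.count(1) == outputs.count(0)

xor = lambda bits: bits[0] ^ bits[1] ^ bits[2]
first_bit = lambda bits: bits[0]
and_all = lambda bits: bits[0] & bits[1] & bits[2]

print(is_balanced(xor, 3))        # True:  4 of the 8 inputs yield 1
print(is_balanced(first_bit, 3))  # True:  4 of the 8 inputs yield 1
print(is_balanced(and_all, 3))    # False: only 1 of the 8 inputs yields 1
```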
wikipedia
normal form (mathematics)
In mathematics and computer science, a canonical, normal, or standard form of a mathematical object is a standard way of presenting that object as a mathematical expression. Often, it is one which provides the simplest representation of an object and allows it to be identified in a unique way. The distinction between "canonical" and "normal" forms varies from subfield to subfield. In most fields, a canonical form specifies a unique representation for every object, while a normal form simply specifies its form, without the requirement of uniqueness. The canonical form of a positive integer in decimal representation is a finite sequence of digits that does not begin with zero.
wikipedia
normal form (mathematics)
More generally, for a class of objects on which an equivalence relation is defined, a canonical form consists in the choice of a specific object in each class. For example, Jordan normal form is a canonical form for matrix similarity, and the row echelon form is a canonical form when one considers as equivalent a matrix and its left product by an invertible matrix. In computer science, and more specifically in computer algebra, when representing mathematical objects in a computer, there are usually many different ways to represent the same object.
wikipedia
normal form (mathematics)
In this context, a canonical form is a representation such that every object has a unique representation (with canonicalization being the process through which a representation is put into its canonical form). Thus, the equality of two objects can easily be tested by testing the equality of their canonical forms. Despite this advantage, canonical forms frequently depend on arbitrary choices (like ordering the variables), which can make it difficult to test the equality of two objects resulting from independent computations.
wikipedia
normal form (mathematics)
Therefore, in computer algebra, normal form is a weaker notion: A normal form is a representation such that zero is uniquely represented. This allows testing for equality by putting the difference of two objects in normal form. Canonical form can also mean a differential form that is defined in a natural (canonical) way.
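For a concrete sketch of a canonical form (Python; illustrative only), rationals can be canonicalized to lowest terms with a positive denominator, so that equality of two rationals reduces to equality of their representations:

```python
from math import gcd

def canonical(num, den):
    """Canonical form of a rational number: lowest terms, positive denominator."""
    if den == 0:
        raise ZeroDivisionError("denominator must be nonzero")
    g = gcd(num, den)  # math.gcd is always nonnegative
    num, den = num // g, den // g
    if den < 0:
        num, den = -num, -den
    return num, den

# Equal rationals get the identical representation:
print(canonical(2, 4))    # (1, 2)
print(canonical(-3, -6))  # (1, 2)
```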
wikipedia
functional form
In mathematics and computer science, a higher-order function (HOF) is a function that does at least one of the following: takes one or more functions as arguments (i.e. a procedural parameter, which is a parameter of a procedure that is itself a procedure), returns a function as its result. All other functions are first-order functions. In mathematics higher-order functions are also termed operators or functionals. The differential operator in calculus is a common example, since it maps a function to its derivative, also a function. Higher-order functions should not be confused with other uses of the word "functor" throughout mathematics, see Functor (disambiguation). In the untyped lambda calculus, all functions are higher-order; in a typed lambda calculus, from which most functional programming languages are derived, higher-order functions that take one function as argument are values with types of the form (τ₁ → τ₂) → τ₃.
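A minimal Python sketch of a higher-order function, mirroring the differential-operator example (numerical rather than symbolic, and with hypothetical names):

```python
def derivative(f, h=1e-6):
    """A higher-order function: takes a function and returns (a numerical
    approximation of) its derivative, itself a function."""
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

square = lambda x: x * x
dsquare = derivative(square)  # the result is again a function
print(round(dsquare(3.0), 4))  # ≈ 6.0, since d/dx x^2 = 2x
```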
wikipedia
algorithmic problem
In mathematics and computer science, an algorithm is a finite sequence of rigorous instructions, typically used to solve a class of specific problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can use conditionals to divert the code execution through various routes (referred to as automated decision-making) and deduce valid inferences (referred to as automated reasoning), achieving automation eventually.
wikipedia
algorithmic problem
Using human characteristics as descriptors of machines in metaphorical ways was already practiced by Alan Turing with terms such as "memory", "search" and "stimulus". In contrast, a heuristic is an approach to problem solving that may not be fully specified or may not guarantee correct or optimal results, especially in problem domains where there is no well-defined correct or optimal result. As an effective method, an algorithm can be expressed within a finite amount of space and time, and in a well-defined formal language for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.
wikipedia
algorithmic technique
In mathematics and computer science, an algorithmic technique is a general approach for implementing a process or computation.
wikipedia
apply
In mathematics and computer science, apply is a function that applies a function to arguments. It is central to programming languages derived from lambda calculus, such as LISP and Scheme, and also in functional languages. It has a role in the study of the denotational semantics of computer programs, because it is a continuous function on complete partial orders.
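In Python the idea can be sketched directly (the built-in `*` unpacking plays the role of Lisp's apply; the helper name is hypothetical):

```python
def apply(f, *args):
    """Applies a function to the given arguments, as in Lisp's (apply f args)."""
    return f(*args)

print(apply(pow, 2, 10))    # 1024
print(apply(max, 3, 1, 2))  # 3
```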
wikipedia
apply
Apply is also a continuous function in homotopy theory, and, indeed, underpins the entire theory: it allows a homotopy deformation to be viewed as a continuous path in the space of functions. Likewise, valid mutations (refactorings) of computer programs can be seen as those that are "continuous" in the Scott topology. The most general setting for apply is in category theory, where it is right adjoint to currying in closed monoidal categories. A special case of this is the Cartesian closed categories, whose internal language is simply typed lambda calculus.
wikipedia
algorithmic number theory
In mathematics and computer science, computational number theory, also known as algorithmic number theory, is the study of computational methods for investigating and solving problems in number theory and arithmetic geometry, including algorithms for primality testing and integer factorization, finding solutions to diophantine equations, and explicit methods in arithmetic geometry. Computational number theory has applications to cryptography, including RSA, elliptic curve cryptography and post-quantum cryptography, and is used to investigate conjectures and open problems in number theory, including the Riemann hypothesis, the Birch and Swinnerton-Dyer conjecture, the ABC conjecture, the modularity conjecture, the Sato-Tate conjecture, and explicit aspects of the Langlands program.
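A minimal sketch of the first two algorithmic tasks mentioned, primality testing and integer factorization, using naive trial division (Python; real computational number theory relies on far faster methods such as Miller–Rabin and the number field sieve):

```python
def trial_factor(n):
    """Factor n into primes by trial division, the simplest factorization algorithm."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # remaining cofactor is prime
    return factors

def is_prime(n):
    return n > 1 and trial_factor(n) == [n]

print(trial_factor(2023))  # [7, 17, 17]
print(is_prime(97))        # True
```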
wikipedia
symbolic reasoning
In mathematics and computer science, computer algebra, also called symbolic computation or algebraic computation, is a scientific area that refers to the study and development of algorithms and software for manipulating mathematical expressions and other mathematical objects. Although computer algebra could be considered a subfield of scientific computing, they are generally considered as distinct fields because scientific computing is usually based on numerical computation with approximate floating point numbers, while symbolic computation emphasizes exact computation with expressions containing variables that have no given value and are manipulated as symbols. Software applications that perform symbolic calculations are called computer algebra systems, with the term system alluding to the complexity of the main applications. These include, at least, a method to represent mathematical data in a computer, a user programming language (usually different from the language used for the implementation), a dedicated memory manager, a user interface for the input/output of mathematical expressions, and a large set of routines to perform usual operations, like simplification of expressions, differentiation using the chain rule, polynomial factorization, and indefinite integration. Computer algebra is widely used to experiment in mathematics and to design the formulas that are used in numerical programs. It is also used for complete scientific computations, when purely numerical methods fail, as in public key cryptography, or for some non-linear problems.
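The contrast with numerical computation can be sketched with a toy symbolic differentiator (Python; the expression encoding and names are illustrative and nothing like a full computer algebra system):

```python
# Expressions as nested tuples: ('+', a, b), ('*', a, b), the variable 'x', or a number.
def diff(e, var='x'):
    """Symbolic differentiation over a tiny expression language (sum and product rules)."""
    if e == var:
        return 1
    if isinstance(e, (int, float)):
        return 0
    op, a, b = e
    if op == '+':
        return ('+', diff(a, var), diff(b, var))
    if op == '*':  # product rule: (ab)' = a'b + ab'
        return ('+', ('*', diff(a, var), b), ('*', a, diff(b, var)))
    raise ValueError(f"unknown operator {op!r}")

# d/dx (x * x), exact and unsimplified: 1*x + x*1
print(diff(('*', 'x', 'x')))  # ('+', ('*', 1, 'x'), ('*', 'x', 1))
```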
wikipedia
floor function
In mathematics and computer science, the floor function is the function that takes as input a real number x, and gives as output the greatest integer less than or equal to x, denoted ⌊x⌋ or floor(x). Similarly, the ceiling function maps x to the least integer greater than or equal to x, denoted ⌈x⌉ or ceil(x). For example, ⌊2.4⌋ = 2 and ⌊−2.4⌋ = −3, while ⌈2.4⌉ = 3 and ⌈−2.4⌉ = −2. Historically, the floor of x has been–and still is–called the integral part or integer part of x, often denoted (as well as a variety of other notations). However, the same term, integer part, is also used for truncation towards zero, which differs from the floor function for negative numbers. For n an integer, ⌊n⌋ = ⌈n⌉ = n.
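The definitions, and the difference from truncation toward zero, can be checked directly in Python:

```python
import math

print(math.floor(2.4), math.floor(-2.4))  # 2 -3
print(math.ceil(2.4), math.ceil(-2.4))    # 3 -2
# Truncation toward zero agrees with floor for positive x but not negative x:
print(int(-2.4), math.floor(-2.4))        # -2 -3
```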
wikipedia
fuzzy concept
In mathematics and computer science, the gradations of applicable meaning of a fuzzy concept are described in terms of quantitative relationships defined by logical operators. Such an approach is sometimes called "degree-theoretic semantics" by logicians and philosophers, but the more usual term is fuzzy logic or many-valued logic. The novelty of fuzzy logic is that it "breaks with the traditional principle that formalisation should correct and avoid, but not compromise with, vagueness". The basic idea of fuzzy logic is that a real number is assigned to each statement written in a language, within a range from 0 to 1, where 1 means that the statement is completely true, and 0 means that the statement is completely false, while values less than 1 but greater than 0 represent that the statements are "partly true", to a given, quantifiable extent.
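A common concrete realization of these logical operators is the Zadeh min/max connectives, one choice among several families used in fuzzy logic; a small Python sketch:

```python
# Standard (Zadeh) fuzzy connectives on truth degrees in [0, 1].
def f_and(a, b): return min(a, b)   # fuzzy conjunction
def f_or(a, b):  return max(a, b)   # fuzzy disjunction
def f_not(a):    return 1 - a       # fuzzy negation

warm, humid = 0.7, 0.4              # partial truths
print(f_and(warm, humid))           # 0.4
print(f_or(warm, humid))            # 0.7
print(round(f_not(warm), 6))        # 0.3
```

On the crisp values 0 and 1 these reduce to the classical connectives, which is what makes them a generalization rather than a replacement.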
wikipedia
fuzzy concept
Susan Haack comments: "Whereas in classical set theory an object either is or is not a member of a given set, in fuzzy set theory membership is a matter of degree; the degree of membership of an object in a fuzzy set is represented by some real number between 0 and 1, with 0 denoting no membership and 1 full membership." "Truth" in this mathematical context usually means simply that "something is the case", or that "something is applicable". This makes it possible to analyze a distribution of statements for their truth-content, identify data patterns, make inferences and predictions, and model how processes operate. Petr Hájek argued that fuzzy logic is not just some "applied logic", but may bring "new light to classical logical problems", and therefore might be well classified as a distinct branch of "philosophical logic" similar to, e.g., modal logics.
wikipedia
trace theory
In mathematics and computer science, trace theory aims to provide a concrete mathematical underpinning for the study of concurrent computation and process calculi. The underpinning is provided by an algebraic definition of the free partially commutative monoid or trace monoid, or equivalently, the history monoid, which provides a concrete algebraic foundation, analogous to the way that the free monoid provides the underpinning for formal languages. The power of trace theory stems from the fact that the algebra of dependency graphs (such as Petri nets) is isomorphic to that of trace monoids, and thus, one can apply both algebraic formal language tools, as well as tools from graph theory. While the trace monoid had been studied by Pierre Cartier and Dominique Foata for its combinatorics in the 1960s, trace theory was first formulated by Antoni Mazurkiewicz in the 1970s, in an attempt to evade some of the problems in the theory of concurrent computation, including the problems of interleaving and non-deterministic choice with regards to refinement in process calculi.
wikipedia
root-finding algorithms
In mathematics and computing, a root-finding algorithm is an algorithm for finding zeros, also called "roots", of continuous functions. A zero of a function f, from the real numbers to real numbers or from the complex numbers to the complex numbers, is a number x such that f(x) = 0. As, generally, the zeros of a function cannot be computed exactly nor expressed in closed form, root-finding algorithms provide approximations to zeros, expressed either as floating-point numbers or as small isolating intervals, or disks for complex roots (an interval or disk output being equivalent to an approximate output together with an error bound). Solving an equation f(x) = g(x) is the same as finding the roots of the function h(x) = f(x) – g(x). Thus root-finding algorithms allow solving any equation defined by continuous functions.
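A minimal root-finding sketch in Python: the bisection method, which needs only continuity and a sign change, and whose shrinking bracket is exactly the "small isolating interval" output described above:

```python
def bisect(f, a, b, tol=1e-12):
    """Bisection: approximate a root of a continuous f with f(a), f(b) of opposite sign."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        m = (a + b) / 2
        if fa * f(m) <= 0:   # sign change in [a, m]: keep the left half
            b = m
        else:                # otherwise the root lies in [m, b]
            a, fa = m, f(m)
    return (a + b) / 2       # midpoint of the final isolating interval

root = bisect(lambda x: x * x - 2, 0.0, 2.0)
print(root)  # ≈ 1.4142135…, i.e. sqrt(2)
```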
wikipedia
root-finding algorithms
However, most root-finding algorithms do not guarantee that they will find all the roots; in particular, if such an algorithm does not find any root, that does not mean that no root exists. Most numerical root-finding methods use iteration, producing a sequence of numbers that hopefully converges towards the root as its limit. They require one or more initial guesses of the root as starting values, then each iteration of the algorithm produces a successively more accurate approximation to the root.
wikipedia
root-finding algorithms
Since the iteration must be stopped at some point, these methods produce an approximation to the root, not an exact solution. Many methods compute subsequent values by evaluating an auxiliary function on the preceding values.
wikipedia
root-finding algorithms
The limit is thus a fixed point of the auxiliary function, which is chosen for having the roots of the original equation as fixed points, and for converging rapidly to these fixed points. The behavior of general root-finding algorithms is studied in numerical analysis. However, for polynomials, root-finding study belongs generally to computer algebra, since algebraic properties of polynomials are fundamental for the most efficient algorithms.
wikipedia
root-finding algorithms
The efficiency of an algorithm may depend dramatically on the characteristics of the given functions. For example, many algorithms use the derivative of the input function, while others work on every continuous function. In general, numerical algorithms are not guaranteed to find all the roots of a function, so failing to find a root does not prove that there is no root. However, for polynomials, there are specific algorithms that use algebraic properties for certifying that no root is missed, and locating the roots in separate intervals (or disks for complex roots) that are small enough to ensure the convergence of numerical methods (typically Newton's method) to the unique root so located.
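The fixed-point viewpoint above can be sketched with Newton's method, which iterates the auxiliary function g(x) = x − f(x)/f′(x), whose fixed points are exactly the roots of f (Python, illustrative; it uses the derivative, as noted, and so does not work on every continuous function):

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Newton's method: iterate g(x) = x - f(x)/df(x) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

r = newton(lambda x: x ** 2 - 2, lambda x: 2 * x, 1.0)
print(r)  # ≈ 1.4142135623730951
```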
wikipedia
levenberg–marquardt nonlinear least squares fitting algorithm
In mathematics and computing, the Levenberg–Marquardt algorithm (LMA or just LM), also known as the damped least-squares (DLS) method, is used to solve non-linear least squares problems. These minimization problems arise especially in least squares curve fitting. The LMA interpolates between the Gauss–Newton algorithm (GNA) and the method of gradient descent.
wikipedia
levenberg–marquardt nonlinear least squares fitting algorithm
The LMA is more robust than the GNA, which means that in many cases it finds a solution even if it starts very far off the final minimum. For well-behaved functions and reasonable starting parameters, the LMA tends to be slower than the GNA. LMA can also be viewed as Gauss–Newton using a trust region approach.
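The interpolation between the two methods happens through a damping parameter λ in the normal equations (J^T J + λI) δ = −J^T r: as λ → 0 the step approaches Gauss–Newton, and as λ grows it approaches a short gradient-descent step. A minimal, illustrative Python/NumPy sketch (not a production implementation; the update schedule for λ is one simple choice among many):

```python
import numpy as np

def levenberg_marquardt(residual, jac, p0, lam=1e-3, n_iter=50):
    """Minimal LM sketch: damped Gauss-Newton steps (J^T J + lam*I) delta = -J^T r."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r, J = residual(p), jac(p)
        A = J.T @ J + lam * np.eye(p.size)
        delta = np.linalg.solve(A, -J.T @ r)
        if np.sum(residual(p + delta) ** 2) < np.sum(r ** 2):
            p, lam = p + delta, lam / 10   # accept step, trust the model more
        else:
            lam *= 10                      # reject step, damp toward gradient descent
    return p

# Fit y = a*exp(b*x) to noiseless synthetic data with true (a, b) = (2, 0.5).
x = np.linspace(0, 1, 20)
y = 2.0 * np.exp(0.5 * x)
res = lambda p: p[0] * np.exp(p[1] * x) - y
jacf = lambda p: np.stack([np.exp(p[1] * x), p[0] * x * np.exp(p[1] * x)], axis=1)
p_fit = levenberg_marquardt(res, jacf, [1.0, 0.0])
print(np.round(p_fit, 6))  # ≈ [2.  0.5]
```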
wikipedia
levenberg–marquardt nonlinear least squares fitting algorithm
The algorithm was first published in 1944 by Kenneth Levenberg, while working at the Frankford Army Arsenal. It was rediscovered in 1963 by Donald Marquardt, who worked as a statistician at DuPont, and independently by Girard, Wynne and Morrison. The LMA is used in many software applications for solving generic curve-fitting problems. By using the Gauss–Newton algorithm it often converges faster than first-order methods. However, like other iterative optimization algorithms, the LMA finds only a local minimum, which is not necessarily the global minimum.
wikipedia
stanley symmetric function
In mathematics and especially in algebraic combinatorics, the Stanley symmetric functions are a family of symmetric functions introduced by Richard Stanley (1984) in his study of the symmetric group of permutations. Formally, the Stanley symmetric function Fw(x1, x2, ...) indexed by a permutation w is defined as a sum of certain fundamental quasisymmetric functions. Each summand corresponds to a reduced decomposition of w, that is, to a way of writing w as a product of a minimal possible number of adjacent transpositions.
wikipedia
stanley symmetric function
They were introduced in the course of Stanley's enumeration of the reduced decompositions of permutations, and in particular his proof that the permutation w0 = n(n − 1)...21 (written here in one-line notation) has exactly C(n, 2)! / (1^(n−1) · 3^(n−2) · 5^(n−3) ⋯ (2n−3)^1) reduced decompositions. (Here C(n, 2) denotes the binomial coefficient n(n − 1)/2 and ! denotes the factorial.)
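Stanley's counting formula is easy to evaluate directly (Python sketch; the helper name is ours):

```python
from math import comb, factorial

def reduced_word_count(n):
    """Number of reduced decompositions of w0 = n(n-1)...21, by Stanley's formula:
    C(n,2)! divided by 1^(n-1) * 3^(n-2) * ... * (2n-3)^1."""
    num = factorial(comb(n, 2))
    den = 1
    for k in range(1, n):
        den *= (2 * k - 1) ** (n - k)
    return num // den

print([reduced_word_count(n) for n in range(2, 6)])  # [1, 2, 16, 768]
```

For instance, w0 = 321 in S3 has the 2 reduced words s1 s2 s1 and s2 s1 s2, matching the formula's value 3!/3 = 2.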
wikipedia
variadic functions
In mathematics and in computer programming, a variadic function is a function of indefinite arity, i.e., one which accepts a variable number of arguments. Support for variadic functions differs widely among programming languages. The term variadic is a neologism, dating back to 1936–1937. The term was not widely used until the 1970s.
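In Python, variadic functions are written with the `*args` parameter syntax (the example function is hypothetical):

```python
def average(*args):
    """A variadic function: accepts any number of numeric arguments."""
    if not args:
        raise TypeError("average() needs at least one argument")
    return sum(args) / len(args)

print(average(1, 2, 3))         # 2.0
print(average(10, 20, 30, 40))  # 25.0
```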
wikipedia
measurable function
In mathematics and in particular measure theory, a measurable function is a function between the underlying sets of two measurable spaces that preserves the structure of the spaces: the preimage of any measurable set is measurable. This is in direct analogy to the definition that a continuous function between topological spaces preserves the topological structure: the preimage of any open set is open. In real analysis, measurable functions are used in the definition of the Lebesgue integral. In probability theory, a measurable function on a probability space is known as a random variable.
wikipedia
parametric family of functions
In mathematics and its applications, a parametric family or a parameterized family is a family of objects (a set of related objects) whose differences depend only on the chosen values for a set of parameters. Common examples are parametrized (families of) functions, probability distributions, curves, shapes, etc.
wikipedia
signed distance function
In mathematics and its applications, the signed distance function (or oriented distance function) is the orthogonal distance of a given point x to the boundary of a set Ω in a metric space, with the sign determined by whether or not x is in the interior of Ω. The function has positive values at points x inside Ω, it decreases in value as x approaches the boundary of Ω where the signed distance function is zero, and it takes negative values outside of Ω. However, the alternative convention is also sometimes taken instead (i.e., negative inside Ω and positive outside).
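A minimal Python sketch for the simplest case, a disk in the plane, using the sign convention stated above (positive inside Ω, zero on the boundary, negative outside):

```python
from math import hypot

def sdf_circle(x, y, cx=0.0, cy=0.0, r=1.0):
    """Signed distance to a disk of radius r centered at (cx, cy):
    positive inside, zero on the boundary, negative outside."""
    return r - hypot(x - cx, y - cy)

print(sdf_circle(0.0, 0.0))  # 1.0   (center: deep inside)
print(sdf_circle(1.0, 0.0))  # 0.0   (on the boundary)
print(sdf_circle(3.0, 4.0))  # -4.0  (distance 5 from center, so 4 outside)
```

Note that much of the graphics literature uses the opposite convention (negative inside), as the paragraph above mentions.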
wikipedia
ordered logic
In mathematics and logic, a higher-order logic (abbreviated HOL) is a form of predicate logic that is distinguished from first-order logic by additional quantifiers and, sometimes, stronger semantics. Higher-order logics with their standard semantics are more expressive, but their model-theoretic properties are less well-behaved than those of first-order logic. The term "higher-order logic" is commonly used to mean higher-order simple predicate logic.
wikipedia
ordered logic
Here "simple" indicates that the underlying type theory is the theory of simple types, also called the simple theory of types. Leon Chwistek and Frank P. Ramsey proposed this as a simplification of the complicated and clumsy ramified theory of types specified in the Principia Mathematica by Alfred North Whitehead and Bertrand Russell. "Simple types" is sometimes also meant to exclude polymorphic and dependent types.
wikipedia
lexical ambiguity
In mathematics and logic, ambiguity can be considered to be an instance of the logical concept of underdetermination—for example, X = Y leaves open what the value of X is—while its opposite is a self-contradiction, also called inconsistency, paradoxicalness, or oxymoron, or in mathematics an inconsistent system—such as X = 2, X = 3, which has no solution. Logical ambiguity and self-contradiction is analogous to visual ambiguity and impossible objects, such as the Necker cube and impossible cube, or many of the drawings of M. C. Escher.
wikipedia
boolean value
In mathematics and mathematical logic, Boolean algebra is a branch of algebra. It differs from elementary algebra in two ways. First, the values of the variables are the truth values true and false, usually denoted 1 and 0, whereas in elementary algebra the values of the variables are numbers.
wikipedia
boolean value
Second, Boolean algebra uses logical operators such as conjunction (and) denoted as ∧, disjunction (or) denoted as ∨, and the negation (not) denoted as ¬. Elementary algebra, on the other hand, uses arithmetic operators such as addition, multiplication, subtraction, and division. Boolean algebra is therefore a formal way of describing logical operations, in the same way that elementary algebra describes numerical operations.
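Over the values {0, 1} the three operations can be evaluated directly; a small Python sketch that also checks one of the algebra's laws (De Morgan) on every assignment:

```python
from itertools import product

AND = lambda p, q: p & q  # conjunction, written p ∧ q
OR = lambda p, q: p | q   # disjunction, written p ∨ q
NOT = lambda p: 1 - p     # negation, written ¬p

# De Morgan's law ¬(p ∧ q) = ¬p ∨ ¬q holds for all four assignments:
de_morgan = all(NOT(AND(p, q)) == OR(NOT(p), NOT(q))
                for p, q in product((0, 1), repeat=2))
print(de_morgan)  # True
```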
wikipedia
boolean value
Boolean algebra was introduced by George Boole in his first book The Mathematical Analysis of Logic (1847), and set forth more fully in his An Investigation of the Laws of Thought (1854). According to Huntington, the term "Boolean algebra" was first suggested by Henry M. Sheffer in 1913, although Charles Sanders Peirce gave the title "A Boolian Algebra with One Constant" to the first chapter of his "The Simplest Mathematics" in 1880. Boolean algebra has been fundamental in the development of digital electronics, and is provided for in all modern programming languages. It is also used in set theory and statistics.
wikipedia
factorization algebra
In mathematics and mathematical physics, a factorization algebra is an algebraic structure first introduced by Beilinson and Drinfel'd in an algebro-geometric setting as a reformulation of chiral algebras, and also studied in a more general setting by Costello to study quantum field theory.
wikipedia
probabilistic potential theory
In mathematics and mathematical physics, potential theory is the study of harmonic functions. The term "potential theory" was coined in 19th-century physics when it was realized that two fundamental forces of nature known at the time, namely gravity and the electrostatic force, could be modeled using functions called the gravitational potential and electrostatic potential, both of which satisfy Poisson's equation—or in the vacuum, Laplace's equation. There is considerable overlap between potential theory and the theory of Poisson's equation to the extent that it is impossible to draw a distinction between these two fields. The difference is more one of emphasis than subject matter and rests on the following distinction: potential theory focuses on the properties of the functions as opposed to the properties of the equation.
wikipedia
probabilistic potential theory
For example, a result about the singularities of harmonic functions would be said to belong to potential theory whilst a result on how the solution depends on the boundary data would be said to belong to the theory of the Laplace equation. This is not a hard and fast distinction, and in practice there is considerable overlap between the two fields, with methods and results from one being used in the other. Modern potential theory is also intimately connected with probability and the theory of Markov chains.
wikipedia
probabilistic potential theory
In the continuous case, this is closely related to analytic theory. In the finite state space case, this connection can be introduced by introducing an electrical network on the state space, with resistance between points inversely proportional to transition probabilities and densities proportional to potentials. Even in the finite case, the analogue I − K of the Laplacian in potential theory has its own maximum principle, uniqueness principle, balance principle, and others.
wikipedia
pseudo-boolean function
In mathematics and optimization, a pseudo-Boolean function is a function of the form f : Bⁿ → ℝ, where B = {0, 1} is a Boolean domain and n is a nonnegative integer called the arity of the function. A Boolean function is then a special case, where the values are also restricted to 0 or 1.
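A small illustrative Python example: a pseudo-Boolean function of arity 3, written in the multilinear-polynomial form in which such functions are commonly represented (the coefficients here are arbitrary):

```python
# A pseudo-Boolean function f: {0,1}^3 -> R, as a multilinear polynomial.
def f(x1, x2, x3):
    return 2.5 - 1.0 * x1 + 3.0 * x1 * x2 - 0.5 * x2 * x3

print(f(0, 0, 0))  # 2.5  (the constant term)
print(f(1, 1, 0))  # 4.5
print(f(1, 1, 1))  # 4.0
```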
wikipedia
spherical harmonic function
In mathematics and physical science, spherical harmonics are special functions defined on the surface of a sphere. They are often employed in solving partial differential equations in many scientific fields. Since the spherical harmonics form a complete set of orthogonal functions and thus an orthonormal basis, each function defined on the surface of a sphere can be written as a sum of these spherical harmonics. This is similar to periodic functions defined on a circle that can be expressed as a sum of circular functions (sines and cosines) via Fourier series.
wikipedia
spherical harmonic function
Like the sines and cosines in Fourier series, the spherical harmonics may be organized by (spatial) angular frequency, as seen in the rows of functions in the illustration on the right. Further, spherical harmonics are basis functions for irreducible representations of SO(3), the group of rotations in three dimensions, and thus play a central role in the group theoretic discussion of SO(3).
wikipedia
spherical harmonic function
Spherical harmonics originate from solving Laplace's equation in the spherical domains. Functions that are solutions to Laplace's equation are called harmonics. Despite their name, spherical harmonics take their simplest form in Cartesian coordinates, where they can be defined as homogeneous polynomials of degree ℓ in (x, y, z) that obey Laplace's equation.
wikipedia
spherical harmonic function
The connection with spherical coordinates arises immediately if one uses the homogeneity to extract a factor of radial dependence r^ℓ from the above-mentioned polynomial of degree ℓ; the remaining factor can be regarded as a function of the spherical angular coordinates θ and φ only, or equivalently of the orientational unit vector r specified by these angles. In this setting, they may be viewed as the angular portion of a set of solutions to Laplace's equation in three dimensions, and this viewpoint is often taken as an alternative definition. Notice, however, that spherical harmonics are not functions on the sphere which are harmonic with respect to the Laplace–Beltrami operator for the standard round metric on the sphere: the only harmonic functions in this sense on the sphere are the constants, since harmonic functions satisfy the maximum principle.
wikipedia
spherical harmonic function
Spherical harmonics, as functions on the sphere, are eigenfunctions of the Laplace–Beltrami operator (see the section Higher dimensions below). A specific set of spherical harmonics, denoted Y_ℓ^m(θ, φ) or Y_ℓ^m(r), are known as Laplace's spherical harmonics, as they were first introduced by Pierre Simon de Laplace in 1782.
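The orthonormality underlying the "complete set of orthogonal functions" statement above can be checked numerically for the first two Laplace spherical harmonics, Y_0^0 = 1/(2√π) and Y_1^0 = √(3/4π) cos θ (real-valued since m = 0); a plain-Python quadrature sketch:

```python
import math

# Real spherical harmonics Y_0^0 and Y_1^0 with the standard normalization.
def Y00(theta, phi): return 0.5 * math.sqrt(1 / math.pi)
def Y10(theta, phi): return 0.5 * math.sqrt(3 / math.pi) * math.cos(theta)

def sphere_integral(f, n=200):
    """Midpoint-rule integral of f(theta, phi) over the unit sphere (dA = sin(theta) dtheta dphi)."""
    total, dt, dp = 0.0, math.pi / n, 2 * math.pi / n
    for i in range(n):
        theta = (i + 0.5) * dt
        for j in range(n):
            phi = (j + 0.5) * dp
            total += f(theta, phi) * math.sin(theta) * dt * dp
    return total

print(round(sphere_integral(lambda t, p: Y00(t, p) ** 2), 3))         # ≈ 1.0
print(round(sphere_integral(lambda t, p: Y10(t, p) ** 2), 3))         # ≈ 1.0
print(round(sphere_integral(lambda t, p: Y00(t, p) * Y10(t, p)), 3))  # ≈ 0.0
```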
wikipedia