sequent calculus
Natural deduction: every (conditional) line has exactly one asserted proposition on the right. Sequent calculus:
wikipedia
sequent calculus
Every (conditional) line has zero or more asserted propositions on the right. In other words, natural deduction and sequent calculus systems are particular distinct kinds of Gentzen-style systems. Hilbert-style systems typically have a very small number of inference rules, relying more on sets of axioms. Gentzen-style systems typically have very few axioms, if any, relying more on sets of rules.
wikipedia
sequent calculus
Gentzen-style systems have significant practical and theoretical advantages compared to Hilbert-style systems. For example, both natural deduction and sequent calculus systems facilitate the elimination and introduction of universal and existential quantifiers so that unquantified logical expressions can be manipulated according to the much simpler rules of propositional calculus. In a typical argument, quantifiers are eliminated, then propositional calculus is applied to unquantified expressions (which typically contain free variables), and then the quantifiers are reintroduced.
wikipedia
sequent calculus
This very much parallels the way in which mathematical proofs are carried out in practice by mathematicians. Predicate calculus proofs are generally much easier to discover with this approach, and are often shorter. Natural deduction systems are more suited to practical theorem-proving. Sequent calculus systems are more suited to theoretical analysis.
wikipedia
computable real function
In mathematical logic, specifically computability theory, a function f : ℝ → ℝ is sequentially computable if, for every computable sequence {x_i} of real numbers, the sequence {f(x_i)} is also computable. A function f : ℝ → ℝ is effectively uniformly continuous if there exists a recursive function d : ℕ → ℕ such that, if |x − y| < 1/d(n), then |f(x) − f(y)| < 1/n. A real function is computable if it is both sequentially computable and effectively uniformly continuous. These definitions can be generalized to functions of more than one variable or functions only defined on a subset of ℝⁿ. The generalizations of the latter two need not be restated.
wikipedia
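The second condition can be checked numerically for a concrete function. A minimal sketch, where both the function f(x) = 2x and the candidate modulus d(n) = 2n are chosen here purely for illustration:

```python
from fractions import Fraction

def f(x):
    # Illustrative computable function: f(x) = 2x.
    return 2 * x

def d(n):
    # Candidate modulus of continuity for f: d(n) = 2n.
    return 2 * n

# Check the defining implication on a grid of rational points:
# |x - y| < 1/d(n)  implies  |f(x) - f(y)| < 1/n.
for n in range(1, 20):
    eps = Fraction(1, d(n))
    for k in range(-200, 201):
        x = Fraction(k, 100)
        y = x + eps / 2          # any y with |x - y| < 1/d(n)
        assert abs(f(x) - f(y)) < Fraction(1, n)
print("d(n) = 2n is an effective modulus for f(x) = 2x")
```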
computable real function
A suitable generalization of the first definition is: Let D be a subset of ℝⁿ.
wikipedia
computable real function
A function f : D → ℝ is sequentially computable if, for every n-tuple ({x_{i,1}}, …, {x_{i,n}}) of computable sequences of real numbers such that (x_{i,1}, …, x_{i,n}) ∈ D for all i, the sequence {f(x_{i,1}, …, x_{i,n})} is also computable. This article incorporates material from Computable real function on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
wikipedia
stratified formula
In mathematical logic, stratification is any consistent assignment of numbers to predicate symbols guaranteeing that a unique formal interpretation of a logical theory exists. Specifically, we say that a set of clauses of the form Q_1 ∧ ⋯ ∧ Q_n ∧ ¬Q_{n+1} ∧ ⋯ ∧ ¬Q_{n+m} → P is stratified if and only if there is a stratification assignment S that fulfills the following conditions: If a predicate P is positively derived from a predicate Q (i.e., P is the head of a rule, and Q occurs positively in the body of the same rule), then the stratification number of P must be greater than or equal to the stratification number of Q, in short S(P) ≥ S(Q). If a predicate P is derived from a negated predicate Q (i.e., P is the head of a rule, and Q occurs negatively in the body of the same rule), then the stratification number of P must be strictly greater, in short S(P) > S(Q). The notion of stratified negation leads to a very effective operational semantics for stratified programs in terms of the stratified least fixpoint, obtained by iteratively applying the fixpoint operator to each stratum of the program, from the lowest one up. Stratification is not only useful for guaranteeing unique interpretation of Horn clause theories.
wikipedia
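The two conditions can be checked mechanically. A minimal sketch (the rule representation below is my own, not from the article): each rule is a (head, positive body, negative body) triple, and stratification numbers are raised until they stabilize, or fail to, which signals negation through a cycle:

```python
def stratify(rules):
    """Find S with S(head) >= S(q) for positive body atoms q and
    S(head) > S(q) for negated body atoms q.
    Returns the assignment, or None if no stratification exists."""
    preds = {h for h, _, _ in rules}
    for _, pos, neg in rules:
        preds |= set(pos) | set(neg)
    S = {p: 0 for p in preds}
    for _ in range(len(preds) + 1):
        changed = False
        for head, pos, neg in rules:
            need = max([S[q] for q in pos] + [S[q] + 1 for q in neg] + [0])
            if S[head] < need:
                S[head] = need
                changed = True
        if not changed:
            return S
    return None  # numbers kept growing: negation through a cycle

# p <- q;  q <- not r   is stratifiable (r below q below nothing):
print(stratify([("p", ["q"], []), ("q", [], ["r"])]))
# p <- not p  is not:
print(stratify([("p", [], ["p"])]))
```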
display logic
In mathematical logic, structural proof theory is the subdiscipline of proof theory that studies proof calculi that support a notion of analytic proof, a kind of proof whose semantic properties are exposed. When all the theorems of a logic formalised in a structural proof theory have analytic proofs, then the proof theory can be used to demonstrate such things as consistency, provide decision procedures, and allow mathematical or computational witnesses to be extracted as counterparts to theorems, the kind of task that is more often given to model theory.
wikipedia
integer-valued function
In mathematical logic, such concepts as primitive recursive functions and μ-recursive functions represent integer-valued functions of several natural variables or, in other words, functions on Nn. Gödel numbering, defined on well-formed formulae of some formal language, is a natural-valued function. Computability theory is essentially based on natural numbers and natural (or integer) functions on them.
wikipedia
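As a concrete illustration of such a natural-valued function, a Gödel numbering can map finite sequences of naturals injectively into ℕ. The prime-power scheme below is one standard choice, not necessarily the one used in any particular text:

```python
def primes(k):
    """First k prime numbers, by trial division."""
    out, n = [], 2
    while len(out) < k:
        if all(n % p for p in out):
            out.append(n)
        n += 1
    return out

def godel_number(seq):
    """Encode a finite sequence (a1, ..., ak) of naturals as
    p1^(a1+1) * ... * pk^(a_k+1); the +1 keeps zeros visible,
    making the map injective."""
    g = 1
    for p, a in zip(primes(len(seq)), seq):
        g *= p ** (a + 1)
    return g

print(godel_number([0, 1, 2]))  # 2^1 * 3^2 * 5^3 = 2250
```

Unique factorization lets the original sequence be recovered from the code, which is why this encoding underpins arithmetization arguments.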
lindenbaum algebra
In mathematical logic, the Lindenbaum–Tarski algebra (or Lindenbaum algebra) of a logical theory T consists of the equivalence classes of sentences of the theory (i.e., the quotient, under the equivalence relation ~ defined such that p ~ q exactly when p and q are provably equivalent in T). That is, two sentences are equivalent if the theory T proves that each implies the other. The Lindenbaum–Tarski algebra is thus the quotient algebra obtained by factoring the algebra of formulas by this congruence relation.
wikipedia
lindenbaum algebra
The algebra is named for logicians Adolf Lindenbaum and Alfred Tarski. Starting in the academic year 1926–1927, Lindenbaum pioneered his method in Jan Łukasiewicz's mathematical logic seminar, and the method was popularized and generalized in subsequent decades through work by Tarski. The Lindenbaum–Tarski algebra is considered the origin of modern algebraic logic.
wikipedia
implicational propositional calculus
In mathematical logic, the implicational propositional calculus is a version of classical propositional calculus which uses only one connective, called implication or conditional. In formulas, this binary operation is indicated by "implies", "if ..., then ...", "→", etc.
wikipedia
primitive recursive functional
In mathematical logic, the primitive recursive functionals are a generalization of primitive recursive functions into higher type theory. They consist of a collection of functions in all pure finite types. The primitive recursive functionals are important in proof theory and constructive mathematics. They are a central part of the Dialectica interpretation of intuitionistic arithmetic developed by Kurt Gödel. In recursion theory, the primitive recursive functionals are an example of higher-type computability, as primitive recursive functions are examples of Turing computability.
wikipedia
rules of passage (logic)
In mathematical logic, the rules of passage govern how quantifiers distribute over the basic logical connectives of first-order logic. The rules of passage govern the "passage" (translation) from any formula of first-order logic to the equivalent formula in prenex normal form, and vice versa.
wikipedia
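Two representative rules of passage, in standard notation (α stands for a formula in which x does not occur free):

```latex
\neg (\forall x\, \varphi) \;\leftrightarrow\; \exists x\, \neg\varphi
\qquad\qquad
\alpha \lor \forall x\, \varphi(x) \;\leftrightarrow\; \forall x\, \bigl(\alpha \lor \varphi(x)\bigr)
```

The first moves a negation through a quantifier; the second pulls a quantifier outward past a connective, which is exactly the step repeated when driving a formula into prenex normal form.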
controversy over cantor's theory
In mathematical logic, the theory of infinite sets was first developed by Georg Cantor. Although this work has become a thoroughly standard fixture of classical set theory, it has been criticized in several areas by mathematicians and philosophers. Cantor's theorem implies that there are sets having cardinality greater than the infinite cardinality of the set of natural numbers. Cantor's argument for this theorem is presented with one small change.
wikipedia
controversy over cantor's theory
This argument can be improved by using a definition he gave later. The resulting argument uses only five axioms of set theory.
wikipedia
controversy over cantor's theory
Cantor's set theory was controversial at the start, but later became largely accepted. Most modern mathematics textbooks implicitly use Cantor's views on mathematical infinity. For example, a line is generally presented as the infinite set of its points, and it is commonly taught that there are more real numbers than rational numbers (see cardinality of the continuum).
wikipedia
fuzzy logics
In mathematical logic, there are several formal systems of "fuzzy logic", most of which are in the family of t-norm fuzzy logics.
wikipedia
decidable sublanguages of set theory
In mathematical logic, various sublanguages of set theory are decidable. These include: Sets with Monotone, Additive, and Multiplicative Functions; sets with restricted quantifiers.
wikipedia
resilience (mathematics)
In mathematical modeling, resilience refers to the ability of a dynamical system to recover from perturbations and return to its original stable steady state. It is a measure of the stability and robustness of a system in the face of changes or disturbances. If a system is not resilient enough, it is more susceptible to perturbations and can more easily undergo a critical transition. A common analogy used to explain the concept of resilience of an equilibrium is one of a ball in a valley.
wikipedia
resilience (mathematics)
A resilient steady state corresponds to a ball in a deep valley, so any push or perturbation will very quickly lead the ball to return to the resting point where it started. On the other hand, a less resilient steady state corresponds to a ball in a shallow valley, so the ball will take a much longer time to return to the equilibrium after a perturbation. The concept of resilience is particularly useful in systems that exhibit tipping points, whose study has a long history that can be traced back to catastrophe theory. While this theory was initially overhyped and fell out of favor, its mathematical foundation remains strong and is now recognized as relevant to many different systems.
wikipedia
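The valley picture can be made quantitative with a toy linear model dx/dt = −k·x, where k plays the role of the valley's steepness; the specific values of k below are arbitrary illustrations:

```python
def recovery_time(k, x0=1.0, eps=0.01, dt=1e-3):
    """Time for dx/dt = -k*x (forward Euler steps) to return within
    eps of the steady state x* = 0 after a perturbation of size x0."""
    x, t = x0, 0.0
    while abs(x) > eps:
        x += -k * x * dt
        t += dt
    return t

deep, shallow = recovery_time(k=4.0), recovery_time(k=0.5)
print(f"deep valley: {deep:.2f}, shallow valley: {shallow:.2f}")
# The deeper (more resilient) valley recovers much faster.
assert deep < shallow
```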
extraneous variables
In mathematical modeling, the dependent variable is studied to see if and how much it varies as the independent variables vary. In the simple stochastic linear model yi = a + bxi + ei, the term yi is the ith value of the dependent variable and xi is the ith value of the independent variable. The term ei is known as the "error" and contains the variability of the dependent variable not explained by the independent variable. With multiple independent variables, the model is yi = a + b1xi,1 + b2xi,2 + ... + bnxi,n + ei, where n is the number of independent variables. In statistics, more specifically in linear regression, a scatter plot of data is generated with X as the independent variable and Y as the dependent variable.
wikipedia
extraneous variables
This is also called a bivariate dataset, (x1, y1), (x2, y2), ..., (xn, yn). The simple linear regression model takes the form Yi = α + βxi + Ui, for i = 1, 2, ..., n. In this case, U1, ..., Un are independent random variables. This occurs when the measurements do not influence each other.
wikipedia
extraneous variables
The independence of the Ui implies independence of the Yi, even though each Yi has a different expectation value. Each Ui has an expectation value of 0 and a variance of σ². Expectation of Yi. Proof: E[Yi] = E[α + βxi + Ui] = α + βxi + E[Ui] = α + βxi.
wikipedia
extraneous variables
The line of best fit for the bivariate dataset takes the form y = α + βx and is called the regression line. α and β correspond to the intercept and slope, respectively. In an experiment, the variable manipulated by an experimenter is called the independent variable.
wikipedia
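A minimal sketch of estimating the intercept α and slope β by ordinary least squares; the toy data below are made up for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least squares estimates for y = alpha + beta * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    alpha = my - beta * mx
    return alpha, beta

# Noise-free data on the line y = 2 + 3x recovers alpha = 2, beta = 3.
alpha, beta = fit_line([0, 1, 2, 3], [2, 5, 8, 11])
print(alpha, beta)  # 2.0 3.0
```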
extraneous variables
The dependent variable is the event expected to change when the independent variable is manipulated. In data mining tools (for multivariate statistics and machine learning), the dependent variable is assigned a role as target variable (or in some tools as label attribute), while an independent variable may be assigned a role as regular variable. Known values for the target variable are provided for the training data set and test data set, but should be predicted for other data. The target variable is used in supervised learning algorithms but not in unsupervised learning.
wikipedia
morphological gradient
In mathematical morphology and digital image processing, a morphological gradient is the difference between the dilation and the erosion of a given image. It is an image where each pixel value (typically non-negative) indicates the contrast intensity in the close neighborhood of that pixel. It is useful for edge detection and segmentation applications.
wikipedia
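A minimal sketch on a tiny grayscale array, using a 3×3 square structuring element (plain Python lists here; a real pipeline would use an image-processing library):

```python
def neighborhood(img, i, j):
    """Values in the 3x3 window around (i, j), clipped at the border."""
    h, w = len(img), len(img[0])
    return [img[a][b]
            for a in range(max(i - 1, 0), min(i + 2, h))
            for b in range(max(j - 1, 0), min(j + 2, w))]

def morphological_gradient(img):
    """Dilation (local max) minus erosion (local min), pixelwise."""
    h, w = len(img), len(img[0])
    return [[max(neighborhood(img, i, j)) - min(neighborhood(img, i, j))
             for j in range(w)] for i in range(h)]

# A flat background with one bright square: the gradient vanishes on
# flat regions and peaks along the square's edge.
img = [[0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0],
       [0, 0, 9, 9, 0, 0],
       [0, 0, 9, 9, 0, 0],
       [0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0]]
g = morphological_gradient(img)
print(g[0][0], g[2][2])  # 0 in the flat corner, 9 on the edge
```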
place value system
In mathematical numeral systems the radix r is usually the number of unique digits, including zero, that a positional numeral system uses to represent numbers. In some cases, such as with a negative base, the radix is the absolute value r = | b | {\displaystyle r=|b|} of the base b. For example, for the decimal system the radix (and base) is ten, because it uses the ten digits from 0 through 9. When a number "hits" 9, the next number will not be another different symbol, but a "1" followed by a "0". In binary, the radix is two, since after it hits "1", instead of "2" or another written symbol, it jumps straight to "10", followed by "11" and "100".
wikipedia
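The carrying behavior described above ("1" followed by "0" once the digits run out) falls out of repeated division by the radix. A minimal sketch:

```python
def digits(n, radix):
    """Digit list of a non-negative integer n in the given radix,
    most significant digit first (radix must be an integer > 1)."""
    if n == 0:
        return [0]
    out = []
    while n:
        n, r = divmod(n, radix)   # peel off the least significant digit
        out.append(r)
    return out[::-1]

print(digits(9, 10), digits(10, 10))  # [9] then [1, 0]
print(digits(1, 2), digits(2, 2))     # [1] then [1, 0]
```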
place value system
The highest symbol of a positional numeral system usually has the value one less than the value of the radix of that numeral system. The standard positional numeral systems differ from one another only in the base they use. The radix is an integer that is greater than 1, since a radix of zero would not have any digits, and a radix of 1 would only have the zero digit.
wikipedia
place value system
Negative bases are rarely used. In a system with more than |b| unique digits, numbers may have many different possible representations.
wikipedia
place value system
It is important that the radix is finite, from which it follows that the number of digits is quite low. Otherwise, the length of a numeral would not necessarily be logarithmic in its size. (In certain non-standard positional numeral systems, including bijective numeration, the definition of the base or the allowed digits deviates from the above.)
wikipedia
zero-one loss function
In mathematical optimization and decision theory, a loss function or cost function (sometimes also called an error function) is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its opposite (in specific domains, variously called a reward function, a profit function, a utility function, a fitness function, etc.), in which case it is to be maximized.
wikipedia
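The zero-one loss of the article's title is the simplest such cost: it charges 1 for any wrong decision and 0 for a correct one. A minimal sketch (the tiny label vectors are invented for illustration):

```python
def zero_one_loss(y_true, y_pred):
    """Empirical average of the 0-1 loss L(y, yhat) = [y != yhat],
    i.e. the fraction of predictions that are wrong."""
    return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)

print(zero_one_loss([1, 0, 1, 1], [1, 1, 1, 0]))  # 0.5: two of four wrong
```

Minimizing this loss is what "maximize accuracy" means formally, though in practice smoother surrogates are usually optimized instead.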
zero-one loss function
The loss function could include terms from several levels of the hierarchy. In statistics, typically a loss function is used for parameter estimation, and the event in question is some function of the difference between estimated and true values for an instance of data. The concept, as old as Laplace, was reintroduced in statistics by Abraham Wald in the middle of the 20th century.
wikipedia
zero-one loss function
In the context of economics, for example, this is usually economic cost or regret. In classification, it is the penalty for an incorrect classification of an example. In actuarial science, it is used in an insurance context to model benefits paid over premiums, particularly since the works of Harald Cramér in the 1920s. In optimal control, the loss is the penalty for failing to achieve a desired value. In financial risk management, the function is mapped to a monetary loss.
wikipedia
simplex algorithm
In mathematical optimization, Dantzig's simplex algorithm (or simplex method) is a popular algorithm for linear programming. The name of the algorithm is derived from the concept of a simplex and was suggested by T. S. Motzkin. Simplices are not actually used in the method, but one interpretation of it is that it operates on simplicial cones, and these become proper simplices with an additional constraint. The simplicial cones in question are the corners (i.e., the neighborhoods of the vertices) of a geometric object called a polytope. The shape of this polytope is defined by the constraints applied to the objective function.
wikipedia
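The "corners of a polytope" picture can be illustrated by brute force on a tiny two-variable LP (the constraint data below are invented; the actual simplex method pivots from corner to corner rather than enumerating them all):

```python
from itertools import combinations

# maximize 3x + 2y  subject to  x + y <= 4, x <= 3, y <= 3, x >= 0, y >= 0,
# each constraint written as a row (a, b, c) meaning a*x + b*y <= c.
cons = [(1, 1, 4), (1, 0, 3), (0, 1, 3), (-1, 0, 0), (0, -1, 0)]

def vertices(cons):
    """Intersect constraint boundaries pairwise; keep feasible points."""
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if det == 0:
            continue                      # parallel boundaries
        x = (c1 * b2 - c2 * b1) / det     # Cramer's rule
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):
            yield x, y

best = max(vertices(cons), key=lambda v: 3 * v[0] + 2 * v[1])
print(best)  # (3.0, 1.0): the optimum sits at a corner of the polytope
```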
himmelblau's function
In mathematical optimization, Himmelblau's function is a multi-modal function, used to test the performance of optimization algorithms. The function is defined by f(x, y) = (x² + y − 11)² + (x + y² − 7)². It has one local maximum at x = −0.270845 and y = −0.923039, where f(x, y) = 181.617, and four identical local minima: f(3.0, 2.0) = 0.0, f(−2.805118, 3.131312) = 0.0, f(−3.779310, −3.283186) = 0.0, f(3.584428, −1.848126) = 0.0. The locations of all the minima can be found analytically. However, because they are roots of cubic polynomials, when written in terms of radicals, the expressions are somewhat complicated. The function is named after David Mautner Himmelblau (1924–2011), who introduced it.
wikipedia
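A direct transcription of the definition, checked against the minima quoted above (the non-integer minima are 6-decimal approximations, so the function is only near zero there):

```python
def himmelblau(x, y):
    """Himmelblau's test function."""
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2

print(himmelblau(3.0, 2.0))  # 0.0 exactly at the integer minimum
for x, y in [(-2.805118, 3.131312),
             (-3.779310, -3.283186),
             (3.584428, -1.848126)]:
    assert himmelblau(x, y) < 1e-7   # approximately zero
```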
fractional programming
In mathematical optimization, fractional programming is a generalization of linear-fractional programming. The objective function in a fractional program is a ratio of two functions that are in general nonlinear. The ratio to be optimized often describes some kind of efficiency of a system.
wikipedia
linear-fractional programming
In mathematical optimization, linear-fractional programming (LFP) is a generalization of linear programming (LP). Whereas the objective function in a linear program is a linear function, the objective function in a linear-fractional program is a ratio of two linear functions. A linear program can be regarded as a special case of a linear-fractional program in which the denominator is the constant function 1. Formally, a linear-fractional program is defined as the problem of maximizing (or minimizing) a ratio of affine functions over a polyhedron: maximize (cᵀx + α)/(dᵀx + β) subject to Ax ≤ b, where x ∈ ℝⁿ represents the vector of variables to be determined, c, d ∈ ℝⁿ and b ∈ ℝᵐ are vectors of (known) coefficients, A ∈ ℝ^{m×n} is a (known) matrix of coefficients, and α, β ∈ ℝ are constants. The constraints have to restrict the feasible region to {x | dᵀx + β > 0}, i.e. the region on which the denominator is positive. Alternatively, the denominator of the objective function has to be strictly negative in the entire feasible region.
wikipedia
ackley function
In mathematical optimization, the Ackley function is a non-convex function used as a performance test problem for optimization algorithms. It was proposed by David Ackley in his 1987 PhD dissertation. On a 2-dimensional domain it is defined by f(x, y) = −20 exp(−0.2 √(0.5 (x² + y²))) − exp(0.5 (cos 2πx + cos 2πy)) + e + 20. Its global optimum point is f(0, 0) = 0.
wikipedia
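A direct transcription of the 2-D form, together with a check that the value at the origin is the stated global optimum (there, the two exponential terms reduce to −20 and −e, which cancel against the constants):

```python
import math

def ackley(x, y):
    """2-D Ackley test function."""
    return (-20 * math.exp(-0.2 * math.sqrt(0.5 * (x * x + y * y)))
            - math.exp(0.5 * (math.cos(2 * math.pi * x)
                              + math.cos(2 * math.pi * y)))
            + math.e + 20)

print(ackley(0.0, 0.0))                      # ~0 at the global optimum
print(ackley(1.0, 1.0) > ackley(0.0, 0.0))   # True: other points are worse
```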
rastrigin function
In mathematical optimization, the Rastrigin function is a non-convex function used as a performance test problem for optimization algorithms. It is a typical example of a non-linear multimodal function. It was first proposed in 1974 by Rastrigin as a 2-dimensional function and has been generalized by Rudolph.
wikipedia
rastrigin function
The generalized version was popularized by Hoffmeister & Bäck and Mühlenbein et al. Finding the minimum of this function is a fairly difficult problem due to its large search space and its large number of local minima. On an n-dimensional domain it is defined by f(x) = A n + Σ_{i=1}^{n} [x_i² − A cos(2π x_i)], where A = 10 and x_i ∈ [−5.12, 5.12]. There are many extrema: the global minimum is at x = 0, where f(x) = 0, while the maximum function value on this domain occurs near the corners, where the x_i² term is largest and cos(2π x_i) is close to −1. The abundance of local minima underlines the necessity of a global optimization algorithm when needing to find the global minimum. Local optimization algorithms are likely to get stuck in a local minimum.
wikipedia
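A direct transcription of the n-dimensional form. The integer lattice points are the local minima; only the origin is global:

```python
import math

def rastrigin(xs, A=10):
    """n-dimensional Rastrigin function."""
    return A * len(xs) + sum(x * x - A * math.cos(2 * math.pi * x)
                             for x in xs)

print(rastrigin([0.0, 0.0]))  # 0.0: the global minimum
print(rastrigin([1.0, 1.0]))  # ~2.0: a nearby, strictly worse local minimum
```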
rosenbrock function
In mathematical optimization, the Rosenbrock function is a non-convex function, introduced by Howard H. Rosenbrock in 1960, which is used as a performance test problem for optimization algorithms. It is also known as Rosenbrock's valley or Rosenbrock's banana function. The global minimum is inside a long, narrow, parabolic-shaped flat valley. To find the valley is trivial.
wikipedia
rosenbrock function
To converge to the global minimum, however, is difficult. The function is defined by f(x, y) = (a − x)² + b(y − x²)². It has a global minimum at (x, y) = (a, a²), where f(x, y) = 0. Usually, these parameters are set such that a = 1 and b = 100. Only in the trivial case where a = 0 is the function symmetric with the minimum at the origin.
wikipedia
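A direct transcription with the usual parameters, showing the flat valley: points on the parabola y = x² all have small values, yet only (a, a²) is the minimum:

```python
def rosenbrock(x, y, a=1, b=100):
    """Rosenbrock's banana function."""
    return (a - x)**2 + b * (y - x**2)**2

print(rosenbrock(1.0, 1.0))   # 0.0 at the global minimum (a, a^2) = (1, 1)
print(rosenbrock(0.5, 0.25))  # 0.25: on the valley floor, but not minimal
```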
criss-cross algorithm
In mathematical optimization, the criss-cross algorithm is any of a family of algorithms for linear programming. Variants of the criss-cross algorithm also solve more general problems with linear inequality constraints and nonlinear objective functions; there are criss-cross algorithms for linear-fractional programming problems, quadratic-programming problems, and linear complementarity problems. Like the simplex algorithm of George B. Dantzig, the criss-cross algorithm is not a polynomial-time algorithm for linear programming. Both algorithms visit all 2^D corners of a (perturbed) cube in dimension D, the Klee–Minty cube (after Victor Klee and George J. Minty), in the worst case. However, when it is started at a random corner, the criss-cross algorithm on average visits only D additional corners. Thus, for the three-dimensional cube, the algorithm visits all 8 corners in the worst case and exactly 3 additional corners on average.
wikipedia
ellipsoidal algorithm
In mathematical optimization, the ellipsoid method is an iterative method for minimizing convex functions. When specialized to solving feasible linear optimization problems with rational data, the ellipsoid method is an algorithm which finds an optimal solution in a number of steps that is polynomial in the input size. The ellipsoid method generates a sequence of ellipsoids whose volume uniformly decreases at every step, thus enclosing a minimizer of a convex function.
wikipedia
firefly algorithm
In mathematical optimization, the firefly algorithm is a metaheuristic proposed by Xin-She Yang and inspired by the flashing behavior of fireflies.
wikipedia
network simplex algorithm
In mathematical optimization, the network simplex algorithm is a graph theoretic specialization of the simplex algorithm. The algorithm is usually formulated in terms of a minimum-cost flow problem. The network simplex method works very well in practice, typically 200 to 300 times faster than the simplex method applied to a general linear program of the same dimensions.
wikipedia
perturbation function
In mathematical optimization, the perturbation function is any function which relates to primal and dual problems. The name comes from the fact that any such function defines a perturbation of the initial problem. In many cases this takes the form of shifting the constraints. In some texts the value function is called the perturbation function, and the perturbation function is called the bifunction.
wikipedia
push–relabel maximum flow algorithm
In mathematical optimization, the push–relabel algorithm (alternatively, preflow–push algorithm) is an algorithm for computing maximum flows in a flow network. The name "push–relabel" comes from the two basic operations used in the algorithm. Throughout its execution, the algorithm maintains a "preflow" and gradually converts it into a maximum flow by moving flow locally between neighboring nodes using push operations under the guidance of an admissible network maintained by relabel operations. In comparison, the Ford–Fulkerson algorithm performs global augmentations that send flow following paths from the source all the way to the sink. The push–relabel algorithm is considered one of the most efficient maximum flow algorithms.
wikipedia
push–relabel maximum flow algorithm
The generic algorithm has a strongly polynomial O(V²E) time complexity, which is asymptotically more efficient than the O(VE²) Edmonds–Karp algorithm. Specific variants of the algorithms achieve even lower time complexities. The variant based on the highest label node selection rule has O(V²√E) time complexity and is generally regarded as the benchmark for maximum flow algorithms. Subcubic O(VE log(V²/E)) time complexity can be achieved using dynamic trees, although in practice it is less efficient. The push–relabel algorithm has been extended to compute minimum cost flows. The idea of distance labels has led to a more efficient augmenting path algorithm, which in turn can be incorporated back into the push–relabel algorithm to create a variant with even higher empirical performance.
wikipedia
revised simplex algorithm
In mathematical optimization, the revised simplex method is a variant of George Dantzig's simplex method for linear programming. The revised simplex method is mathematically equivalent to the standard simplex method but differs in implementation. Instead of maintaining a tableau which explicitly represents the constraints adjusted to a set of basic variables, it maintains a representation of a basis of the matrix representing the constraints. The matrix-oriented approach allows for greater computational efficiency by enabling sparse matrix operations.
wikipedia
geometry of special relativity
In mathematical physics, Minkowski space (or Minkowski spacetime) combines inertial space and time manifolds (x, y) with a non-inertial reference frame of space and time (x', t') into a four-dimensional model relating a position (inertial frame of reference) to the field. A four-vector (x, y, z, t) consists of coordinate axes such as those of a Euclidean space, plus time. This may be used with the non-inertial frame to illustrate specifics of motion, but should not be confused with the spacetime model generally.
wikipedia
geometry of special relativity
The model helps show how a spacetime interval between any two events is independent of the inertial frame of reference in which they are recorded. Mathematician Hermann Minkowski developed it from the work of Hendrik Lorentz, Henri Poincaré, and others, and said it "was grown on experimental physical grounds." Minkowski space is closely associated with Einstein's theories of special relativity and general relativity and is the most common mathematical structure by which special relativity is formalized.
wikipedia
geometry of special relativity
While the individual components in Euclidean space and time might differ due to length contraction and time dilation, in Minkowski spacetime, all frames of reference will agree on the total interval in spacetime between events. Minkowski space differs from four-dimensional Euclidean space insofar as it treats time differently than the three spatial dimensions. In 3-dimensional Euclidean space, the isometry group (the maps preserving the regular Euclidean distance) is the Euclidean group.
wikipedia
geometry of special relativity
It is generated by rotations, reflections and translations. When time is appended as a fourth dimension, the further transformations of translations in time and Lorentz boosts are added, and the group of all these transformations is called the Poincaré group. Minkowski's model follows special relativity where motion causes time dilation changing the scale applied to the frame in motion and shifts the phase of light.
wikipedia
geometry of special relativity
Spacetime is equipped with an indefinite non-degenerate bilinear form, variously called the Minkowski metric, the Minkowski norm squared or Minkowski inner product depending on the context. The Minkowski inner product is defined so as to yield the spacetime interval between two events when given their coordinate difference vector as argument. Equipped with this inner product, the mathematical model of spacetime is called Minkowski space. The group of transformations for Minkowski space that preserve the spacetime interval (as opposed to the spatial Euclidean distance) is the Poincaré group (as opposed to the isometry group).
wikipedia
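In coordinates, using the (−, +, +, +) sign convention (the opposite convention, (+, −, −, −), is equally common), the squared spacetime interval between events separated by (Δt, Δx, Δy, Δz) that this inner product yields is:

```latex
s^2 = -(c\,\Delta t)^2 + (\Delta x)^2 + (\Delta y)^2 + (\Delta z)^2
```

It is this quantity, rather than the Euclidean distance, that the Poincaré group leaves invariant.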
lattice model (physics)
In mathematical physics, a lattice model is a mathematical model of a physical system that is defined on a lattice, as opposed to a continuum, such as the continuum of space or spacetime. Lattice models originally occurred in the context of condensed matter physics, where the atoms of a crystal automatically form a lattice. Currently, lattice models are quite popular in theoretical physics, for many reasons. Some models are exactly solvable, and thus offer insight into physics beyond what can be learned from perturbation theory.
wikipedia
lattice model (physics)
Lattice models are also ideal for study by the methods of computational physics, as the discretization of any continuum model automatically turns it into a lattice model. The exact solution to many of these models (when they are solvable) includes the presence of solitons. Techniques for solving these include the inverse scattering transform and the method of Lax pairs, the Yang–Baxter equation and quantum groups.
wikipedia
lattice model (physics)
The solution of these models has given insights into the nature of phase transitions, magnetization and scaling behaviour, as well as insights into the nature of quantum field theory. Physical lattice models frequently occur as an approximation to a continuum theory, either to give an ultraviolet cutoff to the theory to prevent divergences or to perform numerical computations. An example of a continuum theory that is widely studied by lattice models is lattice QCD, a discretization of quantum chromodynamics.
wikipedia
lattice model (physics)
However, digital physics considers nature to be fundamentally discrete at the Planck scale, which imposes an upper limit on the density of information, as expressed by the holographic principle. More generally, lattice gauge theory and lattice field theory are areas of study. Lattice models are also used to simulate the structure and dynamics of polymers.
wikipedia
constructive quantum field theory
In mathematical physics, constructive quantum field theory is the field devoted to showing that quantum field theory can be defined in terms of precise mathematical structures. This demonstration requires new mathematics, in a sense analogous to the way classical real analysis put calculus on a mathematically rigorous foundation. The weak, strong, and electromagnetic forces of nature are believed to have their natural description in terms of quantum fields.
wikipedia
constructive quantum field theory
Attempts to put quantum field theory on a basis of completely defined concepts have involved most branches of mathematics, including functional analysis, differential equations, probability theory, representation theory, geometry, and topology. It is known that a quantum field is inherently hard to handle using conventional mathematical techniques like explicit estimates. This is because a quantum field has the general nature of an operator-valued distribution, a type of object from mathematical analysis.
wikipedia
constructive quantum field theory
The existence theorems for quantum fields can be expected to be very difficult to find, if indeed they are possible at all. One discovery of the theory that can be related in non-technical terms, is that the dimension d of the spacetime involved is crucial. Notable work in the field by James Glimm and Arthur Jaffe showed that with d < 4 many examples can be found.
wikipedia
constructive quantum field theory
Along with the work of their students, coworkers, and others, constructive field theory provided a mathematical foundation and an exact interpretation for what previously was only a set of recipes, also in the case d < 4. Theoretical physicists had given these rules the name "renormalization," but most physicists had been skeptical about whether they could be turned into a mathematical theory. Today one of the most important open problems, both in theoretical physics and in mathematics, is to establish similar results for gauge theory in the realistic case d = 4.
wikipedia
constructive quantum field theory
The traditional basis of constructive quantum field theory is the set of Wightman axioms. Osterwalder and Schrader showed that there is an equivalent problem in mathematical probability theory. The examples with d < 4 satisfy the Wightman axioms as well as the Osterwalder–Schrader axioms. They also fall in the related framework introduced by Haag and Kastler, called algebraic quantum field theory. There is a firm belief in the physics community that the gauge theory of Yang and Mills (the Yang–Mills theory) can lead to a tractable theory, but new ideas and new methods will be required to actually establish this, and this could take many years.
wikipedia
covariant classical field theory
In mathematical physics, covariant classical field theory represents classical fields by sections of fiber bundles, and their dynamics is phrased in the context of a finite-dimensional space of fields. Nowadays, it is well known that jet bundles and the variational bicomplex are the correct domain for such a description. The Hamiltonian variant of covariant classical field theory is the covariant Hamiltonian field theory where momenta correspond to derivatives of field variables with respect to all world coordinates. Non-autonomous mechanics is formulated as covariant classical field theory on fiber bundles over the time axis ℝ.
wikipedia
four-dimensional chern-simons theory
In mathematical physics, four-dimensional Chern–Simons theory, also known as semi-holomorphic or semi-topological Chern–Simons theory, is a quantum field theory initially defined by Nikita Nekrasov, rediscovered and studied by Kevin Costello, and later by Edward Witten and Masahito Yamazaki. It is named after mathematicians Shiing-Shen Chern and James Simons who discovered the Chern–Simons 3-form appearing in the theory. The gauge theory has been demonstrated to be related to many integrable systems, including exactly solvable lattice models such as the six-vertex model of Lieb and the Heisenberg spin chain and integrable field theories such as principal chiral models, symmetric space coset sigma models and Toda field theory, although the integrable field theories require the introduction of two-dimensional surface defects. The theory is also related to the Yang–Baxter equation and quantum groups such as the Yangian. The theory is similar to three-dimensional Chern–Simons theory which is a topological quantum field theory, and the relation of 4d Chern–Simons theory to the Yang–Baxter equation bears similarities to the relation of 3d Chern–Simons theory to knot invariants such as the Jones polynomial discovered by Witten.
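The Yang–Baxter equation mentioned above can be verified directly for the simplest of its solutions, Yang's rational R-matrix R(u) = u·1 + η·P (the six-vertex R-matrix has the same block structure with u + η, u, η replaced by their sines); a small numerical check with illustrative parameter values:

```python
import numpy as np

I2 = np.eye(2)
P = np.eye(4)[[0, 2, 1, 3]]          # swap (permutation) operator on C^2 ⊗ C^2

def R(u, eta=0.7):
    """Rational R-matrix R(u) = u*Id + eta*P, Yang's solution of the YBE."""
    return u * np.eye(4) + eta * P

# Embeddings into the triple tensor product C^2 ⊗ C^2 ⊗ C^2
def R12(u): return np.kron(R(u), I2)
def R23(u): return np.kron(I2, R(u))
SWAP23 = np.kron(I2, P)
def R13(u): return SWAP23 @ R12(u) @ SWAP23   # conjugate R12 by the swap of factors 2, 3

u, v = 0.3, 0.5
lhs = R12(u - v) @ R13(u) @ R23(v)
rhs = R23(v) @ R13(u) @ R12(u - v)
print(np.allclose(lhs, rhs))  # True: the Yang–Baxter equation holds
```

The check uses the difference form R12(u−v) R13(u) R23(v) = R23(v) R13(u) R12(u−v) of the equation.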
wikipedia
energy quantization
In mathematical physics, geometric quantization is a mathematical approach to defining a quantum theory corresponding to a given classical theory. It attempts to carry out quantization, for which there is in general no exact recipe, in such a way that certain analogies between the classical theory and the quantum theory remain manifest. For example, the similarity between the Heisenberg equation in the Heisenberg picture of quantum mechanics and the Hamilton equation in classical physics should be built in. A more geometric approach to quantization, in which the classical phase space can be a general symplectic manifold, was developed in the 1970s by Bertram Kostant and Jean-Marie Souriau.
wikipedia
energy quantization
The method proceeds in two stages. First, one constructs a "prequantum Hilbert space" consisting of square-integrable functions (or, more properly, sections of a line bundle) over the phase space. Here one can construct operators satisfying commutation relations corresponding exactly to the classical Poisson-bracket relations. On the other hand, this prequantum Hilbert space is too big to be physically meaningful. One then restricts to functions (or sections) depending on half the variables on the phase space, yielding the quantum Hilbert space.
wikipedia
noncommutative field theory
In mathematical physics, noncommutative quantum field theory (or quantum field theory on noncommutative spacetime) is an application of noncommutative mathematics to the spacetime of quantum field theory that is an outgrowth of noncommutative geometry and index theory in which the coordinate functions are noncommutative. One commonly studied version of such theories has the "canonical" commutation relation: [ x μ , x ν ] = i θ μ ν {\displaystyle [x^{\mu },x^{\nu }]=i\theta ^{\mu \nu }\,\!} which means that (with any given set of axes), it is impossible to accurately measure the position of a particle with respect to more than one axis. In fact, this leads to an uncertainty relation for the coordinates analogous to the Heisenberg uncertainty principle.
wikipedia
noncommutative field theory
Various lower limits have been claimed for the noncommutative scale (i.e. how accurately positions can be measured), but there is currently no experimental evidence in favour of such theories, nor grounds for ruling them out. One of the novel features of noncommutative field theories is the UV/IR mixing phenomenon, in which the physics at high energies affects the physics at low energies; this does not occur in quantum field theories in which the coordinates commute. Other features include violation of Lorentz invariance due to the preferred direction of noncommutativity. Relativistic invariance can however be retained in the sense of twisted Poincaré invariance of the theory. The causality condition is modified from that of the commutative theories.
wikipedia
light scattering in liquids and solids
In mathematical physics, scattering theory is a framework for studying and understanding the interaction or scattering of solutions to partial differential equations. In acoustics, the differential equation is the wave equation, and scattering studies how its solutions, the sound waves, scatter from solid objects or propagate through non-uniform media (such as sound waves in sea water coming from a submarine). In the case of classical electrodynamics, the differential equation is again the wave equation, and the scattering of light or radio waves is studied. In particle physics, the equations are those of quantum electrodynamics, quantum chromodynamics and the Standard Model, the solutions of which correspond to fundamental particles.
wikipedia
light scattering in liquids and solids
In regular quantum mechanics, which includes quantum chemistry, the relevant equation is the Schrödinger equation, although equivalent formulations, such as the Lippmann–Schwinger equation and the Faddeev equations, are also widely used. The solutions of interest describe the long-term motion of free atoms, molecules, photons, electrons, and protons.
wikipedia
light scattering in liquids and solids
The scenario is that several particles come together from an infinite distance away. These reagents then collide, possibly reacting, being destroyed or creating new particles. The products and unused reagents then fly away to infinity again.
wikipedia
light scattering in liquids and solids
(The atoms and molecules are effectively particles for our purposes. Also, under everyday circumstances, only photons are being created and destroyed.) The solutions reveal which directions the products are most likely to fly off to and how quickly. They also reveal the probability of various reactions, creations, and decays occurring. There are two predominant techniques of finding solutions to scattering problems: partial wave analysis, and the Born approximation.
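The Born approximation named above reduces the scattering amplitude to an integral over the potential. For an attractive Yukawa potential V(r) = −V₀e^(−μr)/r the first-order amplitude f(q) = −(2m/ħ²q) ∫₀^∞ r V(r) sin(qr) dr has a closed form, which makes a handy check of a numerical sketch (not from the source; units ħ = m = 1, and the parameter values are illustrative):

```python
import numpy as np

# First Born approximation for a spherically symmetric potential (hbar = m = 1):
#   f(q) = -(2 / q) * Integral_0^inf r V(r) sin(q r) dr,  with q the momentum transfer
V0, mu = 1.5, 1.0                       # illustrative Yukawa strength and range
q = 1.2                                 # illustrative momentum transfer

r = np.linspace(1e-6, 60.0, 600_001)    # e^{-mu r} is negligible beyond r = 60
V = -V0 * np.exp(-mu * r) / r           # attractive Yukawa potential
y = r * V * np.sin(q * r)
dr = r[1] - r[0]
integral = dr * (y.sum() - 0.5 * (y[0] + y[-1]))   # trapezoidal rule
f_numeric = -(2.0 / q) * integral

# Closed form: Integral_0^inf e^{-mu r} sin(q r) dr = q / (mu^2 + q^2)
f_exact = 2.0 * V0 / (mu**2 + q**2)
print(f_numeric, f_exact)   # the two values agree to high accuracy
```

The positive amplitude at small q reflects the attractive sign of the potential; the 1/(μ² + q²) fall-off is the familiar Yukawa form factor.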
wikipedia
six-dimensional holomorphic chern–simons theory
In mathematical physics, six-dimensional holomorphic Chern–Simons theory or sometimes holomorphic Chern–Simons theory is a gauge theory on a three-dimensional complex manifold. It is a complex analogue of Chern–Simons theory, named after Shiing-Shen Chern and James Simons who first studied Chern–Simons forms which appear in the action of Chern–Simons theory. The theory is referred to as six-dimensional as the underlying manifold of the theory is three-dimensional as a complex manifold, hence six-dimensional as a real manifold. The theory has been used to study integrable systems through four-dimensional Chern–Simons theory, which can be viewed as a symmetry reduction of the six-dimensional theory. For this purpose, the underlying three-dimensional complex manifold is taken to be the three-dimensional complex projective space P 3 {\displaystyle \mathbb {P} ^{3}} , viewed as twistor space.
wikipedia
schrödinger functional
In mathematical physics, some approaches to quantum field theory are more popular than others. For historical reasons, the Schrödinger representation is less favored than Fock space methods. In the early days of quantum field theory, maintaining symmetries such as Lorentz invariance, displaying them manifestly, and proving renormalisation were of paramount importance. The Schrödinger representation is not manifestly Lorentz invariant and its renormalisability was only shown as recently as the 1980s by Kurt Symanzik (1981). The Schrödinger functional is, in its most basic form, the time translation generator of state wavefunctionals. In layman's terms, it defines how a system of quantum particles evolves through time and what the subsequent systems look like.
wikipedia
spacetime algebra
In mathematical physics, spacetime algebra (STA) is a name for the Clifford algebra Cl1,3(R), or equivalently the geometric algebra G(M4). According to David Hestenes, spacetime algebra can be particularly closely associated with the geometry of special relativity and relativistic spacetime. It is a vector space that allows not only vectors, but also bivectors (directed quantities associated with particular planes, such as areas, or rotations) or blades (quantities associated with particular hyper-volumes) to be combined, as well as rotated, reflected, or Lorentz boosted. It is also the natural parent algebra of spinors in special relativity. These properties allow many of the most important equations in physics to be expressed in particularly simple forms, and can be very helpful towards a more geometric understanding of their meanings.
wikipedia
n=2 superconformal algebra
In mathematical physics, the 2D N = 2 superconformal algebra is an infinite-dimensional Lie superalgebra, related to supersymmetry, that occurs in string theory and two-dimensional conformal field theory. It has important applications in mirror symmetry. It was introduced by M. Ademollo, L. Brink, and A. D'Adda et al. (1976) as a gauge algebra of the U(1) fermionic string.
wikipedia
dirac algebra
In mathematical physics, the Dirac algebra is the Clifford algebra Cl 1 , 3 ( C ) {\displaystyle {\text{Cl}}_{1,3}(\mathbb {C} )} . This was introduced by the mathematical physicist P. A. M. Dirac in 1928 in developing the Dirac equation for spin-½ particles with a matrix representation of the gamma matrices, which represent the generators of the algebra. The gamma matrices are a set of four 4 × 4 {\displaystyle 4\times 4} matrices { γ μ } = { γ 0 , γ 1 , γ 2 , γ 3 } {\displaystyle \{\gamma ^{\mu }\}=\{\gamma ^{0},\gamma ^{1},\gamma ^{2},\gamma ^{3}\}} with entries in C {\displaystyle \mathbb {C} } , that is, elements of Mat 4 × 4 ( C ) {\displaystyle {\text{Mat}}_{4\times 4}(\mathbb {C} )} , satisfying { γ μ , γ ν } = γ μ γ ν + γ ν γ μ = 2 η μ ν , {\displaystyle \displaystyle \{\gamma ^{\mu },\gamma ^{\nu }\}=\gamma ^{\mu }\gamma ^{\nu }+\gamma ^{\nu }\gamma ^{\mu }=2\eta ^{\mu \nu },} where by convention, an identity matrix has been suppressed on the right-hand side. The numbers η μ ν {\displaystyle \eta ^{\mu \nu }\,} are the components of the Minkowski metric.
wikipedia
dirac algebra
For this article we fix the signature to be mostly minus, that is, ( + , − , − , − ) {\displaystyle (+,-,-,-)} . The Dirac algebra is then the linear span of the identity, the gamma matrices γ μ {\displaystyle \gamma ^{\mu }} as well as any linearly independent products of the gamma matrices. This forms a finite-dimensional algebra over the field R {\displaystyle \mathbb {R} } or C {\displaystyle \mathbb {C} } , with dimension 16 = 2 4 {\displaystyle 16=2^{4}} .
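The defining anticommutation relation can be checked explicitly in the Dirac representation of the gamma matrices (one standard matrix realization, using the mostly-minus signature fixed above):

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

def block(A, B, C, D):
    return np.block([[A, B], [C, D]])

# Dirac representation: gamma^0 = diag(I, -I), gamma^i = [[0, sigma_i], [-sigma_i, 0]]
gamma = [block(I2, Z2, Z2, -I2)] + [block(Z2, s, -s, Z2) for s in (s1, s2, s3)]

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # mostly-minus Minkowski metric
for mu in range(4):
    for nu in range(4):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2 * eta[mu, nu] * np.eye(4))
print("Clifford relations verified")
```

The identity matrix multiplying 2η^{μν} is exactly the one suppressed by convention in the displayed relation above.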
wikipedia
dirac delta functions
In mathematical physics, the Dirac delta distribution (δ distribution), also known as the unit impulse, is a generalized function or distribution over the real numbers, whose value is zero everywhere except at zero, and whose integral over the entire real line is equal to one. The current understanding of the unit impulse is as a linear functional that maps every continuous function (e.g., f ( x ) {\displaystyle f(x)} ) to its value at zero, f ( 0 ) {\displaystyle f(0)} , or as the weak limit of a sequence of bump functions (e.g., δ ( x ) = lim b → 0 1 | b | π e − ( x / b ) 2 {\displaystyle \delta (x)=\lim _{b\to 0}{\frac {1}{|b|{\sqrt {\pi }}}}e^{-(x/b)^{2}}} ), which are zero over most of the real line, with a tall spike at the origin. Bump functions are thus sometimes called "approximate" or "nascent" delta distributions. The delta function was introduced by physicist Paul Dirac as a tool for the normalization of state vectors. It also has uses in probability theory and signal processing. Its validity was disputed until Laurent Schwartz developed the theory of distributions, in which it is defined as a linear form acting on functions. The Kronecker delta function, which is usually defined on a discrete domain and takes values 0 and 1, is the discrete analog of the Dirac delta function.
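The weak limit described above can be seen numerically: integrating a continuous test function against the Gaussian bump δ_b(x) = e^(−(x/b)²)/(|b|√π) approaches f(0) as b shrinks (the test function below is an arbitrary illustrative choice):

```python
import numpy as np

def f(x):
    """An arbitrary continuous test function with f(0) = 1."""
    return np.cos(x) + x**2

x = np.linspace(-1.0, 1.0, 200_001)     # fine grid around the spike at the origin
dx = x[1] - x[0]

for b in (0.5, 0.1, 0.02):
    delta_b = np.exp(-(x / b) ** 2) / (abs(b) * np.sqrt(np.pi))   # nascent delta
    y = f(x) * delta_b
    approx = dx * (y.sum() - 0.5 * (y[0] + y[-1]))                # trapezoidal rule
    print(b, approx)   # tends to f(0) = 1 as b -> 0
```

Each narrowing of the spike pulls the integral closer to the point value f(0), which is exactly the functional action δ(f) = f(0).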
wikipedia
duffin–kemmer–petiau algebra
In mathematical physics, the Duffin–Kemmer–Petiau algebra (DKP algebra), introduced by R.J. Duffin, Nicholas Kemmer and G. Petiau, is the algebra which is generated by the Duffin–Kemmer–Petiau matrices. These matrices form part of the Duffin–Kemmer–Petiau equation that provides a relativistic description of spin-0 and spin-1 particles. The DKP algebra is also referred to as the meson algebra.
wikipedia
operator-valued distribution
In mathematical physics, the Wightman axioms (also called Gårding–Wightman axioms), named after Arthur Wightman, are an attempt at a mathematically rigorous formulation of quantum field theory. Arthur Wightman formulated the axioms in the early 1950s, but they were first published only in 1964 after Haag–Ruelle scattering theory affirmed their significance. The axioms exist in the context of constructive quantum field theory and are meant to provide a basis for rigorous treatment of quantum fields and strict foundation for the perturbative methods used. One of the Millennium Problems is to realize the Wightman axioms in the case of Yang–Mills fields.
wikipedia
almost mathieu operator
In mathematical physics, the almost Mathieu operator arises in the study of the quantum Hall effect. It is given by [ H ω λ , α u ] ( n ) = u ( n + 1 ) + u ( n − 1 ) + 2 λ cos ⁡ ( 2 π ( ω + n α ) ) u ( n ) , {\displaystyle [H_{\omega }^{\lambda ,\alpha }u](n)=u(n+1)+u(n-1)+2\lambda \cos(2\pi (\omega +n\alpha ))u(n),\,} acting as a self-adjoint operator on the Hilbert space ℓ 2 ( Z ) {\displaystyle \ell ^{2}(\mathbb {Z} )} . Here α , ω ∈ T , λ > 0 {\displaystyle \alpha ,\omega \in \mathbb {T} ,\lambda >0} are parameters.
wikipedia
almost mathieu operator
In pure mathematics, its importance comes from being one of the best-understood examples of an ergodic Schrödinger operator. For example, three problems (now all solved) of Barry Simon's fifteen problems about Schrödinger operators "for the twenty-first century" featured the almost Mathieu operator. In physics, almost Mathieu operators can be used to study metal–insulator transitions, as in the Aubry–André model. For λ = 1 {\displaystyle \lambda =1} , the almost Mathieu operator is sometimes called Harper's equation.
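A finite truncation of the operator is a symmetric tridiagonal matrix, which makes its spectrum easy to explore numerically (a sketch, not from the source; the truncation size and parameters are illustrative, and plotting the eigenvalues against α reproduces the Hofstadter butterfly):

```python
import numpy as np

def almost_mathieu(N, lam, alpha, omega=0.0):
    """Finite N x N truncation of the almost Mathieu operator on l^2(Z)."""
    n = np.arange(N)
    # Diagonal: the quasi-periodic potential 2*lam*cos(2*pi*(omega + n*alpha))
    H = np.diag(2.0 * lam * np.cos(2 * np.pi * (omega + n * alpha)))
    off = np.ones(N - 1)
    H += np.diag(off, 1) + np.diag(off, -1)      # hopping terms u(n+1), u(n-1)
    return H

H = almost_mathieu(N=500, lam=1.0, alpha=(np.sqrt(5) - 1) / 2)   # irrational alpha
evals = np.linalg.eigvalsh(H)                    # real spectrum: H is symmetric
print(evals.min(), evals.max())  # contained in [-(2 + 2*lam), 2 + 2*lam]
```

The bound on the eigenvalues follows from the Gershgorin circle theorem: the diagonal lies in [−2λ, 2λ] and each row's off-diagonal sum is at most 2.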
wikipedia
quantum spacetime
In mathematical physics, the concept of quantum spacetime is a generalization of the usual concept of spacetime in which some variables that ordinarily commute are assumed not to commute and form a different Lie algebra. The choice of that algebra still varies from theory to theory. As a result of this change some variables that are usually continuous may become discrete. Often only such discrete variables are called "quantized"; usage varies.
wikipedia
quantum spacetime
The idea of quantum spacetime was proposed in the early days of quantum theory by Heisenberg and Ivanenko as a way to eliminate infinities from quantum field theory. The germ of the idea passed from Heisenberg to Rudolf Peierls, who noted that electrons in a magnetic field can be regarded as moving in a quantum spacetime, and to Robert Oppenheimer, who carried it to Hartland Snyder, who published the first concrete example. Snyder's Lie algebra was made simple by C. N. Yang in the same year.
wikipedia
conformal algebra
In mathematical physics, the conformal symmetry of spacetime is expressed by an extension of the Poincaré group, known as the conformal group. The extension includes special conformal transformations and dilations. In three spatial plus one time dimensions, conformal symmetry has 15 degrees of freedom: ten for the Poincaré group, four for special conformal transformations, and one for a dilation.
wikipedia
conformal algebra
Harry Bateman and Ebenezer Cunningham were the first to study the conformal symmetry of Maxwell's equations. They called a generic expression of conformal symmetry a spherical wave transformation. General relativity in two spacetime dimensions also enjoys conformal symmetry.
wikipedia
quantum kz equations
In mathematical physics, the quantum KZ equations or quantum Knizhnik–Zamolodchikov equations or qKZ equations are the analogue for quantum affine algebras of the Knizhnik–Zamolodchikov equations for affine Kac–Moody algebras. They are a consistent system of difference equations satisfied by the N-point functions, the vacuum expectations of products of primary fields. In the limit as the deformation parameter q approaches 1, the N-point functions of the quantum affine algebra tend to those of the affine Kac–Moody algebra and the difference equations become partial differential equations. The quantum KZ equations have been used to study exactly solved models in quantum statistical mechanics.
wikipedia
two-dimensional yang–mills theory
In mathematical physics, two-dimensional Yang–Mills theory is the special case of Yang–Mills theory in which the dimension of spacetime is taken to be two. This special case allows for a rigorously defined Yang–Mills measure, meaning that the (Euclidean) path integral can be interpreted as a measure on the set of connections modulo gauge transformations. This situation contrasts with the four-dimensional case, where a rigorous construction of the theory as a measure is currently unknown. An aspect of the subject of particular interest is the large-N limit, in which the structure group is taken to be the unitary group U ( N ) {\displaystyle U(N)} and the limit as N {\displaystyle N} tends to infinity is taken. The large-N limit of two-dimensional Yang–Mills theory has connections to string theory.
wikipedia
unit interval graph
In mathematical psychology, indifference graphs arise from utility functions, by scaling the function so that one unit represents a difference in utilities small enough that individuals can be assumed to be indifferent to it. In this application, pairs of items whose utilities have a large difference may be partially ordered by the relative order of their utilities, giving a semiorder. In bioinformatics, the problem of augmenting a colored graph to a properly colored unit interval graph can be used to model the detection of false negatives in DNA sequence assembly from complete digests.
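Concretely, after scaling so that one unit of utility is the indifference threshold, the indifference graph joins items whose utilities differ by at most 1, and strict preference (a difference greater than 1) yields a semiorder. A small sketch with made-up utilities:

```python
from itertools import combinations, product

utilities = {"a": 0.0, "b": 0.6, "c": 1.4, "d": 3.0}   # scaled: 1 unit = threshold

# Indifference graph: edge when utilities differ by at most one unit
edges = {frozenset((x, y)) for x, y in combinations(utilities, 2)
         if abs(utilities[x] - utilities[y]) <= 1}

# Semiorder: x < y when u(y) - u(x) > 1
less = {(x, y) for x, y in product(utilities, repeat=2)
        if utilities[y] - utilities[x] > 1}

print(sorted(tuple(sorted(e)) for e in edges))   # [('a', 'b'), ('b', 'c')]

# Semiorder axiom: if x < y and z < w, then x < w or z < y
for (x, y), (z, w) in product(less, repeat=2):
    assert (x, w) in less or (z, y) in less
```

The final loop checks one of the defining semiorder axioms, which holds automatically for any such unit-threshold representation.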
wikipedia
hecke algebra of a pair
In mathematical representation theory, the Hecke algebra of a pair (g,K) is an algebra with an approximate identity, whose approximately unital modules are the same as K-finite representations of the pairs (g,K). Here K is a compact subgroup of a Lie group with Lie algebra g.
wikipedia
cohen algebra
In mathematical set theory, a Cohen algebra, named after Paul Cohen, is a type of Boolean algebra used in the theory of forcing. A Cohen algebra is a Boolean algebra whose completion is isomorphic to the completion of a free Boolean algebra (Koppelberg 1993).
wikipedia
jónsson function
In mathematical set theory, an ω-Jónsson function for a set x of ordinals is a function f : [ x ] ω → x {\displaystyle f:[x]^{\omega }\to x} with the property that, for any subset y of x with the same cardinality as x, the restriction of f {\displaystyle f} to [ y ] ω {\displaystyle [y]^{\omega }} is surjective on x {\displaystyle x} . Here [ x ] ω {\displaystyle [x]^{\omega }} denotes the set of strictly increasing sequences of members of x {\displaystyle x} , or equivalently the family of subsets of x {\displaystyle x} with order type ω {\displaystyle \omega } , using a standard notation for the family of subsets with a given order type. Jónsson functions are named for Bjarni Jónsson.
wikipedia
jónsson function
Erdős and Hajnal (1966) showed that for every ordinal λ there is an ω-Jónsson function for λ. Kunen's proof of Kunen's inconsistency theorem uses a Jónsson function for cardinals λ such that 2λ = λℵ0, and Kunen observed that for this special case there is a simpler proof of the existence of Jónsson functions. Galvin and Prikry (1976) gave a simple proof for the general case. The existence of Jónsson functions shows that for any cardinal there is an algebra with an infinitary operation that has no proper subalgebras of the same cardinality. In particular if infinitary operations are allowed then an analogue of Jónsson algebras exists in any cardinality, so there are no infinitary analogues of Jónsson cardinals.
wikipedia
illness narrative
In mathematical sociology, the theory of comparative narratives was devised in order to describe and compare the structures (expressed as "and" in a directed graph where multiple causal links incident into a node are conjoined) of action-driven sequential events. Narratives so conceived comprise the following ingredients: a finite set of state descriptions of the world S, the components of which are weakly ordered in time; a finite set of actors/agents (individual or collective), P; a finite set of actions A; and a mapping of P onto A. The structure (directed graph) is generated by letting the nodes stand for the states and the directed edges represent how the states are changed by specified actions. The action skeleton can then be abstracted, comprising a further digraph where the actions are depicted as nodes and edges take the form "action a co-determined (in context of other actions) action b". Narratives can be both abstracted and generalised by imposing an algebra upon their structures and thence defining homomorphisms between the algebras. The insertion of action-driven causal links in a narrative can be achieved using the method of Bayesian narratives.
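A minimal sketch of the construction described above — a state digraph with action-labelled edges, then the abstracted action skeleton — using a made-up example narrative (the states and action names are purely illustrative):

```python
# Edges of the narrative digraph: (state_before, action, state_after)
narrative = [
    ("s0", "threat", "s1"),
    ("s1", "mobilize", "s2"),
    ("s1", "negotiate", "s3"),
    ("s2", "attack", "s4"),
]

# Action skeleton: action a -> action b whenever a leads into a state
# from which b departs ("action a co-determined action b" in context)
skeleton = {(a1, a2)
            for (_, a1, mid1) in narrative
            for (src2, a2, _) in narrative
            if mid1 == src2}

print(sorted(skeleton))
# [('mobilize', 'attack'), ('threat', 'mobilize'), ('threat', 'negotiate')]
```

The skeleton is essentially the line digraph of the state graph with edges relabelled by their actions, which is what allows narratives to be compared at the level of actions alone.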
wikipedia