A suitable generalization of the first definition is: Let $D$ be a subset of $\mathbb{R}^{n}$.
|
https://en.wikipedia.org/wiki/Computable_real_function
|
A function $f\colon D\to \mathbb{R}$ is sequentially computable if, for every $n$-tuple $\left(\{x_{i1}\}_{i=1}^{\infty},\ldots,\{x_{in}\}_{i=1}^{\infty}\right)$ of computable sequences of real numbers such that $(x_{i1},\ldots,x_{in})\in D$ for all $i$, the sequence $\{f(x_{i1},\ldots,x_{in})\}_{i=1}^{\infty}$ is also computable. This article incorporates material from Computable real function on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
|
https://en.wikipedia.org/wiki/Computable_real_function
|
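As a concrete sketch of the definition above, a computable real can be modeled as a function from a precision parameter $n$ to a rational within $2^{-n}$ of the number, and a function on reals is computed by querying its argument at a sufficiently high precision. The representation below (`sqrt2`, `apply_affine`) is an illustrative assumption, not the article's formal apparatus.

```python
from fractions import Fraction

# Illustration only: a computable real is modeled as a function approx(n)
# returning a rational within 2**-n of the real number.

def sqrt2(n):
    """Rational approximation of sqrt(2) to within 2**-n, by bisection."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo

def apply_affine(a, b, x):
    """Compute a*x + b as a computable real: to meet target precision n,
    query x at a precision high enough that the scaled error stays below 2**-n."""
    def approx(n):
        m = n + abs(a).bit_length() + 1   # |a| * 2**-m <= 2**-n, with padding
        return Fraction(a) * x(m) + Fraction(b)
    return approx

f_of_sqrt2 = apply_affine(2, 1, sqrt2)    # the computable real 2*sqrt(2) + 1
val = f_of_sqrt2(20)                      # rational within about 2**-20
```

The same idea extends pointwise to computable sequences of reals, which is what sequential computability asks for.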
In mathematical logic, specifically in the discipline of model theory, the Fraïssé limit (also called the Fraïssé construction or Fraïssé amalgamation) is a method used to construct (infinite) mathematical structures from their (finite) substructures. It is a special example of the more general concept of a direct limit in a category. The technique was developed in the 1950s by its namesake, French logician Roland Fraïssé. The main point of Fraïssé's construction is to show how one can approximate a (countable) structure by its finitely generated substructures.
|
https://en.wikipedia.org/wiki/Fraïssé_limit
|
Given a class $\mathbf{K}$ of finite relational structures, if $\mathbf{K}$ satisfies certain properties (described below), then there exists a unique countable structure $\operatorname{Flim}(\mathbf{K})$, called the Fraïssé limit of $\mathbf{K}$, which contains all the elements of $\mathbf{K}$ as substructures. The general study of Fraïssé limits and related notions is sometimes called Fraïssé theory. This field has seen wide applications to other parts of mathematics, including topological dynamics, functional analysis, and Ramsey theory.
|
https://en.wikipedia.org/wiki/Fraïssé_limit
|
In mathematical logic, stratification is any consistent assignment of numbers to predicate symbols guaranteeing that a unique formal interpretation of a logical theory exists. Specifically, a set of clauses of the form $Q_{1}\wedge \dots \wedge Q_{n}\wedge \neg Q_{n+1}\wedge \dots \wedge \neg Q_{n+m}\rightarrow P$ is stratified if and only if there is a stratification assignment S that fulfills the following conditions: if a predicate P is positively derived from a predicate Q (i.e., P is the head of a rule, and Q occurs positively in the body of the same rule), then the stratification number of P must be greater than or equal to the stratification number of Q, in short $S(P)\geq S(Q)$; if a predicate P is derived from a negated predicate Q (i.e., P is the head of a rule, and Q occurs negatively in the body of the same rule), then the stratification number of P must be strictly greater than the stratification number of Q, in short $S(P)>S(Q)$. The notion of stratified negation leads to a very effective operational semantics for stratified programs in terms of the stratified least fixpoint, obtained by iteratively applying the fixpoint operator to each stratum of the program, from the lowest one up. Stratification is not only useful for guaranteeing unique interpretation of Horn clause theories.
|
https://en.wikipedia.org/wiki/Stratified_formula
|
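The two conditions above can be checked mechanically. A minimal sketch, assuming a hypothetical rule representation `(head, positive_body, negative_body)`: iterate the constraints to a fixpoint, and report failure if a stratum number exceeds the number of predicates (which signals a cycle through negation).

```python
# Each rule is (head, positive_body, negative_body). A stratification S must
# satisfy S(head) >= S(q) for positive q and S(head) > S(q) for negated q.

def stratify(rules):
    preds = {p for h, pos, neg in rules for p in [h, *pos, *neg]}
    S = {p: 0 for p in preds}
    changed = True
    while changed:
        changed = False
        for head, pos, neg in rules:
            need = max([S[q] for q in pos] + [S[q] + 1 for q in neg] + [0])
            if S[head] < need:
                S[head] = need
                changed = True
                if S[head] > len(preds):
                    return None          # negation cycle: not stratifiable
    return S

# p :- q, not r.    r :- s.      -> stratifiable (p sits one stratum above r)
ok = stratify([("p", ["q"], ["r"]), ("r", ["s"], [])])
# p :- not q.       q :- not p.  -> not stratifiable
bad = stratify([("p", [], ["q"]), ("q", [], ["p"])])
```

Evaluating the program stratum by stratum, lowest first, then realizes the stratified least fixpoint semantics mentioned above.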
In mathematical logic, structural proof theory is the subdiscipline of proof theory that studies proof calculi that support a notion of analytic proof, a kind of proof whose semantic properties are exposed. When all the theorems of a logic formalised in a structural proof theory have analytic proofs, then the proof theory can be used to demonstrate such things as consistency, provide decision procedures, and allow mathematical or computational witnesses to be extracted as counterparts to theorems, the kind of task that is more often given to model theory.
|
https://en.wikipedia.org/wiki/Display_logic
|
In mathematical logic, such concepts as primitive recursive functions and μ-recursive functions represent integer-valued functions of several natural variables or, in other words, functions on $\mathbb{N}^{n}$. Gödel numbering, defined on well-formed formulae of some formal language, is a natural-valued function. Computability theory is essentially based on natural numbers and natural (or integer) functions on them.
|
https://en.wikipedia.org/wiki/Integer-valued_function
|
In mathematical logic, the Barwise compactness theorem, named after Jon Barwise, is a generalization of the usual compactness theorem for first-order logic to a certain class of infinitary languages. It was stated and proved by Barwise in 1967.
|
https://en.wikipedia.org/wiki/Barwise_compactness_theorem
|
In mathematical logic, the Borel hierarchy is a stratification of the Borel algebra generated by the open subsets of a Polish space; elements of this algebra are called Borel sets. Each Borel set is assigned a unique countable ordinal number called the rank of the Borel set. The Borel hierarchy is of particular interest in descriptive set theory. One common use of the Borel hierarchy is to prove facts about the Borel sets using transfinite induction on rank. Properties of sets of small finite ranks are important in measure theory and analysis.
|
https://en.wikipedia.org/wiki/Borel_hierarchy
|
In mathematical logic, the Brouwer–Heyting–Kolmogorov interpretation, or BHK interpretation, of intuitionistic logic was proposed by L. E. J. Brouwer and Arend Heyting, and independently by Andrey Kolmogorov. It is also sometimes called the realizability interpretation, because of the connection with the realizability theory of Stephen Kleene. It is the standard explanation of intuitionistic logic.
|
https://en.wikipedia.org/wiki/Brouwer–Heyting–Kolmogorov_interpretation
|
In mathematical logic, the Cantor–Dedekind axiom is the thesis that the real numbers are order-isomorphic to the linear continuum of geometry. In other words, the axiom states that there is a one-to-one correspondence between real numbers and points on a line. This axiom became a theorem proved by Emil Artin in his book Geometric Algebra. More precisely, Euclidean spaces defined over the field of real numbers satisfy the axioms of Euclidean geometry, and, from the axioms of Euclidean geometry, one can construct a field that is isomorphic to the real numbers.
|
https://en.wikipedia.org/wiki/Cantor–Dedekind_axiom
|
Analytic geometry was developed from the Cartesian coordinate system introduced by René Descartes. It implicitly assumed this axiom by blending the distinct concepts of real numbers and points on a line, sometimes referred to as the real number line. Artin's proof not only makes this blend explicit, but also shows that analytic geometry is strictly equivalent to traditional synthetic geometry, in the sense that exactly the same theorems can be proved in the two frameworks. Another consequence is that Alfred Tarski's proof of the decidability of first-order theories of the real numbers can be seen as an algorithm to solve any first-order problem in Euclidean geometry.
|
https://en.wikipedia.org/wiki/Cantor–Dedekind_axiom
|
In mathematical logic, the De Bruijn index is a tool invented by the Dutch mathematician Nicolaas Govert de Bruijn for representing terms of lambda calculus without naming the bound variables. Terms written using these indices are invariant with respect to α-conversion, so the check for α-equivalence is the same as that for syntactic equality. Each De Bruijn index is a natural number that represents an occurrence of a variable in a λ-term, and denotes the number of binders that are in scope between that occurrence and its corresponding binder. The following are some examples: The term λx.
|
https://en.wikipedia.org/wiki/De_Bruijn_index
|
λy. x, sometimes called the K combinator, is written as λ λ 2 with De Bruijn indices. The binder for the occurrence x is the second λ in scope.
|
https://en.wikipedia.org/wiki/De_Bruijn_index
|
The term λx. λy. λz.
|
https://en.wikipedia.org/wiki/De_Bruijn_index
|
x z (y z) (the S combinator), with De Bruijn indices, is λ λ λ 3 1 (2 1). The term λz. (λy.
|
https://en.wikipedia.org/wiki/De_Bruijn_index
|
y (λx. x)) (λx. z x) is λ (λ 1 (λ 1)) (λ 2 1). De Bruijn indices are commonly used in higher-order reasoning systems such as automated theorem provers and logic programming systems.
|
https://en.wikipedia.org/wiki/De_Bruijn_index
|
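The K and S examples above can be reproduced mechanically. A minimal sketch, using a hypothetical nested-tuple term representation: walk the term carrying the list of binders in scope, and replace each variable occurrence by its 1-based distance to its binder.

```python
# Named terms: ("lam", var, body), ("app", f, a), or a variable name string.
# De Bruijn form: ("lam", body), ("app", f, a), or an index (1 = nearest binder).

def to_debruijn(term, env=()):
    if isinstance(term, str):
        return env.index(term) + 1            # 1-based distance to the binder
    tag = term[0]
    if tag == "lam":
        _, v, body = term
        return ("lam", to_debruijn(body, (v,) + env))
    _, f, a = term
    return ("app", to_debruijn(f, env), to_debruijn(a, env))

K = ("lam", "x", ("lam", "y", "x"))                        # λx.λy.x
S = ("lam", "x", ("lam", "y", ("lam", "z",
        ("app", ("app", "x", "z"), ("app", "y", "z")))))   # λx.λy.λz.x z (y z)

K_db = to_debruijn(K)   # λ λ 2
S_db = to_debruijn(S)   # λ λ λ (3 1) (2 1)
```

Because the output contains no variable names at all, α-equivalence of closed terms reduces to plain equality of these trees, as the article notes.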
In mathematical logic, the De Bruijn notation is a syntax for terms in the λ calculus invented by the Dutch mathematician Nicolaas Govert de Bruijn. It can be seen as a reversal of the usual syntax for the λ calculus where the argument in an application is placed next to its corresponding binder in the function instead of after the latter's body.
|
https://en.wikipedia.org/wiki/De_Bruijn_notation
|
In mathematical logic, the Friedberg–Muchnik theorem is a theorem about Turing reductions that was proven independently by Albert Muchnik and Richard Friedberg in the middle of the 1950s. It is a more general view of the Kleene–Post theorem. The Kleene–Post theorem states that there exist incomparable languages A and B below K. The Friedberg–Muchnik theorem states that there exist incomparable, computably enumerable languages A and B, where incomparable means that there is no Turing reduction from A to B and no Turing reduction from B to A. It is notable for its use of the finite-injury priority method.
|
https://en.wikipedia.org/wiki/Friedberg–Muchnik_theorem
|
In mathematical logic, the Friedman translation is a certain transformation of intuitionistic formulas. Among other things it can be used to show that the Π02-theorems of various first-order theories of classical mathematics are also theorems of intuitionistic mathematics. It is named after its discoverer, Harvey Friedman.
|
https://en.wikipedia.org/wiki/Friedman_translation
|
In mathematical logic, the Hilbert–Bernays provability conditions, named after David Hilbert and Paul Bernays, are a set of requirements for formalized provability predicates in formal theories of arithmetic (Smith 2007:224). These conditions are used in many proofs of Kurt Gödel's second incompleteness theorem. They are also closely related to axioms of provability logic.
|
https://en.wikipedia.org/wiki/Hilbert–Bernays_provability_conditions
|
In mathematical logic, the Kanamori–McAloon theorem, due to Kanamori & McAloon (1987), gives an example of an incompleteness in Peano arithmetic, similar to that of the Paris–Harrington theorem. They showed that a certain finitistic theorem in Ramsey theory is not provable in Peano arithmetic (PA).
|
https://en.wikipedia.org/wiki/Kanamori–McAloon_theorem
|
In mathematical logic, the Lindenbaum–Tarski algebra (or Lindenbaum algebra) of a logical theory T consists of the equivalence classes of sentences of the theory (i.e., the quotient, under the equivalence relation ~ defined such that p ~ q exactly when p and q are provably equivalent in T). That is, two sentences are equivalent if the theory T proves that each implies the other. The Lindenbaum–Tarski algebra is thus the quotient algebra obtained by factoring the algebra of formulas by this congruence relation.
|
https://en.wikipedia.org/wiki/Lindenbaum_algebra
|
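In the classical propositional case, provable equivalence coincides with having the same truth table, so the equivalence classes that make up the Lindenbaum–Tarski algebra can be computed directly. A minimal sketch, with formulas encoded as Python boolean functions (an illustrative choice, not a standard encoding):

```python
from itertools import product

# In classical propositional logic, p ~ q iff p and q agree on every valuation,
# so each element of the Lindenbaum-Tarski algebra is identified by a truth table.

VARS = ("a", "b")

def table(formula):
    """Truth table of a two-variable formula given as a boolean function."""
    return tuple(formula(*vals) for vals in product([False, True], repeat=len(VARS)))

f1 = lambda a, b: a or b
f2 = lambda a, b: b or a                    # provably equivalent to f1
f3 = lambda a, b: not (not a and not b)     # De Morgan: also equivalent to f1
f4 = lambda a, b: a and b                   # a different equivalence class

same_class = table(f1) == table(f2) == table(f3)
diff_class = table(f1) != table(f4)
```

The Boolean operations on the quotient (join, meet, complement) are then induced by ∨, ∧, ¬ on representatives, which is what makes the quotient a Boolean algebra.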
The algebra is named for logicians Adolf Lindenbaum and Alfred Tarski. Starting in the academic year 1926–1927, Lindenbaum pioneered his method in Jan Łukasiewicz's mathematical logic seminar, and the method was popularized and generalized in subsequent decades through the work of Tarski. The Lindenbaum–Tarski algebra is considered the origin of modern algebraic logic.
|
https://en.wikipedia.org/wiki/Lindenbaum_algebra
|
In mathematical logic, the Löwenheim–Skolem theorem is a theorem on the existence and cardinality of models, named after Leopold Löwenheim and Thoralf Skolem. The precise formulation is given below. It implies that if a countable first-order theory has an infinite model, then for every infinite cardinal number κ it has a model of size κ, and that no first-order theory with an infinite model can have a unique model up to isomorphism.
|
https://en.wikipedia.org/wiki/Löwenheim–Skolem_theorem
|
As a consequence, first-order theories are unable to control the cardinality of their infinite models. The (downward) Löwenheim–Skolem theorem is one of the two key properties, along with the compactness theorem, that are used in Lindström's theorem to characterize first-order logic. In general, the Löwenheim–Skolem theorem does not hold in stronger logics such as second-order logic.
|
https://en.wikipedia.org/wiki/Löwenheim–Skolem_theorem
|
In mathematical logic, the Mostowski collapse lemma, also known as the Shepherdson–Mostowski collapse, is a theorem of set theory introduced by Andrzej Mostowski (1949, theorem 3) and John Shepherdson (1953).
|
https://en.wikipedia.org/wiki/Mostowski_collapse
|
In mathematical logic, the Paris–Harrington theorem states that a certain combinatorial principle in Ramsey theory, namely the strengthened finite Ramsey theorem, which is expressible in Peano arithmetic, is not provable in this system. The combinatorial principle is however provable in slightly stronger systems. This result has been described by some (such as the editor of the Handbook of Mathematical Logic in the references below) as the first "natural" example of a true statement about the integers that could be stated in the language of arithmetic, but not proved in Peano arithmetic; it was already known that such statements existed by Gödel's first incompleteness theorem.
|
https://en.wikipedia.org/wiki/Paris–Harrington_theorem
|
In mathematical logic, the Peano axioms, also known as the Dedekind–Peano axioms or the Peano postulates, are axioms for the natural numbers presented by the 19th-century Italian mathematician Giuseppe Peano. These axioms have been used nearly unchanged in a number of metamathematical investigations, including research into fundamental questions of whether number theory is consistent and complete. The need to formalize arithmetic was not well appreciated until the work of Hermann Grassmann, who showed in the 1860s that many facts in arithmetic could be derived from more basic facts about the successor operation and induction.
|
https://en.wikipedia.org/wiki/Peano_axiom
|
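Grassmann's observation, that arithmetic facts follow from the successor operation plus recursion, can be sketched directly: below, addition and multiplication are defined only via successor and primitive recursion, mirroring the recursion equations $m + S(n) = S(m + n)$ and $m \cdot S(n) = m \cdot n + m$.

```python
# Natural-number arithmetic built from successor alone, Grassmann-style.

def succ(n):
    return n + 1

def add(m, n):
    # m + 0 = m ;  m + S(n) = S(m + n)
    return m if n == 0 else succ(add(m, n - 1))

def mul(m, n):
    # m * 0 = 0 ;  m * S(n) = m*n + m
    return 0 if n == 0 else add(mul(m, n - 1), m)
```

First-order Peano arithmetic adds exactly these operation symbols with their recursion equations as axioms, in place of defining them inside the logic.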
In 1881, Charles Sanders Peirce provided an axiomatization of natural-number arithmetic. In 1888, Richard Dedekind proposed another axiomatization of natural-number arithmetic, and in 1889, Peano published a simplified version of them as a collection of axioms in his book The principles of arithmetic presented by a new method (Latin: Arithmetices principia, nova methodo exposita). The nine Peano axioms contain three types of statements.
|
https://en.wikipedia.org/wiki/Peano_axiom
|
The first axiom asserts the existence of at least one member of the set of natural numbers. The next four are general statements about equality; in modern treatments these are often not taken as part of the Peano axioms, but rather as axioms of the "underlying logic". The next three axioms are first-order statements about natural numbers expressing the fundamental properties of the successor operation. The ninth, final axiom is a second-order statement of the principle of mathematical induction over the natural numbers, which makes this formulation close to second-order arithmetic. A weaker first-order system called Peano arithmetic is obtained by explicitly adding the addition and multiplication operation symbols and replacing the second-order induction axiom with a first-order axiom schema.
|
https://en.wikipedia.org/wiki/Peano_axiom
|
In mathematical logic, the Scott–Curry theorem is a result in lambda calculus stating that if two non-empty sets of lambda terms A and B are closed under beta-convertibility then they are recursively inseparable.
|
https://en.wikipedia.org/wiki/Scott–Curry_theorem
|
In mathematical logic, the ancestral relation (often shortened to ancestral) of a binary relation R is its transitive closure, though defined in a different way (see below). Ancestral relations make their first appearance in Frege's Begriffsschrift. Frege later employed them in his Grundgesetze as part of his definition of the finite cardinals. Hence the ancestral was a key part of his search for a logicist foundation of arithmetic.
|
https://en.wikipedia.org/wiki/Ancestral_relation
|
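For a finite relation the ancestral can be computed as an ordinary transitive closure. A minimal sketch, with the relation as a set of ordered pairs and the motivating parent/ancestor reading:

```python
# The ancestral (transitive closure) of a finite binary relation R, computed
# by repeatedly composing the relation with itself until nothing new appears.

def ancestral(R):
    closure = set(R)
    while True:
        extra = {(a, c) for a, b in closure for b2, c in closure if b == b2}
        if extra <= closure:
            return closure
        closure |= extra

parent = {("alice", "bob"), ("bob", "carol"), ("carol", "dan")}
ancestor = ancestral(parent)   # adds (alice, carol), (bob, dan), (alice, dan)
```

Frege's own definition instead quantifies over all hereditary properties; on finite relations the two characterizations agree.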
In mathematical logic, the arithmetical hierarchy, arithmetic hierarchy or Kleene–Mostowski hierarchy (after mathematicians Stephen Cole Kleene and Andrzej Mostowski) classifies certain sets based on the complexity of formulas that define them. Any set that receives a classification is called arithmetical. The arithmetical hierarchy was invented independently by Kleene (1943) and Mostowski (1946). The arithmetical hierarchy is important in computability theory, effective descriptive set theory, and the study of formal theories such as Peano arithmetic. The Tarski–Kuratowski algorithm provides an easy way to get an upper bound on the classifications assigned to a formula and the set it defines. The hyperarithmetical hierarchy and the analytical hierarchy extend the arithmetical hierarchy to classify additional formulas and sets.
|
https://en.wikipedia.org/wiki/Arithmetical_reducibility
|
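The Tarski–Kuratowski upper bound for a prenex formula comes from counting alternating blocks of unbounded quantifiers. A minimal sketch, assuming the quantifier prefix is already given as a string of 'E' (exists) and 'A' (forall) over a $\Delta_0$ matrix:

```python
# Classify the prenex quantifier prefix of an arithmetic formula: count
# alternation blocks for the Sigma_n / Pi_n upper bound; an empty prefix
# (only bounded quantifiers) is Delta_0.

def classify(prefix):
    if not prefix:
        return ("Delta", 0)
    blocks = 1
    for prev, cur in zip(prefix, prefix[1:]):
        if cur != prev:
            blocks += 1
    return ("Sigma" if prefix[0] == "E" else "Pi", blocks)

cls = classify("EA")   # exists-forall prefix: Sigma_2
```

The bound is only an upper bound: a $\Sigma_2$-shaped definition may still define a set that is, say, computable.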
In mathematical logic, the compactness theorem states that a set of first-order sentences has a model if and only if every finite subset of it has a model. This theorem is an important tool in model theory, as it provides a useful (but generally not effective) method for constructing models of any set of sentences that is finitely consistent. The compactness theorem for the propositional calculus is a consequence of Tychonoff's theorem (which says that the product of compact spaces is compact) applied to compact Stone spaces, hence the theorem's name.
|
https://en.wikipedia.org/wiki/Compactness_theorem
|
Likewise, it is analogous to the finite intersection property characterization of compactness in topological spaces: a collection of closed sets in a compact space has a non-empty intersection if every finite subcollection has a non-empty intersection. The compactness theorem is one of the two key properties, along with the downward Löwenheim–Skolem theorem, that is used in Lindström's theorem to characterize first-order logic. Although there are some generalizations of the compactness theorem to non-first-order logics, the compactness theorem itself does not hold in them, except for a very limited number of examples.
|
https://en.wikipedia.org/wiki/Compactness_theorem
|
In mathematical logic, the cut rule is an inference rule of sequent calculus. It is a generalisation of the classical modus ponens inference rule. Its meaning is that, if a formula A appears as a conclusion in one proof and as a hypothesis in another, then another proof in which the formula A does not appear can be deduced. In the particular case of modus ponens, for example, occurrences of man are eliminated from Every man is mortal and Socrates is a man to deduce Socrates is mortal.
|
https://en.wikipedia.org/wiki/Cut_rule
|
In mathematical logic, the diagonal lemma (also known as diagonalization lemma, self-reference lemma or fixed point theorem) establishes the existence of self-referential sentences in certain formal theories of the natural numbers—specifically those theories that are strong enough to represent all computable functions. The sentences whose existence is secured by the diagonal lemma can then, in turn, be used to prove fundamental limitative results such as Gödel's incompleteness theorems and Tarski's undefinability theorem.
|
https://en.wikipedia.org/wiki/Diagonal_lemma
|
In mathematical logic, the disjunction and existence properties are the "hallmarks" of constructive theories such as Heyting arithmetic and constructive set theories (Rathjen 2005).
|
https://en.wikipedia.org/wiki/Disjunction_property
|
In mathematical logic, the implicational propositional calculus is a version of classical propositional calculus which uses only one connective, called implication or conditional. In formulas, this binary operation is indicated by "implies", "if ..., then ...", "$\rightarrow$", etc.
|
https://en.wikipedia.org/wiki/Implicational_propositional_calculus
|
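Despite having only one connective, the implicational calculus proves strong classical principles such as Peirce's law, $((p \to q) \to p) \to p$. A small truth-table check (a semantic illustration, not a derivation in the calculus itself):

```python
from itertools import product

def implies(a, b):
    # Classical material implication as a boolean operation.
    return (not a) or b

def peirce(p, q):
    # ((p -> q) -> p) -> p
    return implies(implies(implies(p, q), p), p)

peirce_is_tautology = all(peirce(p, q)
                          for p, q in product([False, True], repeat=2))
```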
In mathematical logic, the intersection type discipline is a branch of type theory encompassing type systems that use the intersection type constructor $(\cap)$ to assign multiple types to a single term. In particular, if a term $M$ can be assigned both the type $\varphi_{1}$ and the type $\varphi_{2}$, then $M$ can be assigned the intersection type $\varphi_{1}\cap \varphi_{2}$ (and vice versa). Therefore, the intersection type constructor can be used to express finite heterogeneous ad hoc polymorphism (as opposed to parametric polymorphism). For example, the λ-term $\lambda x.$
|
https://en.wikipedia.org/wiki/Intersection_type_discipline
|
$(x\,x)$ can be assigned the type $((\alpha \to \beta )\cap \alpha )\to \beta$ in most intersection type systems, assuming for the term variable $x$ both the function type $\alpha \to \beta$ and the corresponding argument type $\alpha$.
|
https://en.wikipedia.org/wiki/Intersection_type_discipline
|
Prominent intersection type systems include the Coppo–Dezani type assignment system, the Barendregt–Coppo–Dezani type assignment system, and the essential intersection type assignment system. Most strikingly, intersection type systems are closely related to (and often exactly characterize) normalization properties of λ-terms under β-reduction. In programming languages, such as TypeScript and Scala, intersection types are used to express ad hoc polymorphism.
|
https://en.wikipedia.org/wiki/Intersection_type_discipline
|
In mathematical logic, the primitive recursive functionals are a generalization of primitive recursive functions into higher type theory. They consist of a collection of functions in all pure finite types. The primitive recursive functionals are important in proof theory and constructive mathematics. They are a central part of the Dialectica interpretation of intuitionistic arithmetic developed by Kurt Gödel. In recursion theory, the primitive recursive functionals are an example of higher-type computability, as primitive recursive functions are examples of Turing computability.
|
https://en.wikipedia.org/wiki/Primitive_recursive_functional
|
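The core primitive of Gödel's system T is a recursor that builds functionals by iteration. A minimal sketch at the lowest type, where the step function is itself a functional argument (the Python names are illustrative):

```python
# Goedel's T-style recursor: R(base, step, 0) = base,
# R(base, step, n+1) = step(n, R(base, step, n)).

def recursor(base, step, n):
    acc = base
    for i in range(n):
        acc = step(i, acc)
    return acc

# Factorial defined as a primitive recursive functional: the step is a
# function value, which is what "higher type" buys over plain recursion.
fact = lambda n: recursor(1, lambda i, acc: (i + 1) * acc, n)
```

Allowing `base` and `step` to themselves be functionals of higher type gives the full hierarchy used in the Dialectica interpretation.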
In mathematical logic, the quantifier rank of a formula is the depth of nesting of its quantifiers. It plays an essential role in model theory. Notice that the quantifier rank is a property of the formula itself (i.e. the expression in a language). Thus two logically equivalent formulae can have different quantifier ranks, when they express the same thing in different ways.
|
https://en.wikipedia.org/wiki/Quantifier_rank
|
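Quantifier rank is a straightforward structural recursion on the formula. A minimal sketch over a hypothetical tuple-based AST:

```python
# Quantifier rank: depth of quantifier nesting in a formula AST.
# Formulas: ("forall"/"exists", var, body), ("and"/"or", l, r),
# ("not", f), or an atomic formula as a plain string.

def qrank(f):
    if isinstance(f, str):
        return 0
    tag = f[0]
    if tag in ("forall", "exists"):
        return 1 + qrank(f[2])
    if tag == "not":
        return qrank(f[1])
    return max(qrank(f[1]), qrank(f[2]))

# forall x (exists y P(x,y)  and  Q(x)) has quantifier rank 2
phi = ("forall", "x", ("and", ("exists", "y", "P(x,y)"), "Q(x)"))
```

This makes the point in the text concrete: rank is syntactic, so logically equivalent formulas (e.g. pulling quantifiers into prenex form) can have different ranks.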
In mathematical logic, the rules of passage govern how quantifiers distribute over the basic logical connectives of first-order logic. The rules of passage govern the "passage" (translation) from any formula of first-order logic to the equivalent formula in prenex normal form, and vice versa.
|
https://en.wikipedia.org/wiki/Rules_of_passage_(logic)
|
In mathematical logic, the spectrum of a sentence is the set of natural numbers occurring as the size of a finite model in which a given sentence is true. By a result in descriptive complexity, a set of natural numbers is a spectrum if and only if it can be recognized in non-deterministic exponential time.
|
https://en.wikipedia.org/wiki/Spectrum_of_a_sentence
|
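A small worked example of a spectrum: the sentence saying "f is a fixed-point-free involution" ($\forall x\, (f(f(x)) = x \wedge f(x) \neq x)$) has a model of size $n$ exactly when $n$ is even, so its spectrum is the even numbers. Since such an $f$ must be a bijection, brute force over permutations suffices for small sizes:

```python
from itertools import permutations

# A finite model of size n is a function f on {0,...,n-1}; the sentence
# forces f to be a fixed-point-free involution, which exists iff n is even.

def has_model(n):
    return any(all(p[p[x]] == x and p[x] != x for x in range(n))
               for p in permutations(range(n)))

spectrum = [n for n in range(1, 7) if has_model(n)]   # the even sizes
```

The even numbers are certainly recognizable in (non-deterministic) exponential time, consistent with the Jones–Selman characterization mentioned above.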
In mathematical logic, the theory of infinite sets was first developed by Georg Cantor. Although this work has become a thoroughly standard fixture of classical set theory, it has been criticized in several areas by mathematicians and philosophers. Cantor's theorem implies that there are sets having cardinality greater than the infinite cardinality of the set of natural numbers. Cantor's argument for this theorem is presented with one small change.
|
https://en.wikipedia.org/wiki/Controversy_over_Cantor's_theory
|
This argument can be improved by using a definition he gave later. The resulting argument uses only five axioms of set theory.
|
https://en.wikipedia.org/wiki/Controversy_over_Cantor's_theory
|
Cantor's set theory was controversial at the start, but later became largely accepted. Most modern mathematics textbooks implicitly use Cantor's views on mathematical infinity. For example, a line is generally presented as the infinite set of its points, and it is commonly taught that there are more real numbers than rational numbers (see cardinality of the continuum).
|
https://en.wikipedia.org/wiki/Controversy_over_Cantor's_theory
|
In mathematical logic, there are several formal systems of "fuzzy logic", most of which are in the family of t-norm fuzzy logics.
|
https://en.wikipedia.org/wiki/Fuzzy_logics
|
In mathematical logic, true arithmetic is the set of all true first-order statements about the arithmetic of natural numbers. This is the theory associated with the standard model of the Peano axioms in the language of the first-order Peano axioms. True arithmetic is occasionally called Skolem arithmetic, though this term usually refers to the different theory of natural numbers with multiplication.
|
https://en.wikipedia.org/wiki/True_arithmetic
|
In mathematical logic, two theories are equiconsistent if the consistency of one theory implies the consistency of the other theory, and vice versa. In this case, they are, roughly speaking, "as consistent as each other". In general, it is not possible to prove the absolute consistency of a theory T. Instead we usually take a theory S, believed to be consistent, and try to prove the weaker statement that if S is consistent then T must also be consistent—if we can do this we say that T is consistent relative to S. If S is also consistent relative to T then we say that S and T are equiconsistent.
|
https://en.wikipedia.org/wiki/Consistency_strength
|
In mathematical logic, various sublanguages of set theory are decidable. These include: sets with monotone, additive, and multiplicative functions; and sets with restricted quantifiers.
|
https://en.wikipedia.org/wiki/Decidable_sublanguages_of_set_theory
|
In mathematical logic, weak interpretability is a notion of translation of logical theories, introduced together with interpretability by Alfred Tarski in 1953. Let T and S be formal theories. Slightly simplified, T is said to be weakly interpretable in S if, and only if, the language of T can be translated into the language of S in such a way that the translation of every theorem of T is consistent with S. Of course, there are some natural conditions on admissible translations here, such as the necessity for a translation to preserve the logical structure of formulas. A generalization of weak interpretability, tolerance, was introduced by Giorgi Japaridze in 1992.
|
https://en.wikipedia.org/wiki/Weak_interpretability
|
In mathematical measure theory, for every positive integer n the ham sandwich theorem states that given n measurable "objects" in n-dimensional Euclidean space, it is possible to divide each one of them in half (with respect to their measure, e.g. volume) with a single (n − 1)-dimensional hyperplane. This is even possible if the objects overlap. It was proposed by Hugo Steinhaus and proved by Stefan Banach (explicitly in dimension 3, without taking the trouble to state the theorem in the n-dimensional case), and also years later called the Stone–Tukey theorem after Arthur H. Stone and John Tukey.
|
https://en.wikipedia.org/wiki/Ham_sandwich_theorem
|
In mathematical modeling of social networks, link-centric preferential attachment is a node's propensity to re-establish links to nodes it has previously been in contact with in time-varying networks. This preferential attachment model relies on nodes keeping memory of previous neighbors up to the current time.
|
https://en.wikipedia.org/wiki/Link-centric_preferential_attachment
|
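A toy simulation of such memory-driven contacts can make the idea concrete. The specific rule below, that a node with $n$ remembered partners re-contacts one of them with probability $n/(n+c)$ and otherwise meets a new node, is one commonly used functional form, assumed here purely for illustration:

```python
import random

# Hypothetical sketch: at each step the focal node either returns to a
# remembered partner (with probability n / (n + c), n = partners seen so far)
# or contacts a brand-new node, labeled sequentially.

def simulate_contacts(steps, c=1.0, seed=42):
    rng = random.Random(seed)
    known = []          # previously contacted nodes
    events = []         # contact sequence
    for _ in range(steps):
        n = len(known)
        if known and rng.random() < n / (n + c):
            partner = rng.choice(known)    # re-establish an old link
        else:
            partner = len(known)           # a never-seen node
            known.append(partner)
        events.append(partner)
    return events, len(known)

events, distinct = simulate_contacts(1000)
```

Under this rule the number of distinct partners grows much more slowly than the number of contacts, which is the memory effect the model is built to capture.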
In mathematical modeling, a guess value is more commonly called a starting value or initial value. These are necessary for most optimization problems which use search algorithms, because those algorithms are mainly deterministic and iterative, and they need to start somewhere. One common type of application is nonlinear regression.
|
https://en.wikipedia.org/wiki/Guess_value
|
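A minimal sketch of why starting values matter, using Newton's method (a deterministic, iterative root-finder of the kind the paragraph describes) on $x^3 - 2x - 5 = 0$: a guess near the root converges in a handful of steps, while a distant guess needs noticeably more iterations.

```python
# Newton's method: iterate x <- x - f(x)/f'(x) from an initial guess x0.

def newton(f, df, x0, tol=1e-12, max_iter=100):
    x = x0
    for i in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x, i + 1
    return x, max_iter

f  = lambda x: x ** 3 - 2 * x - 5
df = lambda x: 3 * x ** 2 - 2

root_good, iters_good = newton(f, df, x0=2.0)    # good starting value
root_poor, iters_poor = newton(f, df, x0=10.0)   # poor starting value
```

Both runs reach the same root (near 2.0946), but the poor guess pays for it in extra iterations; for less well-behaved objectives a bad guess can converge to the wrong solution or not at all.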
In mathematical modeling, deterministic simulations contain no random variables and no degree of randomness, and consist mostly of equations, for example difference equations. These simulations have known inputs and they result in a unique set of outputs. Contrast stochastic (probability) simulation, which includes random variables.
|
https://en.wikipedia.org/wiki/Deterministic_simulation
|
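The defining property, identical inputs always yield identical outputs, can be shown with a classic difference equation, the logistic map:

```python
# A deterministic simulation: the logistic difference equation
# x[t+1] = r * x[t] * (1 - x[t]) contains no random variables, so the
# trajectory is a pure function of the inputs (r, x0).

def logistic(r, x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

run1 = logistic(3.2, 0.5, 50)
run2 = logistic(3.2, 0.5, 50)   # same inputs -> bitwise identical trajectory
```

A stochastic version would add a random term to each step, and repeated runs would then differ unless the random seed were fixed.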
Deterministic simulation models are usually designed to capture some underlying mechanism or natural process. They are different from statistical models (for example linear regression) whose aim is to empirically estimate the relationships between variables. The deterministic model is viewed as a useful approximation of reality that is easier to build and interpret than a stochastic model. However, such models can be extremely complicated with large numbers of inputs and outputs, and therefore are often noninvertible; a fixed single set of outputs can be generated by multiple sets of inputs. Thus taking reliable account of parameter and model uncertainty is crucial, perhaps even more so than for standard statistical models, yet this is an area that has received little attention from statisticians.
|
https://en.wikipedia.org/wiki/Deterministic_simulation
|
In mathematical modeling, overfitting is "the production of an analysis that corresponds too closely or exactly to a particular set of data, and may therefore fail to fit to additional data or predict future observations reliably". An overfitted model is a mathematical model that contains more parameters than can be justified by the data. In a mathematical sense, these parameters represent the degree of a polynomial. The essence of overfitting is to have unknowingly extracted some of the residual variation (i.e., the noise) as if that variation represented underlying model structure.
|
https://en.wikipedia.org/wiki/Underfitting
|
Underfitting occurs when a mathematical model cannot adequately capture the underlying structure of the data. An under-fitted model is a model where some parameters or terms that would appear in a correctly specified model are missing.
|
https://en.wikipedia.org/wiki/Underfitting
|
Under-fitting would occur, for example, when fitting a linear model to non-linear data. Such a model will tend to have poor predictive performance. The possibility of over-fitting exists because the criterion used for selecting the model is not the same as the criterion used to judge the suitability of a model.
|
https://en.wikipedia.org/wiki/Underfitting
|
For example, a model might be selected by maximizing its performance on some set of training data, and yet its suitability might be determined by its ability to perform well on unseen data; then over-fitting occurs when a model begins to "memorize" training data rather than "learning" to generalize from a trend. As an extreme example, if the number of parameters is the same as or greater than the number of observations, then a model can perfectly predict the training data simply by memorizing the data in its entirety. (For an illustration, see Figure 2.)
|
https://en.wikipedia.org/wiki/Underfitting
|
Such a model, though, will typically fail severely when making predictions. The potential for overfitting depends not only on the number of parameters and data but also the conformability of the model structure with the data shape, and the magnitude of model error compared to the expected level of noise or error in the data. Even when the fitted model does not have an excessive number of parameters, it is to be expected that the fitted relationship will appear to perform less well on a new data set than on the data set used for fitting (a phenomenon sometimes known as shrinkage).
|
https://en.wikipedia.org/wiki/Underfitting
|
In particular, the value of the coefficient of determination will shrink relative to the original data. To lessen the chance or amount of overfitting, several techniques are available (e.g., model comparison, cross-validation, regularization, early stopping, pruning, Bayesian priors, or dropout). The basis of some techniques is either (1) to explicitly penalize overly complex models or (2) to test the model's ability to generalize by evaluating its performance on a set of data not used for training, which is assumed to approximate the typical unseen data that a model will encounter.
|
https://en.wikipedia.org/wiki/Underfitting
|
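The parameters-versus-observations point can be illustrated with a short sketch (made-up data; NumPy's `polyfit` stands in for a generic model fitter): a polynomial with as many parameters as data points memorizes the training data, yet predicts unseen points worse than a simple line.

```python
import numpy as np

x = np.linspace(0, 1, 8)
noise = 0.1 * np.array([1, -1, 1, -1, 1, -1, 1, -1])  # fixed "noise" pattern
y = 2 * x + noise                                      # linear trend plus noise

x_new = (x[:-1] + x[1:]) / 2   # unseen points between the training x's
y_new = 2 * x_new              # the noiseless underlying trend

def train_test_mse(degree):
    """Mean squared error on the training data and on unseen data."""
    p = np.poly1d(np.polyfit(x, y, degree))
    return np.mean((p(x) - y) ** 2), np.mean((p(x_new) - y_new) ** 2)

train_lin, test_lin = train_test_mse(1)  # 2 parameters: matches the trend
train_hi, test_hi = train_test_mse(7)    # 8 parameters, 8 points: interpolates the noise
```

The high-degree fit drives training error to essentially zero while its oscillations between the training points inflate the error on unseen data, which is exactly the memorize-versus-generalize trade-off described above.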
In mathematical modeling, resilience refers to the ability of a dynamical system to recover from perturbations and return to its original stable steady state. It is a measure of the stability and robustness of a system in the face of changes or disturbances. If a system is not resilient enough, it is more susceptible to perturbations and can more easily undergo a critical transition. A common analogy used to explain the concept of resilience of an equilibrium is one of a ball in a valley.
|
https://en.wikipedia.org/wiki/Resilience_(mathematics)
|
A resilient steady state corresponds to a ball in a deep valley, so any push or perturbation will very quickly lead the ball to return to the resting point where it started. On the other hand, a less resilient steady state corresponds to a ball in a shallow valley, so the ball will take a much longer time to return to the equilibrium after a perturbation. The concept of resilience is particularly useful in systems that exhibit tipping points, whose study has a long history that can be traced back to catastrophe theory. While this theory was initially overhyped and fell out of favor, its mathematical foundation remains strong and is now recognized as relevant to many different systems.
|
https://en.wikipedia.org/wiki/Resilience_(mathematics)
|
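The ball-in-a-valley analogy can be sketched numerically, if one assumes the simplest restoring dynamics dx/dt = −kx, where x is the displacement from equilibrium and k plays the role of valley steepness (both the model and the parameter values are illustrative):

```python
def time_to_recover(k, x0=1.0, tol=0.01, dt=0.001):
    """Euler-integrate dx/dt = -k*x and return the time until the
    perturbation x has decayed below tol (i.e., the ball has returned)."""
    x, t = x0, 0.0
    while abs(x) >= tol:
        x += -k * x * dt
        t += dt
    return t

deep = time_to_recover(k=5.0)     # steep valley: strong restoring force
shallow = time_to_recover(k=0.5)  # shallow valley: weak restoring force
```

The steep-valley system recovers an order of magnitude faster than the shallow one, matching the description of a more resilient steady state.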
In mathematical modeling, the dependent variable is studied to see if and how much it varies as the independent variables vary. In the simple stochastic linear model yi = a + bxi + ei the term yi is the ith value of the dependent variable and xi is the ith value of the independent variable. The term ei is known as the "error" and contains the variability of the dependent variable not explained by the independent variable. With multiple independent variables, the model is yi = a + b1xi,1 + b2xi,2 + ... + bnxi,n + ei, where n is the number of independent variables. In statistics, more specifically in linear regression, a scatter plot of data is generated with X as the independent variable and Y as the dependent variable.
|
https://en.wikipedia.org/wiki/Extraneous_variables
|
This is also called a bivariate dataset, (x1, y1), (x2, y2), ..., (xn, yn). The simple linear regression model takes the form Yi = a + Bxi + Ui, for i = 1, 2, ... , n. In this case, U1, ... , Un are independent random variables. This occurs when the measurements do not influence each other.
|
https://en.wikipedia.org/wiki/Extraneous_variables
|
Through propagation of independence, the independence of Ui implies independence of Yi, even though each Yi has a different expectation value. Each Ui has an expectation value of 0 and a variance of σ2. Expectation of Yi — proof: E[Yi] = E[α + βxi + Ui] = α + βxi + E[Ui] = α + βxi.
|
https://en.wikipedia.org/wiki/Extraneous_variables
|
{\displaystyle E[Y_{i}]=E[\alpha +\beta x_{i}+U_{i}]=\alpha +\beta x_{i}+E[U_{i}]=\alpha +\beta x_{i}.} The line of best fit for the bivariate dataset takes the form y = α + βx and is called the regression line. α and β correspond to the intercept and slope, respectively. In an experiment, the variable manipulated by the experimenter is called the independent variable.
|
https://en.wikipedia.org/wiki/Extraneous_variables
|
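The intercept α and slope β of the regression line can be estimated by ordinary least squares. A minimal sketch with made-up, noise-free data (so the estimates recover the true α = 3 and β = 2 exactly):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # independent variable
y = 3.0 + 2.0 * x                          # dependent variable, no error term

# least-squares estimates of the slope and intercept
beta = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
alpha = y.mean() - beta * x.mean()
```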
The dependent variable is the event expected to change when the independent variable is manipulated. In data mining tools (for multivariate statistics and machine learning), the dependent variable is assigned a role as target variable (or in some tools as label attribute), while an independent variable may be assigned a role as regular variable. Known values for the target variable are provided for the training data set and test data set, but should be predicted for other data. The target variable is used in supervised learning algorithms but not in unsupervised learning.
|
https://en.wikipedia.org/wiki/Extraneous_variables
|
In mathematical modelling of infectious disease, the dynamics of spreading is usually described through a set of non-linear ordinary differential equations (ODE). So there are always n {\displaystyle n} coupled equations of the form
|
https://en.wikipedia.org/wiki/Effective_reproduction_number
|
{\displaystyle {\dot {C_{i}}}={\operatorname {d} \!C_{i} \over \operatorname {d} \!t}=f(C_{1},C_{2},\ldots ,C_{n})} which shows how the number of people in compartment C i {\displaystyle C_{i}} changes over time. For example, in a SIR model, C 1 = S {\displaystyle C_{1}=S} , C 2 = I {\displaystyle C_{2}=I} , and C 3 = R {\displaystyle C_{3}=R} . Compartmental models have a disease-free equilibrium (DFE) meaning that it is possible to find an equilibrium while setting the number of infected people to zero, I = 0 {\displaystyle I=0} .
|
https://en.wikipedia.org/wiki/Effective_reproduction_number
|
In other words, as a rule, there is an infection-free steady state. This solution usually also ensures that the disease-free equilibrium is an equilibrium of the system. There is another fixed point known as an Endemic Equilibrium (EE) where the disease is not totally eradicated and remains in the population.
|
https://en.wikipedia.org/wiki/Effective_reproduction_number
|
Mathematically, R 0 {\displaystyle R_{0}} is a threshold for stability of a disease-free equilibrium such that: {\displaystyle R_{0}\leq 1\Rightarrow \lim _{t\to \infty }(C_{1}(t),C_{2}(t),\cdots ,C_{n}(t))={\textrm {DFE}}} and {\displaystyle R_{0}>1,I(0)>0\Rightarrow \lim _{t\to \infty }(C_{1}(t),C_{2}(t),\cdots ,C_{n}(t))={\textrm {EE}}.} To calculate R 0 {\displaystyle R_{0}} , the first step is to linearise around the disease-free equilibrium (DFE), but for the infected subsystem of non-linear ODEs which describe the production of new infections and changes in state among infected individuals.
|
https://en.wikipedia.org/wiki/Effective_reproduction_number
|
Epidemiologically, the linearisation reflects that R 0 {\displaystyle R_{0}} characterizes the potential for initial spread of an infectious person in a naive population, assuming the change in the susceptible population is negligible during the initial spread. A linear system of ODEs can always be described by a matrix. So, the next step is to construct a linear positive operator that provides the next generation of infected people when applied to the present generation.
|
https://en.wikipedia.org/wiki/Effective_reproduction_number
|
Note that this operator (matrix) is responsible for the number of infected people, not all the compartments. Iteration of this operator describes the initial progression of infection within the heterogeneous population. So comparing the spectral radius of this operator to unity determines whether the generations of infected people grow or not. R 0 {\displaystyle R_{0}} can be written as a product of the infection rate near the disease-free equilibrium and average duration of infectiousness. It is used to find the peak and final size of an epidemic.
|
https://en.wikipedia.org/wiki/Effective_reproduction_number
|
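The threshold behaviour described above can be sketched with a forward-Euler integration of a normalized SIR model (the step size, horizon, and parameter values are arbitrary illustrative choices). Near the disease-free equilibrium, R0 = β/γ: the infection rate β times the average duration of infectiousness 1/γ.

```python
def sir_recovered_fraction(beta, gamma, t_end=1000.0, dt=0.1):
    """Forward-Euler SIR with S + I + R = 1; returns R(t_end),
    the fraction of the population ever infected."""
    S, I, R = 0.99, 0.01, 0.0
    for _ in range(int(t_end / dt)):
        dS = -beta * S * I
        dI = beta * S * I - gamma * I
        dR = gamma * I
        S, I, R = S + dS * dt, I + dI * dt, R + dR * dt
    return R

epidemic = sir_recovered_fraction(beta=0.5, gamma=0.2)  # R0 = 2.5 > 1: approaches the EE
dies_out = sir_recovered_fraction(beta=0.1, gamma=0.2)  # R0 = 0.5 < 1: approaches the DFE
```

With R0 above one a large fraction of the population is eventually infected; below one the initial infections fizzle out and the system settles at the disease-free equilibrium.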
In mathematical morphology and digital image processing, a morphological gradient is the difference between the dilation and the erosion of a given image. It is an image where each pixel value (typically non-negative) indicates the contrast intensity in the close neighborhood of that pixel. It is useful for edge detection and segmentation applications.
|
https://en.wikipedia.org/wiki/Morphological_Gradient
|
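A small sketch of the definition in pure NumPy, assuming a 3×3 square structuring element, with max and min filters standing in for grayscale dilation and erosion:

```python
import numpy as np

def filt3x3(img, f):
    """Apply f (np.max for dilation, np.min for erosion) over each
    3x3 neighborhood; edges are padded by replication."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return f([p[i:i + h, j:j + w] for i in range(3) for j in range(3)], axis=0)

img = np.zeros((5, 5), dtype=int)
img[1:4, 1:4] = 1                                        # a white square on black
gradient = filt3x3(img, np.max) - filt3x3(img, np.min)   # dilation minus erosion
```

The gradient is 1 throughout the square's boundary region and 0 at the flat interior pixel, matching the idea of each pixel measuring local contrast.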
In mathematical morphology and digital image processing, a top-hat transform is an operation that extracts small elements and details from given images. There exist two types of top-hat transform: the white top-hat transform is defined as the difference between the input image and its opening by some structuring element, while the black top-hat transform is defined dually as the difference between the closing and the input image. Top-hat transforms are used for various image processing tasks, such as feature extraction, background equalization, image enhancement, and others.
|
https://en.wikipedia.org/wiki/Top-hat_transform
|
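Using the same kind of max/min-filter stand-ins for dilation and erosion (3×3 square structuring element assumed), the white top-hat is the input minus its opening; on a toy image it isolates a small bright detail from a flat background:

```python
import numpy as np

def filt3x3(img, f):
    """f = np.max (dilation) or np.min (erosion) over 3x3 neighborhoods."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return f([p[i:i + h, j:j + w] for i in range(3) for j in range(3)], axis=0)

img = np.full((7, 7), 10)                         # flat background at level 10
img[3, 3] = 50                                    # one small bright detail
opening = filt3x3(filt3x3(img, np.min), np.max)   # erosion, then dilation
white_tophat = img - opening                      # input minus its opening
```

The opening flattens the detail back to the background level, so the top-hat image is zero everywhere except at the extracted bright pixel.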
In mathematical morphology, a structuring element is a shape, used to probe or interact with a given image, with the purpose of drawing conclusions on how this shape fits or misses the shapes in the image. It is typically used in morphological operations, such as dilation, erosion, opening, and closing, as well as the hit-or-miss transform. According to Georges Matheron, knowledge about an object (e.g., an image) depends on the manner in which we probe (observe) it. In particular, the choice of a certain structuring element for a particular morphological operation influences the information one can obtain.
|
https://en.wikipedia.org/wiki/Structuring_element
|
There are two main characteristics that are directly related to structuring elements: Shape. For example, the structuring element can be a "ball" or a line; convex or a ring, etc. By choosing a particular structuring element, one sets a way of differentiating some objects (or parts of objects) from others, according to their shape or spatial orientation. Size. For example, one structuring element can be a 3 × 3 {\displaystyle 3\times 3} square or a 21 × 21 {\displaystyle 21\times 21} square. Setting the size of the structuring element is similar to setting the observation scale, and setting the criterion to differentiate image objects or features according to size.
|
https://en.wikipedia.org/wiki/Structuring_element
|
In mathematical morphology, hit-or-miss transform is an operation that detects a given configuration (or pattern) in a binary image, using the morphological erosion operator and a pair of disjoint structuring elements. The result of the hit-or-miss transform is the set of positions where the first structuring element fits in the foreground of the input image, and the second structuring element misses it completely.
|
https://en.wikipedia.org/wiki/Hit-or-miss_transform
|
In mathematical morphology, the closing of a set (binary image) A by a structuring element B is the erosion of the dilation of that set, A ∙ B = ( A ⊕ B ) ⊖ B , {\displaystyle A\bullet B=(A\oplus B)\ominus B,\,} where ⊕ {\displaystyle \oplus } and ⊖ {\displaystyle \ominus } denote the dilation and erosion, respectively. In image processing, closing is, together with opening, the basic workhorse of morphological noise removal. Opening removes small objects, while closing removes small holes.
|
https://en.wikipedia.org/wiki/Closing_(morphology)
|
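The hole-filling behaviour of closing can be checked with the same max/min-filter stand-ins for dilation and erosion (3×3 square structuring element assumed):

```python
import numpy as np

def filt3x3(img, f):
    """f = np.max (dilation) or np.min (erosion) over 3x3 neighborhoods."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return f([p[i:i + h, j:j + w] for i in range(3) for j in range(3)], axis=0)

img = np.ones((7, 7), dtype=int)
img[3, 3] = 0                                     # a small hole in the foreground
closing = filt3x3(filt3x3(img, np.max), np.min)   # dilation, then erosion
```

The dilation fills the hole and the subsequent erosion restores the rest of the shape, so the closed image is solid foreground.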
In mathematical morphology, the h-maxima transform is a morphological operation used to filter local maxima of an image based on local contrast information. First, all local maxima are defined as connected pixels in a given neighborhood with intensity level greater than pixels outside the neighborhood. Second, all local maxima that have height lower than or equal to a given threshold h {\displaystyle h} are suppressed. The height of the remaining maxima is decreased by h {\displaystyle h} . The h-maxima transform of an image f {\displaystyle f} is defined as the reconstruction by dilation of f {\displaystyle f} from f − h {\displaystyle f-h} : {\displaystyle \operatorname {HMAX} _{h}(f)=R_{f}^{\delta }(f-h)}
|
https://en.wikipedia.org/wiki/H-maxima_transform
|
In mathematical notation for numbers, a signed-digit representation is a positional numeral system with a set of signed digits used to encode the integers. Signed-digit representation can be used to accomplish fast addition of integers because it can eliminate chains of dependent carries. In the binary numeral system, a special case signed-digit representation is the non-adjacent form, which can offer speed benefits with minimal space overhead.
|
https://en.wikipedia.org/wiki/Signed-digit_representation
|
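The non-adjacent form mentioned above can be sketched directly: each digit is in {−1, 0, 1} and no two adjacent digits are nonzero (digits are produced least significant first here):

```python
def naf(n):
    """Non-adjacent form of a positive integer: signed binary digits in
    {-1, 0, 1}, least significant first, with no two adjacent nonzeros."""
    digits = []
    while n > 0:
        if n % 2:
            d = 2 - (n % 4)   # +1 if n ≡ 1 (mod 4), -1 if n ≡ 3 (mod 4)
            n -= d            # n - d is now divisible by 4, forcing a 0 next
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

# 7 = -1 + 8, i.e. digits [-1, 0, 0, 1] instead of the binary 111
```

The −1 digit replaces the chain of carries that plain binary addition would generate for a run of 1-bits, which is the speed benefit the text refers to.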
In mathematical notation the inverse square law can be expressed as an intensity (I) varying as a function of distance (d) from some centre. The intensity is proportional (see ∝) to the reciprocal of the square of the distance, thus: I ∝ 1/d². It can also be mathematically expressed as I = k/d², or as the formulation of a constant quantity: I₁ × d₁² = I₂ × d₂². The divergence of a vector field which is the resultant of radial inverse-square law fields with respect to one or more sources is proportional to the strength of the local sources, and hence zero outside sources. Newton's law of universal gravitation follows an inverse-square law, as do the effects of electric, light, sound, and radiation phenomena.
|
https://en.wikipedia.org/wiki/Inverse-square_law
|
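The proportionality can be written down directly (the source strength here is an arbitrary illustrative constant):

```python
def intensity(d, strength=100.0):
    """Inverse-square law: intensity falls off as 1/d**2."""
    return strength / d ** 2

# doubling the distance quarters the intensity; tripling it gives one ninth
```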
In mathematical notation, ordered set operators indicate whether an object precedes or succeeds another. These relationship operators are denoted by the Unicode symbols U+227A–F, along with symbols located in the Unicode blocks U+228x through U+22Ex.
|
https://en.wikipedia.org/wiki/Ordered_set_operators
|
In mathematical numeral systems the radix r is usually the number of unique digits, including zero, that a positional numeral system uses to represent numbers. In some cases, such as with a negative base, the radix is the absolute value r = | b | {\displaystyle r=|b|} of the base b. For example, for the decimal system the radix (and base) is ten, because it uses the ten digits from 0 through 9. When a number "hits" 9, the next number will not be another different symbol, but a "1" followed by a "0". In binary, the radix is two, since after it hits "1", instead of "2" or another written symbol, it jumps straight to "10", followed by "11" and "100".
|
https://en.wikipedia.org/wiki/Place_value_system
|
The highest symbol of a positional numeral system usually has the value one less than the value of the radix of that numeral system. The standard positional numeral systems differ from one another only in the base they use. The radix is an integer that is greater than 1, since a radix of zero would not have any digits, and a radix of 1 would only have the zero digit.
|
https://en.wikipedia.org/wiki/Place_value_system
|
Negative bases are rarely used. In a system with more than | b | {\displaystyle |b|} unique digits, numbers may have many different possible representations.
|
https://en.wikipedia.org/wiki/Place_value_system
|
It is important that the radix is finite, from which it follows that the number of digits is quite low. Otherwise, the length of a numeral would not necessarily be logarithmic in its size. (In certain non-standard positional numeral systems, including bijective numeration, the definition of the base or the allowed digits deviates from the above.)
|
https://en.wikipedia.org/wiki/Place_value_system
|
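The carrying behaviour described above ("1" followed by "0" once the highest digit is passed) can be sketched with a small base-conversion routine:

```python
def to_base(n, b):
    """Digits of a non-negative integer n in base b, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n:
        digits.append(n % b)
        n //= b
    return digits[::-1]

# after 9 comes "10" in decimal; after 1 comes "10" in binary,
# and the highest digit in any base b is b - 1
```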
In mathematical optimization and computer science, heuristic (from Greek εὑρίσκω "I find, discover") is a technique designed for solving a problem more quickly when classic methods are too slow for finding an exact or approximate solution, or when classic methods fail to find any exact solution. This is achieved by trading optimality, completeness, accuracy, or precision for speed. In a way, it can be considered a shortcut. A heuristic function, also simply called a heuristic, is a function that ranks alternatives in search algorithms at each branching step based on available information to decide which branch to follow. For example, it may approximate the exact solution.
|
https://en.wikipedia.org/wiki/Heuristic_search
|
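A minimal sketch of a heuristic-guided search (greedy best-first, one of several such algorithms): at each step the frontier node with the smallest heuristic value is expanded. The setting is a hypothetical 5×5 open grid with Manhattan distance to the goal as the heuristic ranking the alternatives.

```python
import heapq

def greedy_best_first(start, goal, neighbors, h):
    """Pure heuristic search: always expand the frontier node whose
    heuristic value h(node) is smallest."""
    frontier = [(h(start), start)]
    came_from = {start: None}
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        for nxt in neighbors(current):
            if nxt not in came_from:
                came_from[nxt] = current
                heapq.heappush(frontier, (h(nxt), nxt))
    return None

def grid_neighbors(p):
    """4-connected moves on a 5x5 grid."""
    x, y = p
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

path = greedy_best_first((0, 0), (4, 4), grid_neighbors,
                         lambda p: abs(p[0] - 4) + abs(p[1] - 4))
```

Unlike A*, this variant ignores the cost already paid and trusts the heuristic alone, which is exactly the optimality-for-speed trade-off the text describes.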
In mathematical optimization and decision theory, a loss function or cost function (sometimes also called an error function) is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its opposite (in specific domains, variously called a reward function, a profit function, a utility function, a fitness function, etc.), in which case it is to be maximized.
|
https://en.wikipedia.org/wiki/Zero-one_loss_function
|
The loss function could include terms from several levels of the hierarchy. In statistics, typically a loss function is used for parameter estimation, and the event in question is some function of the difference between estimated and true values for an instance of data. The concept, as old as Laplace, was reintroduced in statistics by Abraham Wald in the middle of the 20th century.
|
https://en.wikipedia.org/wiki/Zero-one_loss_function
|
In the context of economics, for example, this is usually economic cost or regret. In classification, it is the penalty for an incorrect classification of an example. In actuarial science, it is used in an insurance context to model benefits paid over premiums, particularly since the works of Harald Cramér in the 1920s. In optimal control, the loss is the penalty for failing to achieve a desired value. In financial risk management, the function is mapped to a monetary loss.
|
https://en.wikipedia.org/wiki/Zero-one_loss_function
|
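Two of the losses alluded to here can be written down directly: the zero-one loss used as the classification penalty, and the squared-error loss common in estimation (the labels and values below are toy inputs for illustration):

```python
def zero_one_loss(y_true, y_pred):
    """Penalty of 1 per misclassified example, 0 otherwise."""
    return sum(int(t != p) for t, p in zip(y_true, y_pred))

def squared_loss(y_true, y_pred):
    """Sum of squared differences between estimated and true values."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
```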
In mathematical optimization and related fields, relaxation is a modeling strategy. A relaxation is an approximation of a difficult problem by a nearby problem that is easier to solve. A solution of the relaxed problem provides information about the original problem. For example, a linear programming relaxation of an integer programming problem removes the integrality constraint and so allows non-integer rational solutions.
|
https://en.wikipedia.org/wiki/Relaxation_(approximation)
|
A Lagrangian relaxation of a complicated problem in combinatorial optimization penalizes violations of some constraints, allowing an easier relaxed problem to be solved. Relaxation techniques complement or supplement branch and bound algorithms of combinatorial optimization; linear programming and Lagrangian relaxations are used to obtain bounds in branch-and-bound algorithms for integer programming. The modeling strategy of relaxation should not be confused with iterative methods of relaxation, such as successive over-relaxation (SOR); iterative methods of relaxation are used in solving problems in differential equations, linear least-squares, and linear programming. However, iterative methods of relaxation have been used to solve Lagrangian relaxations.
|
https://en.wikipedia.org/wiki/Relaxation_(approximation)
|
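The integer-programming example can be made concrete with the 0/1 knapsack problem: dropping the integrality constraint (items may be taken fractionally) yields a linear program whose optimum, obtainable greedily by value density, bounds the integer optimum from above. The instance below is a standard textbook toy example.

```python
def fractional_knapsack(values, weights, capacity):
    """LP relaxation of 0/1 knapsack: items may be taken fractionally.
    Greedy by value density is optimal for the relaxed problem."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total, cap = 0.0, capacity
    for i in order:
        take = min(weights[i], cap)          # take as much of the densest item as fits
        total += values[i] * take / weights[i]
        cap -= take
        if cap == 0:
            break
    return total

values, weights, capacity = [60, 100, 120], [10, 20, 30], 50
bound = fractional_knapsack(values, weights, capacity)   # relaxed optimum: 240
```

The relaxed optimum (240) exceeds the best integer solution for this instance (220, taking the second and third items), illustrating how a relaxation provides information, here an upper bound, about the original problem.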
In mathematical optimization theory, duality or the duality principle is the principle that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem. If the primal is a minimization problem then the dual is a maximization problem (and vice versa). Any feasible solution to the primal (minimization) problem is at least as large as any feasible solution to the dual (maximization) problem. Therefore, the solution to the primal is an upper bound to the solution of the dual, and the solution of the dual is a lower bound to the solution of the primal.
|
https://en.wikipedia.org/wiki/Lagrange_duality
|
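A tiny numeric check of this bounding relationship (here the primal is taken as a maximization, so feasible dual values bound it from above; the LP and the chosen feasible points are illustrative):

```python
# Primal (max): max x1 + x2      s.t.  x1 + 2*x2 <= 4,   x1 <= 2,   x >= 0
# Dual (min):   min 4*y1 + 2*y2  s.t.  y1 + y2 >= 1,   2*y1 >= 1,   y >= 0
x = (2.0, 1.0)   # primal-feasible: 2 + 2*1 = 4 <= 4 and 2 <= 2
y = (0.5, 0.5)   # dual-feasible:   0.5 + 0.5 >= 1 and 2*0.5 >= 1

primal_value = x[0] + x[1]          # 3.0
dual_value = 4 * y[0] + 2 * y[1]    # 3.0

# weak duality: every primal-feasible value <= every dual-feasible value;
# equality here certifies that both points are in fact optimal (strong duality)
```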