In computer science, Scott encoding is a way to represent (recursive) data types in the lambda calculus. Church encoding performs a similar function. The data and operators form a mathematical structure which is embedded in the lambda calculus.
Whereas Church encoding starts with representations of the basic data types and builds up from them, Scott encoding starts from the simplest method to compose algebraic data types.
Mogensen–Scott encoding extends and slightly modifies Scott encoding by applying the encoding to metaprogramming[citation needed]. This encoding allows the representation of lambda calculus terms, as data, to be operated on by a meta program.
Scott encoding appears first in a set of unpublished lecture notes by Dana Scott,[1] whose first citation occurs in the book Combinatory Logic, Volume II.[2] Michel Parigot gave a logical interpretation of, and strongly normalizing recursor for, Scott-encoded numerals,[3] referring to them as the "Stack type" representation of numbers. Torben Mogensen later extended Scott encoding for the encoding of lambda terms as data.[4]
Lambda calculus allows data to be stored as parameters to a function that does not yet have all the parameters required for application. For example,
the term λf. f v_1 … v_n may be thought of as a record or struct whose fields x_1 … x_n have been initialized with the values v_1 … v_n. These values may then be accessed by applying the term to a function f; the application reduces to f v_1 … v_n.
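As an illustrative sketch (not from the original article), the "record held in a lambda" idea can be written directly in Python, using a closure in place of a pure lambda-calculus term:

```python
# A two-field "record" as a function: it holds v1 and v2 until a
# selector function f is supplied, then reduces to f(v1, v2).
def record(v1, v2):
    return lambda f: f(v1, v2)

point = record(3, 4)           # fields initialized with 3 and 4
x = point(lambda a, b: a)      # access the first field
y = point(lambda a, b: b)      # access the second field
print(x, y)                    # 3 4
```

The function names here are illustrative choices; the essential point is that the data is stored purely as captured arguments and extracted by application.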
Such a term, call it c, may represent a constructor for an algebraic data type in functional languages such as Haskell. Now suppose there are N constructors, each with A_i arguments.
Each constructor selects a different function from the function parameters f_1 … f_N. This provides branching in the process flow, based on the constructor. Each constructor may have a different arity (number of parameters). If the constructors have no parameters, then the set of constructors acts like an enum: a type with a fixed number of values. If the constructors have parameters, recursive data structures may be constructed.
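A small Python sketch (names chosen for illustration) of this branching: a Scott-encoded Maybe type with two constructors, one of arity 0 and one of arity 1. Each constructor simply calls the handler that belongs to it:

```python
# Scott-encoded Maybe: constructors Nothing (0 args) and Just (1 arg).
# A value is a function expecting one handler per constructor.
nothing = lambda f1, f2: f1()
just    = lambda v: (lambda f1, f2: f2(v))

def describe(m):
    # "pattern matching" = applying the value to one handler per case
    return m(lambda: "empty", lambda v: f"value {v}")

print(describe(nothing))   # empty
print(describe(just(42)))  # value 42
```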
Let D be a datatype with N constructors {c_i} (i = 1 … N), such that constructor c_i has arity A_i.
The Scott encoding of constructor c_i of the data type D is
λx_1 … x_{A_i}. λc_1 … c_N. c_i x_1 … x_{A_i}
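As a concrete instance of this scheme (a sketch in Python, not part of the article), the natural numbers have two constructors, zero with arity 0 and succ with arity 1, giving the Scott numerals:

```python
# Scott numerals: each constructor takes its arguments first,
# then one handler per constructor (zero handler z, succ handler s).
zero = lambda z, s: z
succ = lambda n: (lambda z, s: s(n))

def to_int(n):
    # the succ handler receives the stored predecessor directly
    return n(0, lambda pred: 1 + to_int(pred))

two = succ(succ(zero))
print(to_int(two))   # 2
```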
Mogensen extends Scott encoding to encode any untyped lambda term as data. This allows a lambda term to be represented as data, within a lambda calculus meta program. The meta function mse converts a lambda term into the corresponding data representation of the lambda term:
mse(x) = λa b c. a x
mse(M N) = λa b c. b mse(M) mse(N)
mse(λx. M) = λa b c. c (λx. mse(M))
The "lambda term" is represented as a tagged union with three cases: a variable (selected by the handler a), an application (selected by b), and an abstraction (selected by c). For example, the identity term λx. x is encoded as λa b c. c (λx. λa′ b′ c′. a′ x).
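A sketch of this representation in Python (an illustrative adaptation using higher-order abstract syntax, where a binder is a Python function from the quoted bound variable to the quoted body; `show`, the variable names `v0`, `v1`, …, and the constructor names are all invented for this example):

```python
# Quoted terms are functions of three handlers (var, app, abs);
# the head constructor of the term picks the matching handler.
VAR = lambda x:    (lambda var, app, abs_: var(x))
APP = lambda m, n: (lambda var, app, abs_: app(m, n))
ABS = lambda f:    (lambda var, app, abs_: abs_(f))

identity_q = ABS(lambda v: v)           # quote of λx. x
self_app_q = ABS(lambda v: APP(v, v))   # quote of λx. x x

def show(term, depth=0):
    # a tiny meta-program over quoted terms: pretty-printing,
    # inventing fresh names v0, v1, ... for binders
    return term(
        lambda x: x,
        lambda m, n: f"({show(m, depth)} {show(n, depth)})",
        lambda f: f"(λv{depth}. {show(f(VAR(f'v{depth}')), depth + 1)})",
    )

print(show(identity_q))  # (λv0. v0)
print(show(self_app_q))  # (λv0. (v0 v0))
```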
The Scott encoding coincides with the Church encoding for booleans. Church encoding of pairs may be generalized to arbitrary data types by encoding c_i of D above as[citation needed]
λx_1 … x_{A_i}. λf_1 … f_N. f_i (x_1 f_1 … f_N) … (x_{A_i} f_1 … f_N)
Compare this to the Mogensen–Scott encoding,
λx_1 … x_{A_i}. λf_1 … f_N. f_i x_1 … x_{A_i}
in which the constructor arguments are passed to the selected function unchanged rather than themselves being applied to the handlers.
With this generalization, the Scott and Church encodings coincide on all enumerated datatypes (such as the boolean datatype) because each constructor is a constant (no parameters).
Concerning the practicality of using either the Church or Scott encoding for programming, there is a symmetric trade-off:[5] Church-encoded numerals support a constant-time addition operation and have no better than a linear-time predecessor operation; Scott-encoded numerals support a constant-time predecessor operation and have no better than a linear-time addition operation.
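The Scott side of this trade-off can be seen in a short sketch (restating the Scott numerals from above; the helper names are illustrative): predecessor just extracts the stored argument, while addition must traverse one operand.

```python
# Scott numerals again: a successor stores its predecessor.
zero = lambda z, s: z
succ = lambda n: (lambda z, s: s(n))

def pred(n):
    return n(zero, lambda m: m)      # O(1): extract the stored predecessor

def add(n, m):
    # linear in n: peel one succ at a time
    return n(m, lambda p: succ(add(p, m)))

def to_int(n):
    return n(0, lambda p: 1 + to_int(p))

three = succ(succ(succ(zero)))
print(to_int(pred(three)))           # 2
print(to_int(add(three, three)))     # 6
```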
Church-encoded data and operations on them are typable in System F, as are Scott-encoded data and operations. However, the typing of the Scott encoding is significantly more complicated.[6]
The type of the Scott encoding of the natural numbers is the positive recursive type
N = μX. ∀R. R → (X → R) → R
Full recursive types are not part of System F, but positive recursive types are expressible in System F via the encoding
μX. F[X] = ∀X. (F[X] → X) → X
Combining these two facts yields the System F type of the Scott encoding:
N = ∀X. ((∀R. R → (X → R) → R) → X) → X
This can be contrasted with the type of the Church encoding:
N = ∀X. (X → X) → X → X
The Church encoding is a second-order type, but the Scott encoding is fourth-order!
Source: https://en.wikipedia.org/wiki/Mogensen%E2%80%93Scott_encoding
In set theory, an ordinal number, or ordinal, is a generalization of ordinal numerals (first, second, nth, etc.) aimed to extend enumeration to infinite sets.[1]
A finite set can be enumerated by successively labeling each element with the least natural number that has not been previously used. To extend this process to various infinite sets, ordinal numbers are defined more generally using linearly ordered Greek letter variables that include the natural numbers and have the property that every set of ordinals has a least or "smallest" element (this is needed for giving a meaning to "the least unused element").[2] This more general definition allows us to define an ordinal number ω (omega) to be the least element that is greater than every natural number, along with ordinal numbers ω + 1, ω + 2, etc., which are even greater than ω.
A linear order such that every non-empty subset has a least element is called a well-order. The axiom of choice implies that every set can be well-ordered, and given two well-ordered sets, one is isomorphic to an initial segment of the other. So ordinal numbers exist and are essentially unique.
Ordinal numbers are distinct from cardinal numbers, which measure the size of sets. Although the distinction between ordinals and cardinals is not always apparent on finite sets (one can go from one to the other just by counting labels), they are very different in the infinite case, where different infinite ordinals can correspond to sets having the same cardinal. Like other kinds of numbers, ordinals can be added, multiplied, and exponentiated, although none of these operations are commutative.
Ordinals were introduced by Georg Cantor in 1883[3] in order to accommodate infinite sequences and classify derived sets, which he had previously introduced in 1872 while studying the uniqueness of trigonometric series.[4]
A natural number (which, in this context, includes the number 0) can be used for two purposes: to describe the size of a set, or to describe the position of an element in a sequence. When restricted to finite sets, these two concepts coincide, since all linear orders of a finite set are isomorphic.
When dealing with infinite sets, however, one has to distinguish between the notion of size, which leads to cardinal numbers, and the notion of position, which leads to the ordinal numbers described here. This is because while any set has only one size (its cardinality), there are many nonisomorphic well-orderings of any infinite set, as explained below.
Whereas the notion of cardinal number is associated with a set with no particular structure on it, the ordinals are intimately linked with the special kind of sets that are called well-ordered. A well-ordered set is a totally ordered set (an ordered set such that, given two distinct elements, one is less than the other) in which every non-empty subset has a least element. Equivalently, assuming the axiom of dependent choice, it is a totally ordered set without any infinite decreasing sequence (though there may be infinite increasing sequences). Ordinals may be used to label the elements of any given well-ordered set (the smallest element being labelled 0, the one after that 1, the next one 2, "and so on"), and to measure the "length" of the whole set by the least ordinal that is not a label for an element of the set. This "length" is called the order type of the set.
Any ordinal is defined by the set of ordinals that precede it. In fact, the most common definition of ordinals identifies each ordinal as the set of ordinals that precede it. For example, the ordinal 42 is generally identified as the set {0, 1, 2, ..., 41}. Conversely, any set S of ordinals that is downward closed (meaning that for any ordinal α in S and any ordinal β < α, β is also in S) is (or can be identified with) an ordinal.
This definition of ordinals in terms of sets allows for infinite ordinals. The smallest infinite ordinal is ω, which can be identified with the set of natural numbers (so that the ordinal associated with every natural number precedes ω). Indeed, the set of natural numbers is well-ordered, as is any set of ordinals, and since it is downward closed, it can be identified with the ordinal associated with it.
Perhaps a clearer intuition of ordinals can be formed by examining a first few of them: as mentioned above, they start with the natural numbers, 0, 1, 2, 3, 4, 5, ... After all natural numbers comes the first infinite ordinal, ω, and after that come ω+1, ω+2, ω+3, and so on. (Exactly what addition means will be defined later on: just consider them as names.) After all of these come ω·2 (which is ω+ω), ω·2+1, ω·2+2, and so on, then ω·3, and then later on ω·4. Now the set of ordinals formed in this way (the ω·m+n, where m and n are natural numbers) must itself have an ordinal associated with it: and that is ω^2. Further on, there will be ω^3, then ω^4, and so on, and ω^ω, then ω^(ω^ω), then later ω^(ω^(ω^ω)), and even later ε0 (epsilon nought) (to give a few examples of relatively small, countable, ordinals). This can be continued indefinitely (as every time one says "and so on" when enumerating ordinals, it defines a larger ordinal). The smallest uncountable ordinal is the set of all countable ordinals, expressed as ω_1 or Ω.[5]
In a well-ordered set, every non-empty subset contains a distinct smallest element. Given the axiom of dependent choice, this is equivalent to saying that the set is totally ordered and there is no infinite decreasing sequence (the latter being easier to visualize). In practice, the importance of well-ordering is justified by the possibility of applying transfinite induction, which says, essentially, that any property that passes on from the predecessors of an element to that element itself must be true of all elements (of the given well-ordered set). If the states of a computation (computer program or game) can be well-ordered, in such a way that each step is followed by a "lower" step, then the computation will terminate.
It is inappropriate to distinguish between two well-ordered sets if they only differ in the "labeling of their elements", or more formally: if the elements of the first set can be paired off with the elements of the second set such that if one element is smaller than another in the first set, then the partner of the first element is smaller than the partner of the second element in the second set, and vice versa. Such a one-to-one correspondence is called an order isomorphism, and the two well-ordered sets are said to be order-isomorphic or similar (with the understanding that this is an equivalence relation).
Formally, if a partial order ≤ is defined on the set S, and a partial order ≤′ is defined on the set S′, then the posets (S, ≤) and (S′, ≤′) are order isomorphic if there is a bijection f that preserves the ordering. That is, f(a) ≤′ f(b) if and only if a ≤ b. Provided there exists an order isomorphism between two well-ordered sets, the order isomorphism is unique: this makes it quite justifiable to consider the two sets as essentially identical, and to seek a "canonical" representative of the isomorphism type (class). This is exactly what the ordinals provide, and it also provides a canonical labeling of the elements of any well-ordered set. Every well-ordered set (S, <) is order-isomorphic to the set of ordinals less than one specific ordinal number under their natural ordering. This canonical set is the order type of (S, <).
Essentially, an ordinal is intended to be defined as an isomorphism class of well-ordered sets: that is, as an equivalence class for the equivalence relation of "being order-isomorphic". There is a technical difficulty involved, however, in the fact that the equivalence class is too large to be a set in the usual Zermelo–Fraenkel (ZF) formalization of set theory. But this is not a serious difficulty. The ordinal can be said to be the order type of any set in the class.
The original definition of ordinal numbers, found for example in the Principia Mathematica, defines the order type of a well-ordering as the set of all well-orderings similar (order-isomorphic) to that well-ordering: in other words, an ordinal number is genuinely an equivalence class of well-ordered sets. This definition must be abandoned in ZF and related systems of axiomatic set theory because these equivalence classes are too large to form a set. However, this definition still can be used in type theory and in Quine's axiomatic set theory New Foundations and related systems (where it affords a rather surprising alternative solution to the Burali-Forti paradox of the largest ordinal).
Rather than defining an ordinal as an equivalence class of well-ordered sets, it will be defined as a particular well-ordered set that (canonically) represents the class. Thus, an ordinal number will be a well-ordered set; and every well-ordered set will be order-isomorphic to exactly one ordinal number.
For each well-ordered set T, a ↦ T_{<a} defines an order isomorphism between T and the set of all subsets of T having the form T_{<a} := {x ∈ T | x < a}, ordered by inclusion. This motivates the standard definition, suggested by John von Neumann at the age of 19, now called the definition of von Neumann ordinals: "each ordinal is the well-ordered set of all smaller ordinals". In symbols, λ = [0, λ).[6][7] Formally, a set S is an ordinal if and only if S is strictly well-ordered with respect to set membership and every element of S is also a subset of S.
The natural numbers are thus ordinals by this definition. For instance, 2 is an element of 4 = {0, 1, 2, 3}, and 2 is equal to {0, 1} and so it is a subset of {0, 1, 2, 3}.
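The finite von Neumann ordinals can be built quite literally as nested sets; the following Python sketch (an illustration, using frozensets as a stand-in for pure sets) checks the 2-and-4 example:

```python
# Finite von Neumann ordinals as frozensets:
# 0 = {} and n+1 = n ∪ {n}, so each ordinal is the set of all smaller ones.
def von_neumann(n):
    o = frozenset()
    for _ in range(n):
        o = o | {o}          # successor step: adjoin the ordinal itself
    return o

two, four = von_neumann(2), von_neumann(4)
print(two in four)           # True: 2 is an element of 4
print(two < four)            # True: 2 is a proper subset of 4
print(len(von_neumann(3)))   # 3: the ordinal n has exactly n elements
```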
It can be shown by transfinite induction that every well-ordered set is order-isomorphic to exactly one of these ordinals, that is, there is an order-preserving bijective function between them.
Furthermore, the elements of every ordinal are ordinals themselves. Given two ordinals S and T, S is an element of T if and only if S is a proper subset of T. Moreover, either S is an element of T, or T is an element of S, or they are equal. So every set of ordinals is totally ordered. Further, every set of ordinals is well-ordered. This generalizes the fact that every set of natural numbers is well-ordered.
Consequently, every ordinal S is a set having as elements precisely the ordinals smaller than S. For example, every set of ordinals has a supremum, the ordinal obtained by taking the union of all the ordinals in the set. This union exists regardless of the set's size, by the axiom of union.
The class of all ordinals is not a set. If it were a set, one could show that it was an ordinal and thus a member of itself, which would contradict its strict ordering by membership. This is the Burali-Forti paradox. The class of all ordinals is variously called "Ord", "ON", or "∞".
An ordinal is finite if and only if the opposite order is also well-ordered, which is the case if and only if each of its non-empty subsets has a greatest element.
There are other modern formulations of the definition of ordinal. For example, assuming the axiom of regularity, the following are equivalent for a set x:
These definitions cannot be used in non-well-founded set theories. In set theories with urelements, one has to further make sure that the definition excludes urelements from appearing in ordinals.
If α is any ordinal and X is a set, an α-indexed sequence of elements of X is a function from α to X. This concept, a transfinite sequence (if α is infinite) or ordinal-indexed sequence, is a generalization of the concept of a sequence. An ordinary sequence corresponds to the case α = ω, while a finite α corresponds to a tuple, a.k.a. string.
Transfinite induction holds in any well-ordered set, but it is so important in relation to ordinals that it is worth restating here: any property that passes from the set of ordinals smaller than a given ordinal α to α itself is true of all ordinals.
That is, if P(α) is true whenever P(β) is true for all β < α, then P(α) is true for all α. Or, more practically: in order to prove a property P for all ordinals α, one can assume that it is already known for all smaller β < α.
Transfinite induction can be used not only to prove things, but also to define them. Such a definition is normally said to be by transfinite recursion; the proof that the result is well-defined uses transfinite induction. Let F denote a (class) function to be defined on the ordinals. The idea now is that, in defining F(α) for an unspecified ordinal α, one may assume that F(β) is already defined for all β < α and thus give a formula for F(α) in terms of these F(β). It then follows by transfinite induction that there is one and only one function satisfying the recursion formula up to and including α.
Here is an example of definition by transfinite recursion on the ordinals (more will be given later): define a function F by letting F(α) be the smallest ordinal not in the set {F(β) | β < α}, that is, the set consisting of all F(β) for β < α. This definition assumes the F(β) known in the very process of defining F; this apparent vicious circle is exactly what definition by transfinite recursion permits. In fact, F(0) makes sense since there is no ordinal β < 0, and the set {F(β) | β < 0} is empty. So F(0) is equal to 0 (the smallest ordinal of all). Now that F(0) is known, the definition applied to F(1) makes sense (it is the smallest ordinal not in the singleton set {F(0)} = {0}), and so on (the "and so on" is exactly transfinite induction). It turns out that this example is not very exciting, since provably F(α) = α for all ordinals α, which can be shown, precisely, by transfinite induction.
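Restricted to the finite ordinals, this recursion can be run mechanically; the following Python sketch (illustrative only, and deliberately naive) computes F(n) as the least value not among {F(m) | m < n} and confirms F(n) = n:

```python
# F(n) = least ordinal not in {F(m) | m < n}, restricted to finite n.
# Naive and exponential-time, but faithful to the recursive definition.
def F(n):
    seen = {F(m) for m in range(n)}
    k = 0
    while k in seen:       # find the least value not yet taken
        k += 1
    return k

print([F(n) for n in range(6)])   # [0, 1, 2, 3, 4, 5]
```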
Any nonzero ordinal has a minimum element, zero. It may or may not have a maximum element. For example, 42 has maximum 41, and ω+6 has maximum ω+5. On the other hand, ω does not have a maximum since there is no largest natural number. If an ordinal has a maximum α, then it is the next ordinal after α, and it is called a successor ordinal, namely the successor of α, written α+1. In the von Neumann definition of ordinals, the successor of α is α ∪ {α}, since its elements are those of α and α itself.[6]
A nonzero ordinal that is not a successor is called a limit ordinal. One justification for this term is that a limit ordinal is the limit in a topological sense of all smaller ordinals (under the order topology).
When ⟨α_ι | ι < γ⟩ is an ordinal-indexed sequence, indexed by a limit γ, and the sequence is increasing, i.e. α_ι < α_ρ whenever ι < ρ, its limit is defined as the least upper bound of the set {α_ι | ι < γ}, that is, the smallest ordinal (it always exists) greater than any term of the sequence. In this sense, a limit ordinal is the limit of all smaller ordinals (indexed by itself). Put more directly, it is the supremum of the set of smaller ordinals.
Another way of defining a limit ordinal is to say that α is a limit ordinal if and only if there is an ordinal less than α and, whenever β is an ordinal less than α, there exists an ordinal γ such that β < γ < α. So in the following sequence:
0, 1, 2, ..., ω, ω+1
ω is a limit ordinal because for any smaller ordinal (in this example, a natural number) there is another ordinal (natural number) larger than it, but still less than ω.
Thus, every ordinal is either zero, or a successor (of a well-defined predecessor), or a limit. This distinction is important, because many definitions by transfinite recursion rely upon it. Very often, when defining a function F by transfinite recursion on all ordinals, one defines F(0), and F(α+1) assuming F(α) is defined, and then, for limit ordinals δ, one defines F(δ) as the limit of the F(β) for all β < δ (either in the sense of ordinal limits, as previously explained, or for some other notion of limit if F does not take ordinal values). Thus, the interesting step in the definition is the successor step, not the limit ordinals. Such functions (especially for F nondecreasing and taking ordinal values) are called continuous. Ordinal addition, multiplication and exponentiation are continuous as functions of their second argument (but can be defined non-recursively).
Any well-ordered set is similar (order-isomorphic) to a unique ordinal number α; in other words, its elements can be indexed in increasing fashion by the ordinals less than α. This applies, in particular, to any set of ordinals: any set of ordinals is naturally indexed by the ordinals less than some α. The same holds, with a slight modification, for classes of ordinals (a collection of ordinals, possibly too large to form a set, defined by some property): any class of ordinals can be indexed by ordinals (and, when the class is unbounded in the class of all ordinals, this puts it in class-bijection with the class of all ordinals). So the γ-th element in the class (with the convention that the "0-th" is the smallest, the "1-st" is the next smallest, and so on) can be freely spoken of. Formally, the definition is by transfinite induction: the γ-th element of the class is defined (provided it has already been defined for all β < γ) as the smallest element greater than the β-th element for all β < γ.
This could be applied, for example, to the class of limit ordinals: the γ-th ordinal that is either a limit or zero is ω·γ (see ordinal arithmetic for the definition of multiplication of ordinals). Similarly, one can consider additively indecomposable ordinals (meaning a nonzero ordinal that is not the sum of two strictly smaller ordinals): the γ-th additively indecomposable ordinal is indexed as ω^γ. The technique of indexing classes of ordinals is often useful in the context of fixed points: for example, the γ-th ordinal α such that ω^α = α is written ε_γ. These are called the "epsilon numbers".
A class C of ordinals is said to be unbounded, or cofinal, when given any ordinal α, there is a β in C such that α < β (then the class must be a proper class, i.e., it cannot be a set). It is said to be closed when the limit of a sequence of ordinals in the class is again in the class: or, equivalently, when the indexing (class-)function F is continuous in the sense that, for δ a limit ordinal, F(δ) (the δ-th ordinal in the class) is the limit of all F(γ) for γ < δ; this is also the same as being closed, in the topological sense, for the order topology (to avoid talking of topology on proper classes, one can demand that the intersection of the class with any given ordinal is closed for the order topology on that ordinal; this is again equivalent).
Of particular importance are those classes of ordinals that are closed and unbounded, sometimes called clubs. For example, the class of all limit ordinals is closed and unbounded: this translates the fact that there is always a limit ordinal greater than a given ordinal, and that a limit of limit ordinals is a limit ordinal (a fortunate fact if the terminology is to make any sense at all!). The class of additively indecomposable ordinals, the class of epsilon numbers, and the class of cardinals are all closed unbounded; the set of regular cardinals, however, is unbounded but not closed, and any finite set of ordinals is closed but not unbounded.
A class is stationary if it has a nonempty intersection with every closed unbounded class. All superclasses of closed unbounded classes are stationary, and stationary classes are unbounded, but there are stationary classes that are not closed and stationary classes that have no closed unbounded subclass (such as the class of all limit ordinals with countable cofinality). Since the intersection of two closed unbounded classes is closed and unbounded, the intersection of a stationary class and a closed unbounded class is stationary. But the intersection of two stationary classes may be empty, e.g. the class of ordinals with cofinality ω with the class of ordinals with uncountable cofinality.
Rather than formulating these definitions for (proper) classes of ordinals, one can formulate them for sets of ordinals below a given ordinal α: a subset of a limit ordinal α is said to be unbounded (or cofinal) under α provided any ordinal less than α is less than some ordinal in the set. More generally, one can call a subset of any ordinal α cofinal in α provided every ordinal less than α is less than or equal to some ordinal in the set. The subset is said to be closed under α provided it is closed for the order topology in α, i.e. a limit of ordinals in the set is either in the set or equal to α itself.
There are three usual operations on ordinals: addition, multiplication, and exponentiation. Each can be defined in essentially two different ways: either by constructing an explicit well-ordered set that represents the operation or by using transfinite recursion. The Cantor normal form provides a standardized way of writing ordinals. It uniquely represents each ordinal as a finite sum of ordinal powers of ω. However, this cannot form the basis of a universal ordinal notation due to such self-referential representations as ε0 = ω^ε0.
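For ordinals below ω^ω, the Cantor normal form is just a finite list of (exponent, coefficient) pairs with exponents strictly decreasing, and ordinal addition can be computed on it directly. The following Python sketch (an illustration with invented names, not a general ordinal library) shows the non-commutativity mentioned above: 1 + ω = ω, but ω + 1 > ω.

```python
# Ordinals below ω^ω in Cantor normal form: e.g. ω^2·3 + ω + 4 is
# [(2, 3), (1, 1), (0, 4)], exponents strictly decreasing.
def ord_add(a, b):
    if not b:
        return a
    e, c = b[0]                            # leading term of b
    head = [t for t in a if t[0] > e]      # terms of a that survive
    same = [t for t in a if t[0] == e]     # equal exponent: merge coefficients
    if same:
        c = same[0][1] + c
    return head + [(e, c)] + b[1:]         # smaller terms of a are absorbed

omega = [(1, 1)]                           # ω
one   = [(0, 1)]                           # 1
print(ord_add(one, omega))                 # [(1, 1)]          i.e. 1 + ω = ω
print(ord_add(omega, one))                 # [(1, 1), (0, 1)]  i.e. ω + 1
```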
Ordinals are a subclass of the class of surreal numbers, and the so-called "natural" arithmetical operations for surreal numbers are an alternative way to combine ordinals arithmetically. They retain commutativity at the expense of continuity.
Interpreted as nimbers, a game-theoretic variant of numbers, ordinals can also be combined via nimber arithmetic operations. These operations are commutative, but the restriction to natural numbers is generally not the same as ordinary addition of natural numbers.
Each ordinal associates with one cardinal, its cardinality. If there is a bijection between two ordinals (e.g. ω = 1 + ω and ω + 1 > ω), then they associate with the same cardinal. Any well-ordered set having an ordinal as its order type has the same cardinality as that ordinal. The least ordinal associated with a given cardinal is called the initial ordinal of that cardinal. Every finite ordinal (natural number) is initial, and no other ordinal associates with its cardinal. But most infinite ordinals are not initial, as many infinite ordinals associate with the same cardinal. The axiom of choice is equivalent to the statement that every set can be well-ordered, i.e. that every cardinal has an initial ordinal. In theories with the axiom of choice, the cardinal number of any set has an initial ordinal, and one may employ the von Neumann cardinal assignment as the cardinal's representation. (However, we must then be careful to distinguish between cardinal arithmetic and ordinal arithmetic.) In set theories without the axiom of choice, a cardinal may be represented by the set of sets with that cardinality having minimal rank (see Scott's trick).
One issue with Scott's trick is that it identifies the cardinal number 0 with {∅}, which in some formulations is the ordinal number 1. It may be clearer to apply von Neumann cardinal assignment to finite cases and to use Scott's trick for sets which are infinite or do not admit well-orderings. Note that cardinal and ordinal arithmetic agree for finite numbers.
The α-th infinite initial ordinal is written ω_α; it is always a limit ordinal. Its cardinality is written ℵ_α. For example, the cardinality of ω_0 = ω is ℵ_0, which is also the cardinality of ω^2 or ε0 (all are countable ordinals). So ω can be identified with ℵ_0, except that the notation ℵ_0 is used when writing cardinals, and ω when writing ordinals (this is important since, for example, ℵ_0^2 = ℵ_0 whereas ω^2 > ω). Also, ω_1 is the smallest uncountable ordinal (to see that it exists, consider the set of equivalence classes of well-orderings of the natural numbers: each such well-ordering defines a countable ordinal, and ω_1 is the order type of that set), ω_2 is the smallest ordinal whose cardinality is greater than ℵ_1, and so on, and ω_ω is the limit of the ω_n for natural numbers n (any limit of cardinals is a cardinal, so this limit is indeed the first cardinal after all the ω_n).
The cofinality of an ordinal α is the smallest ordinal δ that is the order type of a cofinal subset of α. Notice that a number of authors define cofinality or use it only for limit ordinals. The cofinality of a set of ordinals or any other well-ordered set is the cofinality of the order type of that set.
Thus for a limit ordinal α, there exists a δ-indexed strictly increasing sequence with limit α. For example, the cofinality of ω^2 is ω, because the sequence ω·m (where m ranges over the natural numbers) tends to ω^2; but, more generally, any countable limit ordinal has cofinality ω. An uncountable limit ordinal may have either cofinality ω, as does ω_ω, or an uncountable cofinality.
The cofinality of 0 is 0. The cofinality of any successor ordinal is 1. The cofinality of any limit ordinal is at least ω.
An ordinal that is equal to its cofinality is called regular, and it is always an initial ordinal. Any limit of regular ordinals is a limit of initial ordinals and thus is also initial, even if it is not regular, which it usually is not. If the axiom of choice holds, then ω_{α+1} is regular for each α. In this case, the ordinals 0, 1, ω, ω_1, and ω_2 are regular, whereas 2, 3, ω_ω, and ω_{ω·2} are initial ordinals that are not regular.
The cofinality of any ordinal α is a regular ordinal, i.e. the cofinality of the cofinality of α is the same as the cofinality of α. So the cofinality operation is idempotent.
As mentioned above (see Cantor normal form), the ordinal ε0 is the smallest ordinal satisfying the equation ω^α = α, so it is the limit of the sequence 0, 1, ω, ω^ω, ω^(ω^ω), etc. Many ordinals can be defined in such a manner as fixed points of certain ordinal functions (the ι-th ordinal such that ω^α = α is called ε_ι; then one could go on trying to find the ι-th ordinal such that ε_α = α, "and so on", but all the subtlety lies in the "and so on"). One could try to do this systematically, but no matter what system is used to define and construct ordinals, there is always an ordinal that lies just above all the ordinals constructed by the system. Perhaps the most important ordinal that limits a system of construction in this manner is the Church–Kleene ordinal, ω_1^CK (despite the ω_1 in the name, this ordinal is countable), which is the smallest ordinal that cannot in any way be represented by a computable function (this can be made rigorous, of course). Considerably large ordinals can be defined below ω_1^CK, however, which measure the "proof-theoretic strength" of certain formal systems (for example, ε0 measures the strength of Peano arithmetic). Large countable ordinals such as countable admissible ordinals can also be defined above the Church–Kleene ordinal, which are of interest in various parts of logic.[citation needed]
Any ordinal number can be made into a topological space by endowing it with the order topology; this topology is discrete if and only if the ordinal is less than or equal to ω. A subset of ω + 1 is open in the order topology if and only if either it is cofinite or it does not contain ω as an element.
See the Topology and ordinals section of the "Order topology" article.
The transfinite ordinal numbers, which first appeared in 1883,[8] originated in Cantor's work with derived sets. If P is a set of real numbers, the derived set P′ is the set of limit points of P. In 1872, Cantor generated the sets P^(n) by applying the derived set operation n times to P. In 1880, he pointed out that these sets form the sequence P′ ⊇ ··· ⊇ P^(n) ⊇ P^(n+1) ⊇ ···, and he continued the derivation process by defining P^(∞) as the intersection of these sets. Then he iterated the derived set operation and intersections to extend his sequence of sets into the infinite: P^(∞) ⊇ P^(∞+1) ⊇ P^(∞+2) ⊇ ··· ⊇ P^(2∞) ⊇ ··· ⊇ P^(∞²) ⊇ ···.[9] The superscripts containing ∞ are just indices defined by the derivation process.[10]
Cantor used these sets in two theorems: (1) if P^(α) = ∅ for some index α, then P′ is countable; (2) if P′ is countable, then there is a countable ordinal α such that P^(α) = ∅.
These theorems are proved by partitioning P′ into pairwise disjoint sets: P′ = (P′ \ P^(2)) ∪ (P^(2) \ P^(3)) ∪ ··· ∪ (P^(∞) \ P^(∞+1)) ∪ ··· ∪ P^(α). For β < α: since P^(β+1) contains the limit points of P^(β), the sets P^(β) \ P^(β+1) have no limit points. Hence, they are discrete sets, so they are countable. Proof of the first theorem: if P^(α) = ∅ for some index α, then P′ is the countable union of countable sets. Therefore, P′ is countable.[11]
The second theorem requires proving the existence of an α such that P^(α) = ∅. To prove this, Cantor considered the set of all α having countably many predecessors. To define this set, he defined the transfinite ordinal numbers and transformed the infinite indices into ordinals by replacing ∞ with ω, the first transfinite ordinal number. Cantor called the set of finite ordinals the first number class. The second number class is the set of ordinals whose predecessors form a countably infinite set. The set of all α having countably many predecessors—that is, the set of countable ordinals—is the union of these two number classes. Cantor proved that the cardinality of the second number class is the first uncountable cardinality.[12]
Cantor's second theorem becomes: if P′ is countable, then there is a countable ordinal α such that P^(α) = ∅. Its proof uses proof by contradiction. Let P′ be countable, and assume there is no such α. This assumption produces two cases.
In both cases, P′ is uncountable, which contradicts P′ being countable. Therefore, there is a countable ordinal α such that P^(α) = ∅. Cantor's work with derived sets and ordinal numbers led to the Cantor–Bendixson theorem.[14]
Using successors, limits, and cardinality, Cantor generated an unbounded sequence of ordinal numbers and number classes.[15] The (α + 1)-th number class is the set of ordinals whose predecessors form a set of the same cardinality as the α-th number class. The cardinality of the (α + 1)-th number class is the cardinality immediately following that of the α-th number class.[16] For a limit ordinal α, the α-th number class is the union of the β-th number classes for β < α.[17] Its cardinality is the limit of the cardinalities of these number classes.
If n is finite, the n-th number class has cardinality ℵ_{n−1}. If α ≥ ω, the α-th number class has cardinality ℵ_α.[18] Therefore, the cardinalities of the number classes correspond one-to-one with the aleph numbers. Also, the α-th number class consists of ordinals different from those in the preceding number classes if and only if α is a non-limit ordinal. Therefore, the non-limit number classes partition the ordinals into pairwise disjoint sets.
|
https://en.wikipedia.org/wiki/Ordinal_number#Von_Neumann_definition_of_ordinals
|
In mathematics, an antimatroid is a formal system that describes processes in which a set is built up by including elements one at a time, and in which an element, once available for inclusion, remains available until it is included.[1] Antimatroids are commonly axiomatized in two equivalent ways, either as a set system modeling the possible states of such a process, or as a formal language modeling the different sequences in which elements may be included. Dilworth (1940) was the first to study antimatroids, using yet another axiomatization based on lattice theory, and they have been frequently rediscovered in other contexts.[2]
The axioms defining antimatroids as set systems are very similar to those of matroids, but whereas matroids are defined by an exchange axiom, antimatroids are defined instead by an anti-exchange axiom, from which their name derives.
Antimatroids can be viewed as a special case of greedoids and of semimodular lattices, and as a generalization of partial orders and of distributive lattices.
Antimatroids are equivalent, by complementation, to convex geometries, a combinatorial abstraction of convex sets in geometry.
Antimatroids have been applied to model precedence constraints in scheduling problems, potential event sequences in simulations, task planning in artificial intelligence, and the states of knowledge of human learners.
An antimatroid can be defined as a finite family F of finite sets, called feasible sets, with the following two properties:[3] the union of any two feasible sets is also feasible, and every nonempty feasible set S contains an element x such that S \ {x} is also feasible (the accessibility property).
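The two axioms can be checked mechanically. Below is a minimal Python sketch; the example family (a poset antimatroid for the hypothetical order 1 < 3, not an example taken from the text) is an assumption of the illustration:

```python
from itertools import combinations

def is_antimatroid(feasible):
    """Check the two set-system axioms: accessibility and closure under union."""
    fam = set(feasible)
    # Accessibility: every nonempty feasible set has a removable element.
    for s in fam:
        if s and not any(s - {x} in fam for x in s):
            return False
    # Closure under union.
    for a, b in combinations(fam, 2):
        if a | b not in fam:
            return False
    return True

# Poset antimatroid on {1,2,3} for the assumed order 1 < 3
# (feasible sets = lower sets of the partial order):
F = [frozenset(), frozenset({1}), frozenset({2}),
     frozenset({1, 2}), frozenset({1, 3}), frozenset({1, 2, 3})]
print(is_antimatroid(F))  # True
```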
Antimatroids also have an equivalent definition as a formal language, that is, as a set of strings defined from a finite alphabet of symbols. A string that belongs to this set is called a word of the language. A language L defining an antimatroid must satisfy the following properties:[4] every symbol of the alphabet occurs in at least one word; no word contains a symbol more than once; every prefix of a word is also a word; and if s and t are words, and s contains a symbol that t does not, then there is a symbol x in s such that tx is another word.
The equivalence of these two forms of definition can be seen as follows. If L is an antimatroid defined as a formal language, then the sets of symbols in words of L form an accessible union-closed set system. It is accessible by the hereditary property of strings, and it can be shown to be union-closed by repeated application of the concatenation property of strings. In the other direction, from an accessible union-closed set system F, the language of normal strings whose prefixes all have sets of symbols belonging to F meets the requirements for a formal language to be an antimatroid. These two transformations are the inverses of each other: transforming a formal language into a set family and back, or vice versa, produces the same system. Thus, these two definitions lead to mathematically equivalent classes of objects.[6]
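The two translations are easy to make concrete. In this Python sketch, `sets_from_words` and `words_from_sets` are hypothetical helper names, and the small language over the alphabet {a, b} is an assumed example:

```python
def sets_from_words(words):
    """Feasible sets of the antimatroid = sets of symbols of words."""
    return {frozenset(w) for w in words}

def words_from_sets(feasible, ground):
    """Normal strings (no repeats) whose every prefix spells a feasible set."""
    words, stack = set(), [("", frozenset())]
    while stack:
        w, s = stack.pop()
        words.add(w)
        for x in ground:
            if x not in s and s | {x} in feasible:
                stack.append((w + x, s | {x}))
    return words

F = {frozenset(), frozenset("a"), frozenset("ab")}
L = words_from_sets(F, "ab")
print(sorted(L))                # ['', 'a', 'ab']
print(sets_from_words(L) == F)  # True
```

Transforming the set system into a language and back recovers the same family, matching the inverse relationship described above.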
The following systems provide examples of antimatroids: chain antimatroids, whose feasible sets are the prefixes of a single fixed ordering of the elements; poset antimatroids, whose feasible sets are the lower sets of a finite partial order; and shelling antimatroids of a finite point set in the Euclidean plane, whose feasible sets are formed by repeatedly removing a vertex of the convex hull of the remaining points.
In the set-theoretic axiomatization of an antimatroid there are certain special sets called paths that determine the whole antimatroid, in the sense that the sets of the antimatroid are exactly the unions of paths.[11] If S is any feasible set of the antimatroid, an element x that can be removed from S to form another feasible set is called an endpoint of S, and a feasible set that has only one endpoint is called a path of the antimatroid.[12] The family of paths can be partially ordered by set inclusion, forming the path poset of the antimatroid.[13]
For every feasible set S in the antimatroid, and every element x of S, one may find a path subset of S for which x is an endpoint: to do so, remove elements other than x one at a time until no such removal leaves a feasible subset. Therefore, each feasible set in an antimatroid is the union of its path subsets.[11] If S is not a path, each subset in this union is a proper subset of S. But, if S is itself a path with endpoint x, each proper subset of S that belongs to the antimatroid excludes x. Therefore, the paths of an antimatroid are exactly the feasible sets that do not equal the unions of their proper feasible subsets. Equivalently, a given family of sets P forms the family of paths of an antimatroid if and only if, for each S in P, the union of the subsets of S in P other than S itself has one fewer element than S.[14] If so, the antimatroid itself is the family of unions of subsets of P.[11]
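A small Python sketch of endpoints and paths, using the same kind of toy example (the family below, a poset antimatroid for an assumed order 1 < 3, is illustrative only):

```python
def endpoints(s, fam):
    """Elements whose removal from s leaves another feasible set."""
    return {x for x in s if s - {x} in fam}

def paths(fam):
    """Paths = nonempty feasible sets with exactly one endpoint."""
    return {s for s in fam if s and len(endpoints(s, fam)) == 1}

F = {frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2}),
     frozenset({1, 3}), frozenset({1, 2, 3})}
P = paths(F)
print(sorted(sorted(p) for p in P))  # [[1], [1, 3], [2]]
# Every nonempty feasible set is the union of its path subsets:
print(all(frozenset(x for p in P if p <= s for x in p) == s
          for s in F if s))  # True
```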
In the formal language formalization of an antimatroid, the longest strings are called basic words. Each basic word forms a permutation of the whole alphabet.[15] If B is the set of basic words, L can be defined from B as the set of prefixes of words in B.[16]
If F is the set system defining an antimatroid, with U equal to the union of the sets in F, then the family of sets G = {U ∖ S ∣ S ∈ F} complementary to the sets in F is sometimes called a convex geometry, and the sets in G are called convex sets. For instance, in a shelling antimatroid, the convex sets are intersections of the given point set with convex subsets of Euclidean space. The set system defining a convex geometry must be closed under intersections. For any set S in G that is not equal to U there must be an element x not in S that can be added to S to form another set in G.[17]
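Complementation can be checked directly. In this Python sketch the antimatroid F is an assumed toy example (a poset antimatroid on {1, 2, 3}); the two printed checks are the intersection-closure and one-element-extension properties stated above:

```python
from itertools import combinations

# Feasible sets of a hypothetical antimatroid on U = {1, 2, 3}:
F = {frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2}),
     frozenset({1, 3}), frozenset({1, 2, 3})}
U = frozenset({1, 2, 3})
G = {U - s for s in F}  # the complementary convex geometry

# Closed under intersections:
print(all(a & b in G for a, b in combinations(G, 2)))  # True
# Every convex set other than U grows by a single element within G:
print(all(any(s | {x} in G for x in U - s) for s in G if s != U))  # True
```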
A convex geometry can also be defined in terms of a closure operator τ that maps any subset of U to its minimal closed superset. To be a closure operator, τ should have the following properties:[18] τ(∅) = ∅; every set S is contained in τ(S); if S is a subset of T, then τ(S) is a subset of τ(T); and τ(τ(S)) = τ(S) for every set S.
The family of closed sets resulting from a closure operation of this type is necessarily closed under intersections, but might not be a convex geometry. The closure operators that define convex geometries also satisfy an additional anti-exchange axiom: if y and z are distinct elements not belonging to τ(S), and z belongs to τ(S ∪ {y}), then y does not belong to τ(S ∪ {z}).
A closure operation satisfying this axiom is called an anti-exchange closure. If S is a closed set in an anti-exchange closure, then the anti-exchange axiom determines a partial order on the elements not belonging to S, where x ≤ y in the partial order when x belongs to τ(S ∪ {y}). If x is a minimal element of this partial order, then S ∪ {x} is closed. That is, the family of closed sets of an anti-exchange closure has the property that for any set other than the universal set there is an element x that can be added to it to produce another closed set. This property is complementary to the accessibility property of antimatroids, and the fact that intersections of closed sets are closed is complementary to the property that unions of feasible sets in an antimatroid are feasible. Therefore, the complements of the closed sets of any anti-exchange closure form an antimatroid.[17]
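The anti-exchange condition can be tested on a small convex geometry. In this Python sketch the closed sets are the intervals of three collinear points, an assumed example; `closure` computes τ as the intersection of closed supersets:

```python
def closure(A, closed, U):
    """tau(A): intersection of all closed supersets of A (U assumed closed)."""
    result = set(U)
    for c in closed:
        if A <= c:
            result &= c
    return frozenset(result)

def anti_exchange(closed, U):
    """Check: if distinct y, z lie outside tau(S) and z is in tau(S + y),
    then y must not be in tau(S + z)."""
    for S in closed:
        out = U - closure(S, closed, U)
        for y in out:
            for z in out:
                if y != z and z in closure(S | {y}, closed, U):
                    if y in closure(S | {z}, closed, U):
                        return False
    return True

# Closed sets = intervals of the ordered points 1 < 2 < 3 (a convex geometry):
U = frozenset({1, 2, 3})
closed = {frozenset(), frozenset({1}), frozenset({2}), frozenset({3}),
          frozenset({1, 2}), frozenset({2, 3}), U}
print(anti_exchange(closed, U))  # True
```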
The undirected graphs in which the convex sets (subsets of vertices that contain all shortest paths between vertices in the subset) form a convex geometry are exactly the Ptolemaic graphs.[19]
Every two feasible sets of an antimatroid have a unique least upper bound (their union) and a unique greatest lower bound (the union of the sets in the antimatroid that are contained in both of them). Therefore, the feasible sets of an antimatroid, partially ordered by set inclusion, form a lattice. Various important features of an antimatroid can be interpreted in lattice-theoretic terms; for instance the paths of an antimatroid are the join-irreducible elements of the corresponding lattice, and the basic words of the antimatroid correspond to maximal chains in the lattice. The lattices that arise from antimatroids in this way generalize the finite distributive lattices, and can be characterized in several different ways.
These three characterizations (unique meet-irreducible decompositions, boolean atomistic intervals, and join-distributivity) are equivalent: any lattice with unique meet-irreducible decompositions has boolean atomistic intervals and is join-distributive, any lattice with boolean atomistic intervals has unique meet-irreducible decompositions and is join-distributive, and any join-distributive lattice has unique meet-irreducible decompositions and boolean atomistic intervals.[20] Thus, we may refer to a lattice with any of these three properties as join-distributive. Any antimatroid gives rise to a finite join-distributive lattice, and any finite join-distributive lattice comes from an antimatroid in this way.[21] Another equivalent characterization of finite join-distributive lattices is that they are graded (any two maximal chains have the same length), and the length of a maximal chain equals the number of meet-irreducible elements of the lattice.[22] The antimatroid representing a finite join-distributive lattice can be recovered from the lattice: the elements of the antimatroid can be taken to be the meet-irreducible elements of the lattice, and the feasible set corresponding to any element x of the lattice consists of the set of meet-irreducible elements y such that y is not greater than or equal to x in the lattice.
This representation of any finite join-distributive lattice as an accessible family of sets closed under unions (that is, as an antimatroid) may be viewed as an analogue of Birkhoff's representation theorem, under which any finite distributive lattice has a representation as a family of sets closed under unions and intersections.
Motivated by a problem of defining partial orders on the elements of a Coxeter group, Armstrong (2009) studied antimatroids which are also supersolvable lattices. A supersolvable antimatroid is defined by a totally ordered collection of elements and a family of sets of these elements. The family must include the empty set. Additionally, it must have the property that if two sets A and B belong to the family, if the set-theoretic difference B ∖ A is nonempty, and if x is the smallest element of B ∖ A, then A ∪ {x} also belongs to the family. As Armstrong observes, any family of sets of this type forms an antimatroid. Armstrong also provides a lattice-theoretic characterization of the antimatroids that this construction can form.[23]
If A and B are two antimatroids, both described as a family of sets over the same universe of elements, then another antimatroid, the join of A and B, can be formed as follows: A ∨ B = {S ∪ T ∣ S ∈ A and T ∈ B}. This is a different operation than the join considered in the lattice-theoretic characterizations of antimatroids: it combines two antimatroids to form another antimatroid, rather than combining two sets in an antimatroid to form another set.
The family of all antimatroids over the same universe forms a semilattice with this join operation.[24]
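A sketch of the join operation in Python; the two chain antimatroids below (prefixes of the assumed basic words "ab" and "ba") are illustrative:

```python
def join(A, B):
    """Join of two antimatroids over the same universe: all unions S ∪ T."""
    return {s | t for s in A for t in B}

# Two hypothetical chain antimatroids (prefixes of "ab" and of "ba"):
A = {frozenset(), frozenset("a"), frozenset("ab")}
B = {frozenset(), frozenset("b"), frozenset("ab")}
J = join(A, B)
print(J == {frozenset(), frozenset("a"), frozenset("b"), frozenset("ab")})  # True
```

Here the join of the two chains is the free antimatroid on {a, b}, in which the elements may be added in any order.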
Joins are closely related to a closure operation that maps formal languages to antimatroids, where the closure of a language L is the intersection of all antimatroids containing L as a sublanguage. This closure has as its feasible sets the unions of prefixes of strings in L. In terms of this closure operation, the join is the closure of the union of the languages of A and B. Every antimatroid can be represented as a join of a family of chain antimatroids, or equivalently as the closure of a set of basic words; the convex dimension of an antimatroid A is the minimum number of chain antimatroids (or equivalently the minimum number of basic words) in such a representation. If F is a family of chain antimatroids whose basic words all belong to A, then F generates A if and only if the feasible sets of F include all paths of A. The paths of A belonging to a single chain antimatroid must form a chain in the path poset of A, so the convex dimension of an antimatroid equals the minimum number of chains needed to cover the path poset, which by Dilworth's theorem equals the width of the path poset.[25]
If one has a representation of an antimatroid as the closure of a set of d basic words, then this representation can be used to map the feasible sets of the antimatroid to points in d-dimensional Euclidean space: assign one coordinate per basic word W, and make the coordinate value of a feasible set S be the length of the longest prefix of W that is a subset of S. With this embedding, S is a subset of another feasible set T if and only if the coordinates for S are all less than or equal to the corresponding coordinates of T. Therefore, the order dimension of the inclusion ordering of the feasible sets is at most equal to the convex dimension of the antimatroid.[26] However, in general these two dimensions may be very different: there exist antimatroids with order dimension three but with arbitrarily large convex dimension.[27]
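This embedding is short to implement. In the Python sketch below, the antimatroid and its two basic words "132" and "213" are assumed for illustration; the final check verifies that inclusion of feasible sets coincides with the pointwise coordinate order:

```python
def coords(s, basic_words):
    """One coordinate per basic word: length of the longest prefix of the
    word whose symbols all lie in the feasible set s."""
    out = []
    for w in basic_words:
        k = 0
        while k < len(w) and set(w[:k + 1]) <= s:
            k += 1
        out.append(k)
    return tuple(out)

# Hypothetical antimatroid on symbols {1, 2, 3}, generated by two basic words:
F = [frozenset(), frozenset("1"), frozenset("2"), frozenset("12"),
     frozenset("13"), frozenset("123")]
words = ["132", "213"]
# Inclusion of feasible sets == pointwise comparison of coordinates:
print(all((s <= t) == all(a <= b for a, b in zip(coords(s, words),
                                                 coords(t, words)))
          for s in F for t in F))  # True
```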
The number of possible antimatroids on a set of elements grows rapidly with the number of elements in the set. For sets of one, two, three, etc. elements, the number of distinct antimatroids is[28] 1, 3, 22, 485, 59386, 133059751, … .
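The first three terms of this sequence can be reproduced by brute force over all families of subsets, taking an antimatroid to be a nonempty accessible union-closed family whose union is the whole ground set (the convention assumed here):

```python
from itertools import combinations

def count_antimatroids(n):
    """Brute-force count of antimatroids on an n-element ground set."""
    ground = frozenset(range(n))
    subsets = [frozenset(c) for r in range(n + 1)
               for c in combinations(range(n), r)]
    count = 0
    for bits in range(1, 2 ** len(subsets)):
        fam = {s for i, s in enumerate(subsets) if bits >> i & 1}
        if frozenset.union(*fam) != ground:
            continue  # must cover the ground set
        if not all(not s or any(s - {x} in fam for x in s) for s in fam):
            continue  # must be accessible
        if all(a | b in fam for a in fam for b in fam):
            count += 1  # and union-closed
    return count

print([count_antimatroids(n) for n in (1, 2, 3)])  # [1, 3, 22]
```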
Both the precedence and release time constraints in the standard notation for theoretic scheduling problems may be modeled by antimatroids. Boyd & Faigle (1990) use antimatroids to generalize a greedy algorithm of Eugene Lawler for optimally solving single-processor scheduling problems with precedence constraints in which the goal is to minimize the maximum penalty incurred by the late scheduling of a task.
Glasserman & Yao (1994) use antimatroids to model the ordering of events in discrete event simulation systems.
Parmar (2003) uses antimatroids to model progress towards a goal in artificial intelligence planning problems.
In Optimality Theory, a mathematical model for the development of natural language based on optimization under constraints, grammars are logically equivalent to antimatroids.[29]
In mathematical psychology, antimatroids have been used to describe feasible states of knowledge of a human learner. Each element of the antimatroid represents a concept that is to be understood by the learner, or a class of problems that he or she might be able to solve correctly, and the sets of elements that form the antimatroid represent possible sets of concepts that could be understood by a single person. The axioms defining an antimatroid may be phrased informally as stating that learning one concept can never prevent the learner from learning another concept, and that any feasible state of knowledge can be reached by learning a single concept at a time. The task of a knowledge assessment system is to infer the set of concepts known by a given learner by analyzing his or her responses to a small and well-chosen set of problems. In this context antimatroids have also been called "learning spaces" and "well-graded knowledge spaces".[30]
|
https://en.wikipedia.org/wiki/Antimatroid
|
In mathematics, Coxeter matroids are a generalization of matroids depending on a choice of a Coxeter group W and a parabolic subgroup P. Ordinary matroids correspond to the case when P is a maximal parabolic subgroup of a symmetric group W. They were introduced by Gelfand and Serganova (1987, 1987b), who named them after H. S. M. Coxeter.
Borovik, Gelfand & White (2003) give a detailed account of Coxeter matroids.
Suppose that W is a Coxeter group, generated by a set S of involutions, and P is a parabolic subgroup (the subgroup generated by some subset of S). A Coxeter matroid is a subset M of W/P such that, for every w in W, M contains a unique minimal element with respect to the w-Bruhat order.
Suppose that the Coxeter group W is the symmetric group S_n and P is the parabolic subgroup S_k × S_{n−k}. Then W/P can be identified with the k-element subsets of the n-element set {1, 2, ..., n}, and the elements w of W correspond to the linear orderings of this set. A Coxeter matroid consists of k-element sets such that for each w there is a unique minimal element in the corresponding Bruhat ordering of k-element subsets. This is exactly the definition of a matroid of rank k on an n-element set in terms of bases: a matroid can be defined as a collection of k-element subsets, called bases, of an n-element set such that for each linear ordering of the set there is a unique minimal base in the Gale ordering of k-element subsets.
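The bases criterion can be tested exhaustively for small cases. In this Python sketch, the Gale comparison sorts each k-set by the chosen linear order and compares componentwise; the uniform matroid U(2,4) passes, while an assumed non-matroid pair of bases fails:

```python
from itertools import combinations, permutations

def gale_min_is_unique(bases, n):
    """For every linear order w of {1..n}, sort each base by w and compare
    componentwise (the Gale ordering); check uniqueness of the minimal base."""
    for w in permutations(range(1, n + 1)):
        rank = {x: i for i, x in enumerate(w)}
        keys = [sorted(rank[x] for x in b) for b in bases]
        minima = [k for k in keys
                  if not any(c != k and all(p <= q for p, q in zip(c, k))
                             for c in keys)]
        if len(minima) != 1:
            return False
    return True

# All 2-subsets of {1,2,3,4}: the bases of the uniform matroid U(2,4).
U24 = [frozenset(c) for c in combinations(range(1, 5), 2)]
print(gale_min_is_unique(U24, 4))  # True
# A pair of disjoint 2-sets is not the base family of a matroid:
bad = [frozenset({1, 2}), frozenset({3, 4})]
print(gale_min_is_unique(bad, 4))  # False
```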
|
https://en.wikipedia.org/wiki/Coxeter_matroid
|
In combinatorics, a greedoid is a type of set system. It arises from the notion of the matroid, which was originally introduced by Whitney in 1935 to study planar graphs and was later used by Edmonds to characterize a class of optimization problems that can be solved by greedy algorithms. Around 1980, Korte and Lovász introduced the greedoid to further generalize this characterization of greedy algorithms; hence the name greedoid. Besides mathematical optimization, greedoids have also been connected to graph theory, language theory, order theory, and other areas of mathematics.
A set system (F, E) is a collection F of subsets of a ground set E (i.e. F is a subset of the power set of E). When considering a greedoid, a member of F is called a feasible set. When considering a matroid, a feasible set is also known as an independent set.
An accessible set system (F, E) is a set system in which every nonempty feasible set X contains an element x such that X ∖ {x} is feasible. This implies that any nonempty, finite, accessible set system necessarily contains the empty set ∅.[1]
A greedoid (F, E) is a finite accessible set system that satisfies the exchange property: for any feasible sets X and Y with |X| > |Y|, there is an element x ∈ X ∖ Y such that Y ∪ {x} is feasible.
(Note: some people reserve the term exchange property for a condition on the bases of a greedoid, and prefer to call the above condition the "augmentation property".)
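A Python sketch of the accessibility and exchange checks; the example family (the branching greedoid of a short rooted path, with hypothetical edge names "ab" and "bc") is an assumption of the illustration:

```python
def is_greedoid(fam):
    """Check accessibility plus the exchange property."""
    fam = set(fam)
    if frozenset() not in fam:
        return False
    for s in fam:
        if s and not any(s - {x} in fam for x in s):
            return False  # not accessible
    for big in fam:
        for small in fam:
            if len(big) > len(small) and \
               not any(small | {e} in fam for e in big - small):
                return False  # exchange fails
    return True

# Branching greedoid of the rooted path a-b-c: feasible sets are
# edge sets of subtrees containing the root a.
F = [frozenset(), frozenset({"ab"}), frozenset({"ab", "bc"})]
print(is_greedoid(F))  # True
```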
A basis of a greedoid is a maximal feasible set, meaning it is a feasible set not contained in any other feasible set. A basis of a subset X of E is a maximal feasible set contained in X.
The rank of a greedoid is the size of a basis.
By the exchange property, all bases have the same size.
Thus, the rank function is well defined. The rank of a subset X of E is the size of a basis of X. Just as with matroids, greedoids have a cryptomorphism in terms of rank functions.[2] A function r : 2^E → ℤ is the rank function of a greedoid on the ground set E if and only if r is subcardinal, monotonic, and locally semimodular, that is, for any X, Y ⊆ E and any e, f ∈ E we have: r(X) ≤ |X| (subcardinality); r(X) ≤ r(Y) whenever X ⊆ Y (monotonicity); and if r(X) = r(X ∪ {e}) = r(X ∪ {f}), then r(X) = r(X ∪ {e, f}) (local semimodularity).
Most classes of greedoids have many equivalent definitions in terms of set system, language, poset, simplicial complex, and so on. The following description takes the traditional route of listing only a couple of the more well-known characterizations.
An interval greedoid (F, E) is a greedoid that satisfies the Interval Property: for all feasible sets A ⊆ B ⊆ C and every element x ∈ E ∖ C,

A ∪ {x} ∈ F and C ∪ {x} ∈ F  ⟹  B ∪ {x} ∈ F.
Equivalently, an interval greedoid is a greedoid such that the union of any two feasible sets is feasible if it is contained in another feasible set.
An antimatroid (F, E) is a greedoid that satisfies the Interval Property without Upper Bounds: for all feasible sets A ⊆ B and every element x ∈ E ∖ B, if A ∪ {x} is feasible then so is B ∪ {x}.
Equivalently, an antimatroid is (i) a greedoid with a unique basis; or (ii) an accessible set system closed under union. It is easy to see that an antimatroid is also an interval greedoid.
A matroid (F, E) is a non-empty greedoid that satisfies the Interval Property without Lower Bounds: for all feasible sets A ⊆ B and every element x ∈ E ∖ B, if B ∪ {x} is feasible then so is A ∪ {x}.
It is easy to see that a matroid is also an interval greedoid.
In general, a greedy algorithm is just an iterative process in which a locally best choice, usually an input of maximum weight, is chosen each round until all available choices have been exhausted.
In order to describe a greedoid-based condition under which a greedy algorithm is optimal (i.e., obtains a basis of maximum value), we need some more common terminology of greedoid theory. Without loss of generality, we consider a greedoid G = (F, E) with E finite.
A subset X of E is rank feasible if the largest intersection of X with any feasible set has size equal to the rank of X. In a matroid, every subset of E is rank feasible, but this does not hold for greedoids in general.
A function w : E → ℝ is R-compatible if {x ∈ E : w(x) ≥ c} is rank feasible for all real numbers c.
An objective function f : 2^S → ℝ is linear over a set S if, for all X ⊆ S, we have f(X) = Σ_{x∈X} w(x) for some weight function w : S → ℝ.
Proposition. A greedy algorithm is optimal for every R-compatible linear objective function over a greedoid.
The intuition behind this proposition is that, during the iterative process, each optimal exchange of minimum weight is made possible by the exchange property, and optimal results are obtainable from the feasible sets in the underlying greedoid. This result guarantees the optimality of many well-known algorithms. For example, a minimum spanning tree of a weighted graph may be obtained using Kruskal's algorithm, which is a greedy algorithm for the cycle matroid. Prim's algorithm can be explained by taking the line search greedoid instead.
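The generic greedy procedure is short to sketch in Python. The cycle matroid of a triangle below is an assumed example; run with the given weights, the procedure behaves like Kruskal's algorithm computing a maximum-weight spanning tree:

```python
def greedy(fam, weight):
    """Greedy over a greedoid: repeatedly add the feasibility-preserving
    element of maximum weight until no element can be added."""
    fam = set(fam)
    ground = frozenset.union(*fam)
    cur = frozenset()
    while True:
        options = [x for x in ground - cur if cur | {x} in fam]
        if not options:
            return cur
        cur |= {max(options, key=weight)}

# Cycle matroid of a triangle with edges e1, e2, e3 (feasible = acyclic):
F = [frozenset(), frozenset({"e1"}), frozenset({"e2"}), frozenset({"e3"}),
     frozenset({"e1", "e2"}), frozenset({"e1", "e3"}), frozenset({"e2", "e3"})]
w = {"e1": 5, "e2": 3, "e3": 1}
print(sorted(greedy(F, w.get)))  # ['e1', 'e2']
```

The result is the maximum-weight basis: the two heaviest edges, which form a spanning tree of the triangle.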
|
https://en.wikipedia.org/wiki/Greedoid
|
An oriented matroid is a mathematical structure that abstracts the properties of directed graphs, vector arrangements over ordered fields, and hyperplane arrangements over ordered fields.[1] In comparison, an ordinary (i.e., non-oriented) matroid abstracts the dependence properties that are common both to graphs, which are not necessarily directed, and to arrangements of vectors over fields, which are not necessarily ordered.[2][3]
All oriented matroids have an underlying matroid. Thus, results on ordinary matroids can be applied to oriented matroids. However, the converse is false; some matroids cannot become an oriented matroid by orienting an underlying structure (e.g., circuits or independent sets).[4] The distinction between matroids and oriented matroids is discussed further below.
Matroids are often useful in areas such as dimension theory and algorithms. Because of an oriented matroid's inclusion of additional details about the oriented nature of a structure, its usefulness extends further into several areas including geometry and optimization.
The first appearance of oriented matroids was in a 1966 article by George J. Minty and was confined to regular matroids.[5]
Subsequently R. T. Rockafellar (1969) suggested the problem of generalizing Minty's concept to real vector spaces. His proposal helped lead to the development of the general theory.
In order to abstract the concept of orientation on the edges of a graph to sets, one needs the ability to assign a "direction" to the elements of a set. The way this is achieved is with the following definition of signed sets.
Given an element x of the support, we will write x for a positive element and −x for a negative element. In this way, a signed set is just a set with negative signs added to distinguished elements. This will make sense as a "direction" only when we consider orientations of larger structures. Then the sign of each element will encode its direction relative to this orientation.
Like ordinary matroids, oriented matroids admit several equivalent systems of axioms. (Such structures that possess multiple equivalent axiomatizations are called cryptomorphic.)
Let E be any set. We refer to E as the ground set. Let C be a collection of signed sets, each of which is supported by a subset of E.
If the following axioms hold for C, then C is the set of signed circuits for an oriented matroid on E: the empty signed set is not in C; C is symmetric (C = −C, i.e. the negation of every signed circuit is a signed circuit); if the support of X is contained in the support of Y for X, Y ∈ C, then X = Y or X = −Y; and the weak elimination property holds: for any X, Y ∈ C with X ≠ −Y and any element e ∈ X⁺ ∩ Y⁻, there is a Z ∈ C with Z⁺ ⊆ (X⁺ ∪ Y⁺) ∖ {e} and Z⁻ ⊆ (X⁻ ∪ Y⁻) ∖ {e}.
The composition of signed sets X and Y is the signed set X ∘ Y defined by supp(X ∘ Y) = supp(X) ∪ supp(Y), (X ∘ Y)⁺ = X⁺ ∪ (Y⁺ ∖ X⁻), and (X ∘ Y)⁻ = X⁻ ∪ (Y⁻ ∖ X⁺). The vectors of an oriented matroid are the compositions of circuits. The vectors V of an oriented matroid satisfy their own system of axioms, analogous to the circuit axioms above.
The covectors of an oriented matroid are the vectors of its dual oriented matroid.
Let E be as above. For each non-negative integer r, a chirotope of rank r is a function χ : E^r → {−1, 0, 1} that satisfies the following axioms: (B0) χ is not identically zero; (B1) χ is alternating, i.e. it changes sign whenever two of its arguments are swapped (and so vanishes when two arguments coincide); and (B2) for all x₁, …, x_r, y₁, …, y_r ∈ E such that χ(y_i, x₂, …, x_r) · χ(y₁, …, y_{i−1}, x₁, y_{i+1}, …, y_r) ≥ 0 for each i, we also have χ(x₁, …, x_r) · χ(y₁, …, y_r) ≥ 0.
The term chirotope is derived from the mathematical notion of chirality, which is a concept abstracted from chemistry, where it is used to distinguish molecules that have the same structure except for a reflection.
Every chirotope of rank r gives rise to a set of bases of a matroid on E, consisting of those r-element subsets to which χ assigns a nonzero value.[6] The chirotope can then sign the circuits of that matroid. If C is a circuit of the described matroid, then C ⊂ {x₁, …, x_r, x_{r+1}} where {x₁, …, x_r} is a basis. Then C can be signed, with positive elements determined by the chirotope and the remaining elements of C negative. Thus a chirotope gives rise to the oriented bases of an oriented matroid. In this sense, (B0) is the nonempty axiom for bases and (B2) is the basis exchange property.
Oriented matroids are often introduced (e.g., Bachem and Kern) as an abstraction for directed graphs or systems of linear inequalities. Below are the explicit constructions.
Given a digraph, we define a signed circuit from the standard circuit of the graph by the following method. The support of the signed circuit X is the standard set of edges in a minimal cycle. We go along the cycle in the clockwise or anticlockwise direction, assigning those edges whose orientation agrees with the direction to the positive elements X⁺ and those edges whose orientation disagrees with the direction to the negative elements X⁻. If C is the set of all such X, then C is the set of signed circuits of an oriented matroid on the set of edges of the directed graph.
If we consider the directed graph with edges (1,2), (1,3), (3,2), (3,4), and (4,3), then we can see that there are only two circuits, namely {(1,2), (1,3), (3,2)} and {(3,4), (4,3)}. Then there are only four possible signed circuits, corresponding to clockwise and anticlockwise orientations, namely {(1,2), −(1,3), −(3,2)}, {−(1,2), (1,3), (3,2)}, {(3,4), (4,3)}, and {−(3,4), −(4,3)}. These four sets form the set of signed circuits of an oriented matroid on the set {(1,2), (1,3), (3,2), (3,4), (4,3)}.
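The signing procedure for this example can be sketched in Python; `sign_cycle` is a hypothetical helper that traverses a given minimal cycle and splits its edges by agreement with the traversal direction (it assumes every step of the traversal uses an edge of the digraph in one of its two directions):

```python
def sign_cycle(vertices, edges):
    """Traverse the closed walk given by 'vertices'; edges agreeing with the
    traversal direction go to X+, the others (reversed) go to X-."""
    pos, neg = set(), set()
    for u, v in zip(vertices, vertices[1:] + vertices[:1]):
        if (u, v) in edges:
            pos.add((u, v))
        else:
            neg.add((v, u))
    return frozenset(pos), frozenset(neg)

E = {(1, 2), (1, 3), (3, 2), (3, 4), (4, 3)}
p, n = sign_cycle([1, 2, 3], E)   # the cycle 1 -> 2 -> 3 -> 1
print(sorted(p), sorted(n))       # [(1, 2)] [(1, 3), (3, 2)]
p2, n2 = sign_cycle([3, 4], E)    # the 2-cycle 3 -> 4 -> 3
print(sorted(p2), sorted(n2))     # [(3, 4), (4, 3)] []
```

Traversing each cycle in the opposite direction swaps the positive and negative parts, yielding the other two signed circuits listed above.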
If E{\displaystyle \textstyle E} is any finite subset of Rn{\displaystyle \textstyle \mathbb {R} ^{n}}, then the set of minimal linearly dependent sets forms the circuit set of a matroid on E{\displaystyle \textstyle E}. To extend this construction to oriented matroids, for each circuit {v1,…,vm}{\displaystyle \textstyle \{v_{1},\dots ,v_{m}\}} there is a minimal linear dependence ∑i=1mλivi=0{\displaystyle \textstyle \sum _{i=1}^{m}\lambda _{i}v_{i}=0}
withλi∈R{\displaystyle \textstyle \lambda _{i}\in \mathbb {R} }. Then the signed circuitX={X+,X−}{\displaystyle \textstyle X=\{X^{+},X^{-}\}}has positive elementsX+={vi:λi>0}{\displaystyle \textstyle X^{+}=\{v_{i}:\lambda _{i}>0\}}and negative elementsX−={vi:λi<0}{\displaystyle \textstyle X^{-}=\{v_{i}:\lambda _{i}<0\}}. The set of all suchX{\displaystyle \textstyle X}forms the set of signed circuits of an oriented matroid onE{\displaystyle \textstyle E}. Oriented matroids that can be realized this way are calledrepresentable.
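A minimal sketch of this signing, under the assumption that the input vectors really form a circuit (so the dependence is unique up to scaling); `kernel_vector` and `sign_circuit` are hypothetical names, and exact rational arithmetic keeps the signs reliable:

```python
from fractions import Fraction

def kernel_vector(vectors):
    """Exact Gaussian elimination on the matrix whose columns are the
    given vectors; returns a nontrivial (λ1, …, λm) with Σ λi·vi = 0.
    Assumes the vectors form a circuit, so the kernel is 1-dimensional."""
    m, n = len(vectors), len(vectors[0])
    A = [[Fraction(vectors[j][i]) for j in range(m)] for i in range(n)]
    pivot_cols, r = [], 0
    for c in range(m):
        piv = next((i for i in range(r, n) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        inv = A[r][c]
        A[r] = [x / inv for x in A[r]]
        for i in range(n):
            if i != r and A[i][c] != 0:
                f = A[i][c]
                A[i] = [x - f * y for x, y in zip(A[i], A[r])]
        pivot_cols.append(c)
        r += 1
        if r == n:
            break
    free = next(c for c in range(m) if c not in pivot_cols)
    lam = [Fraction(0)] * m
    lam[free] = Fraction(1)
    for i, c in enumerate(pivot_cols):
        lam[c] = -A[i][free]
    return lam

def sign_circuit(vectors):
    """Split a circuit into (X+, X-) according to the signs of the λi."""
    lam = kernel_vector(vectors)
    pos = [v for v, l in zip(vectors, lam) if l > 0]
    neg = [v for v, l in zip(vectors, lam) if l < 0]
    return pos, neg

# e1, e2 and e1 + e2 form a circuit in R^2: -e1 - e2 + (e1 + e2) = 0
print(sign_circuit([(1, 0), (0, 1), (1, 1)]))
```

Negating the dependence swaps X+ and X−, which is why signed circuits come in opposite pairs.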
Given the same set of vectors E{\displaystyle E}, we can define the same oriented matroid with a chirotope χ:Er→{−1,0,1}{\displaystyle \chi :E^{r}\rightarrow \{-1,0,1\}}. For any x1,…,xr∈E{\displaystyle x_{1},\dots ,x_{r}\in E} let χ(x1,…,xr)=sgn⁡(det(x1,…,xr)){\displaystyle \chi (x_{1},\dots ,x_{r})=\operatorname {sgn}(\det(x_{1},\dots ,x_{r}))}
where the right hand side of the equation is the sign of thedeterminant. Thenχ{\displaystyle \chi }is the chirotope of the same oriented matroid on the setE{\displaystyle E}.
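The determinant-sign chirotope can be sketched directly; `det` and `chi` are hypothetical names, and integer arithmetic keeps the sign exact:

```python
def det(M):
    """Exact determinant by cofactor expansion along the first row
    (fine for the small ranks used here)."""
    if len(M) == 1:
        return M[0][0]
    return sum(
        (-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
        for j in range(len(M))
    )

def chi(*xs):
    """Chirotope of a point configuration: the sign of det[x1 … xr]."""
    d = det([list(x) for x in xs])
    return (d > 0) - (d < 0)

e1, e2 = (1, 0), (0, 1)
print(chi(e1, e2), chi(e2, e1), chi(e1, e1))   # alternating: 1 -1 0
```

The output illustrates the alternating property (B1): swapping two arguments flips the sign, and a repeated argument gives 0.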
A real hyperplane arrangementA={H1,…,Hn}{\displaystyle {\mathcal {A}}=\{H_{1},\ldots ,H_{n}\}}is a finite set of hyperplanes inRd{\displaystyle \mathbb {R} ^{d}}, each containing the origin. By picking one side of each hyperplane to be the positive side, we obtain an arrangement of half-spaces. A half-space arrangement breaks down the ambient space into a finite collection of cells, each defined by which side of each hyperplane it lands on. That is, assign each pointx∈Rd{\displaystyle x\in \mathbb {R} ^{d}}to the signed setX=(X+,X−){\displaystyle X=(X^{+},X^{-})}withi∈X+{\displaystyle i\in X^{+}}ifx{\displaystyle x}is on the positive side ofHi{\displaystyle H_{i}}andi∈X−{\displaystyle i\in X^{-}}ifx{\displaystyle x}is on the negative side ofHi{\displaystyle H_{i}}. This collection of signed sets defines the set of covectors of the oriented matroid, which are the vectors of the dual oriented matroid.[7]
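A sketch of the covector map for a central arrangement, with each hyperplane given by a normal vector whose positive side is where the inner product is positive; `covector` is a hypothetical name:

```python
def covector(x, normals):
    """Sign vector of the point x relative to hyperplanes
    H_i = {y : <n_i, y> = 0}; index i is positive, negative, or absent
    according to the sign of <n_i, x>."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    pos = {i for i, n in enumerate(normals) if dot(n, x) > 0}
    neg = {i for i, n in enumerate(normals) if dot(n, x) < 0}
    return pos, neg

# Two lines through the origin in R^2: the x-axis (normal (0, 1)) and
# the y-axis (normal (1, 0)).
normals = [(0, 1), (1, 0)]
print(covector((2, 3), normals))   # open positive quadrant: both sides positive
print(covector((1, 0), normals))   # lies on hyperplane 0, positive side of 1
```

Points in the same open cell of the arrangement share a covector; points on a hyperplane simply omit that index.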
Günter M. Zieglerintroduces oriented matroids via convex polytopes.
A standard matroid is called orientable if its circuits are the supports of signed circuits of some oriented matroid. It is known that all real representable matroids are orientable. It is also known that the class of orientable matroids is closed under taking minors; however, the list of forbidden minors for orientable matroids is known to be infinite.[8] In this sense, oriented matroids are a much stricter formalization than ordinary matroids.
Just as a matroid has a unique dual, an oriented matroid has a unique dual, often called its "orthogonal dual". This means that the underlying matroids are dual and that the cocircuits are signed so that they are "orthogonal" to every circuit. Two signed sets are said to be orthogonal if the intersection of their supports is empty, or if the restrictions of their positive and negative elements to the intersection form two signed sets that are neither identical nor opposite. The existence and uniqueness of the dual oriented matroid depend on the fact that every signed circuit is orthogonal to every signed cocircuit.[9]
To see why orthogonality is necessary for uniqueness one needs only to look to the digraph example above. We know that for planar graphs the dual of the circuit matroid is the circuit matroid of the graph'splanar dual. Thus there are as many different dual pairs of oriented matroids based on the matroid of the graph as there are ways to orient the graph and in a corresponding way its dual.
To see the explicit construction of this unique orthogonal dual oriented matroid, consider an oriented matroid's chirotope χ:Er→{−1,0,1}{\displaystyle \chi :E^{r}\rightarrow \{-1,0,1\}}. If we consider a list of elements x1,…,xk∈E{\displaystyle x_{1},\dots ,x_{k}\in E} as a cyclic permutation then we define sgn(x1,…,xk){\displaystyle \operatorname {sgn} (x_{1},\dots ,x_{k})} to be the sign of the associated permutation. If χ∗:E|E|−r→{−1,0,1}{\displaystyle \chi ^{*}:E^{|E|-r}\rightarrow \{-1,0,1\}} is defined as
thenχ∗{\displaystyle \chi ^{*}}is the chirotope of the unique orthogonal dual oriented matroid.[10]
Not all oriented matroids are representable—that is, not all have realizations as point configurations, or, equivalently, hyperplane arrangements. However, in some sense, all oriented matroids come close to having realizations as hyperplane arrangements. In particular, theFolkman–Lawrence topological representation theoremstates that any oriented matroid has a realization as anarrangement of pseudospheres. Ad{\displaystyle d}-dimensionalpseudosphereis an embedding ofe:Sd↪Sd+1{\displaystyle e:S^{d}\hookrightarrow S^{d+1}}such that there exists a homeomorphismh:Sd+1→Sd+1{\displaystyle h:S^{d+1}\rightarrow S^{d+1}}so thath∘e{\displaystyle h\circ e}embedsSd{\displaystyle S^{d}}as an equator ofSd+1{\displaystyle S^{d+1}}. In this sense a pseudosphere is just atamesphere (as opposed towild spheres). Apseudosphere arrangement inSd{\displaystyle S^{d}}is a collection of pseudospheres that intersect along pseudospheres. Finally, the Folkman–Lawrence topological representation theorem states that every oriented matroid of rankd+1{\displaystyle d+1}can be obtained from a pseudosphere arrangement inSd{\displaystyle S^{d}}.[11]It is named afterJon FolkmanandJim Lawrence, who published it in 1978.
The theory of oriented matroids has influenced the development ofcombinatorial geometry, especially the theory ofconvex polytopes,zonotopes, and configurations of vectors (equivalently,arrangements of hyperplanes).[12]Many results—Carathéodory's theorem,Helly's theorem,Radon's theorem, theHahn–Banach theorem, theKrein–Milman theorem, thelemma of Farkas—can be formulated using appropriate oriented matroids.[13]
The development of an axiom system for oriented matroids was initiated byR. Tyrrell Rockafellarto describe the sign patterns of the matrices arising through the pivoting operations of Dantzig's simplex algorithm; Rockafellar was inspired byAlbert W. Tucker's studies of such sign patterns in "Tucker tableaux".[14]
The theory of oriented matroids has led to breakthroughs in combinatorial optimization. In linear programming, it was the language in which Robert G. Bland formulated his pivoting rule, by which the simplex algorithm avoids cycles. Similarly, it was used by Terlaky and Zhang to prove that their criss-cross algorithms have finite termination for linear programming problems. Similar results were obtained in convex quadratic programming by Todd and Terlaky.[15] It has been applied to linear-fractional programming,[16] quadratic-programming problems, and linear complementarity problems.[17][18][19]
Outside ofcombinatorial optimization, oriented matroid theory also appears inconvex minimizationin Rockafellar's theory of "monotropic programming" and related notions of "fortified descent".[20]Similarly,matroidtheory has influenced the development of combinatorial algorithms, particularly thegreedy algorithm.[21]More generally, agreedoidis useful for studying the finite termination of algorithms.
|
https://en.wikipedia.org/wiki/Oriented_matroid
|
In mathematics, apolymatroidis apolytopeassociated with asubmodular function. The notion was introduced byJack Edmondsin 1970.[1]It is also a generalization of the notion of amatroid.
LetE{\displaystyle E}be a finitesetandf:2E→R≥0{\displaystyle f:2^{E}\rightarrow \mathbb {R} _{\geq 0}}a non-decreasingsubmodular function, that is, for eachA⊆B⊆E{\displaystyle A\subseteq B\subseteq E}we havef(A)≤f(B){\displaystyle f(A)\leq f(B)}, and for eachA,B⊆E{\displaystyle A,B\subseteq E}we havef(A)+f(B)≥f(A∪B)+f(A∩B){\displaystyle f(A)+f(B)\geq f(A\cup B)+f(A\cap B)}. We define thepolymatroidassociated tof{\displaystyle f}to be the followingpolytope:
Pf={x∈R≥0E|∑e∈Ux(e)≤f(U),∀U⊆E}{\displaystyle P_{f}={\Big \{}{\textbf {x}}\in \mathbb {R} _{\geq 0}^{E}~{\Big |}~\sum _{e\in U}{\textbf {x}}(e)\leq f(U),\forall U\subseteq E{\Big \}}}.
When we allow the entries ofx{\displaystyle {\textbf {x}}}to be negative we denote this polytope byEPf{\displaystyle EP_{f}}, and call it the extended polymatroid associated tof{\displaystyle f}.[2]
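Membership in P_f can be tested directly from the definition by checking every subset U of the ground set. This brute-force sketch (hypothetical names, exponential in |E|) uses the rank function of the uniform matroid U_{2,3} as a stand-in for f:

```python
from itertools import combinations

def is_independent(x, f, E):
    """Check x ∈ P_f: x ≥ 0 and Σ_{e∈U} x(e) ≤ f(U) for every U ⊆ E.
    Brute force over all 2^|E| subsets, so only for small ground sets."""
    if any(x[e] < 0 for e in E):
        return False
    return all(
        sum(x[e] for e in U) <= f(frozenset(U))
        for k in range(len(E) + 1)
        for U in combinations(E, k)
    )

# Stand-in rank function: f(U) = min(|U|, 2), the rank function of the
# uniform matroid U_{2,3}; it is non-decreasing and submodular.
def f(U):
    return min(len(U), 2)

E = ['a', 'b', 'c']
print(is_independent({'a': 1, 'b': 1, 'c': 0}, f, E))   # inside P_f
print(is_independent({'a': 1, 'b': 1, 'c': 1}, f, E))   # total 3 > f(E) = 2
```

Note that fractional vectors such as (0.5, 0.75, 0.75) are also independent here, which is exactly how P_f relaxes the 0/1 independent sets of the matroid.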
In matroid theory, polymatroids are defined as the pair consisting of the set and the function as in the above definition. That is, a polymatroid is a pair (E,f){\displaystyle (E,f)} where E{\displaystyle E} is a finite set and f:2E→R≥0{\displaystyle f:2^{E}\rightarrow \mathbb {R} _{\geq 0}} (or Z≥0{\displaystyle \mathbb {Z} _{\geq 0}}) is a non-decreasing submodular function. If the codomain is Z≥0{\displaystyle \mathbb {Z} _{\geq 0}}, we say that (E,f){\displaystyle (E,f)} is an integer polymatroid. We call E{\displaystyle E} the ground set and f{\displaystyle f} the rank function of the polymatroid. This definition generalizes the definition of a matroid in terms of its rank function. A vector x∈R≥0E{\displaystyle x\in \mathbb {R} _{\geq 0}^{E}} is independent if ∑e∈Ux(e)≤f(U){\displaystyle \sum _{e\in U}x(e)\leq f(U)} for all U⊆E{\displaystyle U\subseteq E}. Let P{\displaystyle P} denote the set of independent vectors. Then P{\displaystyle P} is the polytope in the previous definition, called the independence polytope of the polymatroid.[3]
Under this definition, a matroid is a special case of integer polymatroid. While the rank of an element in a matroid can be either0{\displaystyle 0}or1{\displaystyle 1}, the rank of an element in a polymatroid can be any nonnegative real number, or nonnegative integer in the case of an integer polymatroid. In this sense, a polymatroid can be considered a multiset analogue of a matroid.
Let E{\displaystyle E} be a finite set. If u,v∈RE{\displaystyle {\textbf {u}},{\textbf {v}}\in \mathbb {R} ^{E}} then we denote by |u|{\displaystyle |{\textbf {u}}|} the sum of the entries of u{\displaystyle {\textbf {u}}}, and write u≤v{\displaystyle {\textbf {u}}\leq {\textbf {v}}} whenever v(i)−u(i)≥0{\displaystyle {\textbf {v}}(i)-{\textbf {u}}(i)\geq 0} for every i∈E{\displaystyle i\in E} (notice that this gives a partial order to R≥0E{\displaystyle \mathbb {R} _{\geq 0}^{E}}). A polymatroid on the ground set E{\displaystyle E} is a nonempty compact subset P{\displaystyle P}, the set of independent vectors, of R≥0E{\displaystyle \mathbb {R} _{\geq 0}^{E}} such that: (1) if v ≤ u for some u ∈ P, then v ∈ P (P is down-closed); and (2) if u, v ∈ P with |u| < |v|, then there is w ∈ P with u < w ≤ u ∨ v, where ∨ denotes the componentwise maximum.
This definition is equivalent to the one described before,[4] where f{\displaystyle f} is the function defined by f(A)=max{∑e∈Av(e):v∈P}{\displaystyle f(A)=\max {\Big \{}\sum _{e\in A}{\textbf {v}}(e):{\textbf {v}}\in P{\Big \}}} for every A⊆E{\displaystyle A\subseteq E}.
The second property may be simplified; compactness is then implied if P{\displaystyle P} is assumed to be bounded.
Adiscrete polymatroidorintegral polymatroidis a polymatroid for which the codomain off{\displaystyle f}isZ≥0{\displaystyle \mathbb {Z} _{\geq 0}}, so the vectors are inZ≥0E{\displaystyle \mathbb {Z} _{\geq 0}^{E}}instead ofR≥0E{\displaystyle \mathbb {R} _{\geq 0}^{E}}. Discrete polymatroids can be understood by focusing on thelattice pointsof a polymatroid, and are of great interest because of their relationship tomonomial ideals.
Given a positive integerk{\displaystyle k}, a discrete polymatroid(E,f){\displaystyle (E,f)}(using the matroidal definition) is ak{\displaystyle k}-polymatroidiff(e)≤k{\displaystyle f(e)\leq k}for alle∈E{\displaystyle e\in E}. Thus, a1{\displaystyle 1}-polymatroid is a matroid.
Becausegeneralized permutahedracan be constructed from submodular functions, and every generalized permutahedron has an associated submodular function, there should be a correspondence between generalized permutahedra and polymatroids. In fact every polymatroid is a generalized permutahedron that has been translated to have a vertex in the origin. This result suggests that the combinatorial information of polymatroids is shared with generalized permutahedra.
Pf{\displaystyle P_{f}} is nonempty if and only if f≥0{\displaystyle f\geq 0}, and EPf{\displaystyle EP_{f}} is nonempty if and only if f(∅)≥0{\displaystyle f(\emptyset )\geq 0}.
Given any extended polymatroidEP{\displaystyle EP}there is a unique submodular functionf{\displaystyle f}such thatf(∅)=0{\displaystyle f(\emptyset )=0}andEPf=EP{\displaystyle EP_{f}=EP}.
For a supermodular f{\displaystyle f} one analogously may define the contrapolymatroid Qf={w∈R≥0E|∑e∈Uw(e)≥f(U),∀U⊆E}{\displaystyle Q_{f}={\Big \{}{\textbf {w}}\in \mathbb {R} _{\geq 0}^{E}~{\Big |}~\sum _{e\in U}{\textbf {w}}(e)\geq f(U),\forall U\subseteq E{\Big \}}}.
This analogously generalizes the dominant of thespanning setpolytopeof matroids.
|
https://en.wikipedia.org/wiki/Polymatroid
|
Pregeometry, and in fullcombinatorial pregeometry, are essentially synonyms for "matroid". They were introduced byGian-Carlo Rotawith the intention of providing a less "ineffably cacophonous" alternative term. Also, the termcombinatorial geometry, sometimes abbreviated togeometry, was intended to replace "simple matroid". These terms are now infrequently used in the study of matroids.
It turns out that many fundamental concepts oflinear algebra– closure, independence, subspace, basis, dimension – are available in the general framework of pregeometries.
In the branch ofmathematical logiccalledmodel theory, infinite finitary matroids, there called "pregeometries" (and "geometries" if they are simple matroids), are used in the discussion of independence phenomena. The study of how pregeometries, geometries, and abstractclosure operatorsinfluence the structure offirst-ordermodels is calledgeometric stability theory.
IfV{\displaystyle V}is avector spaceover some field andA⊆V{\displaystyle A\subseteq V}, we definecl(A){\displaystyle {\text{cl}}(A)}to be the set of alllinear combinationsof vectors fromA{\displaystyle A}, also known as thespanofA{\displaystyle A}. Then we haveA⊆cl(A){\displaystyle A\subseteq {\text{cl}}(A)}andcl(cl(A))=cl(A){\displaystyle {\text{cl}}({\text{cl}}(A))={\text{cl}}(A)}andA⊆B⇒cl(A)⊆cl(B){\displaystyle A\subseteq B\Rightarrow {\text{cl}}(A)\subseteq {\text{cl}}(B)}. TheSteinitz exchange lemmais equivalent to the statement: ifb∈cl(A∪{c})∖cl(A){\displaystyle b\in {\text{cl}}(A\cup \{c\})\smallsetminus {\text{cl}}(A)}, thenc∈cl(A∪{b}).{\displaystyle c\in {\text{cl}}(A\cup \{b\}).}
The linear algebra concepts of independent set, generating set, basis and dimension can all be expressed using thecl{\displaystyle {\text{cl}}}-operator alone. A pregeometry is an abstraction of this situation: we start with an arbitrary setS{\displaystyle S}and an arbitrary operatorcl{\displaystyle {\text{cl}}}which assigns to each subsetA{\displaystyle A}ofS{\displaystyle S}a subsetcl(A){\displaystyle {\text{cl}}(A)}ofS{\displaystyle S}, satisfying the properties above. Then we can define the "linear algebra" concepts also in this more general setting.
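As a toy instance of this abstraction (not from the article), the span operator on GF(2)^3 satisfies the closure axioms, and the Steinitz exchange property can be checked concretely; `cl` is a hypothetical helper:

```python
def cl(A):
    """Closure operator: the GF(2)-linear span of A ⊆ GF(2)^3, computed by
    repeatedly adding sums of pairs until nothing new appears."""
    span = {(0, 0, 0)} | set(A)
    changed = True
    while changed:
        changed = False
        for v in list(span):
            for a in A:
                w = tuple((x + y) % 2 for x, y in zip(v, a))
                if w not in span:
                    span.add(w)
                    changed = True
    return span

A = {(1, 0, 0)}
b, c = (1, 1, 0), (0, 1, 0)
# Steinitz exchange: b ∈ cl(A ∪ {c}) \ cl(A) implies c ∈ cl(A ∪ {b})
print(b in cl(A | {c}), b in cl(A), c in cl(A | {b}))   # True False True
```

Here the closed sets are exactly the GF(2)-subspaces, and the pregeometry dimension of a set is its linear-algebra dimension.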
This generalized notion of dimension is very useful in model theory, where in certain situations one can argue as follows: two models with the same cardinality must have the same dimension, and two models with the same dimension must be isomorphic.
A combinatorial pregeometry (also known as a finitary matroid) is a pair (S,cl){\displaystyle (S,{\text{cl}})}, where S{\displaystyle S} is a set and cl:P(S)→P(S){\displaystyle {\text{cl}}:{\mathcal {P}}(S)\to {\mathcal {P}}(S)} (called the closure map) satisfies the following axioms. For all a,b∈S{\displaystyle a,b\in S} and A,B⊆S{\displaystyle A,B\subseteq S}: (1) A⊆cl(A){\displaystyle A\subseteq {\text{cl}}(A)}; (2) if A⊆B{\displaystyle A\subseteq B} then cl(A)⊆cl(B){\displaystyle {\text{cl}}(A)\subseteq {\text{cl}}(B)}; (3) cl(cl(A))=cl(A){\displaystyle {\text{cl}}({\text{cl}}(A))={\text{cl}}(A)}; (4) finite character: if a∈cl(A){\displaystyle a\in {\text{cl}}(A)} then a∈cl(F){\displaystyle a\in {\text{cl}}(F)} for some finite F⊆A{\displaystyle F\subseteq A}; (5) exchange: if a∈cl(A∪{b})∖cl(A){\displaystyle a\in {\text{cl}}(A\cup \{b\})\smallsetminus {\text{cl}}(A)} then b∈cl(A∪{a}){\displaystyle b\in {\text{cl}}(A\cup \{a\})}.
Sets of the formcl(A){\displaystyle {\text{cl}}(A)}for someA⊆S{\displaystyle A\subseteq S}are calledclosed. It is then clear that finite intersections of closed sets are closed and thatcl(A){\displaystyle {\text{cl}}(A)}is the smallest closed set containingA{\displaystyle A}.
A geometry is a pregeometry in which the closures of singletons are singletons and the closure of the empty set is the empty set.
Given sets A,D⊆S{\displaystyle A,D\subseteq S}, A{\displaystyle A} is independent over D{\displaystyle D} if a∉cl((A∖{a})∪D){\displaystyle a\notin {\text{cl}}((A\setminus \{a\})\cup D)} for every a∈A{\displaystyle a\in A}. We say that A{\displaystyle A} is independent if it is independent over the empty set.
A setB⊆A{\displaystyle B\subseteq A}is abasis forA{\displaystyle A}overD{\displaystyle D}if it is independent overD{\displaystyle D}andA⊆cl(B∪D){\displaystyle A\subseteq {\text{cl}}(B\cup D)}.
A basis is the same as a maximal independent subset, and usingZorn's lemmaone can show that every set has a basis. Since a pregeometry satisfies theSteinitz exchange propertyall bases are of the same cardinality, hence we may define thedimensionofA{\displaystyle A}overD{\displaystyle D}, written asdimDA{\displaystyle {\text{dim}}_{D}A}, as the cardinality of any basis ofA{\displaystyle A}overD{\displaystyle D}. Again, the dimensiondimA{\displaystyle {\text{dim}}A}ofA{\displaystyle A}is defined to be the dimension over the empty set.
The setsA,B{\displaystyle A,B}areindependentoverD{\displaystyle D}ifdimB∪DA′=dimDA′{\displaystyle {\text{dim}}_{B\cup D}A'=\dim _{D}A'}wheneverA′{\displaystyle A'}is a finite subset ofA{\displaystyle A}. Note that this relation is symmetric.
Anautomorphismof a pregeometry(S,cl){\displaystyle (S,{\text{cl}})}is a bijectionσ:S→S{\displaystyle \sigma :S\to S}such thatσ(cl(X))=cl(σ(X)){\displaystyle \sigma ({\text{cl}}(X))={\text{cl}}(\sigma (X))}for anyX⊆S{\displaystyle X\subseteq S}.
A pregeometryS{\displaystyle S}is said to behomogeneousif for any closedX⊆S{\displaystyle X\subseteq S}and any two elementsa,b∈S∖X{\displaystyle a,b\in S\setminus X}there is an automorphism ofS{\displaystyle S}which mapsa{\displaystyle a}tob{\displaystyle b}and fixesX{\displaystyle X}pointwise.
Given a pregeometry (S,cl){\displaystyle (S,{\text{cl}})} its associated geometry (sometimes referred to in the literature as the canonical geometry) is the geometry (S′,cl′){\displaystyle (S',{\text{cl}}')} where S′={cl(a):a∈S∖cl(∅)}{\displaystyle S'=\{{\text{cl}}(a):a\in S\smallsetminus {\text{cl}}(\varnothing )\}} and, for D⊆S′{\displaystyle D\subseteq S'}, cl′(D)={cl(a):a∈cl(⋃D)}{\displaystyle {\text{cl}}'(D)=\{{\text{cl}}(a):a\in {\text{cl}}(\bigcup D)\}}.
It is easy to see that the associated geometry of a homogeneous pregeometry is homogeneous.
GivenA⊆S{\displaystyle A\subseteq S}thelocalizationofS{\displaystyle S}is the pregeometry(S,clA){\displaystyle (S,{\text{cl}}_{A})}whereclA(X)=cl(X∪A){\displaystyle {\text{cl}}_{A}(X)={\text{cl}}(X\cup A)}.
The pregeometry (S,cl){\displaystyle (S,{\text{cl}})} is said to be: trivial if cl(A)=⋃a∈Acl({a}){\displaystyle {\text{cl}}(A)=\bigcup _{a\in A}{\text{cl}}(\{a\})} for all nonempty A⊆S{\displaystyle A\subseteq S}; modular if dim(A∪B)+dim(A∩B)=dimA+dimB{\displaystyle {\text{dim}}(A\cup B)+{\text{dim}}(A\cap B)={\text{dim}}A+{\text{dim}}B} for all finite-dimensional closed A,B⊆S{\displaystyle A,B\subseteq S}; locally modular if the localization at some singleton is modular; and locally finite if closures of finite sets are finite.
Triviality, modularity and local modularity pass to the associated geometry and are preserved under localization.
If S{\displaystyle S} is a locally modular homogeneous pregeometry and a∈S∖cl(∅){\displaystyle a\in S\setminus {\text{cl}}(\varnothing )} then the localization of S{\displaystyle S} at a{\displaystyle a} is modular.
The geometryS{\displaystyle S}is modular if and only if whenevera,b∈S{\displaystyle a,b\in S},A⊆S{\displaystyle A\subseteq S},dim{a,b}=2{\displaystyle {\text{dim}}\{a,b\}=2}anddimA{a,b}≤1{\displaystyle {\text{dim}}_{A}\{a,b\}\leq 1}then(cl{a,b}∩cl(A))∖cl(∅)≠∅{\displaystyle ({\text{cl}}\{a,b\}\cap {\text{cl}}(A))\setminus {\text{cl}}(\varnothing )\neq \varnothing }.
IfS{\displaystyle S}is any set we may definecl(A)=A{\displaystyle {\text{cl}}(A)=A}for allA⊆S{\displaystyle A\subseteq S}. This pregeometry is a trivial, homogeneous, locally finite geometry.
LetF{\displaystyle F}be afield(a division ring actually suffices) and letV{\displaystyle V}be a vector space overF{\displaystyle F}. ThenV{\displaystyle V}is a pregeometry where closures of sets are defined to be theirspan. The closed sets are the linear subspaces ofV{\displaystyle V}and the notion of dimension from linear algebra coincides with the pregeometry dimension.
This pregeometry is homogeneous and modular. Vector spaces are considered to be the prototypical example of modularity.
V{\displaystyle V}is locally finite if and only ifF{\displaystyle F}is finite.
V{\displaystyle V} is not a geometry, as the closure of any nonzero vector is a subspace of size at least 2{\displaystyle 2}.
The associated geometry of aκ{\displaystyle \kappa }-dimensional vector space overF{\displaystyle F}is the(κ−1){\displaystyle (\kappa -1)}-dimensionalprojective spaceoverF{\displaystyle F}. It is easy to see that this pregeometry is a projective geometry.
LetV{\displaystyle V}be aκ{\displaystyle \kappa }-dimensionalaffine spaceover a fieldF{\displaystyle F}. Given a set define its closure to be itsaffine hull(i.e. the smallest affine subspace containing it).
This forms a homogeneous(κ+1){\displaystyle (\kappa +1)}-dimensional geometry.
An affine space is not modular (for example, ifX{\displaystyle X}andY{\displaystyle Y}are parallel lines then the formula in the definition of modularity fails). However, it is easy to check that all localizations are modular.
LetL/K{\displaystyle L/K}be afield extension. The setL{\displaystyle L}becomes a pregeometry if we definecl(A)={x∈L:xis algebraic overK(A)}{\displaystyle {\text{cl}}(A)=\{x\in L:x{\text{ is algebraic over }}K(A)\}}forA⊆L{\displaystyle A\subseteq L}. The setA{\displaystyle A}is independent in this pregeometry if and only if it isalgebraically independentoverK{\displaystyle K}. The dimension ofA{\displaystyle A}coincides with thetranscendence degreetrdeg(K(A)/K){\displaystyle {\text{trdeg}}(K(A)/K)}.
In model theory, the case ofL{\displaystyle L}beingalgebraically closedandK{\displaystyle K}itsprime fieldis especially important.
While vector spaces are modular and affine spaces are "almost" modular (i.e., everywhere locally modular), algebraically closed fields are examples of the other extreme, being not even locally modular (i.e., no localization is modular).
Given a countable first-order languageLand anL-structureM,any definable subsetDofMthat isstrongly minimalgives rise to a pregeometry on the setD. The closure operator here is given by the algebraic closure in the model-theoretic sense.
A model of a strongly minimal theory is determined up to isomorphism by its dimension as a pregeometry; this fact is used in the proof ofMorley's categoricity theorem.
In minimal sets overstable theoriesthe independence relation coincides with the notion of forking independence.
|
https://en.wikipedia.org/wiki/Pregeometry_(model_theory)
|
Ingraph theory, amatchingin ahypergraphis a set ofhyperedges, in which every two hyperedges aredisjoint. It is an extension of the notion ofmatching in a graph.[1]: 466–470[2]
Recall that ahypergraphHis a pair(V,E), whereVis asetofverticesandEis a set ofsubsetsofVcalledhyperedges. Each hyperedge may contain one or more vertices.
AmatchinginHis a subsetMofE, such that every two hyperedgese1ande2inMhave an empty intersection (have no vertex in common).
Thematching numberof a hypergraphHis the largest size of a matching inH. It is often denoted byν(H).[1]: 466[3]
As an example, letVbe the set{1,2,3,4,5,6,7}.Consider a 3-uniform hypergraph onV(a hypergraph in which each hyperedge contains exactly 3 vertices). LetHbe a 3-uniform hypergraph with 4 hyperedges:
ThenHadmits several matchings of size 2, for example:
However, in any subset of 3 hyperedges, at least two of them intersect, so there is no matching of size 3. Hence, the matching number ofHis 2.
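The article's concrete edge list does not survive here, so the following sketch uses a hypothetical 3-uniform hypergraph on {1,…,7} with the same matching number, and computes ν(H) by brute force (the general problem is NP-hard, as noted below):

```python
from itertools import combinations

def matching_number(edges):
    """ν(H) by brute force: the largest set of pairwise-disjoint hyperedges.
    Exponential in |E|; fine only for toy instances."""
    for k in range(len(edges), 0, -1):
        for sub in combinations(edges, k):
            if all(set(e1).isdisjoint(e2) for e1, e2 in combinations(sub, 2)):
                return k
    return 0

# Hypothetical 3-uniform hypergraph on {1,…,7}: the first two edges are
# disjoint (a matching of size 2), but no three edges are pairwise disjoint.
H = [(1, 2, 3), (4, 5, 6), (2, 4, 7), (3, 5, 7)]
print(matching_number(H))
```

Searching from the largest candidate size downward means the first feasible size found is the matching number.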
A hypergraphH= (V,E)is calledintersectingif every two hyperedges inEhave a vertex in common. A hypergraphHis intersectingif and only ifit has no matching with two or more hyperedges, if and only ifν(H) = 1.[4]
A graph withoutself-loopsis just a 2-uniform hypergraph: each edge can be considered as a set of the two vertices that it connects. For example, this 2-uniform hypergraph represents a graph with 4 vertices{1,2,3,4}and 3 edges:
By the above definition, a matching in a graph is a setMof edges, such that each two edges inMhave an empty intersection. This is equivalent to saying that no two edges inMare adjacent to the same vertex; this is exactly the definition of amatching in a graph.
Afractional matchingin a hypergraph is a function that assigns a fraction in[0,1]to each hyperedge, such that for every vertexvinV, the sum of fractions of hyperedges containingvis at most 1. A matching is a special case of a fractional matching in which all fractions are either 0 or 1. Thesizeof a fractional matching is the sum of fractions of all hyperedges.
Thefractional matching numberof a hypergraphHis the largest size of a fractional matching inH. It is often denoted byν*(H).[3]
Since a matching is a special case of a fractional matching, for every hypergraph H:

matching-number(H) ≤ fractional-matching-number(H).

Symbolically: ν(H) ≤ ν*(H).
In general, the fractional matching number may be larger than the matching number. A theorem by Zoltán Füredi[4] provides upper bounds on the ratio fractional-matching-number(H) / matching-number(H): if each hyperedge of H contains at most r vertices, then
ν∗(H)ν(H)≤r−1+1r.{\displaystyle {\frac {\nu ^{*}(H)}{\nu (H)}}\leq r-1+{\frac {1}{r}}.}
In particular, in a simple graph:[5]
ν∗(H)ν(H)≤32.{\displaystyle {\frac {\nu ^{*}(H)}{\nu (H)}}\leq {\frac {3}{2}}.}
If H is r-partite (the vertices are partitioned into r parts and each hyperedge contains one vertex of each part), then ν∗(H)ν(H)≤r−1.{\displaystyle {\frac {\nu ^{*}(H)}{\nu (H)}}\leq r-1.}
In particular, in a bipartite graph,ν*(H) =ν(H). This was proved byAndrás Gyárfás.[4]
A matchingMis calledperfectif every vertexvinVis contained inexactlyone hyperedge ofM. This is the natural extension of the notion ofperfect matchingin a graph.
A fractional matchingMis calledperfectif for every vertexvinV, the sum of fractions of hyperedges inMcontainingvisexactly1.
Consider a hypergraph H in which each hyperedge contains at most n vertices. If H admits a perfect fractional matching, then its fractional matching number is at least |V|⁄n. If each hyperedge in H contains exactly n vertices, then its fractional matching number is exactly |V|⁄n.[6]: sec.2 This is a generalization of the fact that, in a graph, the size of a perfect matching is |V|⁄2.
Given a setVof vertices, a collectionEof subsets ofVis calledbalancedif the hypergraph(V,E)admits a perfect fractional matching.
For example, ifV= {1,2,3,a,b,c}andE= { {1,a}, {2,a}, {1,b}, {2,b}, {3,c} },thenEis balanced, with the perfect fractional matching{ 1/2, 1/2, 1/2, 1/2, 1 }.
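This example can be verified mechanically; `is_perfect_fractional_matching` is a hypothetical helper that checks the per-vertex sums from the definition:

```python
from fractions import Fraction

def is_perfect_fractional_matching(weights, vertices):
    """weights: {hyperedge (tuple): weight in [0, 1]}.  Perfect means the
    weights of the hyperedges incident to each vertex sum to exactly 1."""
    if any(not (0 <= w <= 1) for w in weights.values()):
        return False
    return all(
        sum(w for e, w in weights.items() if v in e) == 1
        for v in vertices
    )

V = {1, 2, 3, 'a', 'b', 'c'}
half = Fraction(1, 2)
w = {(1, 'a'): half, (2, 'a'): half, (1, 'b'): half, (2, 'b'): half, (3, 'c'): 1}
print(is_perfect_fractional_matching(w, V))
```

Exact `Fraction` weights avoid the floating-point rounding that would make the equality test unreliable.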
There are various sufficient conditions for the existence of a perfect matching in a hypergraph:
The problem of finding a maximum-cardinality matching in a hypergraph, thus calculatingν(H){\displaystyle \nu (H)}, is NP-hard even for 3-uniform hypergraphs (see3-dimensional matching). This is in contrast to the case of simple (2-uniform) graphs in which computing amaximum-cardinality matchingcan be done in polynomial time.
Avertex-cover in a hypergraphH= (V,E)is a subsetTofV, such that every hyperedge inEcontains at least one vertex ofT(it is also called atransversalor ahitting set, and is equivalent to aset cover). It is a generalization of the notion of avertex coverin a graph.
Thevertex-cover numberof a hypergraphHis the smallest size of a vertex cover inH. It is often denoted byτ(H),[1]: 466for transversal.
Afractional vertex-coveris a function assigning a weight to each vertex inV, such that for every hyperedgeeinE, the sum of fractions of vertices ineis at least 1. A vertex cover is a special case of a fractional vertex cover in which all weights are either 0 or 1. Thesizeof a fractional vertex-cover is the sum of fractions of all vertices.
Thefractional vertex-cover numberof a hypergraphHis the smallest size of a fractional vertex-cover inH. It is often denoted byτ*(H).
Since a vertex-cover is a special case of a fractional vertex-cover, for every hypergraphH:
fractional-vertex-cover-number (H) ≤ vertex-cover-number (H).
Linear programming dualityimplies that, for every hypergraphH:
fractional-matching-number (H) = fractional-vertex-cover-number(H).
Hence, for every hypergraph H:[4] ν(H) ≤ ν*(H) = τ*(H) ≤ τ(H).
If the size of each hyperedge in H is at most r, then the union of all hyperedges in a maximum matching is a vertex-cover (if there were an uncovered hyperedge, we could have added it to the matching). Therefore: τ(H) ≤ r ⋅ ν(H).
This inequality is tight: equality holds, for example, whenVcontainsr⋅ν(H) +r– 1vertices andEcontains all subsets ofrvertices.
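The covering argument above can be sketched concretely (hypothetical names): take the vertices of a maximum matching and check that they form a vertex cover of size at most r·ν(H):

```python
def cover_from_matching(matching):
    """Union of all vertices appearing in the hyperedges of a matching."""
    return {v for e in matching for v in e}

def is_vertex_cover(edges, T):
    """Every hyperedge must contain at least one vertex of T."""
    return all(any(v in T for v in e) for e in edges)

# Three pairwise-intersecting 3-element hyperedges: ν(H) = 1, and the
# vertices of the single matched edge already cover everything.
edges = [(1, 2, 3), (3, 4, 5), (5, 6, 1)]
matching = [(1, 2, 3)]          # a maximum matching: every pair of edges meets
T = cover_from_matching(matching)
print(sorted(T), is_vertex_cover(edges, T))
```

Here |T| = 3 = r·ν(H), matching the bound, although the minimum cover {3, 5} is smaller.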
However, in generalτ*(H) <r⋅ν(H), sinceν*(H) <r⋅ν(H); seeFractional matchingabove.
Ryser's conjecture says that, in every r-partite r-uniform hypergraph: τ(H) ≤ (r − 1) ⋅ ν(H).
Some special cases of the conjecture have been proved; seeRyser's conjecture.
A hypergraph has theKőnig propertyif its maximum matching number equals its minimum vertex-cover number, namely ifν(H) =τ(H). TheKőnig-Egerváry theoremshows that everybipartite graphhas the Kőnig property. To extend this theorem to hypergraphs, we need to extend the notion of bipartiteness to hypergraphs.[1]: 468
A natural generalization is as follows. A hypergraph is called2-colorableif its vertices can be 2-colored so that every hyperedge (of size at least 2) contains at least one vertex of each color. An alternative term isProperty B. A simple graph is bipartite iff it is 2-colorable. However, there are 2-colorable hypergraphs without Kőnig's property. For example, consider the hypergraph withV= {1,2,3,4}with all tripletsE= { {1,2,3} , {1,2,4} , {1,3,4} , {2,3,4} }.It is 2-colorable, for example, we can color{1,2}blue and{3,4}white. However, its matching number is 1 and its vertex-cover number is 2.
A stronger generalization is as follows. Given a hypergraph H = (V, E) and a subset V' of V, the restriction of H to V' is the hypergraph whose vertex set is V', and which has, for every hyperedge e in E that intersects V', a hyperedge e' that is the intersection of e and V'. A hypergraph is called balanced if all its restrictions are essentially 2-colorable, meaning that singleton hyperedges in the restriction are ignored.[8] A simple graph is bipartite iff it is balanced.
A simple graph is bipartite iff it has no odd-length cycles. Similarly, a hypergraph is balanced iff it has no odd-lengthcircuits. A circuit of lengthkin a hypergraph is an alternating sequence(v1,e1,v2,e2, …,vk,ek,vk+1=v1), where theviare distinct vertices and theeiare distinct hyperedges, and each hyperedge contains the vertex to its left and the vertex to its right. The circuit is calledunbalancedif each hyperedge contains no other vertices in the circuit.Claude Bergeproved that a hypergraph is balanced if and only if it does not contain an unbalanced odd-length circuit. Every balanced hypergraph has Kőnig's property.[9][1]: 468–470
The following are equivalent:[1]: 470–471
The problem ofset packingis equivalent to hypergraph matching.
Avertex-packingin a (simple) graph is a subsetPof its vertices, such that no two vertices inPare adjacent.
The problem of finding a maximum vertex-packing in a graph is equivalent to the problem of finding a maximum matching in a hypergraph:[1]: 467
|
https://en.wikipedia.org/wiki/Matching_in_hypergraphs
|
Ingraph theory, afractional matchingis a generalization of amatchingin which, intuitively, each vertex may be broken into fractions that are matched to different neighbor vertices.
Given agraphG= (V,E), a fractional matching inGis a function that assigns, to each edgeeinE, a fractionf(e) in [0, 1], such that for every vertexvinV, the sum of fractions of edges adjacent tovis at most 1:[1]∀v∈V:∑e∋vf(e)≤1{\displaystyle \forall v\in V:\sum _{e\ni v}f(e)\leq 1}A matching in the traditional sense is a special case of a fractional matching, in which the fraction of every edge is either 0 or 1:f(e) = 1 ifeis in the matching, andf(e) = 0 if it is not. For this reason, in the context of fractional matchings, usual matchings are sometimes calledintegral matchings.
The size of an integral matching is the number of edges in the matching, and the matching numberν(G){\displaystyle \nu (G)}of a graphGis the largest size of a matching inG. Analogously, thesizeof a fractional matching is the sum of fractions of all edges. Thefractional matching numberof a graphGis the largest size of a fractional matching inG. It is often denoted byν∗(G){\displaystyle \nu ^{*}(G)}.[2]Since a matching is a special case of a fractional matching, for every graphGone has that the integral matching number ofGis less than or equal to the fractional matching number ofG; in symbols:ν(G)≤ν∗(G).{\displaystyle \nu (G)\leq \nu ^{*}(G).}A graph in whichν(G)=ν∗(G){\displaystyle \nu (G)=\nu ^{*}(G)}is called astable graph.[3]Everybipartite graphis stable; this means that in every bipartite graph, the fractional matching number is an integer and it equals the integral matching number.
In a general graph, ν(G) ≥ (2/3)ν*(G). The fractional matching number is either an integer or a half-integer.[4]
For a bipartite graph G = (X + Y, E), a fractional matching can be presented as a matrix with |X| rows and |Y| columns. The value of the entry in row x and column y is the fraction of the edge (x, y).
A fractional matching is called perfect if the sum of weights adjacent to each vertex is exactly 1. The size of a perfect matching is exactly |V|/2.
In a bipartite graph G = (X + Y, E), a fractional matching is called X-perfect if the sum of weights adjacent to each vertex of X is exactly 1. The size of an X-perfect fractional matching is exactly |X|.
For a bipartite graph G = (X + Y, E), the following are equivalent:
The first condition implies the second because an integral matching is a fractional matching. The second implies the third because, for each subset W of X, the sum of weights near vertices of W is |W|, so the edges adjacent to them are necessarily adjacent to at least |W| vertices of Y. By Hall's marriage theorem, the last condition implies the first one.[5][better source needed]
In a general graph, the above conditions are not equivalent: the largest fractional matching can be larger than the largest integral matching. For example, a 3-cycle admits a perfect fractional matching of size 3/2 (the fraction of every edge is 1/2), but does not admit a perfect integral matching; the largest integral matching is of size 1.
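The 3-cycle example can be verified by brute force. The sketch below (our own illustration, not from the article) checks that assigning 1/2 to every triangle edge gives total size 3/2, while no integral matching contains more than one edge:

```python
from itertools import combinations

triangle = [(0, 1), (1, 2), (2, 0)]

# Perfect fractional matching: every edge gets fraction 1/2,
# so every vertex is covered by exactly 1/2 + 1/2 = 1, and the size is 3/2.
assert sum(0.5 for _ in triangle) == 1.5

def is_matching(edges):
    """An integral matching: no two chosen edges share a vertex."""
    seen = set()
    for u, v in edges:
        if u in seen or v in seen:
            return False
        seen.update((u, v))
    return True

# Brute force over all edge subsets: the largest integral matching has 1 edge.
best = max(len(s) for k in range(len(triangle) + 1)
           for s in combinations(triangle, k) if is_matching(s))
print(best)  # 1
```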
A largest fractional matching in a graph can be easily found by linear programming, or alternatively by a maximum flow algorithm. In a bipartite graph, it is possible to convert a maximum fractional matching to a maximum integral matching of the same size. This leads to a simple polynomial-time algorithm for finding a maximum matching in a bipartite graph.[6]
If G is a bipartite graph with |X| = |Y| = n, and M is a perfect fractional matching, then the matrix representation of M is a doubly stochastic matrix: the sum of elements in each row and each column is 1. Birkhoff's algorithm can be used to decompose the matrix into a convex sum of at most n² − 2n + 2 permutation matrices. This corresponds to decomposing M into a convex sum of at most n² − 2n + 2 perfect matchings.
A fractional matching of maximum cardinality (i.e., maximum sum of fractions) can be found by linear programming. There is also a strongly polynomial time algorithm,[7] using augmenting paths, that runs in time O(|V|·|E|).
Suppose each edge of the graph has a weight. A fractional matching of maximum weight in a graph can be found by linear programming. In a bipartite graph, it is possible to convert a maximum-weight fractional matching to a maximum-weight integral matching of the same size, in the following way:[8]
Given a graph G = (V, E), the fractional matching polytope of G is a convex polytope that represents all possible fractional matchings of G. It is a polytope in R^|E|, the |E|-dimensional Euclidean space. Each point (x1, ..., x|E|) in the polytope represents a matching in which the fraction of each edge e is xe. The polytope is defined by |E| non-negativity constraints (xe ≥ 0 for all e in E) and |V| vertex constraints (the sum of xe, over all edges e that are adjacent to a vertex v, is at most 1). In a bipartite graph, the vertices of the fractional matching polytope are all integral.
|
https://en.wikipedia.org/wiki/Fractional_matching
|
In graph theory, the Dulmage–Mendelsohn decomposition is a partition of the vertices of a bipartite graph into subsets, with the property that two adjacent vertices belong to the same subset if and only if they are paired with each other in a perfect matching of the graph. It is named after A. L. Dulmage and Nathan Mendelsohn, who published it in 1958.[1] A generalization to any graph is the Edmonds–Gallai decomposition, using the blossom algorithm.
The Dulmage–Mendelsohn decomposition can be constructed as follows.[2] (The construction is attributed to,[3] who in turn attribute it to.[4])
Let G be a bipartite graph, M a maximum-cardinality matching in G, and V0 the set of vertices of G unmatched by M (the "free vertices"). Then G can be partitioned into three parts:
An illustration is shown on the left. The bold lines are the edges of M. The weak lines are other edges of G. The red dots are the vertices of V0. Note that V0 is contained in E, since each free vertex is reachable from V0 by a path of length 0.
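Assuming a maximum matching is already available (here represented as a dict mapping each matched vertex to its partner, in both directions), the E/O/U classification above can be computed by a breadth-first search along alternating paths starting from the free vertices. The sketch and its function name are ours, not from the article:

```python
from collections import deque

def dulmage_mendelsohn_EOU(adj, matching):
    """Coarse E/O/U vertex classification for a graph with a maximum matching.

    adj: dict vertex -> set of neighbors.
    matching: dict mapping each matched vertex to its partner (both ways).
    Returns (E, O, U): vertices reachable from a free vertex by an
    alternating path of even / odd length, and the unreachable rest.
    Assumes `matching` is maximum, so E and O are disjoint."""
    free = [v for v in adj if v not in matching]
    even, odd = set(free), set()         # free vertices are at even distance 0
    queue = deque(free)
    while queue:
        u = queue.popleft()              # u is at even distance
        for w in adj[u] - odd:           # non-matching step: distance odd
            if matching.get(u) == w:
                continue                 # must alternate, skip matched edge
            odd.add(w)
            m = matching.get(w)          # matching step back: even again
            if m is not None and m not in even:
                even.add(m)
                queue.append(m)
    return even, odd, set(adj) - even - odd

# Tiny example: a star with center x1 matched to y1; y2 is free.
adj = {'x1': {'y1', 'y2'}, 'y1': {'x1'}, 'y2': {'x1'}}
matching = {'x1': 'y1', 'y1': 'x1'}
E, O, U = dulmage_mendelsohn_EOU(adj, matching)
print(sorted(E), sorted(O), sorted(U))  # ['y1', 'y2'] ['x1'] []
```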
Based on this decomposition, the edges of G can be partitioned into six parts according to their endpoints: E–U, E–E, O–O, O–U, E–O, U–U. This decomposition has the following properties:[3]
Let G = (X + Y, E) be a bipartite graph, and let D be the set of vertices in G that are not matched in at least one maximum matching of G. Then D is necessarily an independent set, so G can be partitioned into three parts:
Every maximum matching in G consists of matchings in the first and second part that match all neighbors of D, together with a perfect matching of the remaining vertices. If G has a perfect matching, then the third set contains all vertices of G.
The third set of vertices in the coarse decomposition (or all vertices in a graph with a perfect matching) may additionally be partitioned into subsets by the following steps:
To see that this subdivision into subsets characterizes the edges that belong to perfect matchings, suppose that two vertices x and y in G belong to the same subset of the decomposition, but are not already matched by the initial perfect matching. Then there exists a strongly connected component in H containing the edge x, y. This edge must belong to a simple cycle in H (by the definition of strong connectivity), which necessarily corresponds to an alternating cycle in G (a cycle whose edges alternate between matched and unmatched edges). This alternating cycle may be used to modify the initial perfect matching to produce a new matching containing the edge x, y.
An edge x, y of the graph G belongs to all perfect matchings of G if and only if x and y are the only members of their set in the decomposition. Such an edge exists if and only if the matching preclusion number of the graph is one.
As another component of the Dulmage–Mendelsohn decomposition, Dulmage and Mendelsohn defined the core of a graph to be the union of its maximum matchings.[5] However, this concept should be distinguished from the core in the sense of graph homomorphisms, and from the k-core formed by the removal of low-degree vertices.
This decomposition has been used to partition meshes in finite element analysis, and to determine specified, underspecified and overspecified equations in systems of nonlinear equations. It was also used for an algorithm for rank-maximal matching.
In [6] there is a different decomposition of a bipartite graph, which is asymmetric: it distinguishes between vertices on one side of the graph and vertices on the other side. It can be used to find a maximum-cardinality envy-free matching in an unweighted bipartite graph, as well as a minimum-cost maximum-cardinality matching in a weighted bipartite graph.[6]
|
https://en.wikipedia.org/wiki/Dulmage%E2%80%93Mendelsohn_decomposition
|
In graph theory, a proper edge coloring of a graph is an assignment of "colors" to the edges of the graph so that no two incident edges have the same color. For example, the figure to the right shows an edge coloring of a graph by the colors red, blue, and green. Edge colorings are one of several different types of graph coloring. The edge-coloring problem asks whether it is possible to color the edges of a given graph using at most k different colors, for a given value of k, or with the fewest possible colors. The minimum required number of colors for the edges of a given graph is called the chromatic index of the graph. For example, the edges of the graph in the illustration can be colored by three colors but cannot be colored by two colors, so the graph shown has chromatic index three.
By Vizing's theorem, the number of colors needed to edge color a simple graph is either its maximum degree Δ or Δ + 1. For some graphs, such as bipartite graphs and high-degree planar graphs, the number of colors is always Δ, and for multigraphs, the number of colors may be as large as 3Δ/2. There are polynomial time algorithms that construct optimal colorings of bipartite graphs, and colorings of non-bipartite simple graphs that use at most Δ + 1 colors; however, the general problem of finding an optimal edge coloring is NP-hard and the fastest known algorithms for it take exponential time. Many variations of the edge-coloring problem, in which an assignment of colors to edges must satisfy conditions other than non-adjacency, have been studied. Edge colorings have applications in scheduling problems and in frequency assignment for fiber optic networks.
A cycle graph may have its edges colored with two colors if the length of the cycle is even: simply alternate the two colors around the cycle. However, if the length is odd, three colors are needed.[1]
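This alternating construction is easy to state as code. The sketch below (our own, not from the article) colors the edges of the cycle C_n, where edge i joins vertices i and (i + 1) mod n:

```python
def color_cycle_edges(n):
    """Proper edge coloring of the cycle graph C_n.
    Even cycles alternate two colors; for odd cycles the last edge is
    adjacent to edges of both colors 0 and 1, so it needs a third color."""
    colors = [i % 2 for i in range(n)]
    if n % 2 == 1:
        colors[-1] = 2
    return colors

def is_proper(n, colors):
    # consecutive edges around the cycle share a vertex
    return all(colors[i] != colors[(i + 1) % n] for i in range(n))

print(color_cycle_edges(6))  # [0, 1, 0, 1, 0, 1]
print(color_cycle_edges(5))  # [0, 1, 0, 1, 2]
```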
A complete graph Kn with n vertices is edge-colorable with n − 1 colors when n is an even number; this is a special case of Baranyai's theorem. Soifer (2008) provides the following geometric construction of a coloring in this case: place n points at the vertices and center of a regular (n − 1)-sided polygon. For each color class, include one edge from the center to one of the polygon vertices, and all of the perpendicular edges connecting pairs of polygon vertices. However, when n is odd, n colors are needed: each color can only be used for (n − 1)/2 edges, a 1/n fraction of the total.[2]
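Soifer's geometric construction is the classic round-robin ("circle") method. It can be sketched as follows (the function name is ours); each color class is one round, consisting of a spoke from the center plus the chords perpendicular to that spoke:

```python
def edge_color_complete(n):
    """Edge coloring of K_n with n - 1 colors, for even n, via the circle
    method: vertex n-1 sits at the center, vertices 0..n-2 on a regular
    (n-1)-gon; color class r pairs the center with polygon vertex r and
    adds the chords symmetric about that spoke."""
    assert n % 2 == 0
    color = {}
    m = n - 1
    for r in range(m):                     # one color class per round
        color[frozenset((m, r))] = r       # spoke: center to vertex r
        for i in range(1, n // 2):
            u, v = (r + i) % m, (r - i) % m
            color[frozenset((u, v))] = r   # chord perpendicular to the spoke
    return color

coloring = edge_color_complete(6)
print(len(coloring), len(set(coloring.values())))  # 15 5
```

Each round is a perfect matching (3 edges for n = 6), so the 5 rounds cover all 15 edges of K6 with 5 colors.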
Several authors have studied edge colorings of the odd graphs, n-regular graphs in which the vertices represent teams of n − 1 players selected from a pool of 2n − 1 players, and in which the edges represent possible pairings of these teams (with one player left as "odd man out" to referee the game). The case n = 3 gives the well-known Petersen graph. As Biggs (1972) explains the problem (for n = 6), the players wish to find a schedule for these pairings such that each team plays each of its six games on different days of the week, with Sundays off for all teams; that is, formalizing the problem mathematically, they wish to find a 6-edge-coloring of the 6-regular odd graph O6. When n is 3, 4, or 8, an edge coloring of On requires n + 1 colors, but when it is 5, 6, or 7, only n colors are needed.[3]
As with its vertex counterpart, an edge coloring of a graph, when mentioned without any qualification, is always assumed to be a proper coloring of the edges, meaning no two adjacent edges are assigned the same color. Here, two distinct edges are considered to be adjacent when they share a common vertex. An edge coloring of a graph G may also be thought of as equivalent to a vertex coloring of the line graph L(G), the graph that has a vertex for every edge of G and an edge for every pair of adjacent edges in G.
A proper edge coloring with k different colors is called a (proper) k-edge-coloring. A graph that can be assigned a k-edge-coloring is said to be k-edge-colorable. The smallest number of colors needed in a (proper) edge coloring of a graph G is the chromatic index, or edge chromatic number, χ′(G). The chromatic index is also sometimes written using the notation χ1(G); in this notation, the subscript one indicates that edges are one-dimensional objects. A graph is k-edge-chromatic if its chromatic index is exactly k. The chromatic index should not be confused with the chromatic number χ(G) or χ0(G), the minimum number of colors needed in a proper vertex coloring of G.
Unless stated otherwise, all graphs are assumed to be simple, in contrast to multigraphs, in which two or more edges may connect the same pair of endpoints and in which there may be self-loops. For many problems in edge coloring, simple graphs behave differently from multigraphs, and additional care is needed to extend theorems about edge colorings of simple graphs to the multigraph case.
A matching in a graph G is a set of edges, no two of which are adjacent; a perfect matching is a matching that includes edges touching all of the vertices of the graph, and a maximum matching is a matching that includes as many edges as possible. In an edge coloring, the set of edges with any one color must all be non-adjacent to each other, so they form a matching. That is, a proper edge coloring is the same thing as a partition of the graph into disjoint matchings.
If the size of a maximum matching in a given graph is small, then many matchings will be needed in order to cover all of the edges of the graph. Expressed more formally, this reasoning implies that if a graph has m edges in total, and if at most β edges may belong to a maximum matching, then every edge coloring of the graph must use at least m/β different colors.[4] For instance, the 16-vertex planar graph shown in the illustration has m = 24 edges. In this graph, there can be no perfect matching; for, if the center vertex is matched, the remaining unmatched vertices may be grouped into three different connected components with four, five, and five vertices, and the components with an odd number of vertices cannot be perfectly matched. However, the graph has maximum matchings with seven edges, so β = 7. Therefore, the number of colors needed to edge-color the graph is at least 24/7, and since the number of colors must be an integer it is at least four.
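The counting argument amounts to a one-line bound: each color class is a matching of at most β edges, so at least ⌈m/β⌉ classes are needed. A trivial Python illustration (the function name is ours):

```python
import math

def edge_color_lower_bound(m, beta):
    """Lower bound ceil(m / beta) on the chromatic index, where m is the
    number of edges and beta the maximum matching size: each color class
    is a matching, so it covers at most beta edges."""
    return math.ceil(m / beta)

# The 16-vertex planar example from the text: 24 edges, maximum matching 7.
print(edge_color_lower_bound(24, 7))  # 4
```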
For a regular graph of degree k that does not have a perfect matching, this lower bound can be used to show that at least k + 1 colors are needed.[4] In particular, this is true for a regular graph with an odd number of vertices (such as the odd complete graphs); for such graphs, by the handshaking lemma, k must itself be even. However, the inequality χ′ ≥ m/β does not fully explain the chromatic index of every regular graph, because there are regular graphs that do have perfect matchings but that are not k-edge-colorable. For instance, the Petersen graph is regular, with m = 15 and with β = 5 edges in its perfect matchings, but it does not have a 3-edge-coloring.
The edge chromatic number of a graph G is very closely related to the maximum degree Δ(G), the largest number of edges incident to any single vertex of G. Clearly, χ′(G) ≥ Δ(G), for if Δ different edges all meet at the same vertex v, then all of these edges need to be assigned different colors from each other, and that can only be possible if there are at least Δ colors available to be assigned. Vizing's theorem (named for Vadim G. Vizing, who published it in 1964) states that this bound is almost tight: for any graph, the edge chromatic number is either Δ(G) or Δ(G) + 1.
When χ′(G) = Δ(G), G is said to be of class 1; otherwise, it is said to be of class 2.
Every bipartite graph is of class 1,[5] and almost all random graphs are of class 1.[6] However, it is NP-complete to determine whether an arbitrary graph is of class 1.[7]
Vizing (1965) proved that planar graphs of maximum degree at least eight are of class one and conjectured that the same is true for planar graphs of maximum degree seven or six. On the other hand, there exist planar graphs of maximum degree ranging from two through five that are of class two. The conjecture has since been proven for graphs of maximum degree seven.[8] Bridgeless planar cubic graphs are all of class 1; this is an equivalent form of the four color theorem.[9]
A 1-factorization of a k-regular graph, a partition of the edges of the graph into perfect matchings, is the same thing as a k-edge-coloring of the graph. That is, a regular graph has a 1-factorization if and only if it is of class 1. As a special case of this, a 3-edge-coloring of a cubic (3-regular) graph is sometimes called a Tait coloring.
Not every regular graph has a 1-factorization; for instance, the Petersen graph does not. More generally, the snarks are defined as the graphs that, like the Petersen graph, are bridgeless, 3-regular, and of class 2.
According to the theorem of Kőnig (1916), every bipartite regular graph has a 1-factorization. The theorem was stated earlier in terms of projective configurations and was proven by Ernst Steinitz.
For multigraphs, in which multiple parallel edges may connect the same two vertices, results that are similar to but weaker than Vizing's theorem are known relating the edge chromatic number χ′(G), the maximum degree Δ(G), and the multiplicity μ(G), the maximum number of edges in any bundle of parallel edges. As a simple example showing that Vizing's theorem does not generalize to multigraphs, consider a Shannon multigraph, a multigraph with three vertices and three bundles of μ(G) parallel edges connecting each of the three pairs of vertices. In this example, Δ(G) = 2μ(G) (each vertex is incident to only two out of the three bundles of μ(G) parallel edges) but the edge chromatic number is 3μ(G) (there are 3μ(G) edges in total, and every two edges are adjacent, so all edges must be assigned different colors from each other). In a result that inspired Vizing,[10] Shannon (1949) showed that this is the worst case: χ′(G) ≤ (3/2)Δ(G) for any multigraph G. Additionally, for any multigraph G, χ′(G) ≤ Δ(G) + μ(G), an inequality that reduces to Vizing's theorem in the case of simple graphs (for which μ(G) = 1).
Because the problem of testing whether a graph is of class 1 is NP-complete, there is no known polynomial time algorithm for edge-coloring every graph with an optimal number of colors. Nevertheless, a number of algorithms have been developed that relax one or more of these criteria: they only work on a subset of graphs, or they do not always use an optimal number of colors, or they do not always run in polynomial time.
In the case of bipartite graphs or multigraphs with maximum degree Δ, the optimal number of colors is exactly Δ. Cole, Ost & Schirra (2001) showed that an optimal edge coloring of these graphs can be found in the near-linear time bound O(m log Δ), where m is the number of edges in the graph; simpler, but somewhat slower, algorithms are described by Cole & Hopcroft (1982) and Alon (2003). The algorithm of Alon (2003) begins by making the input graph regular, without increasing its degree or significantly increasing its size, by merging pairs of vertices that belong to the same side of the bipartition and then adding a small number of additional vertices and edges. Then, if the degree is odd, Alon finds a single perfect matching in near-linear time, assigns it a color, and removes it from the graph, causing the degree to become even. Finally, Alon applies an observation of Gabow (1976), that selecting alternating subsets of edges in an Euler tour of the graph partitions it into two regular subgraphs, to split the edge coloring problem into two smaller subproblems, and his algorithm solves the two subproblems recursively. The total time for his algorithm is O(m log m).
For planar graphs with maximum degree Δ ≥ 7, the optimal number of colors is again exactly Δ. With the stronger assumption that Δ ≥ 9, it is possible to find an optimal edge coloring in linear time (Cole & Kowalik 2008).
For d-regular graphs that are pseudo-random in the sense that their adjacency matrix has second largest eigenvalue (in absolute value) at most d^(1−ε), d is the optimal number of colors (Ferber & Jain 2020).
Misra & Gries (1992) and Gabow et al. (1985) describe polynomial time algorithms for coloring any graph with Δ + 1 colors, meeting the bound given by Vizing's theorem; see Misra & Gries edge coloring algorithm.
For multigraphs, Karloff & Shmoys (1987) present the following algorithm, which they attribute to Eli Upfal. Make the input multigraph G Eulerian by adding a new vertex connected by an edge to every odd-degree vertex, find an Euler tour, and choose an orientation for the tour. Form a bipartite graph H in which there are two copies of each vertex of G, one on each side of the bipartition, with an edge from a vertex u on the left side of the bipartition to a vertex v on the right side of the bipartition whenever the oriented tour has an edge from u to v in G. Apply a bipartite graph edge coloring algorithm to H. Each color class in H corresponds to a set of edges in G that form a subgraph with maximum degree two; that is, a disjoint union of paths and cycles, so for each color class in H it is possible to form three color classes in G. The time for the algorithm is bounded by the time to edge color a bipartite graph, O(m log Δ), using the algorithm of Cole, Ost & Schirra (2001). The number of colors this algorithm uses is at most 3⌈Δ/2⌉, close to but not quite the same as Shannon's bound of ⌊3Δ/2⌋. It may also be made into a parallel algorithm in a straightforward way. In the same paper, Karloff and Shmoys also present a linear time algorithm for coloring multigraphs of maximum degree three with four colors (matching both Shannon's and Vizing's bounds) that operates on similar principles: their algorithm adds a new vertex to make the graph Eulerian, finds an Euler tour, and then chooses alternating sets of edges on the tour to split the graph into two subgraphs of maximum degree two. The paths and even cycles of each subgraph may be colored with two colors per subgraph. After this step, each remaining odd cycle contains at least one edge that may be colored with one of the two colors belonging to the opposite subgraph. Removing this edge from the odd cycle leaves a path, which may be colored using the two colors for its subgraph.
A greedy coloring algorithm that considers the edges of a graph or multigraph one by one, assigning each edge the first available color, may sometimes use as many as 2Δ − 1 colors, nearly twice as many as necessary. However, it has the advantage that it may be used in the online algorithm setting, in which the input graph is not known in advance; in this setting, its competitive ratio is two, and this is optimal: no other online algorithm can achieve a better performance.[11] However, if edges arrive in a random order, and the input graph has a degree that is at least logarithmic, then smaller competitive ratios can be achieved.[12]
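The greedy strategy can be sketched as follows (function name ours, not from the article): each edge gets the smallest color not already used at either endpoint, which never needs more than 2Δ − 1 colors, since an edge sees at most Δ − 1 previously colored edges at each endpoint:

```python
def greedy_edge_coloring(edges):
    """Online greedy edge coloring: process edges in arrival order and
    give each the smallest color unused at both endpoints."""
    used = {}                      # vertex -> set of colors already at it
    coloring = {}
    for u, v in edges:
        taken = used.setdefault(u, set()) | used.setdefault(v, set())
        c = next(c for c in range(len(taken) + 1) if c not in taken)
        coloring[(u, v)] = c
        used[u].add(c)
        used[v].add(c)
    return coloring

# A star K_{1,3}: all edges meet at vertex 0, so three colors are forced.
print(greedy_edge_coloring([(0, 1), (0, 2), (0, 3)]))
# {(0, 1): 0, (0, 2): 1, (0, 3): 2}
```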
Several authors have made conjectures that imply that the fractional chromatic index of any multigraph (a number that can be computed in polynomial time using linear programming) is within one of the chromatic index.[13] If these conjectures are true, it would be possible to compute a number that is never more than one off from the chromatic index in the multigraph case, matching what is known via Vizing's theorem for simple graphs. Although unproven in general, these conjectures are known to hold when the chromatic index is at least Δ + √(Δ/2), as can happen for multigraphs with sufficiently large multiplicity.[14]
It is straightforward to test whether a graph may be edge colored with one or two colors, so the first nontrivial case of edge coloring is testing whether a graph has a 3-edge-coloring.
As Kowalik (2009) showed, it is possible to test whether a graph has a 3-edge-coloring in time O(1.344^n), while using only polynomial space. Although this time bound is exponential, it is significantly faster than a brute force search over all possible assignments of colors to edges. Every biconnected 3-regular graph with n vertices has O(2^(n/2)) 3-edge-colorings, all of which can be listed in time O(2^(n/2)) (somewhat slower than the time to find a single coloring); as Greg Kuperberg observed, the graph of a prism over an n/2-sided polygon has Ω(2^(n/2)) colorings (a lower instead of upper bound), showing that this bound is tight.[15]
By applying exact algorithms for vertex coloring to the line graph of the input graph, it is possible to optimally edge-color any graph with m edges, regardless of the number of colors needed, in time 2^m·m^O(1) and exponential space, or in time O(2.2461^m) and only polynomial space (Björklund, Husfeldt & Koivisto 2009).
Because edge coloring is NP-complete even for three colors, it is unlikely to be fixed parameter tractable when parametrized by the number of colors. However, it is tractable for other parameters. In particular, Zhou, Nakano & Nishizeki (1996) showed that for graphs of treewidth w, an optimal edge coloring can be computed in time O(nw(6w)^(w(w + 1)/2)), a bound that depends superexponentially on w but only linearly on the number n of vertices in the graph.
Nemhauser & Park (1991) formulate the edge coloring problem as an integer program and describe their experience using an integer programming solver to edge color graphs. However, they did not perform any complexity analysis of their algorithm.
A graph is uniquely k-edge-colorable if there is only one way of partitioning the edges into k color classes, ignoring the k! possible permutations of the colors. For k ≠ 3, the only uniquely k-edge-colorable graphs are paths, cycles, and stars, but for k = 3 other graphs may also be uniquely k-edge-colorable. Every uniquely 3-edge-colorable graph has exactly three Hamiltonian cycles (formed by deleting one of the three color classes), but there exist 3-regular graphs that have three Hamiltonian cycles and are not uniquely 3-colorable, such as the generalized Petersen graphs G(6n + 3, 2) for n ≥ 2. The only known nonplanar uniquely 3-colorable graph is the generalized Petersen graph G(9, 2), and it has been conjectured that no others exist.[16]
Folkman & Fulkerson (1969) investigated the non-increasing sequences of numbers m1, m2, m3, ... with the property that there exists a proper edge coloring of a given graph G with m1 edges of the first color, m2 edges of the second color, etc. They observed that, if a sequence P is feasible in this sense, and is greater in lexicographic order than a sequence Q with the same sum, then Q is also feasible. For, if P > Q in lexicographic order, then P can be transformed into Q by a sequence of steps, each of which reduces one of the numbers mi by one unit and increases another later number mj with i < j by one unit. In terms of edge colorings, starting from a coloring that realizes P, each of these same steps may be performed by swapping colors i and j on a Kempe chain, a maximal path of edges that alternate between the two colors. In particular, any graph has an equitable edge coloring, an edge coloring with an optimal number of colors in which every two color classes differ in size by at most one unit.
The De Bruijn–Erdős theorem may be used to transfer many edge coloring properties of finite graphs to infinite graphs. For instance, Shannon's and Vizing's theorems relating the degree of a graph to its chromatic index both generalize straightforwardly to infinite graphs.[17]
Richter (2011) considers the problem of finding a graph drawing of a given cubic graph with the properties that all of the edges in the drawing have one of three different slopes and that no two edges lie on the same line as each other. If such a drawing exists, then clearly the slopes of the edges may be used as colors in a 3-edge-coloring of the graph. For instance, the drawing of the utility graph K3,3 as the edges and long diagonals of a regular hexagon represents a 3-edge-coloring of the graph in this way. As Richter shows, a 3-regular simple bipartite graph, with a given Tait coloring, has a drawing of this type that represents the given coloring if and only if the graph is 3-edge-connected. For a non-bipartite graph, the condition is a little more complicated: a given coloring can be represented by a drawing if the bipartite double cover of the graph is 3-edge-connected, and if deleting any monochromatic pair of edges leads to a subgraph that is still non-bipartite. These conditions may all be tested easily in polynomial time; however, the problem of testing whether a 4-edge-colored 4-regular graph has a drawing with edges of four slopes, representing the colors by slopes, is complete for the existential theory of the reals, a complexity class at least as difficult as being NP-complete.
As well as being related to the maximum degree and maximum matching number of a graph, the chromatic index is closely related to the linear arboricity la(G) of a graph G, the minimum number of linear forests (disjoint unions of paths) into which the graph's edges may be partitioned. A matching is a special kind of linear forest, and in the other direction, any linear forest can be 2-edge-colored, so for every G it follows that la(G) ≤ χ′(G) ≤ 2 la(G). Akiyama's conjecture (named for Jin Akiyama) states that la(G) ≤ ⌈(Δ + 1)/2⌉, from which it would follow more strongly that 2 la(G) − 2 ≤ χ′(G) ≤ 2 la(G). For graphs of maximum degree three, la(G) is always exactly two, so in this case the bound χ′(G) ≤ 2 la(G) matches the bound given by Vizing's theorem.[18]
The Thue number of a graph is the number of colors required in an edge coloring meeting the stronger requirement that, in every even-length path, the first and second halves of the path form different sequences of colors.
The arboricity of a graph is the minimum number of colors required so that the edges of each color have no cycles (rather than, in the standard edge coloring problem, having no adjacent pairs of edges). That is, it is the minimum number of forests into which the edges of the graph may be partitioned.[19] Unlike the chromatic index, the arboricity of a graph may be computed in polynomial time.[20]
List edge-coloring is a problem in which one is given a graph in which each edge is associated with a list of colors, and must find a proper edge coloring in which the color of each edge is drawn from that edge's list. The list chromatic index of a graph G is the smallest number k with the property that, no matter how one chooses lists of colors for the edges, as long as each edge has at least k colors in its list, a coloring is guaranteed to be possible. Thus, the list chromatic index is always at least as large as the chromatic index. The Dinitz conjecture on the completion of partial Latin squares may be rephrased as the statement that the list edge chromatic number of the complete bipartite graph Kn,n equals its edge chromatic number, n. Galvin (1995) resolved the conjecture by proving, more generally, that in every bipartite graph the chromatic index and list chromatic index are equal. The equality between the chromatic index and the list chromatic index has been conjectured to hold, even more generally, for arbitrary multigraphs with no self-loops; this conjecture remains open.
Many other commonly studied variations of vertex coloring have also been extended to edge colorings. For instance, complete edge coloring is the edge-coloring variant of complete coloring, a proper edge coloring in which each pair of colors must be represented by at least one pair of adjacent edges and in which the goal is to maximize the total number of colors.[21] Strong edge coloring is the edge-coloring variant of strong coloring, an edge coloring in which every two edges with adjacent endpoints must have different colors.[22] Strong edge coloring has applications in channel allocation schemes for wireless networks.[23]
Acyclic edge coloring is the edge-coloring variant of acyclic coloring, an edge coloring for which every two color classes form an acyclic subgraph (that is, a forest).[24] The acyclic chromatic index of a graph G, denoted by a′(G), is the smallest number of colors needed for a proper acyclic edge coloring of G. It has been conjectured that a′(G) ≤ Δ + 2, where Δ is the maximum degree of G.[25] Currently the best known bound is a′(G) ≤ ⌈3.74(Δ − 1)⌉.[26] The problem becomes easier when G has large girth. More specifically, there is a constant c such that if the girth of G is at least cΔ log Δ, then a′(G) ≤ Δ + 2.[27] A similar result is that for all ε > 0 there exists a g such that if G has girth at least g, then a′(G) ≤ (1 + ε)Δ.[28]
Eppstein (2013) studied 3-edge-colorings of cubic graphs with the additional property that no two bichromatic cycles share more than a single edge with each other. He showed that the existence of such a coloring is equivalent to the existence of a drawing of the graph on a three-dimensional integer grid, with edges parallel to the coordinate axes and each axis-parallel line containing at most two vertices. However, like the standard 3-edge-coloring problem, finding a coloring of this type is NP-complete.
Total coloring is a form of coloring that combines vertex and edge coloring, by requiring both the vertices and edges to be colored. Any incident pair of a vertex and an edge, or an edge and an edge, must have distinct colors, as must any two adjacent vertices. It has been conjectured (combining Vizing's theorem and Brooks' theorem) that any graph has a total coloring in which the number of colors is at most the maximum degree plus two, but this remains unproven.
If a 3-regular graph on a surface is 3-edge-colored, its dual graph forms a triangulation of the surface which is also edge colored (although not, in general, properly edge colored) in such a way that every triangle has one edge of each color. Other colorings and orientations of triangulations, with other local constraints on how the colors are arranged at the vertices or faces of the triangulation, may be used to encode several types of geometric object. For instance, rectangular subdivisions (partitions of a rectangle into smaller rectangles, with three rectangles meeting at every vertex) may be described combinatorially by a "regular labeling", a two-coloring of the edges of a triangulation dual to the subdivision, with the constraint that the edges incident to each vertex form four contiguous subsequences, within each of which the colors are the same. This labeling is dual to a coloring of the rectangular subdivision itself in which the vertical edges have one color and the horizontal edges have the other color. Similar local constraints on the order in which colored edges may appear around a vertex may also be used to encode straight-line grid embeddings of planar graphs and three-dimensional polyhedra with axis-parallel sides. For each of these three types of regular labelings, the set of regular labelings of a fixed graph forms a distributive lattice that may be used to quickly list all geometric structures based on the same graph (such as all axis-parallel polyhedra having the same skeleton) or to find structures satisfying additional constraints.[29]
A deterministic finite automaton may be interpreted as a directed graph in which each vertex has the same out-degree d, and in which the edges are d-colored in such a way that every two edges with the same source vertex have distinct colors. The road coloring problem is the problem of edge-coloring a directed graph with uniform out-degrees, in such a way that the resulting automaton has a synchronizing word. Trahtman (2009) solved the road coloring problem by proving that such a coloring can be found whenever the given graph is strongly connected and aperiodic.
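Whether a given edge coloring yields a synchronizing word can be tested by breadth-first search on the subset automaton; the sketch below (names and representation are assumed, and the subset search is exponential in general, so it is only for small examples) returns a synchronizing word when one exists:

```python
from collections import deque

def synchronizing_word(states, alphabet, delta):
    """Breadth-first search on the subset automaton for a word that
    sends every state to a single state.  delta[(q, a)] gives the
    transition from state q on letter a.  Returns a word or None."""
    start = frozenset(states)
    word = {start: ""}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        if len(s) == 1:
            return word[s]                 # all states collapsed: done
        for a in alphabet:
            t = frozenset(delta[(q, a)] for q in s)
            if t not in word:
                word[t] = word[s] + a
                queue.append(t)
    return None                            # the automaton is not synchronizing
```

For example, on four states where letter 'a' rotates the states cyclically and letter 'b' merges state 0 into state 1 (a Černý-style automaton), a synchronizing word exists and is found by the search.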
Ramsey's theorem concerns the problem of k-coloring the edges of a large complete graph Kn in order to avoid creating monochromatic complete subgraphs Ks of some given size s. According to the theorem, there exists a number Rk(s) such that, whenever n ≥ Rk(s), such a coloring is not possible. For instance, R2(3) = 6; that is, if the edges of the graph K6 are 2-colored, there will always be a monochromatic triangle.
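The claim R2(3) = 6 is small enough to verify exhaustively: every 2-coloring of the 15 edges of K6 contains a monochromatic triangle, while K5 admits a coloring (the pentagon/pentagram coloring) that avoids one. A brute-force sketch (the function name is my own):

```python
from itertools import combinations, product

def forces_mono_triangle(n):
    """True if EVERY 2-coloring of the edges of K_n contains a
    monochromatic triangle.  Brute force over all 2^(n choose 2)
    colorings: feasible only for small n."""
    edges = list(combinations(range(n), 2))
    triangles = list(combinations(range(n), 3))
    for colors in product((0, 1), repeat=len(edges)):
        col = dict(zip(edges, colors))
        if not any(col[(a, b)] == col[(a, c)] == col[(b, c)]
                   for a, b, c in triangles):
            return False                   # found a triangle-free 2-coloring
    return True
```

Running it confirms `forces_mono_triangle(6)` is true and `forces_mono_triangle(5)` is false, which together give R2(3) = 6.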
A path in an edge-colored graph is said to be a rainbow path if no color repeats on it. A graph is said to be rainbow colored if there is a rainbow path between every pair of vertices.
An edge-colouring of a graph G with colours 1, ..., t is an interval t-colouring if all colours are used, and the colours of edges incident to each vertex of G are distinct and form an interval of integers.
Edge colorings of complete graphs may be used to schedule a round-robin tournament into as few rounds as possible so that each pair of competitors plays each other in one of the rounds; in this application, the vertices of the graph correspond to the competitors in the tournament, the edges correspond to games, and the edge colors correspond to the rounds in which the games are played.[30] Similar coloring techniques may also be used to schedule other sports pairings that are not all-play-all; for instance, in the National Football League, the pairs of teams that will play each other in a given year are determined, based on the teams' records from the previous year, and then an edge coloring algorithm is applied to the graph formed by the set of pairings in order to assign games to the weekends on which they are played.[31] For this application, Vizing's theorem implies that no matter what set of pairings is chosen (as long as no teams play each other twice in the same season), it is always possible to find a schedule that uses at most one more weekend than there are games per team.
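A round-robin schedule realizing the optimal edge coloring of K_n can be generated with the classical circle method, which is one standard construction (not specifically the one used in the cited applications); a sketch:

```python
def round_robin(teams):
    """Circle method: one team stays fixed while the others rotate.
    For an even number n of teams this yields n - 1 rounds, matching
    the optimal proper edge coloring of the complete graph K_n."""
    n = len(teams)
    if n % 2:
        teams = teams + ["bye"]            # odd number: add a dummy team
        n += 1
    fixed, rest = teams[0], list(teams[1:])
    rounds = []
    for _ in range(n - 1):
        line = [fixed] + rest
        rounds.append([(line[i], line[n - 1 - i]) for i in range(n // 2)])
        rest = rest[-1:] + rest[:-1]       # rotate the circle one step
    return rounds
```

Each round is a perfect matching on the teams, and over all n − 1 rounds every pair meets exactly once.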
Open shop scheduling is a problem of scheduling production processes, in which there are a set of objects to be manufactured, each object has a set of tasks to be performed on it (in any order), and each task must be performed on a specific machine, preventing any other task that requires the same machine from being performed at the same time. If all tasks have the same length, then this problem may be formalized as one of edge coloring a bipartite multigraph, in which the vertices on one side of the bipartition represent the objects to be manufactured, the vertices on the other side of the bipartition represent the manufacturing machines, the edges represent tasks that must be performed, and the colors represent time steps in which each task may be performed. Since bipartite edge coloring may be performed in polynomial time, the same is true for this restricted case of open shop scheduling.[32]
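The polynomial-time bipartite case rests on König's edge coloring theorem: a bipartite graph can always be properly edge-colored with Δ colors. The classical alternating-path (Kempe chain) argument can be turned into code; the sketch below handles simple bipartite graphs (names and representation are assumptions; a production scheduler would also need multigraph support):

```python
from collections import defaultdict

def bipartite_edge_coloring(edges):
    """Properly color the edges of a simple bipartite graph with
    Delta colors (Konig's edge coloring theorem) by alternating-path
    (Kempe chain) recoloring.  Returns {frozenset(edge): color}."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    delta = max(deg.values())
    at = defaultdict(dict)     # at[x][c] = neighbor joined to x by color c
    coloring = {}

    def set_color(x, y, c):
        coloring[frozenset((x, y))] = c
        at[x][c] = y
        at[y][c] = x

    for u, v in edges:
        a = next(c for c in range(delta) if c not in at[u])   # free at u
        b = next(c for c in range(delta) if c not in at[v])   # free at v
        if a != b:
            # swap colors a/b along the alternating path from v; in a
            # bipartite graph this path can never reach u, so a becomes
            # free at both endpoints
            path, x, c = [], v, a
            while c in at[x]:
                y = at[x][c]
                path.append((x, y, c))
                x, c = y, (b if c == a else a)
            for x, y, c in path:
                del at[x][c]
                del at[y][c]
            for x, y, c in path:
                set_color(x, y, b if c == a else a)
        set_color(u, v, a)
    return coloring
```

In the scheduling reading, the color of each edge is the time step assigned to that task; K3,3, for instance, is colored with exactly three time steps.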
Gandham, Dawande & Prakash (2005) study the problem of link scheduling for time-division multiple access network communications protocols on sensor networks as a variant of edge coloring. In this problem, one must choose time slots for the edges of a wireless communications network so that each node of the network can communicate with each neighboring node without interference. Using a strong edge coloring (and using two time slots for each edge color, one for each direction) would solve the problem but might use more time slots than necessary. Instead, they seek a coloring of the directed graph formed by doubling each undirected edge of the network, with the property that each directed edge uv has a different color from the edges that go out from v and from the neighbors of v. They propose a heuristic for this problem based on a distributed algorithm for (Δ + 1)-edge-coloring together with a postprocessing phase that reschedules edges that might interfere with each other.
In fiber-optic communication, the path coloring problem is the problem of assigning colors (frequencies of light) to pairs of nodes that wish to communicate with each other, and paths through a fiber-optic communications network for each pair, subject to the restriction that no two paths that share a segment of fiber use the same frequency as each other. Paths that pass through the same communication switch but not through any segment of fiber are allowed to use the same frequency. When the communications network is arranged as a star network, with a single central switch connected by separate fibers to each of the nodes, the path coloring problem may be modeled exactly as a problem of edge coloring a graph or multigraph, in which the communicating nodes form the graph vertices, pairs of nodes that wish to communicate form the graph edges, and the frequencies that may be used for each pair form the colors of the edge coloring problem. For communications networks with a more general tree topology, local path coloring solutions for the star networks defined by each switch in the network may be patched together to form a single global solution.[33]
Jensen & Toft (1995) list 23 open problems concerning edge coloring. They include:
https://en.wikipedia.org/wiki/Edge_coloring
In graph theory, a branch of mathematics, the matching preclusion number of a graph G (denoted mp(G)) is the minimum number of edges whose deletion results in the elimination of all perfect matchings or near-perfect matchings (matchings that cover all but one vertex in a graph with an odd number of vertices).[1] Matching preclusion measures the robustness of a graph as a communications network topology for distributed algorithms that require each node of the distributed system to be matched with a neighboring partner node.[2]
In many graphs, mp(G) is equal to the minimum degree of any vertex in the graph, because deleting all edges incident to a single vertex prevents that vertex from being matched. This set of edges is called a trivial matching preclusion set.[2] A variant definition, the conditional matching preclusion number, asks for the minimum number of edges whose deletion results in a graph with no isolated vertices that has neither a perfect nor a near-perfect matching.[3][4]
It is NP-complete to test whether the matching preclusion number of a given graph is below a given threshold.[5][6]
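Consistent with that hardness, the obvious exact method is exhaustive: try every set of k edge deletions and check whether a perfect or near-perfect matching survives. A brute-force sketch for small graphs (function names are my own):

```python
from itertools import combinations

def has_max_matching(n_vertices, edges):
    """True if the graph has a perfect matching (n even) or a
    near-perfect matching (n odd), by brute-force search."""
    need = n_vertices // 2
    if need == 0:
        return True
    for sub in combinations(edges, need):
        ends = [x for e in sub for x in e]
        if len(set(ends)) == len(ends):    # edges pairwise disjoint
            return True
    return False

def matching_preclusion(n_vertices, edges):
    """mp(G): fewest edge deletions destroying all perfect and
    near-perfect matchings.  Doubly exponential scan: tiny graphs only."""
    for k in range(len(edges) + 1):
        for removed in combinations(range(len(edges)), k):
            rest = [e for i, e in enumerate(edges) if i not in removed]
            if not has_max_matching(n_vertices, rest):
                return k
```

On the 4-cycle and on K4 this recovers the trivial bound: mp equals the minimum degree (2 and 3 respectively).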
The strong matching preclusion number (or simply, SMP number) is a generalization of the matching preclusion number; the SMP number of a graph G, smp(G), is the minimum number of vertices and/or edges whose deletion results in a graph that has neither perfect matchings nor almost-perfect matchings.[7]
Other numbers defined in a similar way by edge deletion in an undirected graph include the edge connectivity, the minimum number of edges to delete in order to disconnect the graph, and the cyclomatic number, the minimum number of edges to delete in order to eliminate all cycles.
https://en.wikipedia.org/wiki/Matching_preclusion
In the mathematical discipline of graph theory, a rainbow matching in an edge-colored graph is a matching in which all the edges have distinct colors.
Given an edge-colored graph G = (V, E), a rainbow matching M in G is a set of pairwise non-adjacent edges, that is, no two edges share a common vertex, such that all the edges in the set have distinct colors.
A maximum rainbow matching is a rainbow matching that contains the largest possible number of edges.
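Since finding a maximum rainbow matching is hard in general (as discussed later for bipartite graphs), a brute-force search is the simplest correct baseline for small instances; a sketch (names and representation are my own):

```python
from itertools import combinations

def max_rainbow_matching_size(edges, color):
    """Size of a maximum rainbow matching: the largest set of pairwise
    disjoint edges with pairwise distinct colors.  Brute force over
    edge subsets, so suitable only for small graphs."""
    for k in range(len(edges), 0, -1):
        for sub in combinations(edges, k):
            ends = [x for e in sub for x in e]
            cols = {color[e] for e in sub}
            if len(set(ends)) == 2 * k and len(cols) == k:
                return k                   # disjoint edges, distinct colors
    return 0
```

For the properly 2-colored K2,2 (both perfect matchings monochromatic) this returns 1; recoloring one edge with a third color raises it to 2.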
Rainbow matchings are of particular interest given their connection to transversals of Latin squares.
Denote by Kn,n the complete bipartite graph on n + n vertices. Every proper n-edge coloring of Kn,n corresponds to a Latin square of order n. A rainbow matching then corresponds to a transversal of the Latin square, meaning a selection of n positions, one in each row and each column, containing distinct entries.
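Under this correspondence, searching for a perfect rainbow matching in Kn,n is exactly searching for a transversal of the Latin square; a small backtracking sketch (names are assumptions):

```python
def latin_transversal(square):
    """Backtracking search for a transversal of an n x n Latin square:
    one cell per row and column with pairwise distinct entries.
    Returns the chosen column for each row, or None if none exists."""
    n = len(square)
    cols, vals, pick = set(), set(), []

    def extend(row):
        if row == n:
            return True
        for c in range(n):
            v = square[row][c]
            if c not in cols and v not in vals:
                cols.add(c); vals.add(v); pick.append(c)
                if extend(row + 1):
                    return True
                cols.remove(c); vals.remove(v); pick.pop()
        return False

    return pick if extend(0) else None
```

The order-2 Latin square has no transversal (matching the K2,2 example below), while the cyclic square of order 3 has one, e.g. its main diagonal.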
This connection between transversals of Latin squares and rainbow matchings in Kn,n has inspired additional interest in the study of rainbow matchings in triangle-free graphs.[1]
An edge-coloring is called proper if each edge has a single color, and each two edges of the same color have no vertex in common.
A proper edge-coloring does not guarantee the existence of a perfect rainbow matching. For example, consider the graph K2,2: the complete bipartite graph on 2 + 2 vertices. Suppose the edges (x1, y1) and (x2, y2) are colored green, and the edges (x1, y2) and (x2, y1) are colored blue. This is a proper coloring, but there are only two perfect matchings, and each of them is colored by a single color. This raises the question: when is a large rainbow matching guaranteed to exist?
Much of the research on this question was published using the terminology of Latin transversals in Latin squares. Translated into the rainbow matching terminology:
A more general conjecture of Stein is that a rainbow matching of size n − 1 exists not only for a proper edge-coloring, but for any coloring in which each color appears on exactly n edges.[2]
Some weaker versions of these conjectures have been proved:
Wang asked if there is a function f(d) such that every properly edge-colored graph G with minimum degree d and at least f(d) vertices must have a rainbow matching of size d.[9] Obviously at least 2d vertices are necessary, but how many are sufficient?
Suppose that each edge may have several different colors, while each two edges of the same color must still have no vertex in common. In other words, each color is a matching. How many colors are needed in order to guarantee the existence of a rainbow matching?
Drisko[12] studied this question using the terminology of Latin rectangles. He proved that, for any n ≤ k, in the complete bipartite graph Kn,k, any family of 2n − 1 matchings (= colors) of size n has a perfect rainbow matching (of size n). He applied this theorem to questions about group actions and difference sets.
Drisko also showed that 2n − 1 matchings may be necessary: consider a family of 2n − 2 matchings, of which n − 1 are { (x1, y1), (x2, y2), ..., (xn, yn) } and the other n − 1 are { (x1, y2), (x2, y3), ..., (xn, y1) }. Then the largest rainbow matching is of size n − 1 (e.g. take one edge from each of the first n − 1 matchings).
Alon[13] showed that Drisko's theorem implies an older result[14] in additive number theory.
Aharoni and Berger[15] generalized Drisko's theorem to any bipartite graph, namely: any family of 2n − 1 matchings of size n in a bipartite graph has a rainbow matching of size n.
Aharoni, Kotlar and Ziv[16] showed that Drisko's extremal example is unique in any bipartite graph.
In general graphs, 2n − 1 matchings are no longer sufficient. When n is even, one can add to Drisko's example the matching { (x1, x2), (y1, y2), (x2, x3), (y2, y3), ... } and get a family of 2n − 1 matchings without any rainbow matching.
Aharoni, Berger, Chudnovsky, Howard and Seymour[17] proved that, in a general graph, 3n − 2 matchings (= colors) are always sufficient. It is not known whether this is tight: currently the best lower bound for even n is 2n and for odd n it is 2n − 1.[18]
A fractional matching is a set of edges with a non-negative weight assigned to each edge, such that the sum of weights adjacent to each vertex is at most 1. The size of a fractional matching is the sum of weights of all edges. It is a generalization of a matching, and can be used to generalize both the colors and the rainbow matching:
It is known that, in a bipartite graph, the maximum fractional matching size equals the maximum matching size. Therefore, the theorem of Aharoni and Berger[15] is equivalent to the following. Let n be any positive integer. Given any family of 2n − 1 fractional matchings (= colors) of size n in a bipartite graph, there exists a rainbow fractional matching of size n.
Aharoni, Holzman and Jiang extend this theorem to arbitrary graphs as follows. Let n be any positive integer or half-integer. Any family of 2n fractional matchings (= colors) of size at least n in an arbitrary graph has a rainbow fractional matching of size n.[18]: Thm. 1.5  The bound 2n is the smallest possible for fractional matchings in arbitrary graphs: the extremal case is constructed using an odd-length cycle.
For the case of perfect fractional matchings, both of the above theorems can be derived from the colorful Carathéodory theorem.
For every edge e in E, let 1e be a vector of size |V|, where for each vertex v in V, element v in 1e equals 1 if e is adjacent to v, and 0 otherwise (so each vector 1e has 2 ones and |V| − 2 zeros). Every fractional matching corresponds to a conical combination of edges, in which each element is at most 1. A conical combination in which each element is exactly 1 corresponds to a perfect fractional matching. In other words, a collection F of edges admits a perfect fractional matching, if and only if 1v (the vector of |V| ones) is contained in the conical hull of the vectors 1e for e in F.
Consider a graph with 2n vertices, and suppose there are 2n subsets of edges, each of which admits a perfect fractional matching (of size n). This means that the vector 1v is in the conical hull of each of these subsets. By the colorful Carathéodory theorem, there exists a selection of 2n edges, one from each subset, whose conical hull contains 1v. This corresponds to a rainbow perfect fractional matching. The expression 2n is the dimension of the vectors 1e: each vector has 2n elements.
Now, suppose that the graph is bipartite. In a bipartite graph, there is a constraint on the vectors 1e: the sum of elements corresponding to each part of the graph must be 1. Therefore, the vectors 1e live in a (2n − 1)-dimensional space. Therefore, the same argument as above holds when there are only 2n − 1 subsets of edges.
An r-uniform hypergraph is a set of hyperedges each of which contains exactly r vertices (so a 2-uniform hypergraph is just a graph without self-loops). Aharoni, Holzman and Jiang extend their theorem to such hypergraphs as follows. Let n be any positive rational number. Any family of ⌈r⋅n⌉ fractional matchings (= colors) of size at least n in an r-uniform hypergraph has a rainbow fractional matching of size n.[18]: Thm. 1.6  The ⌈r⋅n⌉ is the smallest possible when n is an integer.
An r-partite hypergraph is an r-uniform hypergraph in which the vertices are partitioned into r disjoint sets and each hyperedge contains exactly one vertex of each set (so a 2-partite hypergraph is just a bipartite graph). Let n be any positive integer. Any family of rn − r + 1 fractional matchings (= colors) of size at least n in an r-partite hypergraph has a rainbow fractional matching of size n.[18]: Thm. 1.7  The rn − r + 1 is the smallest possible: the extremal case is when n = r − 1 is a prime power, and all colors are edges of the truncated projective plane of order n. So each color has n² = rn − r + 1 edges and a fractional matching of size n, but any fractional matching of that size requires all rn − r + 1 edges.[19]
For the case of perfect fractional matchings, both of the above theorems can be derived from the colorful Carathéodory theorem in the previous section. For a general r-uniform hypergraph (admitting a perfect matching of size n), the vectors 1e live in an (rn)-dimensional space. For an r-uniform r-partite hypergraph, the r-partiteness constraints imply that the vectors 1e live in an (rn − r + 1)-dimensional space.
The above results hold only for rainbow fractional matchings. In contrast, the case of rainbow integral matchings in r-uniform hypergraphs is much less understood. The number of required matchings for a rainbow matching of size n grows at least exponentially with n.
Garey and Johnson have shown that computing a maximum rainbow matching is NP-complete even for edge-colored bipartite graphs.[20]
Rainbow matchings have been applied for solving packing problems.[21]
https://en.wikipedia.org/wiki/Rainbow_matching
In graph theory, a branch of mathematics, a skew-symmetric graph is a directed graph that is isomorphic to its own transpose graph, the graph formed by reversing all of its edges, under an isomorphism that is an involution without any fixed points. Skew-symmetric graphs are identical to the double covering graphs of bidirected graphs.
Skew-symmetric graphs were first introduced under the name of antisymmetrical digraphs by Tutte (1967), later as the double covering graphs of polar graphs by Zelinka (1976b), and still later as the double covering graphs of bidirected graphs by Zaslavsky (1991). They arise in modeling the search for alternating paths and alternating cycles in algorithms for finding matchings in graphs, in testing whether a still life pattern in Conway's Game of Life may be partitioned into simpler components, in graph drawing, and in the implication graphs used to efficiently solve the 2-satisfiability problem.
As defined, e.g., by Goldberg & Karzanov (1996), a skew-symmetric graph G is a directed graph, together with a function σ mapping vertices of G to other vertices of G, satisfying the following properties:
One may use the third property to extend σ to an orientation-reversing function on the edges of G.
The transpose graph of G is the graph formed by reversing every edge of G, and σ defines a graph isomorphism from G to its transpose. However, in a skew-symmetric graph, it is additionally required that the isomorphism pair each vertex with a different vertex, rather than allowing a vertex to be mapped to itself by the isomorphism or to group more than two vertices in a cycle of isomorphism.
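Given a candidate map σ, the definition is straightforward to check mechanically: σ must be a fixed-point-free involution on the vertices that sends each edge to its reversal. A sketch (names and representation are my own):

```python
def is_skew_symmetric(vertices, edges, sigma):
    """Check that sigma exhibits the digraph as skew-symmetric:
    sigma is a fixed-point-free involution on the vertices that maps
    each directed edge (u, v) to the reversed edge (sigma(v), sigma(u))."""
    arcs = set(edges)
    if any(sigma[v] == v for v in vertices):
        return False                       # sigma must have no fixed point
    if any(sigma[sigma[v]] != v for v in vertices):
        return False                       # sigma must be an involution
    return all((sigma[v], sigma[u]) in arcs for u, v in arcs)
```

For the directed 4-cycle 0 → 1 → 2 → 3 → 0, the map σ(i) = (1 − i) mod 4 witnesses skew symmetry, whereas the rotation σ(i) = (i + 2) mod 4 is a fixed-point-free involution but does not reverse edges.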
A path or cycle in a skew-symmetric graph is said to be regular if, for each vertex v of the path or cycle, the corresponding vertex σ(v) is not part of the path or cycle.
Every directed path graph with an even number of vertices is skew-symmetric, via a symmetry that swaps the two ends of the path. However, path graphs with an odd number of vertices are not skew-symmetric, because the orientation-reversing symmetry of these graphs maps the center vertex of the path to itself, something that is not allowed for skew-symmetric graphs.
Similarly, a directed cycle graph is skew-symmetric if and only if it has an even number of vertices. In this case, the number of different mappings σ that realize the skew symmetry of the graph equals half the length of the cycle.
A skew-symmetric graph may equivalently be defined as the double covering graph of a polar graph or switch graph,[1] which is an undirected graph in which the edges incident to each vertex are partitioned into two subsets. Each vertex of the polar graph corresponds to two vertices of the skew-symmetric graph, and each edge of the polar graph corresponds to two edges of the skew-symmetric graph. This equivalence is the one used by Goldberg & Karzanov (1996) to model problems of matching in terms of skew-symmetric graphs; in that application, the two subsets of edges at each vertex are the unmatched edges and the matched edges. Zelinka (following F. Zitek) and Cook visualize the vertices of a polar graph as points where multiple tracks of a train track come together: if a train enters a switch via a track that comes in from one direction, it must exit via a track in the other direction. The problem of finding non-self-intersecting smooth curves between given points in a train track comes up in testing whether certain kinds of graph drawings are valid,[2] and may be modeled as the search for a regular path in a skew-symmetric graph.
A closely related concept is the bidirected graph or polarized graph,[3] a graph in which each of the two ends of each edge may be either a head or a tail, independently of the other end. A bidirected graph may be interpreted as a polar graph by letting the partition of edges at each vertex be determined by the partition of endpoints at that vertex into heads and tails; however, swapping the roles of heads and tails at a single vertex ("switching" the vertex) produces a different bidirected graph but the same polar graph.[4]
To form the double covering graph (i.e., the corresponding skew-symmetric graph) from a polar graph G, create for each vertex v of G two vertices v0 and v1, and let σ(vi) = v1−i. For each edge e = (u, v) of G, create two directed edges in the covering graph, one oriented from u to v and one oriented from v to u. If e is in the first subset of edges at v, these two edges are from u0 into v0 and from v1 into u1, while if e is in the second subset, the edges are from u0 into v1 and from v0 into u1.
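This construction is easy to mechanize. The sketch below keys the arc endpoints on the edge's subset at both endpoints, which reduces to the rule stated above when e lies in the first subset at u; that generalization, and the data representation, are assumptions of mine rather than part of the source:

```python
def double_cover(polar_edges):
    """Skew-symmetric double cover of a polar graph.  Each polar edge
    is a tuple (u, su, v, sv): an undirected edge lying in subset su
    (0 or 1) at u and subset sv at v.  Each vertex v splits into
    (v, 0) and (v, 1), with sigma((v, i)) = (v, 1 - i)."""
    arcs = set()
    for u, su, v, sv in polar_edges:
        arcs.add(((u, su), (v, sv)))            # one copy of the edge
        arcs.add(((v, 1 - sv), (u, 1 - su)))    # its sigma image, reversed
    return arcs
```

By construction the resulting arc set is skew-symmetric: the image of each arc under σ, reversed, is again an arc.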
In the other direction, given a skew-symmetric graph G, one may form a polar graph that has one vertex for every corresponding pair of vertices in G and one undirected edge for every corresponding pair of edges in G. The undirected edges at each vertex of the polar graph may be partitioned into two subsets according to which vertex of the polar graph they go out of and come into.
A regular path or cycle of a skew-symmetric graph corresponds to a path or cycle in the polar graph that uses at most one edge from each subset of edges at each of its vertices.
In constructing matchings in undirected graphs, it is important to find alternating paths, paths of vertices that start and end at unmatched vertices, in which the edges at odd positions in the path are not part of a given partial matching and in which the edges at even positions in the path are part of the matching. By removing the matched edges of such a path from a matching, and adding the unmatched edges, one can increase the size of the matching. Similarly, cycles that alternate between matched and unmatched edges are of importance in weighted matching problems.
An alternating path or cycle in an undirected graph may be modeled as a regular path or cycle in a skew-symmetric directed graph.[5] To create a skew-symmetric graph from an undirected graph G with a specified matching M, view G as a switch graph in which the edges at each vertex are partitioned into matched and unmatched edges; an alternating path in G is then a regular path in this switch graph and an alternating cycle in G is a regular cycle in the switch graph.
Goldberg & Karzanov (1996) generalized alternating path algorithms to show that the existence of a regular path between any two vertices of a skew-symmetric graph may be tested in linear time. Given additionally a non-negative length function on the edges of the graph that assigns the same length to any edge e and to σ(e), the shortest regular path connecting a given pair of nodes in a skew-symmetric graph with m edges and n vertices may be found in time O(m log n). If the length function is allowed to have negative lengths, the existence of a negative regular cycle may be tested in polynomial time.
Along with the path problems arising in matchings, skew-symmetric generalizations of the max-flow min-cut theorem have also been studied.[6]
Cook (2003) shows that a still life pattern in Conway's Game of Life may be partitioned into two smaller still lifes if and only if an associated switch graph contains a regular cycle. As he shows, for switch graphs with at most three edges per vertex, this may be tested in polynomial time by repeatedly removing bridges (edges the removal of which disconnects the graph) and vertices at which all edges belong to a single partition until no more such simplifications may be performed. If the result is an empty graph, there is no regular cycle; otherwise, a regular cycle may be found in any remaining bridgeless component. The repeated search for bridges in this algorithm may be performed efficiently using a dynamic graph algorithm of Thorup (2000).
Similar bridge-removal techniques in the context of matching were previously considered by Gabow, Kaplan & Tarjan (1999).
An instance of the 2-satisfiability problem, that is, a Boolean expression in conjunctive normal form with two variables or negations of variables per clause, may be transformed into an implication graph by replacing each clause u ∨ v by the two implications (¬u) ⇒ v and (¬v) ⇒ u. This graph has a vertex for each variable or negated variable, and a directed edge for each implication; it is, by construction, skew-symmetric, with a correspondence σ that maps each variable to its negation.
As Aspvall, Plass & Tarjan (1979) showed, a satisfying assignment to the 2-satisfiability instance is equivalent to a partition of this implication graph into two subsets of vertices, S and σ(S), such that no edge starts in S and ends in σ(S). If such a partition exists, a satisfying assignment may be formed by assigning a true value to every variable in S and a false value to every variable in σ(S). This may be done if and only if no strongly connected component of the graph contains both some vertex v and its complementary vertex σ(v). If two vertices belong to the same strongly connected component, the corresponding variables or negated variables are constrained to equal each other in any satisfying assignment of the 2-satisfiability instance. The total time for testing strong connectivity and finding a partition of the implication graph is linear in the size of the given 2-CNF expression.
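The linear-time procedure can be sketched concretely with Kosaraju's strongly-connected-components algorithm on the implication graph: a variable is set true exactly when its literal's component comes later in topological order than its negation's. This is a sketch in the spirit of Aspvall, Plass & Tarjan (the interface and names are my own):

```python
from collections import defaultdict

def solve_2sat(n, clauses):
    """2-SAT via strongly connected components of the implication
    graph.  Variables are 1..n, a literal is +v or -v, each clause is
    a pair of literals.  Returns {v: bool} or None if unsatisfiable."""
    graph = defaultdict(list)
    for a, b in clauses:
        graph[-a].append(b)        # clause (a or b): not a implies b
        graph[-b].append(a)        # and not b implies a
    literals = [s * v for v in range(1, n + 1) for s in (1, -1)]
    # Kosaraju pass 1: iterative DFS recording finish order
    order, seen = [], set()
    for root in literals:
        if root in seen:
            continue
        seen.add(root)
        stack = [(root, iter(graph[root]))]
        while stack:
            node, it = stack[-1]
            for child in it:
                if child not in seen:
                    seen.add(child)
                    stack.append((child, iter(graph[child])))
                    break
            else:
                order.append(node)
                stack.pop()
    # Kosaraju pass 2: label components on the reverse graph, in
    # decreasing finish order; labels come out in topological order
    rev = defaultdict(list)
    for u in list(graph):
        for w in graph[u]:
            rev[w].append(u)
    comp, index = {}, 0
    for root in reversed(order):
        if root in comp:
            continue
        stack = [root]
        while stack:
            x = stack.pop()
            if x in comp:
                continue
            comp[x] = index
            stack.extend(rev[x])
        index += 1
    if any(comp[v] == comp[-v] for v in range(1, n + 1)):
        return None                # v and not-v in one SCC: unsatisfiable
    # a literal is true when its SCC is later (sink-ward) than its negation's
    return {v: comp[v] > comp[-v] for v in range(1, n + 1)}
```

The skew symmetry of the implication graph is what makes the comp[v] versus comp[−v] comparison well defined: σ maps each component onto the component of the complementary literals.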
It is NP-complete to determine whether a given directed graph is skew-symmetric, by a result of Lalonde (1981) that it is NP-complete to find a color-reversing involution in a bipartite graph. Such an involution exists if and only if the directed graph given by orienting each edge from one color class to the other is skew-symmetric, so testing skew-symmetry of this directed graph is hard. This complexity does not affect path-finding algorithms for skew-symmetric graphs, because these algorithms assume that the skew-symmetric structure is given as part of the input to the algorithm rather than requiring it to be inferred from the graph alone.
https://en.wikipedia.org/wiki/Skew-symmetric_graph
In mathematics, economics, and computer science, the stable matching problem[1][2][3] is the problem of finding a stable matching between two equally sized sets of elements given an ordering of preferences for each element. A matching is a bijection from the elements of one set to the elements of the other set. A matching is not stable if:
In other words, a matching is stable when there does not exist any pair (A,B) which both prefer each other to their current partner under the matching.
The stable marriage problem has been stated as follows:
Given n men and n women, where each person has ranked all members of the opposite sex in order of preference, marry the men and women together such that there are no two people of opposite sex who would both rather have each other than their current partners. When there are no such pairs of people, the set of marriages is deemed stable.
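The stability condition itself is easy to verify for any proposed matching: just search for a blocking pair. A minimal checker (names and representation are assumptions; it requires men's and women's names not to collide):

```python
def is_stable(matching, men_pref, women_pref):
    """Check stability: no man m and woman w both prefer each other to
    their partners under the matching (a dict man -> woman).
    Preference dicts map each person to a list, most preferred first."""
    husband = {w: m for m, w in matching.items()}
    rank = {p: {q: i for i, q in enumerate(prefs)}
            for p, prefs in list(men_pref.items()) + list(women_pref.items())}
    for m in men_pref:
        for w in women_pref:
            if w == matching[m]:
                continue
            if (rank[m][w] < rank[m][matching[m]]
                    and rank[w][m] < rank[w][husband[w]]):
                return False               # (m, w) is a blocking pair
    return True
```

For two men and two women where everyone gets their first choice, the matching is stable; swapping the partners creates a blocking pair.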
The existence of two classes that need to be paired with each other (heterosexual men and women in this example) distinguishes this problem from the stable roommates problem.
Algorithms for finding solutions to the stable marriage problem have applications in a variety of real-world situations, perhaps the best known of these being in the assignment of graduating medical students to their first hospital appointments.[4] In 2012, the Nobel Memorial Prize in Economic Sciences was awarded to Lloyd S. Shapley and Alvin E. Roth "for the theory of stable allocations and the practice of market design."[5]
An important and large-scale application of stable marriage is in assigning users to servers in a large distributed Internet service.[6] Billions of users access web pages, videos, and other services on the Internet, requiring each user to be matched to one of (potentially) hundreds of thousands of servers around the world that offer that service. A user prefers servers that are proximal enough to provide a faster response time for the requested service, resulting in a (partial) preferential ordering of the servers for each user. Each server prefers to serve users that it can with a lower cost, resulting in a (partial) preferential ordering of users for each server. Content delivery networks that distribute much of the world's content and services solve this large and complex stable marriage problem between users and servers every tens of seconds to enable billions of users to be matched up with their respective servers that can provide the requested web pages, videos, or other services.[6]
The Gale–Shapley algorithm for stable matching is used to assign rabbis who graduate from Hebrew Union College to Jewish congregations.[7]
In general, there may be many different stable matchings. For example, suppose there are three men (A, B, C) and three women (X, Y, Z) with the following preferences:
There are three stable solutions to this matching arrangement:
All three are stable, because instability requires both of the participants to be happier with an alternative match. Giving one group their first choices ensures that the matches are stable because they would be unhappy with any other proposed match. Giving everyone their second choice ensures that any other match would be disliked by one of the parties. In general, the family of solutions to any instance of the stable marriage problem can be given the structure of a finite distributive lattice, and this structure leads to efficient algorithms for several problems on stable marriages.[8]
In a uniformly-random instance of the stable marriage problem with n men and n women, the average number of stable matchings is asymptotically e⁻¹n ln n.[9] In a stable marriage instance chosen to maximize the number of different stable matchings, this number is an exponential function of n.[10] Counting the number of stable matchings in a given instance is #P-complete.[11]
In 1962, David Gale and Lloyd Shapley proved that, for any equal number of participants on each side (in contexts such as college admissions or marriage), it is always possible to match them so that every resulting pairing is stable. They presented an algorithm to do so.[12][13]
TheGale–Shapley algorithm(also known as the deferred acceptance algorithm) involves a number of "rounds" (or "iterations"):
This algorithm is guaranteed to produce a stable marriage for all participants intimeO(n2){\displaystyle O(n^{2})}wheren{\displaystyle n}is the number of men or women.[14]
Among all possible different stable matchings, it always yields the one that is best for all men among all stable matchings, and worst for all women.[15]
It is atruthful mechanismfrom the point of view of men (the proposing side), i.e., no man can get a better matching for himself by misrepresenting his preferences. Moreover, the GS algorithm is evengroup-strategy prooffor men, i.e., no coalition of men can coordinate a misrepresentation of their preferences such that all men in the coalition are strictly better-off.[16]However, it is possible for some coalition to misrepresent their preferences such that some men are better-off and the other men retain the same partner.[17]The GS algorithm is non-truthful for the women (the reviewing side): each woman may be able to misrepresent her preferences and get a better match.
The rural hospitals theorem concerns a more general variant of the stable matching problem, like that applying in the problem of matching doctors to positions at hospitals, differing in the following ways from the basicn-to-nform of the stable marriage problem:
In this case, the condition of stability is that no unmatched pair prefer each other to their situation in the matching (whether that situation is another partner or being unmatched). With this condition, a stable matching will still exist, and can still be found by the Gale–Shapley algorithm.
For this kind of stable matching problem, the rural hospitals theorem states that:
Instable matching with indifference, some men might be indifferent between two or more women and vice versa.
Thestable roommates problemis similar to the stable marriage problem, but differs in that all participants belong to a single pool (instead of being divided into equal numbers of "men" and "women").
Thehospitals/residents problem– also known as thecollege admissions problem– differs from the stable marriage problem in that a hospital can take multiple residents, or a college can take an incoming class of more than one student. Algorithms to solve the hospitals/residents problem can behospital-oriented(as theNRMPwas before 1995)[18]orresident-oriented. This problem was solved, with an algorithm, in the same original paper by Gale and Shapley, in which the stable marriage problem was solved.[12]
Thehospitals/residents problem with couplesallows the set of residents to include couples who must be assigned together, either to the same hospital or to a specific pair of hospitals chosen by the couple (e.g., a married couple want to ensure that they will stay together and not be stuck in programs that are far away from each other). The addition of couples to the hospitals/residents problem renders the problemNP-complete.[19]
Theassignment problemseeks to find a matching in a weightedbipartite graphthat has maximum weight. Maximum weighted matchings do not have to be stable, but in some applications a maximum weighted matching is better than a stable one.
Thematching with contractsproblem is a generalization of matching problem, in which participants can be matched with different terms of contracts.[20]An important special case of contracts is matching with flexible wages.[21]
|
https://en.wikipedia.org/wiki/Stable_matching
|
In graph theory, an independent set, stable set, coclique or anticlique is a set of vertices in a graph, no two of which are adjacent. That is, it is a set S of vertices such that for every two vertices in S, there is no edge connecting the two. Equivalently, each edge in the graph has at most one endpoint in S. A set is independent if and only if it is a clique in the graph's complement. The size of an independent set is the number of vertices it contains. Independent sets have also been called "internally stable sets", of which "stable set" is a shortening.[1]
A maximal independent set is an independent set that is not a proper subset of any other independent set.
A maximum independent set is an independent set of largest possible size for a given graph G. This size is called the independence number of G and is usually denoted by α(G).[2] The optimization problem of finding such a set is called the maximum independent set problem. It is a strongly NP-hard problem.[3] As such, it is unlikely that there exists an efficient algorithm for finding a maximum independent set of a graph.
Every maximum independent set is also maximal, but the converse implication does not necessarily hold.
A set is independent if and only if it is a clique in the graph's complement, so the two concepts are complementary. In fact, sufficiently large graphs with no large cliques have large independent sets, a theme that is explored in Ramsey theory.
A set is independent if and only if its complement is a vertex cover.[4] Therefore, the sum of the size of the largest independent set α(G) and the size of a minimum vertex cover β(G) is equal to the number of vertices in the graph.
A vertex coloring of a graph G corresponds to a partition of its vertex set into independent subsets. Hence the minimal number of colors needed in a vertex coloring, the chromatic number χ(G), is at least the quotient of the number of vertices in G and the independence number α(G).
In a bipartite graph with no isolated vertices, the number of vertices in a maximum independent set equals the number of edges in a minimum edge covering; this is Kőnig's theorem.
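The complementarity between independent sets and vertex covers is easy to check directly. A minimal sketch (the adjacency-set representation and the example path graph are illustrative choices, not from the article):

```python
def is_independent(graph, s):
    """True if no two vertices of s are adjacent (graph: vertex -> neighbour set)."""
    return all(v not in graph[u] for u in s for v in s)

def is_vertex_cover(graph, c):
    """True if every edge has at least one endpoint in c."""
    return all(u in c or v in c for u in graph for v in graph[u])

# Path graph 0-1-2-3: {0, 2} is independent, and its complement {1, 3}
# touches every edge, i.e. it is a vertex cover.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
```

Note that the two sizes sum to 4, the number of vertices, as the identity above requires.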
An independent set that is not a proper subset of another independent set is called maximal. Such sets are dominating sets. Every graph contains at most 3^(n/3) maximal independent sets,[5] but many graphs have far fewer.
The number of maximal independent sets in n-vertex cycle graphs is given by the Perrin numbers, and the number of maximal independent sets in n-vertex path graphs is given by the Padovan sequence.[6] Therefore, both numbers are proportional to powers of 1.324718..., the plastic ratio.
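The Perrin-number count for small cycles can be verified by brute force. A sketch (exhaustive branching, exponential time, small graphs only):

```python
def maximal_independent_sets(graph):
    """All maximal independent sets of a small graph, by exhaustive branching."""
    results = set()
    vertices = list(graph)

    def extend(current, i):
        if i == len(vertices):
            # maximal iff every outside vertex has a neighbour inside the set
            if all(any(u in graph[v] for u in current)
                   for v in graph if v not in current):
                results.add(frozenset(current))
            return
        v = vertices[i]
        extend(current, i + 1)                       # branch: exclude v
        if all(v not in graph[u] for u in current):  # branch: include v if allowed
            extend(current | {v}, i + 1)

    extend(set(), 0)
    return results

def cycle(n):
    """Cycle graph C_n as an adjacency-set dict."""
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
```

For C3 through C7 this yields 3, 2, 5, 5, 7 maximal independent sets, matching the Perrin numbers.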
In computer science, several computational problems related to independent sets have been studied.
The first three of these problems are all important in practical applications; the independent set decision problem is not, but is necessary in order to apply the theory of NP-completeness to problems related to independent sets.
The independent set problem and the clique problem are complementary: a clique in G is an independent set in the complement graph of G and vice versa. Therefore, many computational results may be applied equally well to either problem. For example, the results related to the clique problem have the following corollaries:
Despite the close relationship between maximum cliques and maximum independent sets in arbitrary graphs, the independent set and clique problems may be very different when restricted to special classes of graphs. For instance, for sparse graphs (graphs in which the number of edges is at most a constant times the number of vertices in any subgraph), the maximum clique has bounded size and may be found exactly in linear time;[7] however, for the same classes of graphs, or even for the more restricted class of bounded degree graphs, finding the maximum independent set is MAXSNP-complete, implying that, for some constant c (depending on the degree), it is NP-hard to find an approximate solution that comes within a factor of c of the optimum.[8]
The maximum independent set problem is NP-hard. However, it can be solved more efficiently than the O(n^2 2^n) time that would be given by a naive brute force algorithm that examines every vertex subset and checks whether it is an independent set.
As of 2017 it can be solved in time O(1.1996^n) using polynomial space.[9] When restricted to graphs with maximum degree 3, it can be solved in time O(1.0836^n).[10]
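The naive exponential search mentioned above can be sketched as follows (illustration only; viable only for small graphs):

```python
from itertools import combinations

def maximum_independent_set(graph):
    """Largest independent set, by checking subsets from largest to smallest."""
    vertices = list(graph)
    for r in range(len(vertices), 0, -1):
        for subset in combinations(vertices, r):
            # subset is independent iff no chosen vertex neighbours another
            if all(v not in graph[u] for u in subset for v in subset):
                return set(subset)
    return set()
```

Scanning sizes in decreasing order means the first independent subset found is a maximum one, at the cost of examining up to 2^n subsets.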
For many classes of graphs, a maximum weight independent set may be found in polynomial time. Famous examples are claw-free graphs,[11] P5-free graphs[12] and perfect graphs.[13] For chordal graphs, a maximum weight independent set can be found in linear time.[14]
Modular decomposition is a good tool for solving the maximum weight independent set problem; the linear-time algorithm on cographs is the basic example of this. Another important tool is the clique separator, as described by Tarjan.[15]
Kőnig's theorem implies that in a bipartite graph the maximum independent set can be found in polynomial time using a bipartite matching algorithm.
In general, the maximum independent set problem cannot be approximated to a constant factor in polynomial time (unless P = NP). In fact, Max Independent Set in general is Poly-APX-complete, meaning it is as hard as any problem that can be approximated to a polynomial factor.[16] However, there are efficient approximation algorithms for restricted classes of graphs.
In planar graphs, the maximum independent set may be approximated to within any approximation ratio c < 1 in polynomial time; similar polynomial-time approximation schemes exist in any family of graphs closed under taking minors.[17]
In bounded degree graphs, effective approximation algorithms are known with approximation ratios that are constant for a fixed value of the maximum degree; for instance, a greedy algorithm that forms a maximal independent set by, at each step, choosing the minimum-degree vertex in the graph and removing its neighbors, achieves an approximation ratio of (Δ+2)/3 on graphs with maximum degree Δ.[18] Approximation hardness bounds for such instances were proven in Berman & Karpinski (1999). Indeed, even Max Independent Set on 3-regular 3-edge-colorable graphs is APX-complete.[19]
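The minimum-degree greedy heuristic described above can be sketched as:

```python
def greedy_independent_set(graph):
    """Repeatedly take a minimum-degree vertex, then delete it and its neighbours."""
    g = {u: set(nbrs) for u, nbrs in graph.items()}  # mutable working copy
    chosen = set()
    while g:
        u = min(g, key=lambda v: len(g[v]))          # current minimum-degree vertex
        chosen.add(u)
        removed = g[u] | {u}
        for v in removed:
            g.pop(v, None)
        for v in g:                                  # drop dangling neighbour links
            g[v] -= removed
    return chosen
```

The result is always a maximal independent set; the (Δ+2)/3 ratio is a property of the min-degree choice, not of this particular sketch.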
An interval graph is a graph in which the nodes are 1-dimensional intervals (e.g. time intervals) and there is an edge between two intervals if and only if they intersect. An independent set in an interval graph is just a set of non-overlapping intervals. The problem of finding maximum independent sets in interval graphs has been studied, for example, in the context of job scheduling: given a set of jobs that has to be executed on a computer, find a maximum set of jobs that can be executed without interfering with each other. This problem can be solved exactly in polynomial time using earliest deadline first scheduling.
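For intervals, a maximum independent set can be computed with the classic greedy that scans jobs in order of finishing time (a sketch; the endpoints below are illustrative, and intervals are treated as closed-open so that touching endpoints do not conflict):

```python
def max_nonoverlapping(intervals):
    """Pick intervals in order of finishing time, skipping any that overlap."""
    chosen, last_end = [], float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:            # compatible with everything chosen so far
            chosen.append((start, end))
            last_end = end
    return chosen
```

An exchange argument shows the earliest-finishing choice is always safe, which is why this greedy is exact for interval graphs.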
A geometric intersection graph is a graph in which the nodes are geometric shapes and there is an edge between two shapes if and only if they intersect. An independent set in a geometric intersection graph is just a set of disjoint (non-overlapping) shapes. The problem of finding maximum independent sets in geometric intersection graphs has been studied, for example, in the context of automatic label placement: given a set of locations in a map, find a maximum set of disjoint rectangular labels near these locations.
Finding a maximum independent set in intersection graphs is still NP-complete, but it is easier to approximate than the general maximum independent set problem. A recent survey can be found in the introduction of Chan & Har-Peled (2012).
A d-claw in a graph is a set of d+1 vertices, one of which (the "center") is connected to the other d vertices, while those d vertices are not connected to each other. A d-claw-free graph is a graph that does not have a d-claw subgraph. Consider the algorithm that starts with an empty set, and incrementally adds an arbitrary vertex to it as long as it is not adjacent to any existing vertex. In d-claw-free graphs, every added vertex invalidates at most d − 1 vertices from the maximum independent set; therefore, this trivial algorithm attains a (d − 1)-approximation for the maximum independent set. In fact, it is possible to get much better approximation ratios:
The problem of finding a maximal independent set can be solved in polynomial time by a trivial parallel greedy algorithm.[22] All maximal independent sets can be found in time O(3^(n/3)) = O(1.4423^n).
The counting problem #IS asks, given an undirected graph, how many independent sets it contains. This problem is intractable, namely, it is ♯P-complete, already on graphs with maximal degree three.[23] It is further known that, assuming that NP is different from RP, the problem cannot be tractably approximated in the sense that it does not have a fully polynomial-time approximation scheme with randomization (FPRAS), even on graphs with maximal degree six;[24] however it does have a fully polynomial-time approximation scheme (FPTAS) in the case where the maximal degree is five.[25] The problem #BIS, of counting independent sets on bipartite graphs, is also ♯P-complete, already on graphs with maximal degree three.[26] It is not known whether #BIS admits an FPRAS.[27]
The question of counting maximal independent sets has also been studied.
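For small graphs, #IS can be evaluated by exhaustive recursion (illustration only; on path graphs the counts reproduce the Fibonacci numbers, a standard sanity check):

```python
def count_independent_sets(graph):
    """Number of independent sets (including the empty set) of a small graph."""
    vertices = list(graph)

    def count(i, chosen):
        if i == len(vertices):
            return 1
        v = vertices[i]
        total = count(i + 1, chosen)                  # leave v out
        if all(v not in graph[u] for u in chosen):    # put v in, if still independent
            total += count(i + 1, chosen | {v})
        return total

    return count(0, frozenset())

def path(n):
    """Path graph P_n as an adjacency-set dict."""
    return {i: {j for j in (i - 1, i + 1) if 0 <= j < n} for i in range(n)}
```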
The maximum independent set problem and its complement, the minimum vertex cover problem, are involved in proving the computational complexity of many theoretical problems.[28] They also serve as useful models for real-world optimization problems; for example, the maximum independent set problem is a useful model for discovering stable genetic components for designing engineered genetic systems.[29]
|
https://en.wikipedia.org/wiki/Independent_vertex_set
|
In mathematics, economics, and computer science, the stable matching problem[1][2][3] is the problem of finding a stable matching between two equally sized sets of elements given an ordering of preferences for each element. A matching is a bijection from the elements of one set to the elements of the other set. A matching is not stable if:
In other words, a matching is stable when there does not exist any pair (A, B) which both prefer each other to their current partner under the matching.
The stable marriage problem has been stated as follows:
Given n men and n women, where each person has ranked all members of the opposite sex in order of preference, marry the men and women together such that there are no two people of opposite sex who would both rather have each other than their current partners. When there are no such pairs of people, the set of marriages is deemed stable.
The existence of two classes that need to be paired with each other (heterosexual men and women in this example) distinguishes this problem from the stable roommates problem.
Algorithms for finding solutions to the stable marriage problem have applications in a variety of real-world situations, perhaps the best known of these being in the assignment of graduating medical students to their first hospital appointments.[4] In 2012, the Nobel Memorial Prize in Economic Sciences was awarded to Lloyd S. Shapley and Alvin E. Roth "for the theory of stable allocations and the practice of market design."[5]
An important and large-scale application of stable marriage is in assigning users to servers in a large distributed Internet service.[6] Billions of users access web pages, videos, and other services on the Internet, requiring each user to be matched to one of (potentially) hundreds of thousands of servers around the world that offer that service. A user prefers servers that are proximal enough to provide a faster response time for the requested service, resulting in a (partial) preferential ordering of the servers for each user. Each server prefers to serve the users it can reach at lower cost, resulting in a (partial) preferential ordering of users for each server. Content delivery networks that distribute much of the world's content and services solve this large and complex stable marriage problem between users and servers every tens of seconds, enabling billions of users to be matched up with the servers that can provide their requested web pages, videos, or other services.[6]
The Gale–Shapley algorithm for stable matching is used to assign rabbis who graduate from Hebrew Union College to Jewish congregations.[7]
In general, there may be many different stable matchings. For example, suppose there are three men (A, B, C) and three women (X, Y, Z) who have the following preferences:
There are three stable solutions to this matching arrangement:
All three are stable, because instability requires both participants to be happier with an alternative match. Giving one group their first choices ensures that the matches are stable, because they would be unhappy with any other proposed match. Giving everyone their second choice ensures that any other match would be disliked by one of the parties. In general, the family of solutions to any instance of the stable marriage problem can be given the structure of a finite distributive lattice,
and this structure leads to efficient algorithms for several problems on stable marriages.[8]
In a uniformly-random instance of the stable marriage problem with n men and n women, the average number of stable matchings is asymptotically e^(-1) n ln n.[9] In a stable marriage instance chosen to maximize the number of different stable matchings, this number is an exponential function of n.[10] Counting the number of stable matchings in a given instance is #P-complete.[11]
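For tiny instances, stable matchings can be counted by checking every permutation for blocking pairs. A sketch (the cyclic preference lists below are an illustrative example of our own, not taken from the article; this particular instance has exactly three stable matchings):

```python
from itertools import permutations

def is_stable(match, men_prefs, women_prefs):
    """match: man -> woman. Unstable iff some (m, w) prefer each other."""
    partner = {w: m for m, w in match.items()}
    for m, prefs in men_prefs.items():
        for w in prefs:
            if w == match[m]:
                break                    # every later woman is worse for m
            # m prefers w to his partner; does w prefer m to hers?
            if women_prefs[w].index(m) < women_prefs[w].index(partner[w]):
                return False
    return True

def count_stable(men_prefs, women_prefs):
    """Brute-force count of stable matchings (n! candidates)."""
    men, women = list(men_prefs), list(women_prefs)
    return sum(is_stable(dict(zip(men, p)), men_prefs, women_prefs)
               for p in permutations(women))

cyclic_men = {"A": ["Y", "X", "Z"], "B": ["Z", "Y", "X"], "C": ["X", "Z", "Y"]}
cyclic_women = {"X": ["B", "A", "C"], "Y": ["C", "B", "A"], "Z": ["A", "C", "B"]}
```

The three stable matchings here are the man-optimal one, the woman-optimal one, and one where everyone gets a second choice.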
In 1962, David Gale and Lloyd Shapley proved that, for any equal number of participants in the two groups (in their setting, college admissions and individuals seeking marriage), it is always possible to pair everyone off so that all resulting matches are stable. They presented an algorithm to do so.[12][13]
The Gale–Shapley algorithm (also known as the deferred acceptance algorithm) involves a number of "rounds" (or "iterations"):
This algorithm is guaranteed to produce a stable marriage for all participants in time O(n^2), where n is the number of men or women.[14]
Among all possible different stable matchings, it always yields the one that is best for all men among all stable matchings, and worst for all women.[15]
It is a truthful mechanism from the point of view of the men (the proposing side), i.e., no man can get a better matching for himself by misrepresenting his preferences. Moreover, the GS algorithm is even group-strategy proof for men, i.e., no coalition of men can coordinate a misrepresentation of their preferences such that all men in the coalition are strictly better off.[16] However, it is possible for some coalition to misrepresent their preferences such that some men are better off and the other men retain the same partner.[17] The GS algorithm is non-truthful for the women (the reviewing side): each woman may be able to misrepresent her preferences and get a better match.
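The rounds of deferred acceptance can be sketched as follows (men propose, women hold the best proposal so far; the variable names and the sample cyclic preferences are ours):

```python
from collections import deque

def gale_shapley(men_prefs, women_prefs):
    """Return the man-optimal stable matching as a dict man -> woman."""
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_idx = {m: 0 for m in men_prefs}   # next woman each man will propose to
    fiance = {}                            # woman -> currently engaged man
    free = deque(men_prefs)
    while free:
        m = free.popleft()
        w = men_prefs[m][next_idx[m]]
        next_idx[m] += 1
        if w not in fiance:
            fiance[w] = m                  # w accepts her first proposal
        elif rank[w][m] < rank[w][fiance[w]]:
            free.append(fiance[w])         # w trades up; her old partner is free
            fiance[w] = m
        else:
            free.append(m)                 # w rejects m; he proposes again later
    return {m: w for w, m in fiance.items()}

men = {"A": ["Y", "X", "Z"], "B": ["Z", "Y", "X"], "C": ["X", "Z", "Y"]}
women = {"X": ["B", "A", "C"], "Y": ["C", "B", "A"], "Z": ["A", "C", "B"]}
```

Each man proposes at most once to each woman, which gives the O(n^2) bound cited above.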
The rural hospitals theorem concerns a more general variant of the stable matching problem, such as the problem of matching doctors to positions at hospitals, differing in the following ways from the basic n-to-n form of the stable marriage problem:
In this case, the condition of stability is that no unmatched pair prefer each other to their situation in the matching (whether that situation is another partner or being unmatched). With this condition, a stable matching will still exist, and can still be found by the Gale–Shapley algorithm.
For this kind of stable matching problem, the rural hospitals theorem states that:
In stable matching with indifference, some men might be indifferent between two or more women and vice versa.
The stable roommates problem is similar to the stable marriage problem, but differs in that all participants belong to a single pool (instead of being divided into equal numbers of "men" and "women").
The hospitals/residents problem – also known as the college admissions problem – differs from the stable marriage problem in that a hospital can take multiple residents, or a college can take an incoming class of more than one student. Algorithms to solve the hospitals/residents problem can be hospital-oriented (as the NRMP was before 1995)[18] or resident-oriented. This problem was solved, with an algorithm, in the same original paper by Gale and Shapley in which the stable marriage problem was solved.[12]
The hospitals/residents problem with couples allows the set of residents to include couples who must be assigned together, either to the same hospital or to a specific pair of hospitals chosen by the couple (e.g., a married couple want to ensure that they will stay together and not be stuck in programs that are far away from each other). The addition of couples to the hospitals/residents problem renders the problem NP-complete.[19]
The assignment problem seeks to find a matching in a weighted bipartite graph that has maximum weight. Maximum weighted matchings do not have to be stable, but in some applications a maximum weighted matching is better than a stable one.
The matching with contracts problem is a generalization of the matching problem, in which participants can be matched with different terms of contracts.[20] An important special case of contracts is matching with flexible wages.[21]
|
https://en.wikipedia.org/wiki/Stable_marriage_problem
|
In the context of network theory, a complex network is a graph (network) with non-trivial topological features: features that do not occur in simple networks such as lattices or random graphs but often occur in networks representing real systems. The study of complex networks is a young and active area of scientific research[1][2] (since 2000) inspired largely by empirical findings of real-world networks such as computer networks, biological networks, technological networks, brain networks,[3][4] climate networks and social networks.
Most social, biological, and technological networks display substantial non-trivial topological features, with patterns of connection between their elements that are neither purely regular nor purely random. Such features include a heavy tail in the degree distribution, a high clustering coefficient, assortativity or disassortativity among vertices, community structure, and hierarchical structure. In the case of directed networks these features also include reciprocity, triad significance profile and other features. In contrast, many of the mathematical models of networks that have been studied in the past, such as lattices and random graphs, do not show these features. The most complex structures can be realized by networks with a medium number of interactions.[5] This corresponds to the fact that the maximum information content (entropy) is obtained for medium probabilities.
Two well-known and much studied classes of complex networks are scale-free networks[6] and small-world networks,[7][8] whose discovery and definition are canonical case-studies in the field. Both are characterized by specific structural features: power-law degree distributions for the former and short path lengths and high clustering for the latter. However, as the study of complex networks has continued to grow in importance and popularity, many other aspects of network structures have attracted attention as well.
The field continues to develop at a brisk pace, and has brought together researchers from many areas including mathematics, physics, electric power systems,[9] biology, climate, computer science, sociology, epidemiology, and others.[10] Ideas and tools from network science and engineering have been applied to the analysis of metabolic and genetic regulatory networks; the study of ecosystem stability and robustness;[11] clinical science;[12] the modeling and design of scalable communication networks such as the generation and visualization of complex wireless networks;[13] and a broad range of other practical issues. Network science is the topic of many conferences in a variety of different fields, and has been the subject of numerous books both for the lay person and for the expert.
A network is called scale-free[6][14] if its degree distribution, i.e., the probability that a node selected uniformly at random has a certain number of links (degree), follows a mathematical function called a power law. The power law implies that the degree distribution of these networks has no characteristic scale. In contrast, networks with a single well-defined scale are somewhat similar to a lattice in that every node has (roughly) the same degree. Examples of networks with a single scale include the Erdős–Rényi (ER) random graph, random regular graphs, regular lattices, and hypercubes. Some models of growing networks that produce scale-invariant degree distributions are the Barabási–Albert model and the fitness model. In a network with a scale-free degree distribution, some vertices have a degree that is orders of magnitude larger than the average; these vertices are often called "hubs", although this language is misleading as, by definition, there is no inherent threshold above which a node can be viewed as a hub. If there were such a threshold, the network would not be scale-free.
Interest in scale-free networks began in the late 1990s with the reporting of discoveries of power-law degree distributions in real-world networks such as the World Wide Web, the network of Autonomous Systems (ASs), some networks of Internet routers, protein interaction networks, email networks, etc. Most of these reported "power laws" fail when challenged with rigorous statistical testing, but the more general idea of heavy-tailed degree distributions, which many of these networks do genuinely exhibit (before finite-size effects occur), is very different from what one would expect if edges existed independently and at random (i.e., if they followed a Poisson distribution). There are many different ways to build a network with a power-law degree distribution. The Yule process is a canonical generative process for power laws, and has been known since 1925. However, it is known by many other names due to its frequent reinvention, e.g., the Gibrat principle by Herbert A. Simon, the Matthew effect, cumulative advantage, and preferential attachment by Barabási and Albert for power-law degree distributions. Recently, hyperbolic geometric graphs have been suggested as yet another way of constructing scale-free networks.
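Preferential attachment in the Barabási–Albert style is easy to sketch with a list of repeated edge endpoints, so that sampling uniformly from that list is automatically degree-biased (a sketch, not the canonical implementation; parameters and seed are illustrative):

```python
import random

def preferential_attachment(n, m, seed=0):
    """Grow a graph on n nodes; each new node attaches to m degree-biased targets."""
    rng = random.Random(seed)
    edges = set()
    endpoints = []                 # each node appears here once per unit of degree
    targets = set(range(m))        # the first arriving node links to nodes 0..m-1
    for new in range(m, n):
        for t in targets:
            edges.add((new, t))
            endpoints += [new, t]
        targets = set()
        while len(targets) < m:    # sample m distinct, degree-proportional targets
            targets.add(rng.choice(endpoints))
    return edges
```

Because every new node contributes exactly m edges, the graph has (n − m)·m edges; the degree-biased sampling is what produces the heavy-tailed degree distribution.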
Some networks with a power-law degree distribution (and specific other types of structure) can be highly resistant to the random deletion of vertices, i.e., the vast majority of vertices remain connected together in a giant component. Such networks can also be quite sensitive to targeted attacks aimed at fracturing the network quickly. When the graph is uniformly random except for the degree distribution, these critical vertices are the ones with the highest degree, and have thus been implicated in the spread of disease (natural and artificial) in social and communication networks, and in the spread of fads (both of which are modeled by a percolation or branching process). While random graphs (ER) have an average distance of order log N between nodes,[7] where N is the number of nodes, scale-free graphs can have a distance of order log log N.
A network is called a small-world network[7] by analogy with the small-world phenomenon (popularly known as six degrees of separation). The small world hypothesis, which was first described by the Hungarian writer Frigyes Karinthy in 1929, and tested experimentally by Stanley Milgram (1967), is the idea that two arbitrary people are connected by only six degrees of separation, i.e. the diameter of the corresponding graph of social connections is not much larger than six. In 1998, Duncan J. Watts and Steven Strogatz published the first small-world network model, which through a single parameter smoothly interpolates between a random graph and a lattice.[7] Their model demonstrated that with the addition of only a small number of long-range links, a regular graph, in which the diameter is proportional to the size of the network, can be transformed into a "small world" in which the average number of edges between any two vertices is very small (mathematically, it should grow as the logarithm of the size of the network), while the clustering coefficient stays large. It is known that a wide variety of abstract graphs exhibit the small-world property, e.g., random graphs and scale-free networks. Further, real-world networks such as the World Wide Web and the metabolic network also exhibit this property.
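The Watts–Strogatz construction can be sketched as a ring lattice whose edges are randomly rewired with probability p (a sketch under simplified assumptions; parameters and seed are illustrative, and rewires that would create self-loops or duplicate edges are simply skipped):

```python
import random

def watts_strogatz(n, k, p, seed=0):
    """Ring lattice (k nearest neighbours per node), each edge rewired w.p. p."""
    rng = random.Random(seed)
    edges = {frozenset((u, (u + j) % n))
             for u in range(n) for j in range(1, k // 2 + 1)}
    for e in sorted(map(tuple, edges)):      # iterate a fixed snapshot of the lattice
        if rng.random() < p:
            u = e[0]
            w = rng.randrange(n)
            new = frozenset((u, w))
            if w != u and new not in edges:  # avoid self-loops and duplicates
                edges.discard(frozenset(e))
                edges.add(new)
    return edges
```

At p = 0 this is the pure lattice and at p = 1 it approaches a random graph; the interesting small-world regime is at small intermediate p, where a few shortcuts collapse the diameter while most local clustering survives.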
In the scientific literature on networks, there is some ambiguity associated with the term "small world". In addition to referring to the size of the diameter of the network, it can also refer to the co-occurrence of a small diameter and a high clustering coefficient. The clustering coefficient is a metric that represents the density of triangles in the network. For instance, sparse random graphs have a vanishingly small clustering coefficient while real-world networks often have a coefficient significantly larger. Scientists point to this difference as suggesting that edges are correlated in real-world networks. Approaches have been developed to generate network models that exhibit high correlations, while preserving the desired degree distribution and small-world properties. These approaches can be used to generate analytically solvable toy models for research into these systems.[15]
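The local clustering coefficient of a single node is simple to compute directly from an adjacency-set representation:

```python
def clustering_coefficient(graph, v):
    """Fraction of pairs of v's neighbours that are themselves adjacent."""
    nbrs = list(graph[v])
    k = len(nbrs)
    if k < 2:
        return 0.0                 # convention: undefined degree-<2 nodes score 0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in graph[nbrs[i]])
    return 2.0 * links / (k * (k - 1))
```

Averaging this quantity over all nodes gives the network-level clustering coefficient used in the small-world definition above.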
Many real networks are embedded in space; examples include transportation and other infrastructure networks, and brain networks.[3][4] Several models for spatial networks have been developed.[16]
|
https://en.wikipedia.org/wiki/Complex_network
|
Graf (German pronunciation: [ɡʁaːf]; feminine: Gräfin [ˈɡʁɛːfɪn]) is a historical title of the German nobility and later also of the Russian nobility, usually translated as "count". Considered to be intermediate among noble ranks, the title is often treated as equivalent to the British title of "earl" (whose female version is "countess").
The German nobility was gradually divided into high and low nobility. The high nobility included those counts who ruled immediate imperial territories of "princely size and importance", for which they had a seat and vote in the Imperial Diet.
The word Graf derives from Middle High German grave, which is usually derived from Latin graphio. Graphio is in turn thought to come from the Byzantine title grapheus, which ultimately derives from the Greek verb γρᾰ́φειν (graphein) 'to write'.[1] Other explanations have been put forward, however; Jacob and Wilhelm Grimm, while still noting the potential of a Greek derivation, suggested a connection to Gothic gagrêfts, meaning 'decision, decree'. However, the Grimms preferred a solution that allows a connection to Old English gerēfa 'reeve', in which the ge- is a prefix, and which the Grimms derive from Proto-Germanic *rōva 'number'.[2]
The comital title of Graf is common to various European territories where German was or is the official or vernacular tongue, including Austria, Germany, Switzerland, Luxembourg, Liechtenstein, Alsace, the Baltic states and other former Habsburg crown lands. In Germany, all legal privileges of the nobility have been officially abolished since August 1919, and Graf, like any other hereditary title, is treated as part of the legal surname.[3] In Austria, its use is banned by law, as with all hereditary titles and nobiliary particles. In Switzerland, the title is not acknowledged in law. In the monarchies of Belgium, Liechtenstein and Luxembourg, where German is one of the official languages, the title continues to be recognised, used and, occasionally, granted by the national fons honorum, the reigning monarch.
From the Middle Ages, a Graf usually ruled a territory known as a Grafschaft ('county'). In the Holy Roman Empire, many Imperial counts (Reichsgrafen) retained near-sovereign authority in their lands until the Congress of Vienna subordinated them to larger, neighboring monarchs through the German mediatisation process of 1815, preserving their precedence, allocating familial representation in local legislatures, some jurisdictional immunities and the prestigious privilege of Ebenbürtigkeit. In regions of Europe where nobles did not actually exercise Landeshoheit over the populace, the Graf long retained specific feudal privileges over the land and in the villages in his county, such as rights to peasant service, and to periodic fees for use of common infrastructure such as timber, mills, wells and pastures.
These rights gradually eroded and were largely eliminated before or during the 19th century, leaving the Graf with few legal privileges beyond land ownership, although comital estates in German-speaking lands were often substantial. Nonetheless, various rulers in German-speaking lands granted the hereditary title of Graf to their subjects, particularly after the abolition of the Holy Roman Empire in 1806. Although lacking the prestige and powers of the former Imperial counts, they remained legal members of the local nobility, entitled to whatever minor privileges were recognised at the ruler's court. The title, translated as "count", was generally accepted and used in other countries by custom.
Many Continental counts in Germany and Austria were titled Graf without any additional qualification. Except in the Kingdom of Prussia from the 19th century, the title of Graf was not restricted by primogeniture: it was inherited by all legitimate descendants in the male line of the original titleholder, the males also inheriting an approximately equal share of the family's wealth and estates. Usually a hyphenated suffix indicated which of the familial lands a particular line of counts held, e.g. Castell-Rudenhausen.
In the medieval Holy Roman Empire, some counts took or were granted unique variations of the gräfliche title, often relating to a specific domain or jurisdiction of responsibility, e.g. Landgraf, Markgraf, Pfalzgraf (Count Palatine), Burggraf, Wildgraf, Waldgraf, Altgraf, Raugraf, etc. Although as a title Graf ranked, officially, below those of Herzog (duke) and Fürst (prince), the Holy Roman Emperor could and did recognise unique concessions of authority or rank to some of these nobles, raising them to the status of gefürsteter Graf or "princely count". But a gräfliche title with such a prefix did not always signify a higher than comital rank or membership in the Hochadel. Only the more important of these titles, historically associated with degrees of sovereignty, remained in use by the 19th century, specifically Markgraf and Landgraf.
In Russia, the title ofGraf(Russian:Граф; feminine: Графиня,romanizedGrafinya) was introduced byPeter the Great. The first Russiangraf(or count) wasBoris Petrovich Sheremetev, elevated to this dignity in 1706 for the pacification of theAstrakhan uprising (1705–1706)[ru]. Then Peter granted six moregrafdignities. Initially, when someone was elevated to thegraf'sdignity of theRussian Empire, the elevated person's recognition by the German Emperor in the same dignity of the Holy Roman Empire was required. Subsequently, the latter ceased to be obligatory.[4]
Some are approximately of comital rank, some higher, some lower. The more important ones are treated in separate articles (follow the links); a few minor, rarer ones only in sections below.
A Reichsgraf was a nobleman whose title of count was conferred or confirmed by the Holy Roman Emperor, and meant "Imperial Count", i.e. a count of the Holy Roman Empire. Since the feudal era, any count whose territory lay within the Empire and was under the immediate jurisdiction of the Emperor, with a shared vote in the Reichstag, came to be considered a member of the "upper nobility" (Hochadel) in Germany, along with princes (Fürsten), dukes (Herzöge), electors (Kurfürsten), and the emperor himself.[5] A count who was not a Reichsgraf was likely to possess only a mesne fief (Afterlehen); he was subject to an immediate prince of the empire, such as a duke or prince elector.[citation needed]
However, the Holy Roman Emperors also occasionally granted the title of Reichsgraf to subjects and foreigners who did not possess and were not granted immediate territories, or sometimes any territory at all.[5] Such titles were purely honorific.[citation needed]
In English, Reichsgraf is usually translated simply as count and is combined with a territorial suffix (e.g. Count of Holland, Count Reuss) or a surname (Count Fugger, Count von Browne). Even after the abolition of the Holy Roman Empire in 1806, the Reichsgrafen retained precedence above other counts in Germany. Those who had been quasi-sovereign until German mediatisation retained, until 1918, status and privileges pertaining to members of reigning dynasties.[citation needed]
Notable Reichsgrafen have included:
A complete list of Reichsgrafen with immediate territories as of 1792 can be found in the List of Reichstag participants (1792).[citation needed]
A Markgraf or Margrave was originally a military governor of a Carolingian "mark" (march), a border province. In medieval times the borders of the Holy Roman Empire were especially vulnerable to foreign attack, so the hereditary count of these "marches" of the realm was sometimes granted greater authority than other vassals to ensure security. They bore the title "margrave" until the few who survived as sovereigns assumed higher titles when the Empire was abolished in 1806.
Examples: Margrave of Baden, Margrave of Brandenburg-Bayreuth. Since the abolition of the German Empire at the end of World War I, the heirs of some of its former monarchies have resumed use of margrave as a title of pretence, e.g. Maria Emanuel, Margrave of Meissen, and Maximilian, Margrave of Baden.
A Landgraf or Landgrave was a nobleman of comital rank in feudal Germany whose jurisdiction stretched over a territory larger than that usually held by a count within the Holy Roman Empire. The status of a landgrave was elevated, usually being associated with suzerains who were subject to the Holy Roman Emperor but exercised sovereign authority within their lands and an independence greater than the prerogatives to which a simple Graf was entitled, though the title itself implied no specific legal privileges.
Landgraf occasionally continued in use as the subsidiary title of such minor royalty as the Elector of Hesse or the Grand Duke of Saxe-Weimar, who functioned as the Landgrave of Thuringia in the first decade of the 20th century. The jurisdiction of a landgrave was a Landgrafschaft or landgraviate, and the wife of a landgrave was a Landgräfin or landgravine.
Examples: Landgrave of Thuringia, Landgrave of Hesse, Landgrave of Leuchtenberg, Landgrave of Fürstenberg-Weitra. The title is now borne by the hereditary heirs to the deposed monarchs of Hesse (Donatus, Landgrave of Hesse, and Wilhelm, Landgrave of Hesse-Philippsthal-Barchfeld), who lost their thrones in 1918.
A gefürsteter Graf (English: princely count) is a Reichsgraf who was recognised by the Holy Roman Emperor as bearing the higher rank or exercising the more extensive authority of an Imperial prince (Reichsfürst). While nominally retaining only a comital title, he was accorded princely rank and, usually, arms by the emperor. An example is the Princely County of Habsburg, the namesake of the Habsburg dynasty, which at various points in time controlled vast amounts of land throughout Europe.
A Burggraf, or Burgrave, was a 12th- and 13th-century military and civil judicial governor of a castle (compare castellan, custos, keeper), of the town it dominated, and of its immediate surrounding countryside. His jurisdiction was a Burggrafschaft, or burgraviate.
Over time the office and domain to which it was attached tended to become hereditary by Imperial grant or retention over generations by members of the same family.
Examples: Burgrave of Nuremberg, Burgrave of (Burggraf zu) Dohna-Schlobitten, Burggrafschaft Colditz.
Initially burgrave suggested a similar function and history as other titles rendered in German by Vizegraf, in Dutch as Burggraaf, or in English as Viscount[citation needed] (Latin: Vicecomes): the deputy of a count charged with exercising the count's prerogatives in overseeing one or more of the count's strongholds or fiefs, as the burgrave usually dwelt in a castle or fortified town. Some became hereditary and by the modern era obtained rank just below a count, though above a Freiherr (baron), who might hold a fief as vassal of the original count.
Unlike the other comital titles, Rhinegrave, Wildgrave (Waldgrave), Raugrave, and Altgrave are not generic titles. Rather, each is linked to a specific countship, whose unique title emerged during the course of its history. These unusually named countships were equivalent in rank to other Counts of the Empire who were of Hochadel status, being entitled to a shared seat and vote in the Imperial Diet and possessing Imperial immediacy; most would be mediatised upon dissolution of the Empire in 1806.[6]
The corresponding titles in Scandinavia are greve (m.) and grevinna (f.), and would commonly be used in the third person in direct address as a mark of courtesy, as in grevinnan.
German nobility, although not abolished (unlike the Austrian nobility, abolished by the new First Austrian Republic in 1919), lost recognition as a legal class in Germany under the Weimar Republic in 1919, under article 109 of the Weimar Constitution. Former hereditary noble titles were legally transformed into dependent parts of the legal surname (with the former title thus now following the given name, e.g. Otto Graf Lambsdorff).[10] As dependent parts of the surnames (nichtselbständige Namensbestandteile), they are ignored in alphabetical sorting of names, as is any nobiliary particle such as von or zu,[11] and may or may not be used by those bearing them. The distinguishing main surname is the name following the Graf or Gräfin and the nobiliary particle, if any. Today, having lost their legal status, these terms are often not translated, unlike before 1919. The titles do, however, retain prestige in some circles of society.
The suffix -graf occurs in various office titles which did not attain nobiliary status but were either held as a sinecure by noblemen or courtiers, or by functional officials such as the Deichgraf (in a polder management organization).
https://en.wikipedia.org/wiki/Graf
Graff may refer to:
https://en.wikipedia.org/wiki/Graff_(disambiguation)
A graph database (GDB) is a database that uses graph structures for semantic queries, with nodes, edges, and properties to represent and store data.[1] A key concept of the system is the graph (or edge or relationship). The graph relates the data items in the store to a collection of nodes and edges, the edges representing the relationships between the nodes. The relationships allow data in the store to be linked together directly and, in many cases, retrieved with one operation. Graph databases hold the relationships between data as a priority. Querying relationships is fast because they are perpetually stored in the database. Relationships can be intuitively visualized using graph databases, making them useful for heavily inter-connected data.[2]
Graph databases are commonly referred to as NoSQL databases. Graph databases are similar to 1970s network model databases in that both represent general graphs, but network-model databases operate at a lower level of abstraction[3] and lack easy traversal over a chain of edges.[4]
The underlying storage mechanism of graph databases can vary. Relationships are first-class citizens in a graph database and can be labelled, directed, and given properties. Some depend on a relational engine and store the graph data in a table (although a table is a logical element, so this approach imposes a level of abstraction between the graph database management system and physical storage devices). Others use a key–value store or document-oriented database for storage, making them inherently NoSQL structures.
As of 2021, no graph query language has been universally adopted in the way SQL was for relational databases, and there is a wide variety of systems, many of which are tightly tied to one product. Some early standardization efforts led to multi-vendor query languages like Gremlin, SPARQL, and Cypher. In September 2019, a proposal for a project to create a new standard graph query language (ISO/IEC 39075 Information Technology — Database Languages — GQL) was approved by members of ISO/IEC Joint Technical Committee 1 (ISO/IEC JTC 1). GQL is intended to be a declarative database query language, like SQL. In addition to query language interfaces, some graph databases are accessed through application programming interfaces (APIs).
Graph databases differ from graph compute engines. Graph databases are technologies that are translations of the relational online transaction processing (OLTP) databases. On the other hand, graph compute engines are used in online analytical processing (OLAP) for bulk analysis.[5] Graph databases attracted considerable attention in the 2000s, due to the successes of major technology corporations in using proprietary graph databases,[6] along with the introduction of open-source graph databases.
One study concluded that an RDBMS was "comparable" in performance to existing graph analysis engines at executing graph queries.[7]
In the mid-1960s, navigational databases such as IBM's IMS supported tree-like structures in a hierarchical model, but the strict tree structure could be circumvented with virtual records.[8][9]
Graph structures could be represented in network model databases from the late 1960s. CODASYL, which had defined COBOL in 1959, defined the Network Database Language in 1969.
Labeled graphs could be represented in graph databases from the mid-1980s, such as the Logical Data Model.[10][11]
Commercial object databases (ODBMSs) emerged in the early 1990s. In 2000, the Object Data Management Group published a standard language for defining object and relationship (graph) structures in their ODMG'93 publication.
Several improvements to graph databases appeared in the early 1990s, accelerating in the late 1990s with endeavors to index web pages.
In the mid-to-late 2000s, commercial graph databases with ACID guarantees, such as Neo4j and Oracle Spatial and Graph, became available.
In the 2010s, commercial ACID graph databases that could be scaled horizontally became available. Further, SAP HANA brought in-memory and columnar technologies to graph databases.[12] Also in the 2010s, multi-model databases that supported graph models (alongside other models such as relational or document-oriented databases) became available, such as OrientDB, ArangoDB, and MarkLogic (starting with its 7.0 version). During this time, graph databases of various types became especially popular for social network analysis with the advent of social media companies. Also during the decade, cloud-based graph databases such as Amazon Neptune and Neo4j AuraDB became available.
Graph databases portray data as it is viewed conceptually. This is accomplished by transferring the data into nodes and the relationships between data into edges.
A graph database is a database that is based on graph theory. It consists of a set of objects, each of which can be a node or an edge.
A labeled-property graph model is represented by a set of nodes, relationships, properties, and labels. Both nodes and their relationships are named and can store properties represented by key–value pairs. Nodes can be labelled to be grouped. The edges representing the relationships have two qualities: they always have a start node and an end node, and they are directed,[13] making the graph a directed graph. Relationships can also have properties. This is useful in providing additional metadata and semantics to relationships of the nodes.[14] Direct storage of relationships allows a constant-time traversal.[15]
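The labeled-property model just described can be sketched in a few lines of Python. This is a hypothetical in-memory illustration, not any particular product's API: nodes carry labels and key–value properties, and each relationship is a directed edge with its own properties, stored directly on its start node.

```python
# Minimal in-memory sketch of a labeled-property graph (illustrative only).
class Node:
    def __init__(self, labels, **props):
        self.labels = set(labels)     # e.g. {"Person"}
        self.props = props            # key-value properties
        self.out_edges = []           # direct references to outgoing edges

class Edge:
    def __init__(self, start, end, rel_type, **props):
        self.start, self.end = start, end  # always a start node and an end node
        self.type = rel_type
        self.props = props                 # relationships can carry properties too
        start.out_edges.append(self)

alice = Node(["Person"], name="Alice")
acme = Node(["Company"], name="Acme")
Edge(alice, acme, "WORKS_AT", since=2020)

# Follow the stored relationship directly from the node.
employers = [e.end.props["name"] for e in alice.out_edges if e.type == "WORKS_AT"]
print(employers)  # ['Acme']
```

Because each node holds references to its own edges, following a relationship is a constant-time operation, which is the point the text makes about direct storage of relationships.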
In an RDF graph model, each addition of information is represented with a separate node. For example, imagine a scenario where a user has to add a name property for a person represented as a distinct node in the graph. In a labeled-property graph model, this would be done by adding a name property to the node for the person. However, in RDF, the user has to add a separate node called hasName connecting it to the original person node. Specifically, an RDF graph model is composed of nodes and arcs. An RDF graph notation, or statement, is represented by a node for the subject, a node for the object, and an arc for the predicate. A node may be left blank, be a literal, and/or be identified by a URI. An arc may also be identified by a URI. A literal for a node may be of two types: plain (untyped) and typed. A plain literal has a lexical form and optionally a language tag. A typed literal is made up of a string with a URI that identifies a particular datatype. A blank node may be used to accurately illustrate the state of the data when the data does not have a URI.[16]
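The contrast between the two models can be made concrete with plain Python data: a property-graph node stores the name as a field, while the RDF version states the same fact as a separate (subject, predicate, object) triple. The identifiers ex:person1 and ex:hasName are invented for illustration, not drawn from any real vocabulary.

```python
# Property-graph style: the name lives inside the person's node.
person = {"id": "person1", "name": "Alice"}

# RDF style: the same fact is a separate (subject, predicate, object)
# statement; the subject is a URI-identified node, the object a plain literal.
triples = {
    ("ex:person1", "ex:hasName", "Alice"),
}

# Querying means pattern-matching over statements rather than reading a field.
names = [o for (s, p, o) in triples if s == "ex:person1" and p == "ex:hasName"]
print(names)  # ['Alice']
```

The design difference is visible in the query: the property-graph form reads a field of one record, while the RDF form filters the global set of statements.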
Graph databases are a powerful tool for graph-like queries, such as computing the shortest path between two nodes in the graph. Other graph-like queries can be performed over a graph database in a natural way (for example, computing a graph's diameter or detecting communities).
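On an unweighted graph, the shortest-path query mentioned above reduces to a breadth-first search; a small self-contained sketch (the adjacency dict and node names are invented toy data):

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search over an adjacency-dict graph; returns a node list."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no route between the two nodes

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(shortest_path(graph, "A", "E"))  # ['A', 'B', 'D', 'E']
```

Graph query languages express the same idea declaratively, but the underlying traversal is of this kind.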
Graphs are flexible, meaning that new data can be inserted into an existing graph without loss of application functionality. There is no need for the designer of the database to plan out extensive details of the database's future use cases.
The underlying storage mechanism of graph databases can vary. Some depend on a relational engine and "store" the graph data in a table (although a table is a logical element, so this approach imposes another level of abstraction between the graph database, the graph database management system and the physical devices where the data is actually stored). Others use a key–value store or document-oriented database for storage, making them inherently NoSQL structures. A node would be represented like any other document, but the edge documents that link two different nodes hold special attributes: a _from and a _to attribute.
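In a document store used this way, an edge is simply another document whose _from and _to fields name the documents it connects. The sketch below uses plain dicts; the collection-prefixed keys follow the convention the text describes (as used, for example, by ArangoDB), but the data itself is invented.

```python
# Node documents look like any other document...
nodes = {
    "people/alice": {"name": "Alice"},
    "people/bob":   {"name": "Bob"},
}

# ...while edge documents carry the special _from/_to attributes.
edges = [
    {"_from": "people/alice", "_to": "people/bob", "type": "knows"},
]

# Traversal amounts to scanning edge documents for matching _from keys.
friends = [nodes[e["_to"]]["name"] for e in edges if e["_from"] == "people/alice"]
print(friends)  # ['Bob']
```

Real document-store graph layers index the _from and _to fields so this scan becomes a fast lookup.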
Data lookup performance depends on the access speed from one particular node to another. Because index-free adjacency enforces that nodes have direct physical RAM addresses and physically point to other adjacent nodes, it results in fast retrieval. A native graph system with index-free adjacency does not have to move through any other type of data structure to find links between nodes. Directly related nodes in a graph are stored in the cache once one of the nodes is retrieved, making the data lookup even faster than the first time a user fetches a node. However, this advantage comes at a cost. Index-free adjacency sacrifices the efficiency of queries that do not use graph traversals. Native graph databases use index-free adjacency to process CRUD operations on the stored data.
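The difference between index-based and index-free adjacency can be caricatured in Python: one neighbour lookup goes through a global index keyed by node id, the other simply dereferences references the node already holds. This is only an analogy for the RAM-address pointers the text describes, not a real storage engine.

```python
# Index-based adjacency: edges live in a global structure, looked up by key.
edge_index = {"a": ["b", "c"], "b": ["c"]}

def neighbors_via_index(node_id):
    return edge_index.get(node_id, [])  # every hop pays an index lookup

# Index-free adjacency: each node holds direct references to its neighbours.
class Node:
    def __init__(self, name):
        self.name = name
        self.adjacent = []              # direct pointers; no lookup needed

a, b, c = Node("a"), Node("b"), Node("c")
a.adjacent = [b, c]
b.adjacent = [c]

# Both give the same answer, but the second follows references without
# consulting any index structure.
print(neighbors_via_index("a"))            # ['b', 'c']
print([n.name for n in a.adjacent])        # ['b', 'c']
```

The trade-off the text mentions also shows up here: a query that is not a traversal (say, "all nodes named c") gains nothing from the stored pointers and must scan or use a separate index.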
Multiple categories of graphs by kind of data have been recognised. Gartner suggests the five broad categories of graphs:[17]
Since Edgar F. Codd's 1970 paper on the relational model,[18] relational databases have been the de facto industry standard for large-scale data storage systems. Relational models require a strict schema and data normalization, which separates data into many tables and removes any duplicate data within the database. Data is normalized in order to preserve data consistency and support ACID transactions. However, this imposes limitations on how relationships can be queried.
One of the relational model's design motivations was to achieve fast row-by-row access.[18] Problems arise when there is a need to form complex relationships between the stored data. Although relationships can be analyzed with the relational model, complex queries performing many join operations on many different attributes over several tables are required. In working with relational models, foreign key constraints must also be considered when retrieving relationships, causing additional overhead.
Compared with relational databases, graph databases are often faster for associative data sets[citation needed] and map more directly to the structure of object-oriented applications. They can scale more naturally[citation needed] to large datasets as they do not typically need join operations, which can often be expensive. As they depend less on a rigid schema, they are marketed as more suitable for managing ad hoc and changing data with evolving schemas.
Conversely, relational database management systems are typically faster at performing the same operation on large numbers of data elements, permitting the manipulation of the data in its natural structure. Despite the graph databases' advantages and recent popularity over[citation needed] relational databases, it is recommended that the graph model itself should not be the sole reason to replace an existing relational database. A graph database may become relevant if there is evidence of a performance improvement by orders of magnitude and lower latency.[19]
The relational model gathers data together using information in the data. For example, one might look for all the "users" whose phone number contains the area code "311". This would be done by searching selected datastores, or tables, looking in the selected phone number fields for the string "311". This can be a time-consuming process in large tables, so relational databases offer indexes, which allow data to be stored in a smaller sub-table containing only the selected data and a unique key (or primary key) for each record. If the phone numbers are indexed, the same search would occur in the smaller index table, gathering the keys of matching records, and then looking in the main data table for the records with those keys. Usually, a table is stored in a way that allows a lookup via a key to be very fast.[20]
Relational databases do not inherently contain the idea of fixed relationships between records. Instead, related data is linked by storing one record's unique key in another record's data. For example, a table containing email addresses for users might hold a data item called userpk, which contains the primary key of the user record it is associated with. In order to link users and their email addresses, the system first looks up the primary keys of the selected user records, looks for those keys in the userpk column of the email table (or, more likely, an index of them), extracts the email data, and then links the user and email records to make composite records containing all the selected data. This operation, termed a join, can be computationally expensive. Depending on the complexity of the query, the number of joins, and the indexing of various keys, the system may have to search through multiple tables and indexes and then sort it all to match it together.[20]
In contrast, graph databases directly store the relationships between records. Instead of an email address being found by looking up its user's key in the userpk column, the user record contains a pointer that directly refers to the email address record. That is, having selected a user, the pointer can be followed directly to the email records; there is no need to search the email table to find the matching records. This can eliminate the costly join operations. For example, if one searches for all of the email addresses for users in area code "311", the engine would first perform a conventional search to find the users in "311", but then retrieve the email addresses by following the links found in those records. A relational database would first find all the users in "311", extract a list of the primary keys, perform another search for any records in the email table with those primary keys, and link the matching records together. For these types of common operations, graph databases would theoretically be faster.[20]
The true value of the graph approach becomes evident when one performs searches that are more than one level deep. For example, consider a search for users who have "subscribers" (a table linking users to other users) in the "311" area code. In this case a relational database has to first search for all the users with an area code in "311", then search the subscribers table for any of those users, and then finally search the users table to retrieve the matching users. In contrast, a graph database would search for all the users in "311", then follow the backlinks through the subscriber relationship to find the subscriber users. This avoids several searches, look-ups, and the memory usage involved in holding all of the temporary data from multiple records needed to construct the output. In terms of big O notation, this query would take O(log n) + O(1) time, i.e. proportional to the logarithm of the size of the data. In contrast, the relational version would be multiple O(log n) lookups, plus the O(n) time needed to join all of the data records.[20]
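The two-level search just described can be sketched directly: one conventional filter finds the "311" users, after which the engine only follows links already stored on those records. The field names and toy data below are invented for illustration.

```python
users = [
    {"name": "Ann",  "area": "311", "subscribers": []},
    {"name": "Bob",  "area": "415", "subscribers": []},
    {"name": "Cleo", "area": "311", "subscribers": []},
]
# Direct links: Bob and Cleo subscribe to Ann.
users[0]["subscribers"] = [users[1], users[2]]

# One conventional search for the "311" users, then pointer-following only;
# no further table lookups are needed to reach the subscriber records.
in_311 = [u for u in users if u["area"] == "311"]
subscriber_names = sorted({s["name"] for u in in_311 for s in u["subscribers"]})
print(subscriber_names)  # ['Bob', 'Cleo']
```

A relational version of the same query would instead collect the primary keys of the "311" users and join them against a separate subscribers table.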
The relative advantage of graph retrieval grows with the complexity of a query. For example, one might want to know "that movie about submarines with the actor who was in that movie with that other actor that played the lead in Gone With the Wind". This first requires the system to find the actors in Gone With the Wind, find all the movies they were in, find all the actors in all of those movies who were not the lead in Gone With the Wind, and then find all of the movies they were in, finally filtering that list to those with descriptions containing "submarine". In a relational database, this would require several separate searches through the movies and actors tables, doing another search on submarine movies, finding all the actors in those movies, and then comparing the (large) collected results. In contrast, the graph database would walk from Gone With the Wind to Clark Gable, gather the links to the movies he has been in, gather the links out of those movies to other actors, and then follow the links out of those actors back to the list of movies. The resulting list of movies can then be searched for "submarine". All of this can be done via one search.[21]
Properties add another layer of abstraction to this structure that also improves many common queries. Properties are essentially labels that can be applied to any record, or in some cases, to edges as well. For example, one might label Clark Gable as "actor", which would then allow the system to quickly find all the records that are actors, as opposed to directors or camera operators. If labels on edges are allowed, one could also label the relationship between Gone With the Wind and Clark Gable as "lead", and by performing a search on people that are a "lead" "actor" in the movie Gone With the Wind, the database would produce Vivien Leigh, Olivia de Havilland, and Clark Gable. The equivalent SQL query would have to rely on added data in the table linking people and movies, adding more complexity to the query syntax. These sorts of labels may improve search performance under certain circumstances, but are generally more useful in providing added semantic data for end users.[21]
Relational databases are very well suited to flat data layouts, where relationships between data are only one or two levels deep. For example, an accounting database might need to look up all the line items for all the invoices for a given customer, a three-join query. Graph databases are aimed at datasets that contain many more links. They are especially well suited to social networking systems, where the "friends" relationship is essentially unbounded. These properties make graph databases naturally suited to the types of searches that are increasingly common in online systems and in big data environments. For this reason, graph databases are becoming very popular for large online systems like Facebook, Google, Twitter, and similar systems with deep links between records.
To further illustrate, imagine a relational model with two tables: a people table (which has person_id and person_name columns) and a friend table (with friend_id and person_id columns, the latter a foreign key from the people table). In this case, searching for all of Jack's friends requires an SQL query that joins the people table to itself through the friend table.
The same query may be translated into a graph query language such as Cypher.
The above examples are a simple illustration of a basic relationship query. They condense the idea that the query complexity of relational models increases with the total amount of data. In comparison, a graph database query is easily able to sort through the relationship graph to present the results.
There are also results indicating that the simple, condensed, and declarative queries of graph databases do not necessarily provide good performance in comparison with relational databases. While graph databases offer an intuitive representation of data, relational databases offer better results when set operations are needed.[15]
The following is a list of notable graph databases:
It is implemented as a peer-to-peer network featuring multi-master replication with a custom commutative replicated data type (CRDT).[32]
https://en.wikipedia.org/wiki/Graph_database
In linguistics, a grapheme is the smallest functional unit of a writing system.[1] The word grapheme is derived from Ancient Greek gráphō ('write') and the suffix -eme, by analogy with phoneme and other emic units. The study of graphemes is called graphemics. The concept of graphemes is abstract and similar to the notion in computing of a character. (A specific geometric shape that represents any particular grapheme in a given typeface is called a glyph.)
There are two main opposing grapheme concepts.[2]
In the so-called referential conception, graphemes are interpreted as the smallest units of writing that correspond with sounds (more accurately, phonemes). In this concept, the sh in the written English word shake would be a grapheme because it represents the phoneme /ʃ/. This referential concept is linked to the dependency hypothesis, which claims that writing merely depicts speech.
By contrast, the analogical conception defines graphemes analogously to phonemes, i.e. via written minimal pairs such as shake vs. snake. In this example, h and n are graphemes because they distinguish two words. This analogical concept is associated with the autonomy hypothesis, which holds that writing is a system in its own right and should be studied independently from speech. Both concepts have weaknesses.[3]
Some models adhere to both concepts simultaneously by including two individual units,[4] which are given names such as graphemic grapheme for the grapheme according to the analogical conception (h in shake) and phonological-fit grapheme for the grapheme according to the referential conception (sh in shake).[5]
In newer concepts, in which the grapheme is interpreted semiotically as a dyadic linguistic sign,[6] it is defined as a minimal unit of writing that is both lexically distinctive and corresponds with a linguistic unit (phoneme, syllable, or morpheme).[7]
Graphemes are often notated within angle brackets, e.g. ⟨a⟩.[8] This is analogous to the slash notation /a/ used for phonemes. Analogous to the square bracket notation [a] used for phones, glyphs are sometimes denoted with vertical lines, e.g. |ɑ|.[9]
In the same way that thesurface formsofphonemesare speech sounds orphones(and different phones representing the same phoneme are calledallophones), the surface forms of graphemes areglyphs(sometimesgraphs), namely concrete written representations of symbols (and different glyphs representing the same grapheme are calledallographs).
Thus, a grapheme can be regarded as anabstractionof a collection of glyphs that are all functionally equivalent.
For example, in written English (or other languages using the Latin alphabet), there are two different physical representations of the lowercase Latin letter "a": "a" and "ɑ". Since, however, the substitution of either of them for the other cannot change the meaning of a word, they are considered to be allographs of the same grapheme, which can be written ⟨a⟩. Similarly, the grapheme corresponding to "Arabic numeral zero" has a unique semantic identity and Unicode value U+0030 but exhibits variation in the form of slashed zero. Italic and bold face forms are also allographic, as is the variation seen in serif (as in Times New Roman) versus sans-serif (as in Helvetica) forms.
There is some disagreement as to whether capital and lower case letters are allographs or distinct graphemes. Capitals are generally found in certain triggering contexts that do not change the meaning of a word: a proper name, for example, or at the beginning of a sentence, or all caps in a newspaper headline. In other contexts, capitalization can determine meaning: compare, for example, Polish and polish: the former is a language, the latter is for shining shoes.
Some linguists consider digraphs like the ⟨sh⟩ in ship to be distinct graphemes, but these are generally analyzed as sequences of graphemes. Non-stylistic ligatures, however, such as ⟨æ⟩, are distinct graphemes, as are various letters with distinctive diacritics, such as ⟨ç⟩.
Identical glyphs may not always represent the same grapheme. For example, the three letters ⟨A⟩, ⟨А⟩ and ⟨Α⟩ appear identical but each has a different meaning: in order, they are the Latin letter A, the Cyrillic letter Azǔ/Азъ and the Greek letter Alpha. Each has its own code point in Unicode: U+0041 A LATIN CAPITAL LETTER A, U+0410 А CYRILLIC CAPITAL LETTER A and U+0391 Α GREEK CAPITAL LETTER ALPHA.
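The distinctness of these look-alike letters can be checked directly from their Unicode properties; a short Python sketch:

```python
import unicodedata

# Visually identical capital letters from three scripts are three
# distinct code points, i.e. three different graphemes.
latin, cyrillic, greek = "A", "А", "Α"

codes = [f"U+{ord(c):04X}" for c in (latin, cyrillic, greek)]
# codes == ['U+0041', 'U+0410', 'U+0391']

name = unicodedata.name(cyrillic)  # 'CYRILLIC CAPITAL LETTER A'
same = latin == cyrillic           # False: string equality compares code points
```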
The principal types of graphemes are logograms (more accurately termed morphograms[10]), which represent words or morphemes (for example Chinese characters, the ampersand "&" representing the word and, Arabic numerals); syllabic characters, representing syllables (as in Japanese kana); and alphabetic letters, corresponding roughly to phonemes (see next section). For a full discussion of the different types, see Writing system § Functional classification.
There are additional graphemic components used in writing, such as punctuation marks, mathematical symbols, word dividers such as the space, and other typographic symbols. Ancient logographic scripts often used silent determinatives to disambiguate the meaning of a neighboring (non-silent) word.
As mentioned in the previous section, in languages that use alphabetic writing systems, many of the graphemes stand in principle for the phonemes (significant sounds) of the language. In practice, however, the orthographies of such languages entail at least a certain amount of deviation from the ideal of exact grapheme–phoneme correspondence. A phoneme may be represented by a multigraph (sequence of more than one grapheme), as the digraph sh represents a single sound in English (and sometimes a single grapheme may represent more than one phoneme, as with the Russian letter я or the Spanish c). Some graphemes may not represent any sound at all (like the b in English debt or the h in all Spanish words containing that letter), and often the rules of correspondence between graphemes and phonemes become complex or irregular, particularly as a result of historical sound changes that are not necessarily reflected in spelling. "Shallow" orthographies such as those of standard Spanish and Finnish have relatively regular (though not always one-to-one) correspondence between graphemes and phonemes, while those of French and English have much less regular correspondence, and are known as deep orthographies.
Multigraphs representing a single phoneme are normally treated as combinations of separate letters, not as graphemes in their own right. However, in some languages a multigraph may be treated as a single unit for the purposes of collation; for example, in a Czech dictionary, the section for words that start with ⟨ch⟩ comes after that for ⟨h⟩.[11] For more examples, see Alphabetical order § Language-specific conventions.
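The collation behaviour can be sketched with a custom sort key. This is a deliberately simplified illustration (real Czech collation, e.g. via ICU, has many more rules): the alphabet here is just the basic Latin letters with ⟨ch⟩ inserted as a single unit after ⟨h⟩.

```python
import re

# Simplified Czech-style ordering: "ch" is one collation unit, after "h".
ALPHABET = ["a", "b", "c", "d", "e", "f", "g", "h", "ch", "i", "j", "k",
            "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w",
            "x", "y", "z"]
RANK = {unit: i for i, unit in enumerate(ALPHABET)}

def collation_key(word):
    # Tokenize greedily so "ch" is matched before its single letters.
    units = re.findall(r"ch|.", word.lower())
    return [RANK.get(u, len(ALPHABET)) for u in units]

words = ["chata", "cena", "hora"]
ordered = sorted(words, key=collation_key)
# ordered == ['cena', 'hora', 'chata']: "chata" sorts after "hora"
```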
|
https://en.wikipedia.org/wiki/Grapheme
|
Graphemics or graphematics is the linguistic study of writing systems and their basic components, i.e. graphemes.
At the beginning of the development of this area of linguistics, Ignace Gelb coined the term grammatology for this discipline;[1] later some scholars suggested calling it graphology[2] to match phonology, but that name is traditionally used for a pseudo-science. Others therefore suggested renaming the study of language-dependent pronunciation phonemics or phonematics instead, but this did not gain widespread acceptance either, so the terms graphemics and graphematics became more frequent.
Graphemics examines the specifics of written texts in a certain language and their correspondence to the spoken language. One major task is the descriptive analysis of implicit regularities in written words and texts (graphotactics) to formulate explicit rules (orthography) for the writing system that can be used in prescriptive education or in computer linguistics, e.g. for speech synthesis.
In analogy to phoneme and (allo)phone in phonology, the graphic units of language are graphemes, i.e. language-specific characters, and graphs, i.e. language-specific glyphs. Different schools of thought consider different entities to be graphemes; major points of divergence are the handling of punctuation, diacritic marks, digraphs or other multigraphs, and non-alphabetic scripts.
Analogous to phonetics, the "etic" counterpart of graphemics is called graphetics and deals with the material side only (including paleography, typography and graphology).
The term grammatology was first promoted in English by linguist Ignace Gelb in his 1952 book A Study of Writing.[1] The equivalent word is recorded in German and French use long before then.[3][4] Grammatology can examine the typology of scripts, the analysis of the structural properties of scripts, and the relationship between written and spoken language.[5] In its broadest sense, some scholars also include the study of literacy in grammatology and, indeed, the impact of writing on philosophy, religion, science, administration and other aspects of the organization of society.[6] Historian Bruce Trigger associates grammatology with cultural evolution.[7]
Graphotactics refers to rules which restrict the allowable sequences of letters in alphabetic languages.[8]: 67 A common example is the partially correct "I before E except after C". However, there are exceptions; for example, Edward Carney, in his book A Survey of English Spelling, refers to the "I before E except after C" rule instead as an example of a "phonotactic rule".[8]: 161 Graphotactical rules are useful in error detection by optical character recognition systems.[9]
In studies of Old English, "graphotactics" is also used to refer to the variable-length spacing between words.[10]
The scholars most immediately associated with grammatology, understood as the history and theory of writing, include Eric Havelock (The Muse Learns to Write), Walter J. Ong (Orality and Literacy), Jack Goody (Domestication of the Savage Mind), and Marshall McLuhan (The Gutenberg Galaxy). Grammatology brings to any topic a consideration of the contribution of technology and the material and social apparatus of language. A more theoretical treatment of the approach may be seen in the works of Friedrich Kittler (Discourse Networks: 1800/1900) and Avital Ronell (The Telephone Book).
Swiss linguist Ferdinand de Saussure, who is considered to be a key figure in structural approaches to language,[11] saw speech and writing as "two distinct systems of signs", with the second having "the sole purpose of representing the first",[12] a view further explained in Peter Barry's Beginning Theory. In the 1960s, with the writings of Roland Barthes and Jacques Derrida, critiques were put forth of this proposed relation.
In 1967, Jacques Derrida borrowed the term, but put it to different use, in his book Of Grammatology. Derrida aimed to show that writing is not simply a reproduction of speech, but that the way in which thoughts are recorded in writing strongly affects the nature of knowledge. Deconstruction from a grammatological perspective places the history of philosophy in general, and metaphysics in particular, in the context of writing as such. In this perspective, metaphysics is understood as a category or classification system relative to the invention of alphabetic writing and its institutionalization in the School. Plato's Academy and Aristotle's Lyceum are as much a part of the invention of literacy as is the introduction of the vowel to create the Classical Greek alphabet. Gregory Ulmer took up this trajectory, from historical to philosophical grammatology, to add applied grammatology (Applied Grammatology: Post(e)-Pedagogy from Jacques Derrida to Joseph Beuys, Johns Hopkins, 1985). Ulmer coined the term "electracy" to call attention to the fact that digital technologies and their elaboration in new media forms are part of an apparatus that is to these inventions what literacy is to alphabetic and print technologies.
|
https://en.wikipedia.org/wiki/Graphemics
|
Graphics are two-dimensional images.
Graphic(s) or The Graphic may also refer to:
|
https://en.wikipedia.org/wiki/Graphic_(disambiguation)
|
The English suffix -graphy means a "field of study" or relates to "writing" a book, and is an anglicization of the French -graphie, inherited from the Latin -graphia, which is a transliterated direct borrowing from Greek.
|
https://en.wikipedia.org/wiki/-graphy
|
This is a list of software to create any kind of information graphics:
Vector graphics software can be used for manual graphing or for editing the output of another program; see:
A few online editors using vector graphics for specific needs have been created.[citation needed] These kinds of creative interfaces work well together with data visualization tools like the ones above.[citation needed]
|
https://en.wikipedia.org/wiki/List_of_information_graphics_software
|
Statistical graphics, also known as statistical graphical techniques, are graphics used in the field of statistics for data visualization.
Whereas statistics and data analysis procedures generally yield their output in numeric or tabular form, graphical techniques allow such results to be displayed in some sort of pictorial form. They include plots such as scatter plots, histograms, probability plots, spaghetti plots, residual plots, box plots, block plots and biplots.[1]
Exploratory data analysis (EDA) relies heavily on such techniques. They can also provide insight into a data set to help with testing assumptions, model selection and regression model validation, estimator selection, relationship identification, factor effect determination, and outlier detection. In addition, the choice of appropriate statistical graphics can provide a convincing means of communicating the underlying message that is present in the data to others.[1]
Graphical statistical methods have four objectives:[2]
If one is not using statistical graphics, then one is forfeiting insight into one or more aspects of the underlying structure of the data.
Statistical graphics have been central to the development of science and date to the earliest attempts to analyse data. Many familiar forms, including bivariate plots, statistical maps, bar charts, and coordinate paper, were used in the 18th century. Statistical graphics developed through attention to four problems:[3]
Since the 1970s statistical graphics have been re-emerging as an important analytic tool with the revitalisation of computer graphics and related technologies.[3]
Famous graphics were designed by:
See the plots page for many more examples of statistical graphics.
This article incorporates public domain material from the National Institute of Standards and Technology.
|
https://en.wikipedia.org/wiki/Statistical_graphics
|
Curve fitting[1][2] is the process of constructing a curve, or mathematical function, that has the best fit to a series of data points,[3] possibly subject to constraints.[4][5] Curve fitting can involve either interpolation,[6][7] where an exact fit to the data is required, or smoothing,[8][9] in which a "smooth" function is constructed that approximately fits the data. A related topic is regression analysis,[10][11] which focuses more on questions of statistical inference such as how much uncertainty is present in a curve that is fitted to data observed with random errors. Fitted curves can be used as an aid for data visualization,[12][13] to infer values of a function where no data are available,[14] and to summarize the relationships among two or more variables.[15] Extrapolation refers to the use of a fitted curve beyond the range of the observed data,[16] and is subject to a degree of uncertainty[17] since it may reflect the method used to construct the curve as much as it reflects the observed data.
For linear-algebraic analysis of data, "fitting" usually means trying to find the curve that minimizes the vertical (y-axis) displacement of a point from the curve (e.g., ordinary least squares). However, for graphical and image applications, geometric fitting seeks to provide the best visual fit, which usually means trying to minimize the orthogonal distance to the curve (e.g., total least squares), or to otherwise include both axes of displacement of a point from the curve. Geometric fits are not popular because they usually require non-linear and/or iterative calculations, although they have the advantage of a more aesthetic and geometrically accurate result.[18][19][20]
Most commonly, one fits a function of the form y = f(x).
The first-degree polynomial equation

y = ax + b

is a line with slope a. A line will connect any two points, so a first-degree polynomial equation is an exact fit through any two points with distinct x coordinates.
If the order of the equation is increased to a second-degree polynomial, the following results:

y = ax² + bx + c

This will exactly fit a simple curve to three points.
If the order of the equation is increased to a third-degree polynomial, the following is obtained:

y = ax³ + bx² + cx + d

This will exactly fit four points.
A more general statement would be to say it will exactly fit four constraints. Each constraint can be a point, angle, or curvature (which is the reciprocal of the radius of an osculating circle). Angle and curvature constraints are most often added to the ends of a curve, and in such cases are called end conditions. Identical end conditions are frequently used to ensure a smooth transition between polynomial curves contained within a single spline. Higher-order constraints, such as "the change in the rate of curvature", could also be added. This, for example, would be useful in highway cloverleaf design to understand the rate of change of the forces applied to a car (see jerk), as it follows the cloverleaf, and to set reasonable speed limits accordingly.
The first-degree polynomial equation could also be an exact fit for a single point and an angle, while the third-degree polynomial equation could also be an exact fit for two points, an angle constraint, and a curvature constraint. Many other combinations of constraints are possible for these and for higher-order polynomial equations.
If there are more than n + 1 constraints (n being the degree of the polynomial), the polynomial curve can still be run through those constraints. An exact fit to all constraints is not certain (but might happen, for example, in the case of a first-degree polynomial exactly fitting three collinear points). In general, however, some method is then needed to evaluate each approximation. The least squares method is one way to compare the deviations.
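The exact-fit versus over-constrained cases can be illustrated with NumPy; this is a sketch with made-up data, using `np.polyfit` to perform the least-squares fit described above:

```python
import numpy as np

# Five points with distinct x values that do not lie on one line.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 0.9, 2.2, 2.8, 4.1])

# Degree 1: 5 point constraints exceed n + 1 = 2, so only an
# approximate, least-squares fit is possible.
line = np.polyfit(x, y, 1)

# Degree 4: exactly n + 1 = 5 coefficients, so the fit is exact.
quartic = np.polyfit(x, y, 4)

sse_line = np.sum((np.polyval(line, x) - y) ** 2)
sse_quartic = np.sum((np.polyval(quartic, x) - y) ** 2)
# sse_quartic is numerically zero; sse_line is not.
```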
There are several reasons for preferring an approximate fit even when it is possible to simply increase the degree of the polynomial equation and get an exact match:
The degree of the polynomial curve being higher than needed for an exact fit is undesirable for all the reasons listed previously for high-order polynomials, but it also leads to a case where there are an infinite number of solutions. For example, a first-degree polynomial (a line) constrained by only a single point, instead of the usual two, would give an infinite number of solutions. This brings up the problem of how to compare and choose just one solution, which can be a problem for both software and humans. Because of this, it is usually best to choose as low a degree as possible for an exact match on all constraints, and perhaps an even lower degree if an approximate fit is acceptable.
Other types of curves, such as trigonometric functions (such as sine and cosine), may also be used, in certain cases.
In spectroscopy, data may be fitted with Gaussian, Lorentzian, Voigt and related functions.
In biology, ecology, demography, epidemiology, and many other disciplines, the growth of a population, the spread of infectious disease, etc. can be fitted using the logistic function.
In agriculture the inverted logistic sigmoid function (S-curve) is used to describe the relation between crop yield and growth factors. The blue figure was made by a sigmoid regression of data measured in farm lands. It can be seen that initially, i.e. at low soil salinity, the crop yield reduces slowly with increasing soil salinity, while thereafter the decrease progresses faster.
If a function of the form y = f(x) cannot be postulated, one can still try to fit a plane curve.
Other types of curves, such as conic sections (circular, elliptical, parabolic, and hyperbolic arcs) or trigonometric functions (such as sine and cosine), may also be used, in certain cases. For example, trajectories of objects under the influence of gravity follow a parabolic path, when air resistance is ignored. Hence, matching trajectory data points to a parabolic curve would make sense. Tides follow sinusoidal patterns, hence tidal data points should be matched to a sine wave, or the sum of two sine waves of different periods, if the effects of the Moon and Sun are both considered.
For a parametric curve, it is effective to fit each of its coordinates as a separate function of arc length; assuming that data points can be ordered, the chord distance may be used.[22]
Coope[23] approaches the problem of trying to find the best visual fit of a circle to a set of 2D data points. The method elegantly transforms the ordinarily non-linear problem into a linear problem that can be solved without using iterative numerical methods, and is hence much faster than previous techniques.
The above technique is extended to general ellipses[24] by adding a non-linear step, resulting in a method that is fast, yet finds visually pleasing ellipses of arbitrary orientation and displacement.
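The key observation behind this kind of algebraic circle fit is that expanding (x − a)² + (y − b)² = r² yields an equation linear in the unknowns A = 2a, B = 2b and C = r² − a² − b², which ordinary least squares can solve directly. A sketch of that reformulation (function name and test data are illustrative, not taken from the paper):

```python
import numpy as np

def fit_circle_linear(x, y):
    """Algebraic circle fit: solve [x y 1] @ (A, B, C) = x^2 + y^2."""
    M = np.column_stack([x, y, np.ones_like(x)])
    (A, B, C), *_ = np.linalg.lstsq(M, x**2 + y**2, rcond=None)
    a, b = A / 2.0, B / 2.0        # centre coordinates
    r = np.sqrt(C + a**2 + b**2)   # radius
    return a, b, r

# Noisy samples from a circle of radius 2 centred at (1, -3).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 60)
x = 1.0 + 2.0 * np.cos(t) + rng.normal(0.0, 0.02, t.size)
y = -3.0 + 2.0 * np.sin(t) + rng.normal(0.0, 0.02, t.size)
a, b, r = fit_circle_linear(x, y)
# a, b, r recover roughly 1, -3, 2
```

Because the system is linear, no initial guess or iteration is needed, which is exactly the speed advantage described above.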
Note that while this discussion was in terms of 2D curves, much of this logic also extends to 3D surfaces, each patch of which is defined by a net of curves in two parametric directions, typically called u and v. A surface may be composed of one or more surface patches in each direction.
Many statistical packages such as R and numerical software such as gnuplot, GNU Scientific Library, Igor Pro, MLAB, Maple, MATLAB, TK Solver 6.0, Scilab, Mathematica, GNU Octave, and SciPy include commands for doing curve fitting in a variety of scenarios. There are also programs specifically written to do curve fitting; they can be found in the lists of statistical and numerical-analysis programs as well as in Category:Regression and curve fitting software.
|
https://en.wikipedia.org/wiki/Curve_fitting
|
In statistics, linear regression is a model that estimates the relationship between a scalar response (dependent variable) and one or more explanatory variables (regressor or independent variable). A model with exactly one explanatory variable is a simple linear regression; a model with two or more explanatory variables is a multiple linear regression.[1] This term is distinct from multivariate linear regression, which predicts multiple correlated dependent variables rather than a single dependent variable.[2]
In linear regression, the relationships are modeled using linear predictor functions whose unknown model parameters are estimated from the data. Most commonly, the conditional mean of the response given the values of the explanatory variables (or predictors) is assumed to be an affine function of those values; less commonly, the conditional median or some other quantile is used. Like all forms of regression analysis, linear regression focuses on the conditional probability distribution of the response given the values of the predictors, rather than on the joint probability distribution of all of these variables, which is the domain of multivariate analysis.
Linear regression is also a type of machine learning algorithm, more specifically a supervised algorithm, that learns from labelled datasets and fits a linear function that can be used for prediction on new datasets.[3]
Linear regression was the first type of regression analysis to be studied rigorously, and to be used extensively in practical applications.[4] This is because models which depend linearly on their unknown parameters are easier to fit than models which are non-linearly related to their parameters, and because the statistical properties of the resulting estimators are easier to determine.
Linear regression has many practical uses. Most applications fall into one of the following two broad categories:
Linear regression models are often fitted using the least squares approach, but they may also be fitted in other ways, such as by minimizing the "lack of fit" in some other norm (as with least absolute deviations regression), or by minimizing a penalized version of the least squares cost function as in ridge regression (L2-norm penalty) and lasso (L1-norm penalty). Using the mean squared error (MSE) as the cost on a dataset that has many large outliers can result in a model that fits the outliers more than the true data, because of the higher importance assigned by MSE to large errors; cost functions that are robust to outliers should therefore be used in that case. Conversely, the least squares approach can be used to fit models that are not linear models. Thus, although the terms "least squares" and "linear model" are closely linked, they are not synonymous.
Given a data set {y_i, x_i1, …, x_ip}, i = 1, …, n, of n statistical units, a linear regression model assumes that the relationship between the dependent variable y and the vector of regressors x is linear. This relationship is modeled through a disturbance term or error variable ε, an unobserved random variable that adds "noise" to the linear relationship between the dependent variable and regressors. Thus the model takes the form

y_i = β₀ + β₁x_i1 + ⋯ + β_p x_ip + ε_i = x_iᵀβ + ε_i,  i = 1, …, n,

where ᵀ denotes the transpose, so that x_iᵀβ is the inner product between the vectors x_i and β.
Often these n equations are stacked together and written in matrix notation as
where
Fitting a linear model to a given data set usually requires estimating the regression coefficients β such that the error term ε = y − Xβ is minimized. For example, it is common to use the sum of squared errors ‖ε‖₂² as a measure of ε for minimization.
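Minimizing ‖ε‖₂² has the closed-form normal-equations solution β̂ = (XᵀX)⁻¹Xᵀy. A small NumPy sketch with synthetic data (the coefficient values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
# Design matrix: intercept column plus two regressors.
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

# Normal equations: solve (X'X) beta = X'y.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Equivalent, numerically stabler least-squares routine.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
# Both recover beta_true up to noise.
```

In practice `lstsq` (QR/SVD-based) is preferred over forming XᵀX explicitly, which can be ill-conditioned.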
Consider a situation where a small ball is being tossed up in the air and we then measure its heights of ascent h_i at various moments in time t_i. Physics tells us that, ignoring the drag, the relationship can be modeled as

h_i = β₁t_i + β₂t_i² + ε_i,

where β₁ determines the initial velocity of the ball, β₂ is proportional to the standard gravity, and ε_i is due to measurement errors. Linear regression can be used to estimate the values of β₁ and β₂ from the measured data. This model is non-linear in the time variable, but it is linear in the parameters β₁ and β₂; if we take regressors x_i = (x_i1, x_i2) = (t_i, t_i²), the model takes on the standard form

h_i = β₁x_i1 + β₂x_i2 + ε_i.
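The ball example can be fitted directly with least squares by building the design matrix from the regressors (t_i, t_i²); the initial velocity and noise level below are assumptions used only to generate synthetic data:

```python
import numpy as np

g = 9.81    # standard gravity (m/s^2)
v0 = 10.0   # assumed initial velocity for the synthetic data

rng = np.random.default_rng(1)
t = np.linspace(0.05, 1.0, 25)
h = v0 * t - 0.5 * g * t**2 + rng.normal(scale=0.01, size=t.size)

# Linear in (beta1, beta2) even though non-linear in t.
X = np.column_stack([t, t**2])
(beta1, beta2), *_ = np.linalg.lstsq(X, h, rcond=None)
# beta1 recovers roughly v0, and beta2 roughly -g/2
```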
Standard linear regression models with standard estimation techniques make a number of assumptions about the predictor variables, the response variable and their relationship. Numerous extensions have been developed that allow each of these assumptions to be relaxed (i.e. reduced to a weaker form), and in some cases eliminated entirely. Generally these extensions make the estimation procedure more complex and time-consuming, and may also require more data in order to produce an equally precise model.[citation needed]
The following are the major assumptions made by standard linear regression models with standard estimation techniques (e.g. ordinary least squares):
Violations of these assumptions can result in biased estimations of β, biased standard errors, untrustworthy confidence intervals and significance tests. Beyond these assumptions, several other statistical properties of the data strongly influence the performance of different estimation methods:
A fitted linear regression model can be used to identify the relationship between a single predictor variable x_j and the response variable y when all the other predictor variables in the model are "held fixed". Specifically, the interpretation of β_j is the expected change in y for a one-unit change in x_j when the other covariates are held fixed, that is, the expected value of the partial derivative of y with respect to x_j. This is sometimes called the unique effect of x_j on y. In contrast, the marginal effect of x_j on y can be assessed using a correlation coefficient or simple linear regression model relating only x_j to y; this effect is the total derivative of y with respect to x_j.
Care must be taken when interpreting regression results, as some of the regressors may not allow for marginal changes (such as dummy variables, or the intercept term), while others cannot be held fixed (recall the example from the introduction: it would be impossible to "hold t_i fixed" and at the same time change the value of t_i²).
It is possible for the unique effect to be nearly zero even when the marginal effect is large. This may imply that some other covariate captures all the information in x_j, so that once that variable is in the model, there is no contribution of x_j to the variation in y. Conversely, the unique effect of x_j can be large while its marginal effect is nearly zero. This would happen if the other covariates explained a great deal of the variation of y, but they mainly explain variation in a way that is complementary to what is captured by x_j. In this case, including the other variables in the model reduces the part of the variability of y that is unrelated to x_j, thereby strengthening the apparent relationship with x_j.
The meaning of the expression "held fixed" may depend on how the values of the predictor variables arise. If the experimenter directly sets the values of the predictor variables according to a study design, the comparisons of interest may literally correspond to comparisons among units whose predictor variables have been "held fixed" by the experimenter. Alternatively, the expression "held fixed" can refer to a selection that takes place in the context of data analysis. In this case, we "hold a variable fixed" by restricting our attention to the subsets of the data that happen to have a common value for the given predictor variable. This is the only interpretation of "held fixed" that can be used in an observational study.
The notion of a "unique effect" is appealing when studying a complex system where multiple interrelated components influence the response variable. In some cases, it can literally be interpreted as the causal effect of an intervention that is linked to the value of a predictor variable. However, it has been argued that in many cases multiple regression analysis fails to clarify the relationships between the predictor variables and the response variable when the predictors are correlated with each other and are not assigned following a study design.[9]
Numerous extensions of linear regression have been developed, which allow some or all of the assumptions underlying the basic model to be relaxed.
The simplest case of a single scalar predictor variable x and a single scalar response variable y is known as simple linear regression. The extension to multiple and/or vector-valued predictor variables (denoted with a capital X) is known as multiple linear regression, also known as multivariable linear regression (not to be confused with multivariate linear regression).[10]
Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is

Y_i = β₀ + β₁X_i1 + β₂X_i2 + ⋯ + β_p X_ip + ε_i

for each observation i = 1, …, n.
In the formula above we consider n observations of one dependent variable and p independent variables. Thus, Y_i is the i-th observation of the dependent variable, X_ij is the i-th observation of the j-th independent variable, j = 1, 2, …, p. The values β_j represent parameters to be estimated, and ε_i is the i-th independent identically distributed normal error.
In the more general multivariate linear regression, there is one equation of the above form for each of m > 1 dependent variables that share the same set of explanatory variables and hence are estimated simultaneously with each other:
for all observations indexed as i = 1, …, n and for all dependent variables indexed as j = 1, …, m.
Nearly all real-world regression models involve multiple predictors, and basic descriptions of linear regression are often phrased in terms of the multiple regression model. Note, however, that in these cases the response variable y is still a scalar. Another term, multivariate linear regression, refers to cases where y is a vector, i.e., the same as general linear regression.
Model Assumptions to Check:
1. Linearity: Relationship between each predictor and outcome must be linear
2. Normality of residuals: Residuals should follow a normal distribution
3. Homoscedasticity: Constant variance of residuals across predicted values
4. Independence: Observations should be independent (not repeated measures)
SPSS: Use partial plots, histograms, P-P plots, residual vs. predicted plots
The general linear model considers the situation when the response variable is not a scalar (for each observation) but a vector, y_i. Conditional linearity of E(y ∣ x_i) = x_iᵀB is still assumed, with a matrix B replacing the vector β of the classical linear regression model. Multivariate analogues of ordinary least squares (OLS) and generalized least squares (GLS) have been developed. "General linear models" are also called "multivariate linear models". These are not the same as multivariable linear models (also called "multiple linear models").
Various models have been created that allow for heteroscedasticity, i.e. the errors for different response variables may have different variances. For example, weighted least squares is a method for estimating linear regression models when the response variables may have different error variances, possibly with correlated errors. (See also Weighted linear least squares, and Generalized least squares.) Heteroscedasticity-consistent standard errors is an improved method for use with uncorrelated but potentially heteroscedastic errors.
The generalized linear model (GLM) is a framework for modeling response variables that are bounded or discrete. This is used, for example:
Generalized linear models allow for an arbitrary link function, g, that relates the mean of the response variable(s) to the predictors: E(Y) = g⁻¹(XB). The link function is often related to the distribution of the response, and in particular it typically has the effect of transforming between the (−∞, ∞) range of the linear predictor and the range of the response variable.
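For example, with the logit link used in logistic regression, the inverse link g⁻¹ maps the unbounded linear predictor into the (0, 1) range of a probability; a minimal sketch:

```python
import numpy as np

def inv_logit(eta):
    """Inverse of the logit link g(mu) = log(mu / (1 - mu))."""
    return 1.0 / (1.0 + np.exp(-eta))

eta = np.array([-5.0, 0.0, 5.0])  # linear predictor values in (-inf, inf)
mu = inv_logit(eta)               # mapped into (0, 1)
# mu[1] is exactly 0.5; the end values approach 0 and 1.
```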
Some common examples of GLMs are:
Single index models[clarification needed] allow some degree of nonlinearity in the relationship between x and y, while preserving the central role of the linear predictor β′x as in the classical linear regression model. Under certain conditions, simply applying OLS to data from a single-index model will consistently estimate β up to a proportionality constant.[11]
Hierarchical linear models (or multilevel regression) organize the data into a hierarchy of regressions, for example where A is regressed on B, and B is regressed on C. It is often used where the variables of interest have a natural hierarchical structure, such as in educational statistics, where students are nested in classrooms, classrooms are nested in schools, and schools are nested in some administrative grouping, such as a school district. The response variable might be a measure of student achievement such as a test score, and different covariates would be collected at the classroom, school, and school district levels.
Errors-in-variables models (or "measurement error models") extend the traditional linear regression model to allow the predictor variables X to be observed with error. This error causes standard estimators of β to become biased. Generally, the form of bias is an attenuation, meaning that the effects are biased toward zero.
In a multiple linear regression model

y = β_0 + β_1 x_1 + ⋯ + β_p x_p + ε,

the parameter β_j of predictor variable x_j represents the individual effect of x_j. It has an interpretation as the expected change in the response variable y when x_j increases by one unit with other predictor variables held constant. When x_j is strongly correlated with other predictor variables, it is improbable that x_j can increase by one unit with other variables held constant. In this case, the interpretation of β_j becomes problematic as it is based on an improbable condition, and the effect of x_j cannot be evaluated in isolation.
For a group of predictor variables, say {x_1, x_2, …, x_q}, a group effect ξ(w) is defined as a linear combination of their parameters,

ξ(w) = w_1 β_1 + w_2 β_2 + ⋯ + w_q β_q,
where w = (w_1, w_2, …, w_q)^⊺ is a weight vector satisfying Σ_{j=1}^q |w_j| = 1. Because of this constraint on the w_j, ξ(w) is also referred to as a normalized group effect. A group effect ξ(w) has an interpretation as the expected change in y when the variables in the group x_1, x_2, …, x_q change by the amounts w_1, w_2, …, w_q, respectively, at the same time with other variables (not in the group) held constant. It generalizes the individual effect of a variable to a group of variables in that (i) if q = 1, then the group effect reduces to an individual effect, and (ii) if w_i = 1 and w_j = 0 for j ≠ i, then the group effect also reduces to an individual effect.
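A minimal sketch of this definition (the helper name and the numbers are hypothetical, and the parameters β_j are assumed already known):

```python
def group_effect(weights, betas):
    """Normalized group effect xi(w) = sum_j w_j * beta_j, where the
    weights satisfy sum_j |w_j| = 1."""
    assert abs(sum(abs(w) for w in weights) - 1.0) < 1e-12
    return sum(w * b for w, b in zip(weights, betas))

betas = [2.0, 3.0, 5.0]
# With w_i = 1 for one variable and 0 elsewhere, the group effect
# reduces to that variable's individual effect:
assert group_effect([0.0, 1.0, 0.0], betas) == 3.0
# Equal weights give the average effect of the group:
print(group_effect([1 / 3, 1 / 3, 1 / 3], betas))
```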
A group effect ξ(w) is said to be meaningful if the underlying simultaneous changes of the q variables (x_1, x_2, …, x_q)^⊺ are probable.
Group effects provide a means to study the collective impact of strongly correlated predictor variables in linear regression models. Individual effects of such variables are not well-defined as their parameters do not have good interpretations. Furthermore, when the sample size is not large, none of their parameters can be accurately estimated by least squares regression due to the multicollinearity problem. Nevertheless, there are meaningful group effects that have good interpretations and can be accurately estimated by least squares regression. A simple way to identify these meaningful group effects is to use an all positive correlations (APC) arrangement of the strongly correlated variables, under which pairwise correlations among these variables are all positive, and to standardize all p predictor variables in the model so that they all have mean zero and length one. To illustrate this, suppose that {x_1, x_2, …, x_q} is a group of strongly correlated variables in an APC arrangement and that they are not strongly correlated with predictor variables outside the group. Let y′ be the centred y and x_j′ be the standardized x_j. Then, the standardized linear regression model is

y′ = β_1′ x_1′ + ⋯ + β_p′ x_p′ + ε.
Parameters β_j in the original model, including β_0, are simple functions of the β_j′ in the standardized model. The standardization of variables does not change their correlations, so {x_1′, x_2′, …, x_q′} is a group of strongly correlated variables in an APC arrangement, and they are not strongly correlated with other predictor variables in the standardized model. A group effect of {x_1′, x_2′, …, x_q′} is

ξ′(w) = w_1 β_1′ + w_2 β_2′ + ⋯ + w_q β_q′,
and its minimum-variance unbiased linear estimator is

ξ̂′(w) = w_1 β̂_1′ + w_2 β̂_2′ + ⋯ + w_q β̂_q′,
where β̂_j′ is the least squares estimator of β_j′. In particular, the average group effect of the q standardized variables is

ξ_A = (1/q)(β_1′ + β_2′ + ⋯ + β_q′),
which has an interpretation as the expected change in y′ when all x_j′ in the strongly correlated group increase by (1/q)th of a unit at the same time with variables outside the group held constant. With strong positive correlations and in standardized units, variables in the group are approximately equal, so they are likely to increase at the same time and by similar amounts. Thus, the average group effect ξ_A is a meaningful effect. It can be accurately estimated by its minimum-variance unbiased linear estimator ξ̂_A = (1/q)(β̂_1′ + β̂_2′ + ⋯ + β̂_q′), even when individually none of the β_j′ can be accurately estimated by β̂_j′.
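The following sketch illustrates this point numerically; the simulated data and variable names are assumptions for illustration, not from the article. With two nearly collinear predictors, the individual OLS coefficients are poorly determined, but their average (the average group effect, true value 1 here) is estimated accurately:

```python
import random

random.seed(0)
n = 200
# Two strongly positively correlated predictors (an APC-style arrangement,
# roughly mean-zero; exact standardization is skipped for brevity):
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [v + random.gauss(0, 0.01) for v in x1]               # x2 ≈ x1
y = [a + b + random.gauss(0, 0.1) for a, b in zip(x1, x2)]  # true betas: 1, 1

# OLS via the 2x2 normal equations (no intercept).
s11 = sum(a * a for a in x1)
s12 = sum(a * b for a, b in zip(x1, x2))
s22 = sum(b * b for b in x2)
t1 = sum(a * v for a, v in zip(x1, y))
t2 = sum(b * v for b, v in zip(x2, y))
det = s11 * s22 - s12 * s12
b1 = (t1 * s22 - t2 * s12) / det
b2 = (t2 * s11 - t1 * s12) / det

# b1 and b2 individually are unstable under this near-collinearity, but
# the average group effect (b1 + b2) / 2 is estimated accurately.
print(b1, b2, (b1 + b2) / 2)
```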
Not all group effects are meaningful or can be accurately estimated. For example, β_1′ is a special group effect with weights w_1 = 1 and w_j = 0 for j ≠ 1, but it cannot be accurately estimated by β̂_1′. It is also not a meaningful effect. In general, for a group of q strongly correlated predictor variables in an APC arrangement in the standardized model, group effects whose weight vectors w are at or near the centre of the simplex Σ_{j=1}^q w_j = 1 (w_j ≥ 0) are meaningful and can be accurately estimated by their minimum-variance unbiased linear estimators. Effects with weight vectors far away from the centre are not meaningful, as such weight vectors represent simultaneous changes of the variables that violate the strong positive correlations of the standardized variables in an APC arrangement. As such, they are not probable. These effects also cannot be accurately estimated.
Applications of the group effects include (1) estimation and inference for meaningful group effects on the response variable, (2) testing for "group significance" of the q variables via testing H_0: ξ_A = 0 versus H_1: ξ_A ≠ 0, and (3) characterizing the region of the predictor variable space over which predictions by the least squares estimated model are accurate.
A group effect of the original variables {x_1, x_2, …, x_q} can be expressed as a constant times a group effect of the standardized variables {x_1′, x_2′, …, x_q′}. The former is meaningful when the latter is. Thus meaningful group effects of the original variables can be found through meaningful group effects of the standardized variables.[12]
In Dempster–Shafer theory, or a linear belief function in particular, a linear regression model may be represented as a partially swept matrix, which can be combined with similar matrices representing observations and other assumed normal distributions and state equations. The combination of swept or unswept matrices provides an alternative method for estimating linear regression models.
A large number of procedures have been developed for parameter estimation and inference in linear regression. These methods differ in computational simplicity of algorithms, presence of a closed-form solution, robustness with respect to heavy-tailed distributions, and theoretical assumptions needed to validate desirable statistical properties such as consistency and asymptotic efficiency.
Some of the more common estimation techniques for linear regression are summarized below.
Assuming that the independent variables are x⃗_i = [x_1^i, x_2^i, …, x_m^i] and the model's parameters are β⃗ = [β_0, β_1, …, β_m], the model's prediction would be

ŷ_i = β_0 + β_1 x_1^i + ⋯ + β_m x_m^i.
If x⃗_i is extended to x⃗_i = [1, x_1^i, x_2^i, …, x_m^i], then the prediction becomes a dot product of the parameter vector and the independent-variable vector, i.e.

ŷ_i = β⃗ · x⃗_i.
In the least-squares setting, the optimal parameter vector is defined as the one that minimizes the sum of squared losses:

β̂⃗ = arg min_{β⃗} Σ_i (y_i − β⃗ · x⃗_i)^2.
Now putting the independent and dependent variables in matrices X and Y respectively, the loss function can be rewritten as

L(D, β⃗) = ‖Xβ⃗ − Y‖^2 = (Xβ⃗ − Y)^⊺ (Xβ⃗ − Y).
As the loss function is convex, the optimal solution lies at gradient zero. The gradient of the loss function is (using the denominator layout convention)

∂L/∂β⃗ = 2X^⊺Xβ⃗ − 2X^⊺Y.
Setting the gradient to zero produces the optimal parameter:

β̂⃗ = (X^⊺X)^{-1} X^⊺Y.
Note: to show that the β̂ obtained is indeed a minimum, one needs to differentiate once more to obtain the Hessian matrix and show that it is positive definite. This is provided by the Gauss–Markov theorem.
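A minimal sketch of this closed-form solution for one predictor plus an intercept, solving the 2×2 normal equations by hand (the data are illustrative; a real implementation would use a linear algebra library):

```python
# OLS closed form beta = (X^T X)^{-1} X^T y for the model y = b0 + b1 * x,
# written out as the 2x2 normal equations.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0 + 2.0 * x for x in xs]   # exact line y = 1 + 2x

n = len(xs)
sx = sum(xs)
sxx = sum(x * x for x in xs)
sy = sum(ys)
sxy = sum(x * y for x, y in zip(xs, ys))

# Normal equations: [n   sx ] [b0]   [sy ]
#                   [sx  sxx] [b1] = [sxy]
det = n * sxx - sx * sx
b0 = (sy * sxx - sxy * sx) / det
b1 = (n * sxy - sx * sy) / det
print(b0, b1)  # recovers intercept 1.0 and slope 2.0
```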
Linear least squares methods include mainly ordinary least squares, weighted least squares, and generalized least squares.
Maximum likelihood estimation can be performed when the distribution of the error terms is known to belong to a certain parametric family f_θ of probability distributions.[15] When f_θ is a normal distribution with zero mean and variance θ, the resulting estimate is identical to the OLS estimate. GLS estimates are maximum likelihood estimates when ε follows a multivariate normal distribution with a known covariance matrix.
Denote each data point by (x⃗_i, y_i), the regression parameters by β⃗, the set of all data by D, and the cost function by L(D, β⃗) = Σ_i (y_i − β⃗ · x⃗_i)^2.
As shown below, the same optimal parameter that minimizes L(D, β⃗) achieves maximum likelihood too.[16] Here the assumption is that the dependent variable y is a random variable that follows a Gaussian distribution, where the standard deviation is fixed and the mean is a linear combination of x⃗:

H(D, β⃗) = Π_{i=1}^n Pr(y_i | x⃗_i; β⃗, σ) = Π_{i=1}^n (1/(√(2π)σ)) exp(−(y_i − β⃗ · x⃗_i)^2 / (2σ^2)).
Now, we need to look for a parameter that maximizes this likelihood function. Since the logarithmic function is strictly increasing, instead of maximizing this function, we can also maximize its logarithm and find the optimal parameter that way.[16]
I(D, β⃗) = log Π_{i=1}^n Pr(y_i | x⃗_i; β⃗, σ) = log Π_{i=1}^n (1/(√(2π)σ)) exp(−(y_i − β⃗ · x⃗_i)^2 / (2σ^2)) = n log(1/(√(2π)σ)) − (1/(2σ^2)) Σ_{i=1}^n (y_i − β⃗ · x⃗_i)^2.
The optimal parameter is thus equal to:[16]
arg max_{β⃗} I(D, β⃗) = arg max_{β⃗} (n log(1/(√(2π)σ)) − (1/(2σ^2)) Σ_{i=1}^n (y_i − β⃗ · x⃗_i)^2) = arg min_{β⃗} Σ_{i=1}^n (y_i − β⃗ · x⃗_i)^2 = arg min_{β⃗} L(D, β⃗) = β̂⃗.
In this way, the parameter that maximizes H(D, β⃗) is the same as the one that minimizes L(D, β⃗). This means that in linear regression, the result of the least squares method is the same as the result of the maximum likelihood estimation method.[16]
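This equivalence can be checked numerically. The sketch below (hypothetical data, a single coefficient, and σ fixed at 1 are all assumptions for illustration) verifies that the squared-loss minimizer also maximizes the Gaussian log-likelihood:

```python
import math

# Tiny one-parameter model y = beta * x: the log-likelihood is, up to a
# constant, -L(D, beta) / (2 sigma^2), so minimizing the loss and
# maximizing the likelihood pick out the same beta.
xs = [1.0, 2.0, 3.0]
ys = [2.1, 3.9, 6.2]
sigma = 1.0

def loss(beta):
    return sum((y - beta * x) ** 2 for x, y in zip(xs, ys))

def log_likelihood(beta):
    n = len(xs)
    return (n * math.log(1.0 / (math.sqrt(2 * math.pi) * sigma))
            - loss(beta) / (2 * sigma ** 2))

# Closed-form least squares solution for the no-intercept model.
beta_ols = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

for delta in (-0.5, -0.1, 0.1, 0.5):
    assert loss(beta_ols) <= loss(beta_ols + delta)
    assert log_likelihood(beta_ols) >= log_likelihood(beta_ols + delta)
print(beta_ols)
```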
Ridge regression[17][18][19] and other forms of penalized estimation, such as lasso regression,[5] deliberately introduce bias into the estimation of β in order to reduce the variability of the estimate. The resulting estimates generally have lower mean squared error than the OLS estimates, particularly when multicollinearity is present or when overfitting is a problem. They are generally used when the goal is to predict the value of the response variable y for values of the predictors x that have not yet been observed. These methods are not as commonly used when the goal is inference, since it is difficult to account for the bias.
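A minimal sketch of this shrinkage effect, using the one-predictor, no-intercept case, where the ridge estimate has the scalar closed form β̂_ridge = Σx_iy_i / (Σx_i² + λ); the data and penalty values are assumptions for illustration:

```python
# Ridge regression in the simplest case: the penalty lam shrinks the
# OLS estimate toward zero, trading bias for lower variance.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.1, 5.9, 8.2]

sxy = sum(x * y for x, y in zip(xs, ys))
sxx = sum(x * x for x in xs)

beta_ols = sxy / sxx
for lam in (0.1, 1.0, 10.0):
    beta_ridge = sxy / (sxx + lam)
    assert abs(beta_ridge) < abs(beta_ols)  # biased toward zero
print(beta_ols, sxy / (sxx + 1.0))
```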
Least absolute deviation (LAD) regression is a robust estimation technique in that it is less sensitive to the presence of outliers than OLS (but is less efficient than OLS when no outliers are present). It is equivalent to maximum likelihood estimation under a Laplace distribution model for ε.[20]
If we assume that the error terms are independent of the regressors, ε_i ⊥ x_i, then the optimal estimator is the 2-step MLE, where the first step is used to non-parametrically estimate the distribution of the error term.[21]
Linear regression is widely used in biological, behavioral and social sciences to describe possible relationships between variables. It ranks as one of the most important tools used in these disciplines.
A trend line represents a trend, the long-term movement in time series data after other components have been accounted for. It tells whether a particular data set (say GDP, oil prices or stock prices) has increased or decreased over a period of time. A trend line could simply be drawn by eye through a set of data points, but more properly its position and slope are calculated using statistical techniques like linear regression. Trend lines typically are straight lines, although some variations use higher-degree polynomials depending on the degree of curvature desired in the line.
Trend lines are sometimes used in business analytics to show changes in data over time. This has the advantage of being simple. Trend lines are often used to argue that a particular action or event (such as training, or an advertising campaign) caused observed changes at a point in time. This is a simple technique, and does not require a control group, experimental design, or a sophisticated analysis technique. However, it suffers from a lack of scientific validity in cases where other potential changes can affect the data.
Early evidence relating tobacco smoking to mortality and morbidity came from observational studies employing regression analysis. In order to reduce spurious correlations when analyzing observational data, researchers usually include several variables in their regression models in addition to the variable of primary interest. For example, in a regression model in which cigarette smoking is the independent variable of primary interest and the dependent variable is lifespan measured in years, researchers might include education and income as additional independent variables, to ensure that any observed effect of smoking on lifespan is not due to those other socio-economic factors. However, it is never possible to include all possible confounding variables in an empirical analysis. For example, a hypothetical gene might increase mortality and also cause people to smoke more. For this reason, randomized controlled trials are often able to generate more compelling evidence of causal relationships than can be obtained using regression analyses of observational data. When controlled experiments are not feasible, variants of regression analysis such as instrumental variables regression may be used to attempt to estimate causal relationships from observational data.
The capital asset pricing model uses linear regression as well as the concept of beta for analyzing and quantifying the systematic risk of an investment. This comes directly from the beta coefficient of the linear regression model that relates the return on the investment to the return on all risky assets.
Linear regression is the predominant empirical tool in economics. For example, it is used to predict consumption spending,[24] fixed investment spending, inventory investment, purchases of a country's exports,[25] spending on imports,[25] the demand to hold liquid assets,[26] labor demand,[27] and labor supply.[27]
Linear regression finds application in a wide range of environmental science applications, such as land use,[28] infectious diseases,[29] and air pollution.[30] For example, linear regression can be used to predict the changing effects of car pollution.[31] One notable example of this application in infectious diseases is the flattening the curve strategy emphasized early in the COVID-19 pandemic, where public health officials dealt with sparse data on infected individuals and sophisticated models of disease transmission to characterize the spread of COVID-19.[32]
Linear regression is commonly used in building science field studies to derive characteristics of building occupants. In a thermal comfort field study, building scientists usually ask for occupants' thermal sensation votes, which range from −3 (feeling cold) to 0 (neutral) to +3 (feeling hot), and measure occupants' surrounding temperature data. A neutral or comfort temperature can be calculated based on a linear regression between the thermal sensation vote and indoor temperature, setting the thermal sensation vote to zero. However, there has been a debate on the regression direction: regressing thermal sensation votes (y-axis) against indoor temperature (x-axis), or the opposite, regressing indoor temperature (y-axis) against thermal sensation votes (x-axis).[33]
Linear regression plays an important role in the subfield of artificial intelligence known as machine learning. The linear regression algorithm is one of the fundamental supervised machine-learning algorithms due to its relative simplicity and well-known properties.[34]
Isaac Newton is credited with inventing "a certain technique known today as linear regression analysis" in his work on equinoxes in 1700, and wrote down the first of the two normal equations of the ordinary least squares method.[35][36] Least squares linear regression, as a means of finding a good rough linear fit to a set of points, was performed by Legendre (1805) and Gauss (1809) for the prediction of planetary movement. Quetelet was responsible for making the procedure well known and for using it extensively in the social sciences.[37]
https://en.wikipedia.org/wiki/Line_regression
Local regression or local polynomial regression,[1] also known as moving regression,[2] is a generalization of the moving average and polynomial regression.[3] Its most common methods, initially developed for scatterplot smoothing, are LOESS (locally estimated scatterplot smoothing) and LOWESS (locally weighted scatterplot smoothing), both pronounced /ˈloʊɛs/ LOH-ess. They are two strongly related non-parametric regression methods that combine multiple regression models in a k-nearest-neighbor-based meta-model.
In some fields, LOESS is known and commonly referred to as the Savitzky–Golay filter[4][5] (proposed 15 years before LOESS).
LOESS and LOWESS thus build on "classical" methods, such as linear and nonlinear least squares regression. They address situations in which the classical procedures do not perform well or cannot be effectively applied without undue labor. LOESS combines much of the simplicity of linear least squares regression with the flexibility of nonlinear regression. It does this by fitting simple models to localized subsets of the data to build up a function that describes the deterministic part of the variation in the data, point by point. In fact, one of the chief attractions of this method is that the data analyst is not required to specify a global function of any form to fit a model to the data, only to fit segments of the data.
The trade-off for these features is increased computation. Because it is so computationally intensive, LOESS would have been practically impossible to use in the era when least squares regression was being developed. Most other modern methods for process modeling are similar to LOESS in this respect. These methods have been consciously designed to use our current computational ability to the fullest possible advantage to achieve goals not easily achieved by traditional approaches.
A smooth curve through a set of data points obtained with this statistical technique is called a loess curve, particularly when each smoothed value is given by a weighted quadratic least squares regression over the span of values of the y-axis scattergram criterion variable. When each smoothed value is given by a weighted linear least squares regression over the span, this is known as a lowess curve; however, some authorities treat lowess and loess as synonyms.[6][7]
Local regression and closely related procedures have a long and rich history, having been discovered and rediscovered in different fields on multiple occasions. An early work by Robert Henderson[8] studying the problem of graduation (a term for smoothing used in actuarial literature) introduced local regression using cubic polynomials.
Specifically, let Y_j denote an ungraduated sequence of observations. Following Henderson, suppose that only the terms from Y_{−h} to Y_h are to be taken into account when computing the graduated value of Y_0, and W_j is the weight to be assigned to Y_j. Henderson then uses a local polynomial approximation a + bj + cj^2 + dj^3, and sets up the following four equations for the coefficients:
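The four equations themselves did not survive extraction here. Assuming Henderson's setup is the weighted least-squares fit of the cubic, a standard reconstruction (not a quotation of Henderson's original notation) is the normal equations:

```latex
% Weighted least-squares normal equations for the local cubic fit,
% one equation for each k = 0, 1, 2, 3:
\sum_{j=-h}^{h} W_j \left( Y_j - a - b j - c j^2 - d j^3 \right) j^k = 0 .
```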
Solving these equations for the polynomial coefficients yields the graduated value, Ŷ_0 = a.
Henderson went further. In preceding years, many 'summation formula' methods of graduation had been developed, which derived graduation rules based on summation formulae (convolution of the series of observations with a chosen set of weights). Two such rules are the 15-point and 21-point rules of Spencer (1904).[9] These graduation rules were carefully designed to have a quadratic-reproducing property: if the ungraduated values exactly follow a quadratic formula, then the graduated values equal the ungraduated values. This is an important property: a simple moving average, by contrast, cannot adequately model peaks and troughs in the data. Henderson's insight was to show that any such graduation rule can be represented as a local cubic (or quadratic) fit for an appropriate choice of weights.
Further discussions of the historical work on graduation and local polynomial fitting can be found in Macaulay (1931),[10] Cleveland and Loader (1995),[11] and Murray and Bellhouse (2019).[12]
The Savitzky–Golay filter, introduced by Abraham Savitzky and Marcel J. E. Golay (1964),[13] significantly expanded the method. Like the earlier graduation work, their focus was data with an equally spaced predictor variable, where (excluding boundary effects) local regression can be represented as a convolution. Savitzky and Golay published extensive sets of convolution coefficients for different orders of polynomial and smoothing window widths.
Local regression methods started to appear extensively in the statistics literature in the 1970s; for example, Charles J. Stone (1977),[14] Vladimir Katkovnik (1979)[15] and William S. Cleveland (1979).[16] Katkovnik (1985)[17] is the earliest book devoted primarily to local regression methods.
Theoretical work continued to appear throughout the 1990s. Important contributions include Jianqing Fan and Irène Gijbels (1992)[18] studying efficiency properties, and David Ruppert and Matthew P. Wand (1994)[19] developing an asymptotic distribution theory for multivariate local regression.
An important extension of local regression is local likelihood estimation, formulated by Robert Tibshirani and Trevor Hastie (1987).[20] This replaces the local least-squares criterion with a likelihood-based criterion, thereby extending the local regression method to the generalized linear model setting; for example binary data, count data or censored data.
Practical implementations of local regression began appearing in statistical software in the 1980s. Cleveland (1981)[21] introduces the LOWESS routines, intended for smoothing scatterplots. This implements local linear fitting with a single predictor variable, and also introduces robustness downweighting to make the procedure resistant to outliers. An entirely new implementation, LOESS, is described in Cleveland and Susan J. Devlin (1988).[22] LOESS is a multivariate smoother, able to handle spatial data with two (or more) predictor variables, and uses (by default) local quadratic fitting. Both LOWESS and LOESS are implemented in the S and R programming languages. See also Cleveland's Local Fitting Software.[23]
While Local Regression, LOWESS and LOESS are sometimes used interchangeably, this usage should be considered incorrect. Local Regression is a general term for the fitting procedure; LOWESS and LOESS are two distinct implementations.
Local regression uses a data set consisting of observations of one or more 'independent' or 'predictor' variables, and a 'dependent' or 'response' variable. The data set consists of n observations. The observations of the predictor variable can be denoted x_1, …, x_n, and the corresponding observations of the response variable by Y_1, …, Y_n.
For ease of presentation, the development below assumes a single predictor variable; the extension to multiple predictors (when the x_i are vectors) is conceptually straightforward. A functional relationship between the predictor and response variables is assumed:

Y_i = μ(x_i) + ε_i,

where μ(x) is the unknown 'smooth' regression function to be estimated, and represents the conditional expectation of the response, given a value of the predictor variables. In theoretical work, the 'smoothness' of this function can be formally characterized by placing bounds on higher order derivatives. The ε_i represent random error; for estimation purposes these are assumed to have mean zero. Stronger assumptions (e.g., independence and equal variance) may be made when assessing properties of the estimates.
Local regression then estimates the function μ(x), for one value of x at a time. Since the function is assumed to be smooth, the most informative data points are those whose x_i values are close to x. This is formalized with a bandwidth h and a kernel or weight function W(·), with observations assigned weights

w_i(x) = W((x_i − x) / h).

A typical choice of W, used by Cleveland in LOWESS, is W(u) = (1 − |u|^3)^3 for |u| < 1, although any similar function (peaked at u = 0 and small or 0 for large values of u) can be used. Questions of bandwidth selection and specification (how large should h be, and should it vary depending upon the fitting point x?) are deferred for now.
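A minimal sketch of these smoothing weights, using the tricube W from LOWESS (the helper names are assumptions for illustration):

```python
def tricube(u):
    """Tricube weight function W(u) = (1 - |u|^3)^3 for |u| < 1, else 0."""
    return (1 - abs(u) ** 3) ** 3 if abs(u) < 1 else 0.0

def weights(x, xi_values, h):
    """Smoothing weights w_i(x) = W((x_i - x) / h) for a fitting point x."""
    return [tricube((xi - x) / h) for xi in xi_values]

w = weights(0.0, [-2.0, -0.5, 0.0, 0.5, 2.0], h=1.0)
assert w[2] == 1.0          # the observation at x itself gets full weight
assert w[0] == w[4] == 0.0  # observations outside the bandwidth get zero
print(w)
```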
A local model (usually a low-order polynomial with degree p ≤ 3), expressed as

μ(x_i) ≈ β_0 + β_1 (x_i − x) + … + β_p (x_i − x)^p,

is then fitted by weighted least squares: choose regression coefficients (β̂_0, …, β̂_p) to minimize

Σ_{i=1}^n w_i(x) (Y_i − β_0 − β_1 (x_i − x) − … − β_p (x_i − x)^p)^2.

The local regression estimate of μ(x) is then simply the intercept estimate, μ̂(x) = β̂_0, while the remaining coefficients can be interpreted (up to a factor of p!) as derivative estimates.
It is to be emphasized that the above procedure produces the estimate μ̂(x) for one value of x. When considering a new value of x, a new set of weights w_i(x) must be computed, and the regression coefficients estimated afresh.
As with all least squares estimates, the estimated regression coefficients can be expressed in closed form (see Weighted least squares for details):

β̂ = (X^⊺ W X)^{-1} X^⊺ W y,

where β̂ is the vector of local regression coefficients; X is the n × (p + 1) design matrix with entries (x_i − x)^j; W is a diagonal matrix of the smoothing weights w_i(x); and y is the vector of responses Y_i.
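A sketch of the whole fitting step for the local linear case p = 1, solving the 2×2 system (XᵀWX)β = XᵀWy directly (illustrative data and helper names; a real implementation would use a linear algebra library):

```python
def tricube(u):
    return (1 - abs(u) ** 3) ** 3 if abs(u) < 1 else 0.0

def local_linear(x, xs, ys, h):
    """Local linear (p = 1) estimate mu_hat(x): fit
    Y_i ~ b0 + b1 * (x_i - x) by weighted least squares and return the
    intercept b0, which is the local regression estimate at x."""
    w = [tricube((xi - x) / h) for xi in xs]
    s0 = sum(w)
    s1 = sum(wi * (xi - x) for wi, xi in zip(w, xs))
    s2 = sum(wi * (xi - x) ** 2 for wi, xi in zip(w, xs))
    t0 = sum(wi * yi for wi, yi in zip(w, ys))
    t1 = sum(wi * (xi - x) * yi for wi, xi, yi in zip(w, xs, ys))
    det = s0 * s2 - s1 * s1
    return (t0 * s2 - t1 * s1) / det   # b0 = mu_hat(x)

# On exactly linear data the local linear fit reproduces the line:
xs = [0.1 * i for i in range(21)]          # grid on [0, 2]
ys = [3.0 + 2.0 * x for x in xs]
print(local_linear(1.0, xs, ys, h=0.5))    # ≈ 5.0
```

Note that the same function must be called again for every new fitting point x, recomputing the weights each time, exactly as described above.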
This matrix representation is crucial for studying the theoretical properties of local regression estimates. With appropriate definitions of the design and weight matrices, it immediately generalizes to the multiple-predictor setting.
Implementation of local regression requires the specification and selection of several components, including the bandwidth, the weight function, and the degree of the local polynomial model.
Each of these components has been the subject of extensive study; a summary is provided below.
The bandwidth h controls the resolution of the local regression estimate. If h is too small, the estimate may show high-resolution features that represent noise in the data, rather than any real structure in the mean function. Conversely, if h is too large, the estimate will only show low-resolution features, and important structure may be lost. This is the bias-variance tradeoff: if h is too small, the estimate exhibits large variation, while at large h, the estimate exhibits large bias.
Careful choice of bandwidth is therefore crucial when applying local regression. Mathematical methods for bandwidth selection require, firstly, formal criteria to assess the performance of an estimate. One such criterion is prediction error: if a new observation is made at x̃, how well does the estimate μ̂(x̃) predict the new response Ỹ?
Performance is often assessed using a squared-error loss function. The mean squared prediction error is

E(Ỹ − μ̂(x̃))^2 = E(Ỹ − μ(x̃) + μ(x̃) − μ̂(x̃))^2 = E(Ỹ − μ(x̃))^2 + E(μ(x̃) − μ̂(x̃))^2,

where the cross term vanishes because the new observation's error Ỹ − μ(x̃) has mean zero and is independent of the estimate. The first term, E(Ỹ − μ(x̃))^2, is the random variation of the new observation; this is entirely independent of the local regression estimate. The second term, E(μ(x̃) − μ̂(x̃))^2, is the mean squared estimation error. This relation shows that, for squared error loss, minimizing prediction error and estimation error are equivalent problems.
In global bandwidth selection, these measures can be integrated over thex{\displaystyle x}space ("mean integrated squared error", often used in theoretical work), or averaged over the actualxi{\displaystyle x_{i}}(more useful for practical implementations). Some standard techniques from model selection can be readily adapted to local regression:
Any of these criteria can be minimized to produce an automatic bandwidth selector. Cleveland and Devlin[22]prefer a graphical method (theM-plot) to visually display the bias-variance trade-off and guide bandwidth choice.
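As an illustration, leave-one-out cross-validation can be minimized over a grid of candidate bandwidths. The sketch below uses a local constant (Nadaraya–Watson) smoother with a Gaussian kernel; the function names, data, and candidate grid are illustrative, not part of any particular LOESS implementation:

```python
import math

def nw_estimate(x0, xs, ys, h, skip=None):
    # Nadaraya-Watson (local constant) estimate at x0 with a Gaussian kernel,
    # optionally leaving out observation `skip` for cross-validation.
    num = den = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        if i == skip:
            continue
        w = math.exp(-0.5 * ((xi - x0) / h) ** 2)
        num += w * yi
        den += w
    return num / den

def loo_cv_score(xs, ys, h):
    # Leave-one-out CV: average squared error when each point is
    # predicted from the remaining n-1 points.
    return sum((yi - nw_estimate(xi, xs, ys, h, skip=i)) ** 2
               for i, (xi, yi) in enumerate(zip(xs, ys))) / len(xs)

xs = [i / 10 for i in range(21)]                                   # 0.0 ... 2.0
ys = [math.sin(x) + 0.05 * ((-1) ** i) for i, x in enumerate(xs)]  # noisy sine
best_h = min([0.05, 0.1, 0.2, 0.4, 0.8], key=lambda h: loo_cv_score(xs, ys, h))
```

The bandwidth minimizing the CV score balances the variance of a small `h` against the bias of a large one, exactly the trade-off described above.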
One question not addressed above is, how should the bandwidth depend upon the fitting pointx{\displaystyle x}? Often a constant bandwidth is used, while LOWESS and LOESS prefer a nearest-neighbor bandwidth, meaninghis smaller in regions with many data points. Formally, the smoothing parameter,α{\displaystyle \alpha }, is the fraction of the total numbernof data points that are used in each local fit. The subset of data used in each weighted least squares fit thus comprises thenα{\displaystyle n\alpha }points (rounded to the next largest integer) whose explanatory variables' values are closest to the point at which the response is being estimated.[7]
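The nearest-neighbor bandwidth itself is straightforward to compute: at each fitting point it is the distance to the ⌈nα⌉-th closest observation. A minimal sketch (the helper name is illustrative):

```python
import math

def nn_bandwidth(x0, xs, alpha):
    # Nearest-neighbor bandwidth: the distance from x0 to the ceil(n*alpha)-th
    # closest data point, so that fraction of the data falls inside the window.
    k = math.ceil(len(xs) * alpha)
    return sorted(abs(xi - x0) for xi in xs)[k - 1]

xs = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
h = nn_bandwidth(4, xs, 0.3)   # the 3 nearest points to x0 = 4 lie within h = 1
```

In dense regions this bandwidth shrinks and in sparse regions it grows, which is the behavior LOWESS and LOESS rely on.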
More sophisticated methods attempt to choose the bandwidthadaptively; that is, choose a bandwidth at each fitting pointx{\displaystyle x}by applying criteria such as cross-validation locally within the smoothing window. An early example of this isJerome H. Friedman's[24]"supersmoother", which uses cross-validation to choose among local linear fits at different bandwidths.
Most sources, in both theoretical and computational work, use low-order polynomials as the local model, with polynomial degree ranging from 0 to 3.
The degree 0 (local constant) model is equivalent to akernel smoother; usually credited toÈlizbar Nadaraya(1964)[25]andG. S. Watson(1964).[26]This is the simplest model to use, but can suffer from bias when fitting near boundaries of the dataset.
Local linear (degree 1) fitting can substantially reduce the boundary bias.
Local quadratic (degree 2) and local cubic (degree 3) can result in improved fits, particularly when the underlying mean functionμ(x){\displaystyle \mu (x)}has substantial curvature, or equivalently a large second derivative.
In theory, higher orders of polynomial can lead to faster convergence of the estimateμ^(x){\displaystyle {\hat {\mu }}(x)}to the true meanμ(x){\displaystyle \mu (x)},provided thatμ(x){\displaystyle \mu (x)}has a sufficient number of derivatives. See C. J. Stone (1980).[27]Generally, it takes a large sample size for this faster convergence to be realized. There are also computational and stability issues that arise, particularly for multivariate smoothing. It is generally not recommended to use local polynomials with degree greater than 3.
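A local linear (degree 1) fit at a single point reduces to weighted simple linear regression, for which closed-form formulas exist. The sketch below uses tricube weights; names are illustrative, and on exactly linear data the local fit reproduces the line:

```python
def local_linear(x0, xs, ys, h):
    # Weighted least-squares line through the windowed data, with tricube
    # weights centered at x0; the fitted value at x0 estimates mu(x0).
    pts = [(x, y, (1 - (abs(x - x0) / h) ** 3) ** 3)
           for x, y in zip(xs, ys) if abs(x - x0) < h]
    sw = sum(w for _, _, w in pts)
    xbar = sum(w * x for x, _, w in pts) / sw
    ybar = sum(w * y for _, y, w in pts) / sw
    sxx = sum(w * (x - xbar) ** 2 for x, _, w in pts)
    slope = sum(w * (x - xbar) * (y - ybar) for x, y, w in pts) / sxx
    return ybar + slope * (x0 - xbar)

xs = list(range(10))
ys = [2 * x + 1 for x in xs]            # exactly linear data
fit = local_linear(4.5, xs, ys, h=3.0)  # recovers 2 * 4.5 + 1 = 10.0
```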
As with bandwidth selection, methods such as cross-validation can be used to compare the fits obtained with different degrees of polynomial.
As mentioned above, the weight function gives the most weight to the data points nearest the point of estimation and the least weight to the data points that are furthest away. The use of the weights is based on the idea that points near each other in the explanatory variable space are more likely to be related to each other in a simple way than points that are further apart. Following this logic, points that are likely to follow the local model best influence the local model parameter estimates the most. Points that are less likely to actually conform to the local model have less influence on the local modelparameterestimates.
Cleveland (1979)[16]sets out four requirements for the weight function:
Asymptotic efficiency of weight functions has been considered byV. A. Epanechnikov(1969)[28]in the context of kernel density estimation; J. Fan (1993)[29]has derived similar results for local regression. They conclude that the quadratic kernel,W(x)=1−x2{\displaystyle W(x)=1-x^{2}}for|x|≤1{\displaystyle |x|\leq 1}has greatest efficiency under a mean-squared-error loss function. See"kernel functions in common use"for more discussion of different kernels and their efficiencies.
Considerations other than MSE are also relevant to the choice of weight function. Smoothness properties ofW(x){\displaystyle W(x)}directly affect smoothness of the estimateμ^(x){\displaystyle {\hat {\mu }}(x)}. In particular, the quadratic kernel is not differentiable atx=±1{\displaystyle x=\pm 1}, andμ^(x){\displaystyle {\hat {\mu }}(x)}is not differentiable as a result.
Thetri-cube weight function,W(x)=(1−|x|3)3;|x|<1{\displaystyle W(x)=(1-|x|^{3})^{3};|x|<1}has been used in LOWESS and other local regression software; this combines higher-order differentiability with a high MSE efficiency.
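The differentiability contrast between the two kernels can be checked numerically: at the boundary x = 1 the one-sided slope of the tricube weight vanishes, while the quadratic kernel's does not. A small sketch:

```python
def tricube(x):
    # Tri-cube weight: (1 - |x|^3)^3 inside (-1, 1), zero outside.
    return (1 - abs(x) ** 3) ** 3 if abs(x) < 1 else 0.0

def quadratic(x):
    # Epanechnikov (quadratic) kernel: 1 - x^2 on [-1, 1], zero outside.
    return 1 - x ** 2 if abs(x) <= 1 else 0.0

# One-sided difference quotients at the boundary x = 1:
eps = 1e-6
tri_slope = (tricube(1 - eps) - tricube(1)) / -eps    # ~ 0: smooth at boundary
quad_slope = (quadratic(1 - eps) - quadratic(1)) / -eps  # ~ -2: kink at boundary
```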
One criticism of weight functions with bounded support is that they can lead to numerical problems (i.e. an unstable or singular design matrix) when fitting in regions with sparse data. For this reason, some authors[who?]choose to use the Gaussian kernel, or others with unbounded support.
As described above, local regression uses a locally weighted least squares criterion to estimate the regression parameters. This inherits many of the advantages (ease of implementation and interpretation; good properties when errors are normally distributed) and disadvantages (sensitivity to extreme values and outliers; inefficiency when errors have unequal variance or are not normally distributed) usually associated with least squares regression.
These disadvantages can be addressed by replacing the local least-squares estimation by something else. Two such ideas are presented here: local likelihood estimation, which applies local estimation to thegeneralized linear model, and robust local regression, which localizes methods fromrobust regression.
In local likelihood estimation, developed in Tibshirani and Hastie (1987),[20]the observationsYi{\displaystyle Y_{i}}are assumed to come from a parametric family of distributions, with a known probability density function (or mass function, for discrete data),Yi∼f(y,θ(xi)),{\displaystyle Y_{i}\sim f(y,\theta (x_{i})),}where the parameter functionθ(x){\displaystyle \theta (x)}is the unknown quantity to be estimated. To estimateθ(x){\displaystyle \theta (x)}at a particular pointx{\displaystyle x}, the local likelihood criterion is∑i=1nwi(x)log(f(Yi,β0+β1(xi−x)+…+βp(xi−x)p)).{\displaystyle \sum _{i=1}^{n}w_{i}(x)\log \left(f(Y_{i},\beta _{0}+\beta _{1}(x_{i}-x)+\ldots +\beta _{p}(x_{i}-x)^{p})\right).}Estimates of the regression coefficients (in particular,β^0{\displaystyle {\hat {\beta }}_{0}}) are obtained by maximizing the local likelihood criterion, and
the local likelihood estimate isθ^(x)=β^0.{\displaystyle {\hat {\theta }}(x)={\hat {\beta }}_{0}.}
Whenf(y,θ(x)){\displaystyle f(y,\theta (x))}is the normal distribution andθ(x){\displaystyle \theta (x)}is the mean function, the local likelihood method reduces to the standard local least-squares regression. For other likelihood families, there is (usually) no closed-form solution for the local likelihood estimate, and iterative procedures such asiteratively reweighted least squaresmust be used to compute the estimate.
Example(local logistic regression). All response observations are 0 or 1, and the mean function is the "success" probability,μ(xi)=Pr(Yi=1|xi){\displaystyle \mu (x_{i})=\Pr(Y_{i}=1|x_{i})}. Sinceμ(xi){\displaystyle \mu (x_{i})}must be between 0 and 1, a local polynomial model should not be used forμ(x){\displaystyle \mu (x)}directly. Instead, the logistic transformationθ(x)=log(μ(x)1−μ(x)){\displaystyle \theta (x)=\log \left({\frac {\mu (x)}{1-\mu (x)}}\right)}can be used; equivalently,1−μ(x)=11+eθ(x);μ(x)=eθ(x)1+eθ(x){\displaystyle {\begin{aligned}1-\mu (x)&={\frac {1}{1+e^{\theta (x)}}};\\\mu (x)&={\frac {e^{\theta (x)}}{1+e^{\theta (x)}}}\end{aligned}}}and the mass function isf(Yi,θ(xi))=eYiθ(xi)1+eθ(xi).{\displaystyle f(Y_{i},\theta (x_{i}))={\frac {e^{Y_{i}\theta (x_{i})}}{1+e^{\theta (x_{i})}}}.}
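For the local constant case (degree p = 0), the local likelihood equations of this logistic model can be solved by a one-dimensional Newton iteration: the weighted score is Σ wᵢ(Yᵢ − p) and the weighted information is Σ wᵢ p(1 − p). A hedged sketch using quadratic kernel weights (all names and data are illustrative):

```python
import math

def local_logistic(x0, xs, ys, h, iters=25):
    # Local constant (degree 0) logistic fit at x0: maximize the weighted
    # log-likelihood sum w_i*(Y_i*theta - log(1 + e^theta)) by Newton's method.
    ws = [max(0.0, 1 - ((xi - x0) / h) ** 2) for xi in xs]  # quadratic kernel
    theta = 0.0
    for _ in range(iters):
        p = math.exp(theta) / (1 + math.exp(theta))
        score = sum(w * (y - p) for w, y in zip(ws, ys))
        info = sum(w * p * (1 - p) for w in ws)
        theta += score / info
    return math.exp(theta) / (1 + math.exp(theta))  # estimated P(Y = 1 | x0)

xs = [0, 1, 2, 3, 4, 5, 6, 7]
ys = [0, 0, 0, 1, 0, 1, 1, 1]   # success becomes more likely as x grows
p_low, p_high = local_logistic(1, xs, ys, h=3), local_logistic(6, xs, ys, h=3)
```

The returned estimate is the weighted-likelihood analogue of the local average, and always lies in (0, 1) as the transformation guarantees.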
An asymptotic theory for local likelihood estimation is developed in J. Fan,Nancy E. Heckmanand M.P.Wand (1995);[30]the book Loader (1999)[31]discusses many more applications of local likelihood.
To address the sensitivity to outliers, techniques fromrobust regressioncan be employed. In localM-estimation, the local least-squares criterion is replaced by a criterion of the form∑i=1nwi(x)ρ(Yi−β0−…−βp(xi−x)ps){\displaystyle \sum _{i=1}^{n}w_{i}(x)\rho \left({\frac {Y_{i}-\beta _{0}-\ldots -\beta _{p}(x_{i}-x)^{p}}{s}}\right)}whereρ(⋅){\displaystyle \rho (\cdot )}is a robustness function ands{\displaystyle s}is a scale parameter. Discussion of the merits of different choices of robustness function is best left to therobust regressionliterature. The scale parameters{\displaystyle s}must also be estimated. References for local M-estimation include Katkovnik (1985)[17]andAlexandre Tsybakov(1986).[32]
The robustness iterations in LOWESS and LOESS correspond to the robustness function defined byρ′(u)=u(1−u2/6)2;|u|<1{\displaystyle \rho '(u)=u(1-u^{2}/6)^{2};|u|<1}and a robust global estimate of the scale parameter.
Ifρ(u)=|u|{\displaystyle \rho (u)=|u|}, the localL1{\displaystyle L_{1}}criterion∑i=1nwi(x)|Yi−β0−…−βp(xi−x)p|{\displaystyle \sum _{i=1}^{n}w_{i}(x)\left|Y_{i}-\beta _{0}-\ldots -\beta _{p}(x_{i}-x)^{p}\right|}results; this does not require a scale parameter. Whenp=0{\displaystyle p=0}, this criterion is minimized by a locally weighted median; localL1{\displaystyle L_{1}}regression can be interpreted as estimating themedian, rather thanmean, response. If the loss function is skewed, this becomes local quantile regression. SeeKeming YuandM.C. Jones(1998).[33]
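When p = 0, the local L1 criterion is minimized by a weighted median, which makes the outlier resistance explicit. A minimal sketch (helper name illustrative):

```python
def weighted_median(values, weights):
    # Smallest value at which the cumulative weight reaches half the total;
    # this minimizes sum w_i * |y_i - m| over m.
    pairs = sorted(zip(values, weights))
    half, cum = sum(weights) / 2, 0.0
    for v, w in pairs:
        cum += w
        if cum >= half:
            return v

# A locally weighted median (local L1 fit with p = 0) ignores the outlier:
ys = [1.0, 1.1, 0.9, 50.0, 1.0]
ws = [1.0, 1.0, 1.0, 1.0, 1.0]
m = weighted_median(ys, ws)  # 1.0, unaffected by the extreme value 50.0
```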
As discussed above, the biggest advantage LOESS has over many other methods is that the process of fitting a model to the sample data does not begin with the specification of a function. Instead the analyst only has to provide a smoothing parameter value and the degree of the local polynomial. In addition, LOESS is very flexible, making it ideal for modeling complex processes for which no theoretical models exist. These two advantages, combined with the simplicity of the method, make LOESS one of the most attractive of the modern regression methods for applications that fit the general framework of least squares regression but which have a complex deterministic structure.
Although it is less obvious than for some of the other methods related to linear least squares regression, LOESS also accrues most of the benefits typically shared by those procedures. The most important of those is the theory for computing uncertainties for prediction and calibration. Many other tests and procedures used for validation of least squares models can also be extended to LOESS models[citation needed].
LOESS makes less efficient use of data than other least squares methods. It requires fairly large, densely sampled data sets in order to produce good models. This is because LOESS relies on the local data structure when performing the local fitting. Thus, LOESS provides less complex data analysis in exchange for greater experimental costs.[7]
Another disadvantage of LOESS is the fact that it does not produce a regression function that is easily represented by a mathematical formula. This can make it difficult to transfer the results of an analysis to other people. In order to transfer the regression function to another person, they would need the data set and software for LOESS calculations. Innonlinear regression, on the other hand, it is only necessary to write down a functional form in order to provide estimates of the unknown parameters and the estimated uncertainty. Depending on the application, this could be either a major or a minor drawback to using LOESS. In particular, the simple form of LOESS can not be used for mechanistic modelling where fitted parameters specify particular physical properties of a system.
Finally, as discussed above, LOESS is a computationally intensive method (with the exception of evenly spaced data, where the regression can then be phrased as a non-causalfinite impulse responsefilter). LOESS is also prone to the effects of outliers in the data set, like other least squares methods. There is an iterative,robustversion of LOESS [Cleveland (1979)] that can be used to reduce LOESS' sensitivity tooutliers, but too many extreme outliers can still overcome even the robust method.
Books substantially covering local regression and extensions:
Book chapters, Reviews:
This article incorporates public domain material from the National Institute of Standards and Technology.
https://en.wikipedia.org/wiki/Local_polynomial_regression
Instatistical modeling(especiallyprocess modeling), polynomial functions and rational functions are sometimes used as an empirical technique forcurve fitting.
Apolynomial functionis one that has the form
wherenis a non-negativeintegerthat defines the degree of the polynomial. A polynomial with a degree of 0 is simply aconstant function; with a degree of 1, aline; with a degree of 2, aquadratic; with a degree of 3, acubic; and so on.
Historically, polynomial models are among the most frequently used empirical models forcurve fitting.
These models are popular for the following reasons.
However, polynomial models also have the following limitations.
When modeling via polynomial functions is inadequate due to any of the limitations above, the use of rational functions for modeling may give a better fit.
Arational functionis simply the ratio of two polynomial functions.
withndenoting a non-negative integer that defines the degree of the numerator andmdenoting a non-negative integer that defines the degree of the denominator. For fitting rational function models, the constant term in the denominator is usually set to 1. Rational functions are typically identified by the degrees of the numerator and denominator. For example, a quadratic for the numerator and a cubic for the denominator is identified as a quadratic/cubic rational function. The rational function model is a generalization of the polynomial model: rational function models contain polynomial models as a subset (i.e., the case when the denominator is a constant).
Rational function models have the following advantages:
Rational function models have the following disadvantages:
This article incorporates public domain material from the National Institute of Standards and Technology.
https://en.wikipedia.org/wiki/Polynomial_and_rational_function_modeling
Innumerical analysis,polynomial interpolationis theinterpolationof a givendata setby thepolynomialof lowest possible degree that passes through the points in the dataset.
Given a set ofn+ 1data points(x0,y0),…,(xn,yn){\displaystyle (x_{0},y_{0}),\ldots ,(x_{n},y_{n})}, with no twoxj{\displaystyle x_{j}}the same, a polynomial functionp(x)=a0+a1x+⋯+anxn{\displaystyle p(x)=a_{0}+a_{1}x+\cdots +a_{n}x^{n}}is said tointerpolatethe data ifp(xj)=yj{\displaystyle p(x_{j})=y_{j}}for eachj∈{0,1,…,n}{\displaystyle j\in \{0,1,\dotsc ,n\}}.
There is always a unique such polynomial, commonly given by two explicit formulas, theLagrange polynomialsandNewton polynomials.
The original use of interpolation polynomials was to approximate values of importanttranscendental functionssuch asnatural logarithmandtrigonometric functions. Starting with a few accurately computed data points, the corresponding interpolation polynomial will approximate the function at an arbitrary nearby point. Polynomial interpolation also forms the basis for algorithms innumerical quadrature(Simpson's rule) andnumerical ordinary differential equations(multigrid methods).
Incomputer graphics, polynomials can be used to approximate complicated plane curves given a few specified points, for example the shapes of letters intypography. This is usually done withBézier curves, which are a simple generalization of interpolation polynomials (having specified tangents as well as specified points).
In numerical analysis, polynomial interpolation is essential to perform sub-quadratic multiplication and squaring, such asKaratsuba multiplicationandToom–Cook multiplication, where interpolation through points on a product polynomial yields the specific product required. For example, givena=f(x) =a0x0+a1x1+ ··· andb=g(x) =b0x0+b1x1+ ···, the productabis a specific value ofW(x) =f(x)g(x). One may easily find points alongW(x) at small values ofx, and interpolation based on those points will yield the terms ofW(x) and the specific productab. As formulated in Karatsuba multiplication, this technique is substantially faster than quadratic multiplication, even for modest-sized inputs, especially on parallel hardware.
Incomputer science, polynomial interpolation also leads to algorithms forsecure multi party computationandsecret sharing.
For anyn+1{\displaystyle n+1}bivariate data points(x0,y0),…,(xn,yn)∈R2{\displaystyle (x_{0},y_{0}),\dotsc ,(x_{n},y_{n})\in \mathbb {R} ^{2}}, where no twoxj{\displaystyle x_{j}}are the same, there exists a unique polynomialp(x){\displaystyle p(x)}of degree at mostn{\displaystyle n}that interpolates these points, i.e.p(x0)=y0,…,p(xn)=yn{\displaystyle p(x_{0})=y_{0},\ldots ,p(x_{n})=y_{n}}.[1]
Equivalently, for a fixed choice of interpolation nodesxj{\displaystyle x_{j}}, polynomial interpolation defines a linearbijectionLn{\displaystyle L_{n}}between the (n+1)-tuples of real-number values(y0,…,yn)∈Rn+1{\displaystyle (y_{0},\ldots ,y_{n})\in \mathbb {R} ^{n+1}}and thevector spaceP(n){\displaystyle P(n)}of real polynomials of degree at mostn:Ln:Rn+1⟶∼P(n).{\displaystyle L_{n}:\mathbb {R} ^{n+1}{\stackrel {\sim }{\longrightarrow }}\,P(n).}
This is a type ofunisolvencetheorem. The theorem is also valid over any infinitefieldin place of the real numbersR{\displaystyle \mathbb {R} }, for example the rational or complex numbers.
Consider theLagrange basis functionsL0(x),…,Ln(x){\displaystyle L_{0}(x),\ldots ,L_{n}(x)}given by:Lj(x)=∏i≠jx−xixj−xi=(x−x0)⋯(x−xj−1)(x−xj+1)⋯(x−xn)(xj−x0)⋯(xj−xj−1)(xj−xj+1)⋯(xj−xn).{\displaystyle L_{j}(x)=\prod _{i\neq j}{\frac {x-x_{i}}{x_{j}-x_{i}}}={\frac {(x-x_{0})\cdots (x-x_{j-1})(x-x_{j+1})\cdots (x-x_{n})}{(x_{j}-x_{0})\cdots (x_{j}-x_{j-1})(x_{j}-x_{j+1})\cdots (x_{j}-x_{n})}}.}
Notice thatLj(x){\displaystyle L_{j}(x)}is a polynomial of degreen{\displaystyle n}, and we haveLj(xk)=0{\displaystyle L_{j}(x_{k})=0}for eachj≠k{\displaystyle j\neq k}, whileLk(xk)=1{\displaystyle L_{k}(x_{k})=1}. It follows that the linear combination:p(x)=∑j=0nyjLj(x){\displaystyle p(x)=\sum _{j=0}^{n}y_{j}L_{j}(x)}hasp(xk)=∑jyjLj(xk)=yk{\displaystyle p(x_{k})=\sum _{j}y_{j}\,L_{j}(x_{k})=y_{k}}, sop(x){\displaystyle p(x)}is an interpolating polynomial of degreen{\displaystyle n}.
To prove uniqueness, assume that there exists another interpolating polynomialq(x){\displaystyle q(x)}of degree at mostn{\displaystyle n}, so thatp(xk)=q(xk){\displaystyle p(x_{k})=q(x_{k})}for allk=0,…,n{\displaystyle k=0,\dotsc ,n}. Thenp(x)−q(x){\displaystyle p(x)-q(x)}is a polynomial of degree at mostn{\displaystyle n}which hasn+1{\displaystyle n+1}distinct zeros (thexk{\displaystyle x_{k}}). But a non-zero polynomial of degree at mostn{\displaystyle n}can have at mostn{\displaystyle n}zeros,[a]sop(x)−q(x){\displaystyle p(x)-q(x)}must be the zero polynomial, i.e.p(x)=q(x){\displaystyle p(x)=q(x)}.[2]
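The Lagrange construction above translates directly into code: each basis polynomial is a product of ratios, and the interpolant is their y-weighted sum. A short sketch evaluating the interpolating polynomial at a new point (function name illustrative):

```python
def lagrange_interpolate(points, x):
    # Evaluate the unique degree-<=n polynomial through `points` at x,
    # using the Lagrange basis L_j(x) = prod_{i != j} (x - x_i)/(x_j - x_i).
    total = 0.0
    for j, (xj, yj) in enumerate(points):
        basis = 1.0
        for i, (xi, _) in enumerate(points):
            if i != j:
                basis *= (x - xi) / (xj - xi)
        total += yj * basis
    return total

pts = [(0, 1), (1, 2), (2, 5)]           # samples of y = x^2 + 1
value = lagrange_interpolate(pts, 3)     # extrapolates to 3^2 + 1 = 10
```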
Write out the interpolation polynomial in the form
Substituting this into the interpolation equationsp(xj)=yj{\displaystyle p(x_{j})=y_{j}}, we get asystem of linear equationsin the coefficientsaj{\displaystyle a_{j}}, which reads in matrix-vector form as the followingmultiplication:[x0nx0n−1x0n−2…x01x1nx1n−1x1n−2…x11⋮⋮⋮⋮⋮xnnxnn−1xnn−2…xn1][anan−1⋮a0]=[y0y1⋮yn].{\displaystyle {\begin{bmatrix}x_{0}^{n}&x_{0}^{n-1}&x_{0}^{n-2}&\ldots &x_{0}&1\\x_{1}^{n}&x_{1}^{n-1}&x_{1}^{n-2}&\ldots &x_{1}&1\\\vdots &\vdots &\vdots &&\vdots &\vdots \\x_{n}^{n}&x_{n}^{n-1}&x_{n}^{n-2}&\ldots &x_{n}&1\end{bmatrix}}{\begin{bmatrix}a_{n}\\a_{n-1}\\\vdots \\a_{0}\end{bmatrix}}={\begin{bmatrix}y_{0}\\y_{1}\\\vdots \\y_{n}\end{bmatrix}}.}
An interpolantp(x){\displaystyle p(x)}corresponds to a solutionA=(an,…,a0){\displaystyle A=(a_{n},\ldots ,a_{0})}of the above matrix equationX⋅A=Y{\displaystyle X\cdot A=Y}. The matrixXon the left is aVandermonde matrix, whose determinant is known to bedet(X)=∏1≤i<j≤n(xj−xi),{\displaystyle \textstyle \det(X)=\prod _{1\leq i<j\leq n}(x_{j}-x_{i}),}which is non-zero since the nodesxj{\displaystyle x_{j}}are all distinct. This ensures that the matrix isinvertibleand the equation has the unique solutionA=X−1⋅Y{\displaystyle A=X^{-1}\cdot Y}; that is,p(x){\displaystyle p(x)}exists and is unique.
Iff(x){\displaystyle f(x)}is a polynomial of degree at mostn{\displaystyle n}, then the interpolating polynomial off(x){\displaystyle f(x)}atn+1{\displaystyle n+1}distinct points isf(x){\displaystyle f(x)}itself.
We may write down the polynomial immediately in terms ofLagrange polynomialsas:p(x)=(x−x1)(x−x2)⋯(x−xn)(x0−x1)(x0−x2)⋯(x0−xn)y0+(x−x0)(x−x2)⋯(x−xn)(x1−x0)(x1−x2)⋯(x1−xn)y1+⋯+(x−x0)(x−x1)⋯(x−xn−1)(xn−x0)(xn−x1)⋯(xn−xn−1)yn=∑i=0n(∏j≠i0≤j≤nx−xjxi−xj)yi=∑i=0np(x)p′(xi)(x−xi)yi{\displaystyle {\begin{aligned}p(x)&={\frac {(x-x_{1})(x-x_{2})\cdots (x-x_{n})}{(x_{0}-x_{1})(x_{0}-x_{2})\cdots (x_{0}-x_{n})}}y_{0}\\[4pt]&+{\frac {(x-x_{0})(x-x_{2})\cdots (x-x_{n})}{(x_{1}-x_{0})(x_{1}-x_{2})\cdots (x_{1}-x_{n})}}y_{1}\\[4pt]&+\cdots \\[4pt]&+{\frac {(x-x_{0})(x-x_{1})\cdots (x-x_{n-1})}{(x_{n}-x_{0})(x_{n}-x_{1})\cdots (x_{n}-x_{n-1})}}y_{n}\\[7pt]&=\sum _{i=0}^{n}{\Biggl (}\prod _{\stackrel {\!0\,\leq \,j\,\leq \,n}{j\,\neq \,i}}{\frac {x-x_{j}}{x_{i}-x_{j}}}{\Biggr )}y_{i}=\sum _{i=0}^{n}{\frac {p(x)}{p'(x_{i})(x-x_{i})}}\,y_{i}\end{aligned}}}For matrix arguments, this formula is calledSylvester's formulaand the matrix-valued Lagrange polynomials are theFrobenius covariants.
Letpn{\displaystyle p_{n}}be the polynomial of degree less than or equal ton{\displaystyle n}that interpolatesf{\displaystyle f}at the nodesxi{\displaystyle x_{i}}wherei=0,1,2,3,⋯,n{\displaystyle i=0,1,2,3,\cdots ,n}, and letpn+1{\displaystyle p_{n+1}}be the polynomial of degree less than or equal ton+1{\displaystyle n+1}that interpolatesf{\displaystyle f}at the nodesxi{\displaystyle x_{i}}wherei=0,1,2,3,⋯,n,n+1{\displaystyle i=0,1,2,3,\cdots ,n,n+1}. Thenpn+1{\displaystyle p_{n+1}}is given by:pn+1(x)=pn(x)+an+1wn(x){\displaystyle p_{n+1}(x)=p_{n}(x)+a_{n+1}w_{n}(x)}wherewn(x):=∏i=0n(x−xi){\textstyle w_{n}(x):=\prod _{i=0}^{n}(x-x_{i})}is known as the Newton basis andan+1:=f(xn+1)−pn(xn+1)wn(xn+1){\textstyle a_{n+1}:={f(x_{n+1})-p_{n}(x_{n+1}) \over w_{n}(x_{n+1})}}.
Proof:
This can be shown for the case wherei=0,1,2,3,⋯,n{\displaystyle i=0,1,2,3,\cdots ,n}:pn+1(xi)=pn(xi)+an+1∏j=0n(xi−xj)=pn(xi){\displaystyle p_{n+1}(x_{i})=p_{n}(x_{i})+a_{n+1}\prod _{j=0}^{n}(x_{i}-x_{j})=p_{n}(x_{i})}and wheni=n+1{\displaystyle i=n+1}:pn+1(xn+1)=pn(xn+1)+f(xn+1)−pn(xn+1)wn(xn+1)wn(xn+1)=f(xn+1){\displaystyle p_{n+1}(x_{n+1})=p_{n}(x_{n+1})+{f(x_{n+1})-p_{n}(x_{n+1}) \over w_{n}(x_{n+1})}w_{n}(x_{n+1})=f(x_{n+1})}By the uniqueness of interpolated polynomials of degree less thann+1{\displaystyle n+1},pn+1(x)=pn(x)+an+1wn(x){\textstyle p_{n+1}(x)=p_{n}(x)+a_{n+1}w_{n}(x)}is the required polynomial interpolation. The function can thus be expressed as:
pn(x)=a0+a1(x−x0)+a2(x−x0)(x−x1)+⋯+an(x−x0)⋯(x−xn−1).{\textstyle p_{n}(x)=a_{0}+a_{1}(x-x_{0})+a_{2}(x-x_{0})(x-x_{1})+\cdots +a_{n}(x-x_{0})\cdots (x-x_{n-1}).}
To findai{\displaystyle a_{i}}, we have to solve thelower triangular matrixformed by arrangingpn(xi)=f(xi)=yi{\textstyle p_{n}(x_{i})=f(x_{i})=y_{i}}from above equation in matrix form:
The coefficients are derived as
where
is the notation fordivided differences. Thus,Newton polynomialsare used to provide a polynomial interpolation formula of n points.[2]
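The triangular divided-difference recurrence and the nested (Horner-like) evaluation of the Newton form can be sketched as follows (names illustrative):

```python
def divided_differences(xs, ys):
    # Coefficients [y0], [y0,y1], ..., [y0,...,yn] of the Newton form,
    # computed in place by the standard triangular recurrence.
    coef = list(ys)
    n = len(xs)
    for level in range(1, n):
        for i in range(n - 1, level - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - level])
    return coef

def newton_eval(xs, coef, x):
    # Nested evaluation of a0 + a1(x-x0) + a2(x-x0)(x-x1) + ...
    result = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[k]) + coef[k]
    return result

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.0, 5.0, 10.0]      # y = x^2 + 1 on these nodes
coef = divided_differences(xs, ys)
```

On this quadratic data the highest-order coefficient comes out zero, reflecting that the data already lie on a degree-2 polynomial.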
The first few coefficients can be calculated using the system of equations. The form of the n-th coefficient is assumed below and proved by mathematical induction.
a0=y0=[y0]a1=y1−y0x1−x0=[y0,y1]⋮an=[y0,⋯,yn](let){\displaystyle {\begin{aligned}a_{0}&=y_{0}=[y_{0}]\\a_{1}&={y_{1}-y_{0} \over x_{1}-x_{0}}=[y_{0},y_{1}]\\\vdots \\a_{n}&=[y_{0},\cdots ,y_{n}]\quad {\text{(let)}}\\\end{aligned}}}
Let Q be polynomial interpolation of points(x1,y1),…,(xn,yn){\displaystyle (x_{1},y_{1}),\ldots ,(x_{n},y_{n})}. Adding(x0,y0){\displaystyle (x_{0},y_{0})}to the polynomial Q:
Q(x)+an′(x−x1)⋅…⋅(x−xn)=Pn(x),{\displaystyle Q(x)+a'_{n}(x-x_{1})\cdot \ldots \cdot (x-x_{n})=P_{n}(x),}
wherean′(x0−x1)…(x0−xn)=y0−Q(x0){\textstyle a'_{n}(x_{0}-x_{1})\ldots (x_{0}-x_{n})=y_{0}-Q(x_{0})}. By uniqueness of the interpolating polynomial of the points(x0,y0),…,(xn,yn){\displaystyle (x_{0},y_{0}),\ldots ,(x_{n},y_{n})}, equating the coefficients ofxn−1{\displaystyle x^{n-1}}we get,an′=[y0,…,yn]{\textstyle a'_{n}=[y_{0},\ldots ,y_{n}]}.
Hence the polynomial can be expressed as:Pn(x)=Q(x)+[y0,…,yn](x−x1)⋅…⋅(x−xn).{\displaystyle P_{n}(x)=Q(x)+[y_{0},\ldots ,y_{n}](x-x_{1})\cdot \ldots \cdot (x-x_{n}).}
Adding(xn+1,yn+1){\displaystyle (x_{n+1},y_{n+1})}to the polynomial Q, it has to satisfy:[y1,…,yn+1](xn+1−x1)⋅…⋅(xn+1−xn)=yn+1−Q(xn+1){\textstyle [y_{1},\ldots ,y_{n+1}](x_{n+1}-x_{1})\cdot \ldots \cdot (x_{n+1}-x_{n})=y_{n+1}-Q(x_{n+1})}where the formula foran{\textstyle a_{n}}and interpolating polynomial are used.
Thean+1{\textstyle a_{n+1}}term for the polynomialPn+1{\textstyle P_{n+1}}can be found by calculating:[y0,…,yn+1](xn+1−x0)⋅…⋅(xn+1−xn)=[y1,…,yn+1]−[y0,…,yn]xn+1−x0(xn+1−x0)⋅…⋅(xn+1−xn)=([y1,…,yn+1]−[y0,…,yn])(xn+1−x1)⋅…⋅(xn+1−xn)=[y1,…,yn+1](xn+1−x1)⋅…⋅(xn+1−xn)−[y0,…,yn](xn+1−x1)⋅…⋅(xn+1−xn)=(yn+1−Q(xn+1))−[y0,…,yn](xn+1−x1)⋅…⋅(xn+1−xn)=yn+1−(Q(xn+1)+[y0,…,yn](xn+1−x1)⋅…⋅(xn+1−xn))=yn+1−P(xn+1).{\displaystyle {\begin{aligned}&[y_{0},\ldots ,y_{n+1}](x_{n+1}-x_{0})\cdot \ldots \cdot (x_{n+1}-x_{n})\\&={\frac {[y_{1},\ldots ,y_{n+1}]-[y_{0},\ldots ,y_{n}]}{x_{n+1}-x_{0}}}(x_{n+1}-x_{0})\cdot \ldots \cdot (x_{n+1}-x_{n})\\&=\left([y_{1},\ldots ,y_{n+1}]-[y_{0},\ldots ,y_{n}]\right)(x_{n+1}-x_{1})\cdot \ldots \cdot (x_{n+1}-x_{n})\\&=[y_{1},\ldots ,y_{n+1}](x_{n+1}-x_{1})\cdot \ldots \cdot (x_{n+1}-x_{n})-[y_{0},\ldots ,y_{n}](x_{n+1}-x_{1})\cdot \ldots \cdot (x_{n+1}-x_{n})\\&=(y_{n+1}-Q(x_{n+1}))-[y_{0},\ldots ,y_{n}](x_{n+1}-x_{1})\cdot \ldots \cdot (x_{n+1}-x_{n})\\&=y_{n+1}-(Q(x_{n+1})+[y_{0},\ldots ,y_{n}](x_{n+1}-x_{1})\cdot \ldots \cdot (x_{n+1}-x_{n}))\\&=y_{n+1}-P(x_{n+1}).\end{aligned}}}which implies thatan+1=yn+1−Pn(xn+1)wn(xn+1)=[y0,…,yn+1]{\displaystyle a_{n+1}={y_{n+1}-P_{n}(x_{n+1}) \over w_{n}(x_{n+1})}=[y_{0},\ldots ,y_{n+1}]}.
Hence it is proved by principle of mathematical induction.
The Newton polynomial can be expressed in a simplified form whenx0,x1,…,xk{\displaystyle x_{0},x_{1},\dots ,x_{k}}are arranged consecutively with equal spacing.
Ifx0,x1,…,xk{\displaystyle x_{0},x_{1},\dots ,x_{k}}are consecutively arranged and equally spaced withxi=x0+ih{\displaystyle {x}_{i}={x}_{0}+ih}fori= 0, 1, ...,kand some variable x is expressed asx=x0+sh{\displaystyle {x}={x}_{0}+sh}, then the differencex−xi{\displaystyle x-x_{i}}can be written as(s−i)h{\displaystyle (s-i)h}. So the Newton polynomial becomes
Since the relationship between divided differences andforward differencesis given as:[3][yj,yj+1,…,yj+n]=1n!hnΔ(n)yj,{\displaystyle [y_{j},y_{j+1},\ldots ,y_{j+n}]={\frac {1}{n!h^{n}}}\Delta ^{(n)}y_{j},}Takingyi=f(xi){\displaystyle y_{i}=f(x_{i})}, if the representation of x in the previous sections was instead taken to bex=xj+sh{\displaystyle x=x_{j}+sh}, theNewton forward interpolation formulais expressed as:f(x)≈N(x)=N(xj+sh)=∑i=0k(si)Δ(i)f(xj){\displaystyle f(x)\approx N(x)=N(x_{j}+sh)=\sum _{i=0}^{k}{s \choose i}\Delta ^{(i)}f(x_{j})}which is the interpolation of all points afterxj{\displaystyle x_{j}}. It is expanded as:f(xj+sh)=f(xj)+s1!Δf(xj)+s(s−1)2!Δ2f(xj)+s(s−1)(s−2)3!Δ3f(xj)+s(s−1)(s−2)(s−3)4!Δ4f(xj)+⋯{\displaystyle f(x_{j}+sh)=f(x_{j})+{\frac {s}{1!}}\Delta f(x_{j})+{\frac {s(s-1)}{2!}}\Delta ^{2}f(x_{j})+{\frac {s(s-1)(s-2)}{3!}}\Delta ^{3}f(x_{j})+{\frac {s(s-1)(s-2)(s-3)}{4!}}\Delta ^{4}f(x_{j})+\cdots }
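The forward formula is convenient computationally because only the top entry of each difference column is needed, and the binomial coefficient C(s, i) can be updated multiplicatively term by term. A sketch (names illustrative):

```python
def forward_interpolate(ys, s):
    # Newton forward formula on equally spaced nodes:
    # f(x0 + s*h) ~ sum_i C(s, i) * Delta^i y_0.
    deltas, row = [ys[0]], list(ys)
    for _ in range(1, len(ys)):
        row = [b - a for a, b in zip(row, row[1:])]  # next difference column
        deltas.append(row[0])                        # keep only the top entry
    total, binom = 0.0, 1.0
    for i, d in enumerate(deltas):
        total += binom * d            # binom = s(s-1)...(s-i+1)/i! = C(s, i)
        binom *= (s - i) / (i + 1)
    return total

ys = [1, 2, 5, 10]                    # y = x^2 + 1 at x = 0, 1, 2, 3 (h = 1)
mid = forward_interpolate(ys, 1.5)    # estimates y at x = 1.5: 1.5^2 + 1 = 3.25
```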
If the nodes are reordered asxk,xk−1,…,x0{\displaystyle {x}_{k},{x}_{k-1},\dots ,{x}_{0}}, the Newton polynomial becomes
Ifxk,xk−1,…,x0{\displaystyle {x}_{k},\;{x}_{k-1},\;\dots ,\;{x}_{0}}are equally spaced withxi=xk−(k−i)h{\displaystyle {x}_{i}={x}_{k}-(k-i)h}fori= 0, 1, ...,kandx=xk+sh{\displaystyle {x}={x}_{k}+sh}, then,
Since the relationship between divided differences and backward differences is given as:[citation needed][yj,yj−1,…,yj−n]=1n!hn∇(n)yj,{\displaystyle [{y}_{j},y_{j-1},\ldots ,{y}_{j-n}]={\frac {1}{n!h^{n}}}\nabla ^{(n)}y_{j},}takingyi=f(xi){\displaystyle y_{i}=f(x_{i})}, if the representation of x in the previous sections was instead taken to bex=xj+sh{\displaystyle x=x_{j}+sh}, theNewton backward interpolation formulais expressed as:f(x)≈N(x)=N(xj+sh)=∑i=0k(−1)i(−si)∇(i)f(xj).{\displaystyle f(x)\approx N(x)=N(x_{j}+sh)=\sum _{i=0}^{k}{(-1)}^{i}{-s \choose i}\nabla ^{(i)}f(x_{j}).}which is the interpolation of all points beforexj{\displaystyle x_{j}}. It is expanded as:f(xj+sh)=f(xj)+s1!∇f(xj)+s(s+1)2!∇2f(xj)+s(s+1)(s+2)3!∇3f(xj)+s(s+1)(s+2)(s+3)4!∇4f(xj)+⋯{\displaystyle f(x_{j}+sh)=f(x_{j})+{\frac {s}{1!}}\nabla f(x_{j})+{\frac {s(s+1)}{2!}}\nabla ^{2}f(x_{j})+{\frac {s(s+1)(s+2)}{3!}}\nabla ^{3}f(x_{j})+{\frac {s(s+1)(s+2)(s+3)}{4!}}\nabla ^{4}f(x_{j})+\cdots }
A Lozenge diagram is a diagram that is used to describe different interpolation formulas that can be constructed for a given data set. A line starting on the left edge and tracing across the diagram to the right can be used to represent an interpolation formula if the following rules are followed:[4]
The factors are expressed using the formula:C(u+k,n)=(u+k)(u+k−1)⋯(u+k−n+1)n!{\displaystyle C(u+k,n)={\frac {(u+k)(u+k-1)\cdots (u+k-n+1)}{n!}}}
If a path goes fromΔn−1ys{\displaystyle \Delta ^{n-1}y_{s}}toΔn+1ys−1{\displaystyle \Delta ^{n+1}y_{s-1}}, it can connect through three intermediate steps: (a) throughΔnys−1{\displaystyle \Delta ^{n}y_{s-1}}, (b) throughC(u−s,n){\textstyle C(u-s,n)}, or (c) throughΔnys{\displaystyle \Delta ^{n}y_{s}}. Proving the equivalence of these three two-step paths shows that all n-step paths with the same starting and ending points can be morphed into one another, and that all of them represent the same formula.
Path (a):
C(u−s,n)Δnys−1+C(u−s+1,n+1)Δn+1ys−1{\displaystyle C(u-s,n)\Delta ^{n}y_{s-1}+C(u-s+1,n+1)\Delta ^{n+1}y_{s-1}}
Path (b):
C(u−s,n)Δnys+C(u−s,n+1)Δn+1ys−1{\displaystyle C(u-s,n)\Delta ^{n}y_{s}+C(u-s,n+1)\Delta ^{n+1}y_{s-1}}
Path (c):
C(u−s,n)Δnys−1+Δnys2+C(u−s+1,n+1)+C(u−s,n+1)2Δn+1ys−1{\displaystyle C(u-s,n){\frac {\Delta ^{n}y_{s-1}+\Delta ^{n}y_{s}}{2}}\quad +{\frac {C(u-s+1,n+1)+C(u-s,n+1)}{2}}\Delta ^{n+1}y_{s-1}}
Subtracting contributions from path a and b:
Path a - Path b=C(u−s,n)(Δnys−1−Δnys)+(C(u−s+1,n+1)−C(u−s,n+1))Δn+1ys−1=−C(u−s,n)Δn+1ys−1+C(u−s,n)(u−s+1)−(u−s−n)n+1Δn+1ys−1=C(u−s,n)(−Δn+1ys−1+Δn+1ys−1)=0{\displaystyle {\begin{aligned}{\text{Path a - Path b}}=&C(u-s,n)(\Delta ^{n}y_{s-1}-\Delta ^{n}y_{s})+(C(u-s+1,n+1)-C(u-s,n+1))\Delta ^{n+1}y_{s-1}\\=&-C(u-s,n)\Delta ^{n+1}y_{s-1}+C(u-s,n){\frac {(u-s+1)-(u-s-n)}{n+1}}\Delta ^{n+1}y_{s-1}\\=&C(u-s,n)(-\Delta ^{n+1}y_{s-1}+\Delta ^{n+1}y_{s-1})=0\\\end{aligned}}}
Thus, the contribution of either path (a) or path (b) is the same. Since path (c) is the average of paths (a) and (b), it too contributes an identical term to the polynomial. Hence the equivalence of paths with the same starting and ending points is shown. To check whether the paths can be shifted to different values in the leftmost corner, taking only two-step paths is sufficient: (a)ys+1{\displaystyle y_{s+1}}toys{\displaystyle y_{s}}throughΔys{\displaystyle \Delta y_{s}}or (b) factor betweenys+1{\displaystyle y_{s+1}}andys{\displaystyle y_{s}}, toys{\displaystyle y_{s}}throughΔys{\displaystyle \Delta y_{s}}or (c) starting fromys{\displaystyle y_{s}}.
Path (a)
ys+1+C(u−s−1,1)Δys−C(u−s,1)Δys{\displaystyle y_{s+1}+C(u-s-1,1)\Delta y_{s}-C(u-s,1)\Delta y_{s}}
Path (b)
ys+1+ys2+C(u−s−1,1)+C(u−s,1)2Δys−C(u−s,1)Δys{\displaystyle {\frac {y_{s+1}+y_{s}}{2}}+{\frac {C(u-s-1,1)+C(u-s,1)}{2}}\Delta y_{s}-C(u-s,1)\Delta y_{s}}
Path (c)
ys{\displaystyle y_{s}}
Since Δys=ys+1−ys{\displaystyle \Delta y_{s}=y_{s+1}-y_{s}}, substituting into the above expressions shows that they all reduce to ys{\displaystyle y_{s}} and are hence equivalent. Hence these paths can be morphed to start from the leftmost corner and end at a common point.[4]
Taking negative slope transversal fromy0{\displaystyle y_{0}}toΔny0{\displaystyle \Delta ^{n}y_{0}}gives the interpolation formula of all then+1{\displaystyle n+1}consecutively arranged points, equivalent to Newton's forward interpolation formula:
y(s)=y0+C(s,1)Δy0+C(s,2)Δ2y0+C(s,3)Δ3y0+⋯=y0+sΔy0+s(s−1)2Δ2y0+s(s−1)(s−2)3!Δ3y0+s(s−1)(s−2)(s−3)4!Δ4y0+⋯{\displaystyle {\begin{aligned}y(s)&=y_{0}+C(s,1)\Delta y_{0}+C(s,2)\Delta ^{2}y_{0}+C(s,3)\Delta ^{3}y_{0}+\cdots \\&=y_{0}+s\Delta y_{0}+{\frac {s(s-1)}{2}}\Delta ^{2}y_{0}+{\frac {s(s-1)(s-2)}{3!}}\Delta ^{3}y_{0}+{\frac {s(s-1)(s-2)(s-3)}{4!}}\Delta ^{4}y_{0}+\cdots \end{aligned}}}
whereas, taking positive slope transversal fromyn{\displaystyle y_{n}}to∇nyn=Δny0{\displaystyle \nabla ^{n}y_{n}=\Delta ^{n}y_{0}}, gives the interpolation formula of all then+1{\displaystyle n+1}consecutively arranged points, equivalent to Newton's backward interpolation formula:
y(u)=yk+C(u−k,1)Δyk−1+C(u−k+1,2)Δ2yk−2+C(u−k+2,3)Δ3yk−3+⋯=yk+(u−k)Δyk−1+(u−k+1)(u−k)2Δ2yk−2+(u−k+2)(u−k+1)(u−k)3!Δ3yk−3+⋯y(k+s)=yk+(s)∇yk+(s+1)s2∇2yk+(s+2)(s+1)s3!∇3yk+(s+3)(s+2)(s+1)s4!∇4yk+⋯{\displaystyle {\begin{aligned}y(u)&=y_{k}+C(u-k,1)\Delta y_{k-1}+C(u-k+1,2)\Delta ^{2}y_{k-2}+C(u-k+2,3)\Delta ^{3}y_{k-3}+\cdots \\&=y_{k}+(u-k)\Delta y_{k-1}+{\frac {(u-k+1)(u-k)}{2}}\Delta ^{2}y_{k-2}+{\frac {(u-k+2)(u-k+1)(u-k)}{3!}}\Delta ^{3}y_{k-3}+\cdots \\y(k+s)&=y_{k}+(s)\nabla y_{k}+{\frac {(s+1)s}{2}}\nabla ^{2}y_{k}+{\frac {(s+2)(s+1)s}{3!}}\nabla ^{3}y_{k}+{\frac {(s+3)(s+2)(s+1)s}{4!}}\nabla ^{4}y_{k}+\cdots \\\end{aligned}}}
wheres=u−k{\displaystyle s=u-k}is the number corresponding to that introduced in Newton interpolation.
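As a concrete illustration, Newton's forward formula can be evaluated directly from a table of forward differences. The Python sketch below (with illustrative data; not part of the original text) evaluates y(s) for values given at equally spaced positions 0..n:

```python
# Evaluate Newton's forward-difference formula
#   y(s) = y0 + C(s,1) Δy0 + C(s,2) Δ²y0 + ...
# for values y_0..y_n given at equally spaced positions 0..n.
def binom(s, k):
    """Generalized binomial coefficient C(s, k) for real s."""
    out = 1.0
    for i in range(k):
        out *= (s - i) / (k - i)
    return out

def newton_forward(ys, s):
    total = ys[0]
    diffs = list(ys)
    for k in range(1, len(ys)):
        # next column of the forward-difference table
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        total += binom(s, k) * diffs[0]
    return total

# y = x^2 sampled at x = 0..3 is reproduced exactly at a fractional position:
print(newton_forward([0, 1, 4, 9], 2.5))  # 6.25
```

Because the sampled function is itself a polynomial of low degree, the higher differences vanish and the formula reproduces it exactly.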
Taking a zigzag line towards the right starting fromy0{\displaystyle y_{0}}with negative slope, we get Gauss forward formula:
y(u)=y0+uΔy0+u(u−1)2Δ2y−1+(u+1)u(u−1)3!Δ3y−1+(u+1)u(u−1)(u−2)4!Δ4y−2+⋯{\displaystyle y(u)=y_{0}+u\Delta y_{0}+{\frac {u(u-1)}{2}}\Delta ^{2}y_{-1}+{\frac {(u+1)u\left(u-1\right)}{3!}}\Delta ^{3}y_{-1}+{\frac {(u+1)u\left(u-1\right)(u-2)}{4!}}\Delta ^{4}y_{-2}+\cdots }
whereas starting fromy0{\displaystyle y_{0}}with positive slope, we get Gauss backward formula:
y(u)=y0+uΔy−1+(u+1)u2Δ2y−1+(u+1)u(u−1)3!Δ3y−2+(u+2)(u+1)u(u−1)4!Δ4y−2+⋯{\displaystyle y(u)=y_{0}+u\Delta y_{-1}+{\frac {(u+1)u}{2}}\Delta ^{2}y_{-1}+{\frac {(u+1)u\left(u-1\right)}{3!}}\Delta ^{3}y_{-2}+{\frac {(u+2)(u+1)u\left(u-1\right)}{4!}}\Delta ^{4}y_{-2}+\cdots }
By taking a horizontal path towards the right starting fromy0{\displaystyle y_{0}}, we get Stirling formula:
y(u)=y0+uΔy0+Δy−12+C(u+1,2)+C(u,2)2Δ2y−1+C(u+1,3)Δ3y−2+Δ3y−12+⋯=y0+uΔy0+Δy−12+u22Δ2y−1+u(u2−1)3!Δ3y−2+Δ3y−12+u2(u2−1)4!Δ4y−2+⋯{\displaystyle {\begin{aligned}y(u)&=y_{0}+u{\frac {\Delta y_{0}+\Delta y_{-1}}{2}}+{\frac {C(u+1,2)+C(u,2)}{2}}\Delta ^{2}y_{-1}+C(u+1,3){\frac {\Delta ^{3}y_{-2}+\Delta ^{3}y_{-1}}{2}}+\cdots \\&=y_{0}+u{\frac {\Delta y_{0}+\Delta y_{-1}}{2}}+{\frac {u^{2}}{2}}\Delta ^{2}y_{-1}+{\frac {u(u^{2}-1)}{3!}}{\frac {\Delta ^{3}y_{-2}+\Delta ^{3}y_{-1}}{2}}+{\frac {u^{2}(u^{2}-1)}{4!}}\Delta ^{4}y_{-2}+\cdots \end{aligned}}}
Stirling formula is the average of Gauss forward and Gauss backward formulas.
By taking a horizontal path towards the right starting from factor betweeny0{\displaystyle y_{0}}andy1{\displaystyle y_{1}}, we get Bessel formula:
y(u)=y0+y12+C(u,1)+C(u−1,1)2Δy0+C(u,2)Δ2y−1+Δ2y02+⋯=y0+y12+(u−12)Δy0+u(u−1)2Δ2y−1+Δ2y02+(u−12)u(u−1)3!Δ3y0+(u+1)u(u−1)(u−2)4!Δ4y−1+Δ4y−22+⋯{\displaystyle {\begin{aligned}y(u)&={\frac {y_{0}+y_{1}}{2}}+{\frac {C(u,1)+C(u-1,1)}{2}}\Delta y_{0}+C(u,2){\frac {\Delta ^{2}y_{-1}+\Delta ^{2}y_{0}}{2}}+\cdots \\&={\frac {y_{0}+y_{1}}{2}}+\left(u-{\frac {1}{2}}\right)\Delta y_{0}+{\frac {u(u-1)}{2}}{\frac {\Delta ^{2}y_{-1}+\Delta ^{2}y_{0}}{2}}+{\frac {\left(u-{\frac {1}{2}}\right)u\left(u-1\right)}{3!}}\Delta ^{3}y_{0}+{\frac {(u+1)u(u-1)(u-2)}{4!}}{\frac {\Delta ^{4}y_{-1}+\Delta ^{4}y_{-2}}{2}}+\cdots \\\end{aligned}}}
TheVandermonde matrixin the second proof above may have largecondition number,[5]causing large errors when computing the coefficientsaiif the system of equations is solved usingGaussian elimination.
Several authors have therefore proposed algorithms which exploit the structure of the Vandermonde matrix to compute numerically stable solutions in O(n2) operations instead of the O(n3) required by Gaussian elimination.[6][7][8]These methods rely on constructing first aNewton interpolationof the polynomial and then converting it to amonomial form.
To find the interpolation polynomialp(x) in the vector spaceP(n) of polynomials of degreen, we may use the usualmonomial basisforP(n) and invert the Vandermonde matrix by Gaussian elimination, giving acomputational costof O(n3) operations. To improve this algorithm, a more convenient basis forP(n) can simplify the calculation of the coefficients, which must then be translated back in terms of themonomial basis.
One method is to write the interpolation polynomial in theNewton form(i.e. using the Newton basis) and use the method ofdivided differencesto construct the coefficients, e.g.Neville's algorithm. The cost isO(n2)operations. Furthermore, only O(n) extra work is needed if an extra point is added to the data set, whereas the other methods require redoing the whole computation.
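A minimal Python sketch of the Newton-form approach (illustrative, not a library API): the divided-difference coefficients are built in O(n²), and the polynomial is then evaluated by nested multiplication.

```python
# Newton form via divided differences.
def divided_diffs(xs, ys):
    """Return Newton coefficients f[x0], f[x0,x1], ..., f[x0..xn]."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        # update in place from the bottom so lower-order entries survive
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the Newton-form polynomial by Horner-like nesting."""
    result = coef[-1]
    for i in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[i]) + coef[i]
    return result

xs, ys = [0, 1, 2], [1, 3, 7]        # data from p(x) = x^2 + x + 1
coef = divided_diffs(xs, ys)
print(newton_eval(xs, coef, 3))       # 13.0
```

Appending a new data point only extends the divided-difference table by one diagonal, which is the O(n) update mentioned above.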
Another method is preferred when the aim is not to compute thecoefficientsofp(x), but only asingle valuep(a) at a pointx = anot in the original data set. TheLagrange formcomputes the valuep(a) with complexity O(n2).[9]
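When only a single value p(a) is needed, the Lagrange form can be evaluated directly in O(n²) without ever forming the monomial coefficients. A minimal Python sketch with illustrative data:

```python
# Evaluate the interpolating polynomial at one point via the Lagrange form.
def lagrange_value(xs, ys, a):
    total = 0.0
    for j in range(len(xs)):
        # Lagrange basis polynomial l_j evaluated at a
        w = 1.0
        for i in range(len(xs)):
            if i != j:
                w *= (a - xs[i]) / (xs[j] - xs[i])
        total += ys[j] * w
    return total

# data from p(x) = x^2 + x + 1
print(lagrange_value([0, 1, 2], [1, 3, 7], 3))  # 13.0
```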
TheBernstein formwas used in a constructive proof of theWeierstrass approximation theorembyBernsteinand has gained great importance in computer graphics in the form ofBézier curves.
Given a set of (position, value) data points(x0,y0),…,(xj,yj),…,(xn,yn){\displaystyle (x_{0},y_{0}),\ldots ,(x_{j},y_{j}),\ldots ,(x_{n},y_{n})}where no two positionsxj{\displaystyle x_{j}}are the same, the interpolating polynomialy(x){\displaystyle y(x)}may be considered as alinear combinationof the valuesyj{\displaystyle y_{j}}, using coefficients which are polynomials inx{\displaystyle x}depending on thexj{\displaystyle x_{j}}. For example, the interpolation polynomial in theLagrange formis the linear combinationy(x):=∑j=0nyjcj(x){\displaystyle y(x):=\sum _{j=0}^{n}y_{j}c_{j}(x)}with each coefficientcj(x){\displaystyle c_{j}(x)}given by the corresponding Lagrange basis polynomial on the given positionsxj{\displaystyle x_{j}}:cj(x)=Lj(x0,…,xn;x)=∏0≤i≤ni≠jx−xixj−xi=(x−x0)(xj−x0)⋯(x−xj−1)(xj−xj−1)(x−xj+1)(xj−xj+1)⋯(x−xn)(xj−xn).{\displaystyle c_{j}(x)=L_{j}(x_{0},\ldots ,x_{n};x)=\prod _{0\leq i\leq n \atop i\neq j}{\frac {x-x_{i}}{x_{j}-x_{i}}}={\frac {(x-x_{0})}{(x_{j}-x_{0})}}\cdots {\frac {(x-x_{j-1})}{(x_{j}-x_{j-1})}}{\frac {(x-x_{j+1})}{(x_{j}-x_{j+1})}}\cdots {\frac {(x-x_{n})}{(x_{j}-x_{n})}}.}
Since the coefficients depend only on the positionsxj{\displaystyle x_{j}}, not the valuesyj{\displaystyle y_{j}}, we can use thesame coefficientsto find the interpolating polynomial for a second set of data points(x0,v0),…,(xn,vn){\displaystyle (x_{0},v_{0}),\ldots ,(x_{n},v_{n})}at the same positions:v(x):=∑j=0nvjcj(x).{\displaystyle v(x):=\sum _{j=0}^{n}v_{j}c_{j}(x).}
Furthermore, the coefficientscj(x){\displaystyle c_{j}(x)}only depend on the relative spacesxi−xj{\displaystyle x_{i}-x_{j}}between the positions. Thus, given a third set of data whose points are given by the new variablet=ax+b{\displaystyle t=ax+b}(anaffine transformationofx{\displaystyle x}, inverted byx=t−ba{\displaystyle x={\tfrac {t-b}{a}}}):(t0,w0),…,(tj,wj)…,(tn,wn)withtj=axj+b,{\displaystyle (t_{0},w_{0}),\ldots ,(t_{j},w_{j})\ldots ,(t_{n},w_{n})\qquad {\text{with}}\qquad t_{j}=ax_{j}+b,}
we can use a transformed version of the previous coefficient polynomials:
c~j(t):=cj(t−ba)=cj(x),{\displaystyle {\tilde {c}}_{j}(t):=c_{j}({\tfrac {t-b}{a}})=c_{j}(x),}
and write the interpolation polynomial as:
w(t):=∑j=0nwjc~j(t).{\textstyle w(t):=\sum _{j=0}^{n}w_{j}{\tilde {c}}_{j}(t).}
Data points(xj,yj){\displaystyle (x_{j},y_{j})}often haveequally spaced positions, which may be normalized by an affine transformation toxj=j{\displaystyle x_{j}=j}. For example, consider the data points
(0,y0),(1,y1),(2,y2){\displaystyle (0,y_{0}),(1,y_{1}),(2,y_{2})}.
The interpolation polynomial in the Lagrange form is thelinear combination
y(x):=∑j=02yjcj(x)=y0(x−1)(x−2)(0−1)(0−2)+y1(x−0)(x−2)(1−0)(1−2)+y2(x−0)(x−1)(2−0)(2−1)=12y0(x−1)(x−2)−y1(x−0)(x−2)+12y2(x−0)(x−1).{\displaystyle {\begin{aligned}y(x):=\sum _{j=0}^{2}y_{j}c_{j}(x)&=y_{0}{\frac {(x-1)(x-2)}{(0-1)(0-2)}}+y_{1}{\frac {(x-0)(x-2)}{(1-0)(1-2)}}+y_{2}{\frac {(x-0)(x-1)}{(2-0)(2-1)}}\\&={\tfrac {1}{2}}y_{0}(x-1)(x-2)-y_{1}(x-0)(x-2)+{\tfrac {1}{2}}y_{2}(x-0)(x-1).\end{aligned}}}
For example,y(3)=y3=y0−3y1+3y2{\displaystyle y(3)=y_{3}=y_{0}-3y_{1}+3y_{2}}andy(1.5)=y1.5=18(−y0+6y1+3y2){\displaystyle y(1.5)=y_{1.5}={\tfrac {1}{8}}(-y_{0}+6y_{1}+3y_{2})}.
The case of equally spaced points can also be treated by themethod of finite differences. The first difference of a sequence of valuesv={vj}j=0∞{\displaystyle v=\{v_{j}\}_{j=0}^{\infty }}is the sequenceΔv=u={uj}j=0∞{\displaystyle \Delta v=u=\{u_{j}\}_{j=0}^{\infty }}defined byuj=vj+1−vj{\displaystyle u_{j}=v_{j+1}-v_{j}}. Iterating this operation gives thenthdifference operationΔnv=u{\displaystyle \Delta ^{n}v=u}, defined explicitly by:uj=∑k=0n(−1)n−k(nk)vj+k,{\displaystyle u_{j}=\sum _{k=0}^{n}(-1)^{n-k}{n \choose k}v_{j+k},}where the coefficients form a signed version of Pascal's triangle, thetriangle of binomial transform coefficients:
A polynomialy(x){\displaystyle y(x)}of degreeddefines a sequence of values at positive integer points,yj=y(j){\displaystyle y_{j}=y(j)}, and the(d+1)th{\displaystyle (d+1)^{\text{th}}}difference of this sequence is identically zero:
Δd+1y=0{\displaystyle \Delta ^{d+1}y=0}.
Thus, given valuesy0,…,yn{\displaystyle y_{0},\ldots ,y_{n}}at equally spaced points, wheren=d+1{\displaystyle n=d+1}, we have:(−1)ny0+(−1)n−1(n1)y1+⋯−(nn−1)yn−1+yn=0.{\displaystyle (-1)^{n}y_{0}+(-1)^{n-1}{\binom {n}{1}}y_{1}+\cdots -{\binom {n}{n-1}}y_{n-1}+y_{n}=0.}For example, 4 equally spaced data pointsy0,y1,y2,y3{\displaystyle y_{0},y_{1},y_{2},y_{3}}of a quadraticy(x){\displaystyle y(x)}obey0=−y0+3y1−3y2+y3{\displaystyle 0=-y_{0}+3y_{1}-3y_{2}+y_{3}}, and solving fory3{\displaystyle y_{3}}gives the same interpolation equation obtained above using the Lagrange method.
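As a quick numerical check of this identity (a Python sketch with illustrative data), the third difference of quadratic samples vanishes, and solving the alternating sum for the last value extrapolates it exactly:

```python
# The (d+1)-th forward difference of samples of a degree-d polynomial
# vanishes, so the alternating binomial identity extrapolates the next value.
from math import comb

def nth_diff(vals):
    """n-th forward difference of n+1 consecutive values (n = len(vals) - 1)."""
    n = len(vals) - 1
    return sum((-1) ** (n - k) * comb(n, k) * v for k, v in enumerate(vals))

quad = [x**2 + 1 for x in range(4)]   # degree-2 samples: 1, 2, 5, 10
print(nth_diff(quad))                 # 0: the third difference vanishes
print(quad[0] - 3 * quad[1] + 3 * quad[2], quad[3])  # 10 10: extrapolation
```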
When interpolating a given functionfby a polynomialpn{\displaystyle p_{n}}of degreenat the nodesx0,...,xnwe get the errorf(x)−pn(x)=f[x0,…,xn,x]∏i=0n(x−xi){\displaystyle f(x)-p_{n}(x)=f[x_{0},\ldots ,x_{n},x]\prod _{i=0}^{n}(x-x_{i})}
wheref[x0,…,xn,x]{\textstyle f[x_{0},\ldots ,x_{n},x]}is the (n+1)stdivided differenceof the data points
(x0,f(x0)),…,(xn,f(xn)),(x,f(x)){\displaystyle (x_{0},f(x_{0})),\ldots ,(x_{n},f(x_{n})),(x,f(x))}.
Furthermore, there is aLagrange remainder formof the error, for a functionfwhich isn+ 1times continuously differentiable on a closed intervalI{\displaystyle I}, and a polynomialpn(x){\displaystyle p_{n}(x)}of degree at mostnthat interpolatesfatn+ 1distinct pointsx0,…,xn∈I{\displaystyle x_{0},\ldots ,x_{n}\in I}. For eachx∈I{\displaystyle x\in I}there existsξ∈I{\displaystyle \xi \in I}such that
f(x)−pn(x)=f(n+1)(ξ)(n+1)!∏i=0n(x−xi).{\displaystyle f(x)-p_{n}(x)={\frac {f^{(n+1)}(\xi )}{(n+1)!}}\prod _{i=0}^{n}(x-x_{i}).}
This error bound suggests choosing the interpolation pointsxito minimize the product|∏(x−xi)|{\textstyle \left|\prod (x-x_{i})\right|}, which is achieved by theChebyshev nodes.
Set the error term asRn(x)=f(x)−pn(x){\textstyle R_{n}(x)=f(x)-p_{n}(x)}, and define an auxiliary function:Y(t)=Rn(t)−Rn(x)W(x)W(t)whereW(t)=∏i=0n(t−xi).{\displaystyle Y(t)=R_{n}(t)-{\frac {R_{n}(x)}{W(x)}}W(t)\qquad {\text{where}}\qquad W(t)=\prod _{i=0}^{n}(t-x_{i}).}Thus:Y(n+1)(t)=Rn(n+1)(t)−Rn(x)W(x)(n+1)!{\displaystyle Y^{(n+1)}(t)=R_{n}^{(n+1)}(t)-{\frac {R_{n}(x)}{W(x)}}\ (n+1)!}
But sincepn(x){\displaystyle p_{n}(x)}is a polynomial of degree at mostn, we haveRn(n+1)(t)=f(n+1)(t){\textstyle R_{n}^{(n+1)}(t)=f^{(n+1)}(t)}, and:Y(n+1)(t)=f(n+1)(t)−Rn(x)W(x)(n+1)!{\displaystyle Y^{(n+1)}(t)=f^{(n+1)}(t)-{\frac {R_{n}(x)}{W(x)}}\ (n+1)!}
Now, sincexiare roots ofRn(t){\displaystyle R_{n}(t)}andW(t){\displaystyle W(t)}, we haveY(x)=Y(xj)=0{\displaystyle Y(x)=Y(x_{j})=0}, which meansYhas at leastn+ 2roots. FromRolle's theorem,Y′(t){\displaystyle Y^{\prime }(t)}has at leastn+ 1roots, and iterativelyY(n+1)(t){\displaystyle Y^{(n+1)}(t)}has at least one rootξin the intervalI. Thus:Y(n+1)(ξ)=f(n+1)(ξ)−Rn(x)W(x)(n+1)!=0{\displaystyle Y^{(n+1)}(\xi )=f^{(n+1)}(\xi )-{\frac {R_{n}(x)}{W(x)}}\ (n+1)!=0}
and:Rn(x)=f(x)−pn(x)=f(n+1)(ξ)(n+1)!∏i=0n(x−xi).{\displaystyle R_{n}(x)=f(x)-p_{n}(x)={\frac {f^{(n+1)}(\xi )}{(n+1)!}}\prod _{i=0}^{n}(x-x_{i}).}
This parallels the reasoning behind the Lagrange remainder term in theTaylor theorem; in fact, the Taylor remainder is a special case of interpolation error when all interpolation nodesxiare identical.[10]Note that the error will be zero whenx=xi{\displaystyle x=x_{i}}for anyi. Thus, the maximum error will occur at some point in the interval between two successive nodes.
In the case of equally spaced interpolation nodes wherexi=a+ih{\displaystyle x_{i}=a+ih}, fori=0,1,…,n,{\displaystyle i=0,1,\ldots ,n,}and whereh=(b−a)/n,{\displaystyle h=(b-a)/n,}the product term in the interpolation error formula can be bound as[11]|∏i=0n(x−xi)|=∏i=0n|x−xi|≤n!4hn+1.{\displaystyle \left|\prod _{i=0}^{n}(x-x_{i})\right|=\prod _{i=0}^{n}\left|x-x_{i}\right|\leq {\frac {n!}{4}}h^{n+1}.}
Thus the error bound can be given as|Rn(x)|≤hn+14(n+1)maxξ∈[a,b]|f(n+1)(ξ)|{\displaystyle \left|R_{n}(x)\right|\leq {\frac {h^{n+1}}{4(n+1)}}\max _{\xi \in [a,b]}\left|f^{(n+1)}(\xi )\right|}
However, this assumes thatf(n+1)(ξ){\displaystyle f^{(n+1)}(\xi )}is dominated byhn+1{\displaystyle h^{n+1}}, i.e.f(n+1)(ξ)hn+1≪1{\displaystyle f^{(n+1)}(\xi )h^{n+1}\ll 1}. In several cases, this is not true and the error actually increases asn→ ∞(seeRunge's phenomenon). That question is treated in the sectionConvergence properties.
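As a numerical illustration of the equispaced bound (a Python sketch; the choices f = sin and n = 4 are arbitrary), the observed maximum interpolation error stays below h^(n+1)/(4(n+1))·max|f^(n+1)|, using max|sin^(5)| ≤ 1:

```python
# Check |R_n(x)| <= h^{n+1} / (4(n+1)) * max|f^{(n+1)}| for f = sin on [0, pi].
import math

def lagrange_value(xs, ys, a):
    total = 0.0
    for j in range(len(xs)):
        w = 1.0
        for i in range(len(xs)):
            if i != j:
                w *= (a - xs[i]) / (xs[j] - xs[i])
        total += ys[j] * w
    return total

a, b, n = 0.0, math.pi, 4
h = (b - a) / n
xs = [a + i * h for i in range(n + 1)]
ys = [math.sin(x) for x in xs]
# sample the error on a fine grid
max_err = max(abs(math.sin(t) - lagrange_value(xs, ys, t))
              for t in (a + (b - a) * k / 1000 for k in range(1001)))
bound = h ** (n + 1) / (4 * (n + 1))   # max|sin^{(5)}| <= 1
print(max_err <= bound)  # True
```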
We fix the interpolation nodesx0, ...,xnand an interval [a,b] containing all the interpolation nodes. The process of interpolation maps the functionfto a polynomialp. This defines a mappingXfrom the spaceC([a,b]) of all continuous functions on [a,b] to itself. The mapXis linear and it is aprojectionon the subspaceP(n){\displaystyle P(n)}of polynomials of degreenor less.
The Lebesgue constantLis defined as theoperator normofX. One has (a special case ofLebesgue's lemma):‖f−X(f)‖≤(L+1)‖f−p∗‖.{\displaystyle \left\|f-X(f)\right\|\leq (L+1)\left\|f-p^{*}\right\|.}
In other words, the interpolation polynomial is at most a factor (L+ 1) worse than the best possible approximation. This suggests that we look for a set of interpolation nodes that makesLsmall. In particular, we have forChebyshev nodes:L≤2πlog(n+1)+1.{\displaystyle L\leq {\frac {2}{\pi }}\log(n+1)+1.}
We conclude again that Chebyshev nodes are a very good choice for polynomial interpolation, as the growth innis exponential for equidistant nodes. However, those nodes are not optimal.
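The Lebesgue constant can be estimated numerically by maximizing the sum of absolute Lagrange basis values over a fine grid (the grid resolution below is an arbitrary choice; the estimate is a lower bound of the true maximum). The Python sketch compares equidistant and Chebyshev nodes for n = 10:

```python
# Grid estimate of the Lebesgue constant L = max_x sum_j |l_j(x)|.
import math

def lebesgue_constant(xs, m=2000):
    lo, hi = min(xs), max(xs)
    best = 0.0
    for k in range(m + 1):
        x = lo + (hi - lo) * k / m
        s = 0.0
        for j in range(len(xs)):
            w = 1.0
            for i in range(len(xs)):
                if i != j:
                    w *= (x - xs[i]) / (xs[j] - xs[i])
            s += abs(w)
        best = max(best, s)
    return best

n = 10
equi = [-1 + 2 * i / n for i in range(n + 1)]
cheb = [math.cos((2 * i + 1) * math.pi / (2 * (n + 1))) for i in range(n + 1)]
print(lebesgue_constant(equi) > lebesgue_constant(cheb))          # True
print(lebesgue_constant(cheb) <= 2 / math.pi * math.log(n + 1) + 1)  # True
```

For these 11 nodes the equidistant constant is roughly an order of magnitude larger than the Chebyshev one, matching the exponential-versus-logarithmic growth described above.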
It is natural to ask, for which classes of functions and for which interpolation nodes the sequence of interpolating polynomials converges to the interpolated function asn→ ∞? Convergence may be understood in different ways, e.g. pointwise, uniform or in some integral norm.
The situation is rather bad for equidistant nodes, in that uniform convergence is not even guaranteed for infinitely differentiable functions. One classical example, due toCarl Runge, is the functionf(x) = 1 / (1 +x2) on the interval[−5, 5]. The interpolation error||f−pn||∞grows without bound asn→ ∞. Another example is the functionf(x) = |x| on the interval[−1, 1], for which the interpolating polynomials do not even converge pointwise except at the three pointsx= ±1, 0.[12]
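Runge's example is easy to reproduce numerically. The Python sketch below (grid resolution chosen arbitrarily) shows the maximum error growing as more equidistant nodes are used:

```python
# Runge's phenomenon: interpolating f(x) = 1/(1+x^2) on [-5, 5] at
# equidistant nodes; the maximum error grows with the number of nodes.
def lagrange_value(xs, ys, a):
    total = 0.0
    for j in range(len(xs)):
        w = 1.0
        for i in range(len(xs)):
            if i != j:
                w *= (a - xs[i]) / (xs[j] - xs[i])
        total += ys[j] * w
    return total

def runge_max_error(n, m=800):
    xs = [-5 + 10 * i / n for i in range(n + 1)]
    ys = [1 / (1 + x * x) for x in xs]
    return max(abs(1 / (1 + t * t) - lagrange_value(xs, ys, t))
               for t in (-5 + 10 * k / m for k in range(m + 1)))

print(runge_max_error(5) < runge_max_error(10) < runge_max_error(15))  # True
```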
One might think that better convergence properties may be obtained by choosing different interpolation nodes. The following result seems to give a rather encouraging answer:
Theorem—For any functionf(x) continuous on an interval [a,b] there exists a table of nodes for which the sequence of interpolating polynomialspn(x){\displaystyle p_{n}(x)}converges tof(x) uniformly on [a,b].
It is clear that the sequence of polynomials of best approximationpn∗(x){\displaystyle p_{n}^{*}(x)}converges tof(x) uniformly (due to theWeierstrass approximation theorem). Now we have only to show that eachpn∗(x){\displaystyle p_{n}^{*}(x)}may be obtained by means of interpolation on certain nodes. But this is true due to a special property of polynomials of best approximation known from theequioscillation theorem. Specifically, we know that such polynomials should intersectf(x) at leastn+ 1times. Choosing the points of intersection as interpolation nodes we obtain the interpolating polynomial coinciding with the best approximation polynomial.
The defect of this method, however, is that the interpolation nodes must be calculated anew for each new functionf(x), and the algorithm is hard to implement numerically. Does there exist a single table of nodes for which the sequence of interpolating polynomials converges to any continuous functionf(x)? The answer is unfortunately negative:
Theorem—For any table of nodes there is a continuous functionf(x) on an interval [a,b] for which the sequence of interpolating polynomials diverges on [a,b].[13]
The proof essentially uses the lower bound estimation of the Lebesgue constant, which we defined above to be the operator norm ofXn(whereXnis the projection operator on Πn). Now we seek a table of nodes for which
limn→∞Xnf=f,for everyf∈C([a,b]).{\displaystyle \lim _{n\to \infty }X_{n}f=f,{\text{ for every }}f\in C([a,b]).}
Due to theBanach–Steinhaus theorem, this is only possible when norms ofXnare uniformly bounded, which cannot be true since we know that
‖Xn‖≥2πlog(n+1)+C.{\displaystyle \|X_{n}\|\geq {\tfrac {2}{\pi }}\log(n+1)+C.}
For example, if equidistant points are chosen as interpolation nodes, the function fromRunge's phenomenondemonstrates divergence of such interpolation. Note that this function is not only continuous but even infinitely differentiable on[−1, 1]. For betterChebyshev nodes, however, such an example is much harder to find due to the following result:
Theorem—For everyabsolutely continuousfunction on[−1, 1]the sequence of interpolating polynomials constructed on Chebyshev nodes converges tof(x) uniformly.[14]
Runge's phenomenonshows that for high values ofn, the interpolation polynomial may oscillate wildly between the data points. This problem is commonly resolved by the use ofspline interpolation. Here, the interpolant is not a polynomial but aspline: a chain of several polynomials of a lower degree.
Interpolation ofperiodic functionsbyharmonicfunctions is accomplished byFourier transform. This can be seen as a form of polynomial interpolation with harmonic base functions, seetrigonometric interpolationandtrigonometric polynomial.
Hermite interpolationproblems are those where not only the values of the polynomialpat the nodes are given, but also all derivatives up to a given order. This turns out to be equivalent to a system of simultaneous polynomial congruences, and may be solved by means of theChinese remainder theoremfor polynomials.Birkhoff interpolationis a further generalization where only derivatives of some orders are prescribed, not necessarily all orders from 0 to ak.
Collocation methodsfor the solution of differential and integral equations are based on polynomial interpolation.
The technique ofrational function modelingis a generalization that considers ratios of polynomial functions.
Finally,multivariate interpolationgeneralizes these ideas to functions of several variables.
|
https://en.wikipedia.org/wiki/Polynomial_interpolation
|
In statistics,response surface methodology(RSM) explores the relationships between severalexplanatory variablesand one or moreresponse variables. RSM is an empirical modeling approach that uses mathematical and statistical techniques to relate input variables, otherwise known as factors, to the response. RSM became widely used because the available alternatives, such as fully theoretical models, could be cumbersome, time-consuming, inefficient, error-prone, and unreliable. The method was introduced byGeorge E. P. Boxand K. B. Wilson in 1951. The main idea of RSM is to use a sequence ofdesigned experimentsto obtain an optimal response. Box and Wilson suggest using asecond-degreepolynomialmodel to do this. They acknowledge that this model is only an approximation, but they use it because such a model is easy to estimate and apply, even when little is known about the process.
Statistical approaches such as RSM can be employed to maximize the production of a particular substance by optimizing the operational factors. Recently, RSM with properdesign of experiments(DoE) has become widely used for formulation optimization.[1]In contrast to conventional methods, statistical techniques can determine the interactions among process variables.[2]
An easy way to estimate a first-degree polynomial model is to use afactorial experimentor afractional factorial design. This is sufficient to determine which explanatory variables affect the response variable(s) of interest. Once it is suspected that only significant explanatory variables are left, then a more complicated design, such as acentral composite designcan be implemented to estimate a second-degree polynomial model, which is still only an approximation at best. However, the second-degree model can be used to optimize (maximize, minimize, or attain a specific target for) the response variable(s) of interest.
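As an illustrative sketch (the coded factor levels and response values below are hypothetical), a second-degree model for two factors can be estimated by ordinary least squares from runs of a face-centred central composite design:

```python
# Fit y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2 by least
# squares to a two-factor face-centred central composite design.
import numpy as np

# coded levels: 2^2 factorial corners, four axial points, two center runs
X1 = np.array([-1, 1, -1, 1, -1, 1, 0, 0, 0, 0])
X2 = np.array([-1, -1, 1, 1, 0, 0, -1, 1, 0, 0])
y = np.array([54, 45, 32, 47, 50, 53, 47, 51, 41, 39.])  # hypothetical responses

A = np.column_stack([np.ones_like(X1), X1, X2, X1 * X2, X1**2, X2**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)  # fitted b0, b1, b2, b12, b11, b22
```

The fitted quadratic surface can then be examined for a stationary point (a candidate optimum) or used to choose the direction of the next set of experimental runs.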
Cubic designs are discussed by Kiefer, by Atkinson, Donev, and Tobias and by Hardin and Sloane.
Spherical designsare discussed by Kiefer and by Hardin and Sloane.
Mixture experiments are discussed in many books on thedesign of experiments, and in the response-surface methodology textbooks of Box and Draper and of Atkinson, Donev and Tobias. An extensive discussion and survey appears in the advanced textbook by John Cornell.
Some extensions of response surface methodology deal with the multiple response problem. Multiple response variables create difficulty because what is optimal for one response may not be optimal for other responses. Other extensions are used to reduce variability in a single response while targeting a specific value, or attaining a near maximum or minimum while preventing variability in that response from getting too large.
Response surface methodology uses statistical models, and therefore practitioners need to be aware that even the best statistical model is an approximation to reality. In practice, both the models and the parameter values are unknown, and subject to uncertainty on top of ignorance. Of course, an estimated optimum point need not be optimum in reality, because of the errors of the estimates and of the inadequacies of the model.
Nonetheless, response surface methodology has an effective track-record of helping researchers improve products and services: For example, Box's original response-surface modeling enabled chemical engineers to improve a process that had been stuck at a saddle-point for years. The engineers had not been able to afford to fit a cubic three-level design to estimate a quadratic model, and theirbiasedlinear-models estimated the gradient to be zero. Box's design reduced the costs of experimentation so that a quadratic model could be fit, which led to a (long-sought) ascent direction.[3][4]
|
https://en.wikipedia.org/wiki/Response_surface_methodology
|
Smoothing splinesare function estimates,f^(x){\displaystyle {\hat {f}}(x)}, obtained from a set of noisy observationsyi{\displaystyle y_{i}}of the targetf(xi){\displaystyle f(x_{i})}, in order to balance a measure ofgoodness of fitoff^(xi){\displaystyle {\hat {f}}(x_{i})}toyi{\displaystyle y_{i}}with a derivative based measure of the smoothness off^(x){\displaystyle {\hat {f}}(x)}. They provide a means for smoothing noisyxi,yi{\displaystyle x_{i},y_{i}}data. The most familiar example is the cubic smoothing spline, but there are many other possibilities, including for the case wherex{\displaystyle x}is a vector quantity.
Let{xi,Yi:i=1,…,n}{\displaystyle \{x_{i},Y_{i}:i=1,\dots ,n\}}be a set of observations, modeled by the relationYi=f(xi)+ϵi{\displaystyle Y_{i}=f(x_{i})+\epsilon _{i}}where theϵi{\displaystyle \epsilon _{i}}are independent, zero mean random variables. The cubic smoothing spline estimatef^{\displaystyle {\hat {f}}}of the functionf{\displaystyle f}is defined to be the unique minimizer, in theSobolev spaceW22{\displaystyle W_{2}^{2}}on a compact interval, of[1][2]

∑i=1n(Yi−f^(xi))2+λ∫f^″(x)2dx.{\displaystyle \sum _{i=1}^{n}(Y_{i}-{\hat {f}}(x_{i}))^{2}+\lambda \int {\hat {f}}''(x)^{2}\,dx.}
Remarks: the smoothing parameterλ≥0{\displaystyle \lambda \geq 0}controls the trade-off between fidelity to the data and roughness of the function estimate; asλ→0{\displaystyle \lambda \to 0}the estimate converges to the interpolating natural cubic spline, while asλ→∞{\displaystyle \lambda \to \infty }it converges to the linear least-squares fit.
It is useful to think of fitting a smoothing spline in two steps: first, estimate the vector of fitted valuesm^=(f^(x1),…,f^(xn))T{\displaystyle {\hat {m}}=({\hat {f}}(x_{1}),\ldots ,{\hat {f}}(x_{n}))^{T}}at the data points; then, derivef^(x){\displaystyle {\hat {f}}(x)}at all other points by interpolating between these fitted values.
Now, treat the second step first.
Given the vectorm^=(f^(x1),…,f^(xn))T{\displaystyle {\hat {m}}=({\hat {f}}(x_{1}),\ldots ,{\hat {f}}(x_{n}))^{T}}of fitted values, the sum-of-squares part of the spline criterion is fixed. It remains only to minimize∫f^″(x)2dx{\displaystyle \int {\hat {f}}''(x)^{2}\,dx}, and the minimizer is a natural cubicsplinethat interpolates the points(xi,f^(xi)){\displaystyle (x_{i},{\hat {f}}(x_{i}))}. This interpolating spline is a linear operator, and can be written in the form

f^(x)=∑i=1nf^(xi)fi(x){\displaystyle {\hat {f}}(x)=\sum _{i=1}^{n}{\hat {f}}(x_{i})f_{i}(x)}
wherefi(x){\displaystyle f_{i}(x)}are a set of spline basis functions. As a result, the roughness penalty has the form

∫f^″(x)2dx=m^TAm^,{\displaystyle \int {\hat {f}}''(x)^{2}\,dx={\hat {m}}^{T}A{\hat {m}},}
where the elements ofAare∫fi″(x)fj″(x)dx{\displaystyle \int f_{i}''(x)f_{j}''(x)dx}. The basis functions, and hence the matrixA, depend on the configuration of the predictor variablesxi{\displaystyle x_{i}}, but not on the responsesYi{\displaystyle Y_{i}}orm^{\displaystyle {\hat {m}}}.
Ais ann×nmatrix given byA=ΔTW−1Δ{\displaystyle A=\Delta ^{T}W^{-1}\Delta }.
Δis an(n-2)×nmatrix of second differences with elements:
Δii=1/hi{\displaystyle \Delta _{ii}=1/h_{i}},Δi,i+1=−1/hi−1/hi+1{\displaystyle \Delta _{i,i+1}=-1/h_{i}-1/h_{i+1}},Δi,i+2=1/hi+1{\displaystyle \Delta _{i,i+2}=1/h_{i+1}}
Wis an(n-2)×(n-2)symmetric tri-diagonal matrix with elements:
Wi−1,i=Wi,i−1=hi/6{\displaystyle W_{i-1,i}=W_{i,i-1}=h_{i}/6},Wii=(hi+hi+1)/3{\displaystyle W_{ii}=(h_{i}+h_{i+1})/3}andhi=ξi+1−ξi{\displaystyle h_{i}=\xi _{i+1}-\xi _{i}}, the distances between successive knots (or x values).
Now back to the first step. The penalized sum-of-squares can be written as

‖Y−m^‖2+λm^TAm^,{\displaystyle \|Y-{\hat {m}}\|^{2}+\lambda {\hat {m}}^{T}A{\hat {m}},}
whereY=(Y1,…,Yn)T{\displaystyle Y=(Y_{1},\ldots ,Y_{n})^{T}}.
Minimizing overm^{\displaystyle {\hat {m}}}by differentiating with respect tom^{\displaystyle {\hat {m}}}and setting the derivative to zero results in:−2{Y−m^}+2λAm^=0{\displaystyle -2\{Y-{\hat {m}}\}+2\lambda A{\hat {m}}=0}[6]and hencem^=(I+λA)−1Y.{\displaystyle {\hat {m}}=(I+\lambda A)^{-1}Y.}
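The fit at the data points can be computed directly from the matrix definitions above. The following Python sketch (illustrative; the knot spacing and noisy data are hypothetical) builds Δ, W and A and solves the linear system for the fitted values:

```python
# Cubic smoothing spline fit at the data points: build the second-difference
# matrix Delta and tri-diagonal W, form A = Delta^T W^{-1} Delta, and solve
# (I + lambda*A) m_hat = Y.
import numpy as np

def smoothing_spline_fit(x, Y, lam):
    n = len(x)
    h = np.diff(x)                      # knot spacings h_i
    Delta = np.zeros((n - 2, n))
    W = np.zeros((n - 2, n - 2))
    for i in range(n - 2):              # 0-based row index
        Delta[i, i] = 1 / h[i]
        Delta[i, i + 1] = -1 / h[i] - 1 / h[i + 1]
        Delta[i, i + 2] = 1 / h[i + 1]
        W[i, i] = (h[i] + h[i + 1]) / 3
        if i > 0:
            W[i, i - 1] = W[i - 1, i] = h[i] / 6
    A = Delta.T @ np.linalg.solve(W, Delta)   # A = Delta^T W^{-1} Delta
    return np.linalg.solve(np.eye(n) + lam * A, Y)

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
Y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(20)
m_hat = smoothing_spline_fit(x, Y, lam=0.001)
```

With λ = 0 the fit reproduces the data exactly, and larger λ pulls the fitted values toward a straight line.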
De Boor's approach exploits the same idea, of finding a balance between having a smooth curve and being close to the given data,[7]by minimizing

p∑i=1n(Yi−f^(xi)δi)2+(1−p)∫(f^(m)(x))2dx,{\displaystyle p\sum _{i=1}^{n}\left({\frac {Y_{i}-{\hat {f}}(x_{i})}{\delta _{i}}}\right)^{2}+(1-p)\int ({\hat {f}}^{(m)}(x))^{2}\,dx,}
wherep{\displaystyle p}is a parameter called the smooth factor, belonging to the interval[0,1]{\displaystyle [0,1]}, andδi,i=1,…,n{\displaystyle \delta _{i},\;i=1,\dots ,n}are the quantities controlling the extent of smoothing (they represent the weightδi−2{\displaystyle \delta _{i}^{-2}}of each pointYi{\displaystyle Y_{i}}). In practice, sincecubic splinesare mostly used,m{\displaystyle m}is usually2{\displaystyle 2}. The solution form=2{\displaystyle m=2}was proposed byChristian Reinschin 1967.[8]Form=2{\displaystyle m=2}, asp{\displaystyle p}approaches1{\displaystyle 1},f^{\displaystyle {\hat {f}}}converges to the "natural" spline interpolant to the given data,[7]and asp{\displaystyle p}approaches0{\displaystyle 0},f^{\displaystyle {\hat {f}}}converges to a straight line (the smoothest curve). Since finding a suitable value ofp{\displaystyle p}is a matter of trial and error, a redundant constantS{\displaystyle S}was introduced for convenience.[8]S{\displaystyle S}is used to numerically determine the value ofp{\displaystyle p}so that the functionf^{\displaystyle {\hat {f}}}meets the following condition:

∑i=1n(Yi−f^(xi)δi)2≤S.{\displaystyle \sum _{i=1}^{n}\left({\frac {Y_{i}-{\hat {f}}(x_{i})}{\delta _{i}}}\right)^{2}\leq S.}
The algorithm described by de Boor starts withp=0{\displaystyle p=0}and increasesp{\displaystyle p}until the condition is met.[7]Ifδi{\displaystyle \delta _{i}}is an estimation of the standard deviation forYi{\displaystyle Y_{i}}, the constantS{\displaystyle S}is recommended to be chosen in the interval[n−2n,n+2n]{\displaystyle \left[n-{\sqrt {2n}},n+{\sqrt {2n}}\right]}. HavingS=0{\displaystyle S=0}means the solution is the "natural" spline interpolant.[8]IncreasingS{\displaystyle S}means we obtain a smoother curve by getting farther from the given data.
There are two main classes of method for generalizing from smoothing with respect to a scalarx{\displaystyle x}to smoothing with respect to a vectorx{\displaystyle x}. The first approach simply generalizes the spline smoothing penalty to the multidimensional setting. For example, if trying to estimatef(x,z){\displaystyle f(x,z)}we might use theThin plate splinepenalty and find thef^(x,z){\displaystyle {\hat {f}}(x,z)}minimizing

∑i=1n(yi−f^(xi,zi))2+λ∫∫((∂2f^∂x2)2+2(∂2f^∂x∂z)2+(∂2f^∂z2)2)dxdz.{\displaystyle \sum _{i=1}^{n}(y_{i}-{\hat {f}}(x_{i},z_{i}))^{2}+\lambda \int \!\!\int \left(\left({\frac {\partial ^{2}{\hat {f}}}{\partial x^{2}}}\right)^{2}+2\left({\frac {\partial ^{2}{\hat {f}}}{\partial x\,\partial z}}\right)^{2}+\left({\frac {\partial ^{2}{\hat {f}}}{\partial z^{2}}}\right)^{2}\right)dx\,dz.}
The thin plate spline approach can be generalized to smoothing with respect to more than two dimensions and to other orders of differentiation in the penalty.[1]As the dimension increases there are some restrictions on the smallest order of differential that can be used,[1]but actually Duchon's original paper,[9]gives slightly more complicated penalties that can avoid this restriction.
The thin plate splines are isotropic, meaning that if we rotate thex,z{\displaystyle x,z}co-ordinate system the estimate will not change, but also that we are assuming that the same level of smoothing is appropriate in all directions. This is often considered reasonable when smoothing with respect to spatial location, but in many other cases isotropy is not an appropriate assumption and can lead to sensitivity to apparently arbitrary choices of measurement units. For example, if smoothing with respect to distance and time, an isotropic smoother will give different results if distance is measured in metres and time in seconds than if the units are changed to centimetres and hours.
The second class of generalizations to multi-dimensional smoothing deals directly with this scale invariance issue using tensor product spline constructions.[10][11][12]Such splines have smoothing penalties with multiple smoothing parameters, which is the price that must be paid for not assuming that the same degree of smoothness is appropriate in all directions.
Smoothing splines are related to, but distinct from, regression splines and penalized splines.
Source code forsplinesmoothing can be found in the examples fromCarl de Boor'sbookA Practical Guide to Splines. The examples are in theFortranprogramming language. The updated sources are also available on Carl de Boor's official site[1].
|
https://en.wikipedia.org/wiki/Smoothing_spline
|
A variable is considereddependentif it depends on (or is hypothesized to depend on) anindependent variable. Dependent variables are studied under the supposition or demand that they depend, by some law or rule (e.g., by amathematical function), on the values of other variables. Independent variables, on the other hand, are not seen as depending on any other variable in the scope of the experiment in question.[a]Rather, they are controlled by the experimenter.
In mathematics, afunctionis a rule for taking an input (in the simplest case, a number or set of numbers)[2]and providing an output (which may also be a number).[2]A symbol that stands for an arbitrary input is called anindependent variable, while a symbol that stands for an arbitrary output is called adependent variable.[3]The most common symbol for the input isx, and the most common symbol for the output isy; the function itself is commonly writteny=f(x).[3][4]
It is possible to have multiple independent variables or multiple dependent variables. For instance, inmultivariable calculus, one often encounters functions of the formz=f(x,y), wherezis a dependent variable andxandyare independent variables.[5]Functions with multiple outputs are often referred to asvector-valued functions.
Inmathematical modeling, the relationship between the set of dependent variables and set of independent variables is studied.[citation needed]
In the simplestochasticlinear modelyi= a + bxi+eithe termyiis theith value of the dependent variable andxiis theith value of the independent variable. The termeiis known as the "error" and contains the variability of the dependent variable not explained by the independent variable.[citation needed]
With multiple independent variables, the model is yi = a + b1xi,1 + b2xi,2 + ... + bnxi,n + ei, where n is the number of independent variables and each independent variable has its own coefficient.[citation needed]
In statistics, more specifically in linear regression, a scatter plot of data is generated with X as the independent variable and Y as the dependent variable. This is also called a bivariate dataset, (x1,y1), (x2,y2), ..., (xn,yn). The simple linear regression model takes the form Yi = α + βxi + Ui, for i = 1, 2, ..., n. In this case, U1, ..., Un are independent random variables. This occurs when the measurements do not influence each other. Through propagation of independence, the independence of the Ui implies independence of the Yi, even though each Yi has a different expectation value. Each Ui has an expectation value of 0 and a variance of σ2.[6] The expectation of Yi can be derived as follows:[6]
E[Yi]=E[α+βxi+Ui]=α+βxi+E[Ui]=α+βxi.{\displaystyle \operatorname {E} [Y_{i}]=\operatorname {E} [\alpha +\beta x_{i}+U_{i}]=\alpha +\beta x_{i}+\operatorname {E} [U_{i}]=\alpha +\beta x_{i}.}
The line of best fit for the bivariate dataset takes the form y = α + βx and is called the regression line. α and β correspond to the intercept and slope, respectively.[6]
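As a concrete illustration, the least-squares estimates of the intercept α and slope β of the regression line can be computed directly from the data. This is a minimal pure-Python sketch with made-up numbers, not data from any source:

```python
# Least-squares estimates of the intercept (alpha) and slope (beta)
# for the simple model y_i = alpha + beta * x_i + e_i.
def fit_line(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    beta = sxy / sxx
    alpha = my - beta * mx
    return alpha, beta

# With no error term, the exact line y = 1 + 2x is recovered.
alpha, beta = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(alpha, beta)  # 1.0 2.0
```
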
In an experiment, the variable manipulated by an experimenter is called the independent variable.[7] The dependent variable is the event expected to change when the independent variable is manipulated.[8]
Indata miningtools (formultivariate statisticsandmachine learning), the dependent variable is assigned aroleastarget variable(or in some tools aslabel attribute), while an independent variable may be assigned a role asregular variable[9]or feature variable. Known values for the target variable are provided for the training data set andtest dataset, but should be predicted for other data. The target variable is used insupervised learningalgorithms but not in unsupervised learning.
Depending on the context, an independent variable is sometimes called a "predictor variable", "regressor", "covariate", "manipulated variable", "explanatory variable", "exposure variable" (seereliability theory), "risk factor" (seemedical statistics), "feature" (inmachine learningandpattern recognition) or "input variable".[10][11]Ineconometrics, the term "control variable" is usually used instead of "covariate".[12][13][14][15][16]
"Explanatory variable" is preferred by some authors over "independent variable" when the quantities treated as independent variables may not be statistically independent or independently manipulable by the researcher.[17][18]If the independent variable is referred to as an "explanatory variable" then the term "response variable" is preferred by some authors for the dependent variable.[11][17][18]
Depending on the context, a dependent variable is sometimes called a "response variable", "regressand", "criterion", "predicted variable", "measured variable", "explained variable", "experimental variable", "responding variable", "outcome variable", "output variable", "target" or "label".[11]In economics, the term "endogenous variable" usually refers to the dependent variable.
"Explained variable" is preferred by some authors over "dependent variable" when the quantities treated as "dependent variables" may not be statistically dependent.[19]If the dependent variable is referred to as an "explained variable" then the term "predictor variable" is preferred by some authors for the independent variable.[19]
An example is provided by the analysis of trend in sea level byWoodworth (1987). Here the dependent variable (and variable of most interest) was the annual mean sea level at a given location for which a series of yearly values were available. The primary independent variable was time. Use was made of a covariate consisting of yearly values of annual mean atmospheric pressure at sea level. The results showed that inclusion of the covariate allowed improved estimates of the trend against time to be obtained, compared to analyses which omitted the covariate.
A variable may be thought to alter the dependent or independent variables, but may not actually be the focus of the experiment. Such a variable is kept constant or monitored to try to minimize its effect on the experiment, and may be designated as a "controlled variable", "control variable", or "fixed variable".
Extraneous variables, if included in aregression analysisas independent variables, may aid a researcher with accurate response parameter estimation,prediction, andgoodness of fit, but are not of substantive interest to thehypothesisunder examination. For example, in a study examining the effect of post-secondary education on lifetime earnings, some extraneous variables might be gender, ethnicity, social class, genetics, intelligence, age, and so forth. A variable is extraneous only when it can be assumed (or shown) to influence thedependent variable. If included in a regression, it can improve thefit of the model. If it is excluded from the regression and if it has a non-zerocovariancewith one or more of the independent variables of interest, its omission willbiasthe regression's result for the effect of that independent variable of interest. This effect is calledconfoundingoromitted variable bias; in these situations, design changes and/or statistical control of the variable are necessary.
Extraneous variables are often classified into three types:
In modelling, variability that is not covered by the independent variable is designated byeI{\displaystyle e_{I}}and is known as the "residual", "side effect", "error", "unexplained share", "residual variable", "disturbance", or "tolerance".
https://en.wikipedia.org/wiki/Covariate
Instatistics,datatransformationis the application of adeterministicmathematicalfunctionto each point in adataset—that is, each data pointziis replaced with the transformed valueyi=f(zi), wherefis a function. Transforms are usually applied so that the data appear to more closely meet the assumptions of astatistical inferenceprocedure that is to be applied, or to improve the interpretability or appearance ofgraphs.
Nearly always, the function that is used to transform the data isinvertible, and generally iscontinuous. The transformation is usually applied to a collection of comparable measurements. For example, if we are working with data on people's incomes in somecurrencyunit, it would be common to transform each person's income value by thelogarithmfunction.
Guidance for how data should be transformed, or whether a transformation should be applied at all, should come from the particular statistical analysis to be performed. For example, a simple way to construct an approximate 95%confidence intervalfor the population mean is to take thesample meanplus or minus twostandard errorunits. However, the constant factor 2 used here is particular to thenormal distribution, and is only applicable if the sample mean varies approximately normally. Thecentral limit theoremstates that in many situations, the sample mean does vary normally if the sample size is reasonably large. However, if thepopulationis substantiallyskewedand the sample size is at most moderate, the approximation provided by the central limit theorem can be poor, and the resulting confidence interval will likely have the wrongcoverage probability. Thus, when there is evidence of substantial skew in the data, it is common to transform the data to asymmetricdistribution[1]before constructing a confidence interval. If desired, the confidence interval for the quantiles (such as the median) can then be transformed back to the original scale using the inverse of the transformation that was applied to the data.[2][3]
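A sketch of this back-transformation idea, using the Python standard library only and hypothetical skewed data (not from any source): log-transform the data, form the mean plus or minus two standard errors on the log scale, then map the interval endpoints back with the exponential (the inverse of the log transform).

```python
import math
import statistics

# Hypothetical right-skewed positive data (e.g., incomes in some unit).
data = [1.2, 1.5, 2.0, 2.2, 3.1, 4.0, 5.5, 9.0, 15.0, 40.0]
logs = [math.log(x) for x in data]

n = len(logs)
mean = statistics.fmean(logs)
se = statistics.stdev(logs) / math.sqrt(n)  # standard error on the log scale
lo, hi = mean - 2 * se, mean + 2 * se

# Back-transform the endpoints to obtain an interval on the original scale.
print(math.exp(lo), math.exp(hi))
```

Because the logarithm is monotone, transforming the endpoints yields an interval for a quantile (here, roughly the median) of the original distribution.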
Data can also be transformed to make them easier to visualize. For example, suppose we have a scatterplot in which the points are the countries of the world, and the data values being plotted are the land area and population of each country. If the plot is made using untransformed data (e.g. square kilometers for area and the number of people for population), most of the countries would be plotted in a tight cluster of points in the lower left corner of the graph. The few countries with very large areas and/or populations would be spread thinly around most of the graph's area. Simply rescaling units (e.g., to thousand square kilometers, or to millions of people) will not change this. However, followinglogarithmictransformations of both area and population, the points will be spread more uniformly in the graph.
Another reason for applying data transformation is to improve interpretability, even if no formal statistical analysis or visualization is to be performed. For example, suppose we are comparing cars in terms of their fuel economy. These data are usually presented as "kilometers per liter" or "miles per gallon". However, if the goal is to assess how much additional fuel a person would use in one year when driving one car compared to another, it is more natural to work with the data transformed by applying thereciprocal function, yielding liters per kilometer, or gallons per mile.
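The reciprocal transformation described above can be sketched in a few lines (hypothetical fuel-economy figures, not from the source); litres per kilometre is additive over distance, which is what makes the yearly comparison natural:

```python
# Fuel economy quoted as km per litre; the reciprocal gives litres per km.
# Here we scale to litres per 100 km, a common reporting unit.
def litres_per_100km(km_per_litre):
    return 100.0 / km_per_litre

car_a = litres_per_100km(20.0)  # 5.0 L / 100 km
car_b = litres_per_100km(10.0)  # 10.0 L / 100 km

# Extra fuel used driving 15000 km per year in car B rather than car A
# (15000 km = 150 blocks of 100 km):
extra = (car_b - car_a) * 150
print(extra)  # 750.0 litres
```
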
Data transformation may be used as a remedial measure to make data suitable for modeling withlinear regressionif the original data violates one or more assumptions of linear regression.[4]For example, the simplest linear regression models assume alinearrelationship between theexpected valueofY(theresponse variableto be predicted) and eachindependent variable(when the other independent variables are held fixed). If linearity fails to hold, even approximately, it is sometimes possible to transform either the independent or dependent variables in the regression model to improve the linearity.[5]For example, addition of quadratic functions of the original independent variables may lead to a linear relationship withexpected valueofY,resulting in apolynomial regressionmodel, a special case of linear regression.
Another assumption of linear regression ishomoscedasticity, that is thevarianceoferrorsmust be the same regardless of the values of predictors. If this assumption is violated (i.e. if the data isheteroscedastic), it may be possible to find a transformation ofYalone, or transformations of bothX(thepredictor variables) andY, such that the homoscedasticity assumption (in addition to the linearity assumption) holds true on the transformed variables[5]and linear regression may therefore be applied on these.
Yet another application of data transformation is to address the problem of lack ofnormalityin error terms. Univariate normality is not needed forleast squaresestimates of the regression parameters to be meaningful (seeGauss–Markov theorem). However confidence intervals andhypothesis testswill have better statistical properties if the variables exhibitmultivariate normality. Transformations that stabilize the variance of error terms (i.e. those that address heteroscedasticity) often also help make the error terms approximately normal.[5][6]
Equation (linear):Y=a+bX{\displaystyle Y=a+bX}
Equation (exponential / log-level):log(Y)=a+bX{\displaystyle \log(Y)=a+bX}
Equation (logarithmic / level-log):Y=a+blog(X){\displaystyle Y=a+b\log(X)}
Equation (power / log-log):log(Y)=a+blog(X){\displaystyle \log(Y)=a+b\log(X)}
Generalized linear models(GLMs) provide a flexible generalization of ordinary linear regression that allows for response variables that have error distribution models other than a normal distribution. GLMs allow the linear model to be related to the response variable via a link function and allow the magnitude of the variance of each measurement to be a function of its predicted value.[8][9]
Thelogarithmtransformationandsquare roottransformationare commonly used for positive data, and themultiplicative inversetransformation(reciprocal transformation) can be used for non-zero data. Thepower transformationis a family of transformations parameterized by a value λ that includes the logarithm (λ = 0), square root, and multiplicative inverse (λ = −1) transformations as special cases. To approach data transformation systematically, it is possible to usestatistical estimationtechniques to estimate the parameter λ in the power transformation, thereby identifying the transformation that is approximately the most appropriate in a given setting. Since the power transformation family also includes the identity transformation, this approach can also indicate whether it would be best to analyze the data without a transformation. In regression analysis, this approach is known as theBox–Cox transformation.
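The Box–Cox form of the power family can be sketched directly from its definition, with the λ = 0 case handled as the logarithm (the limit of the general expression as λ → 0):

```python
import math

# Box-Cox power transformation for positive x:
#   y = (x**lam - 1) / lam   for lam != 0
#   y = log(x)               for lam == 0  (the lam -> 0 limit)
def box_cox(x, lam):
    if lam == 0:
        return math.log(x)
    return (x ** lam - 1.0) / lam

print(box_cox(4.0, 1.0))  # 3.0   (identity, up to a shift)
print(box_cox(4.0, 0.5))  # 2.0   (shifted/scaled square root)
print(box_cox(4.0, 0.0))  # log(4)
```

In practice λ is estimated from the data (e.g., by maximum likelihood); the sketch above shows only the transformation itself.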
The reciprocal transformation, some power transformations such as theYeo–Johnson transformation, and certain other transformations such as applying theinverse hyperbolic sine, can be meaningfully applied to data that include both positive and negative values[10](the power transformation is invertible over all real numbers if λ is an odd integer). However, when both negative and positive values are observed, it is common to begin by adding a constant to all values, producing a set of non-negative data to which any power transformation can be applied.[3]
A common situation where a data transformation is applied is when a value of interest ranges over severalorders of magnitude. Many physical and social phenomena exhibit such behavior — incomes, species populations, galaxy sizes, and rainfall volumes, to name a few. Power transforms, and in particular the logarithm, can often be used to induce symmetry in such data. The logarithm is often favored because it is easy to interpret its result in terms of "fold changes".
The logarithm also has a useful effect on ratios. If we are comparing positive quantitiesXandYusing the ratioX/Y, then ifX<Y, the ratio is in the interval (0,1), whereas ifX>Y, the ratio is in the half-line (1,∞), where the ratio of 1 corresponds to equality. In an analysis whereXandYare treated symmetrically, the log-ratio log(X/Y) is zero in the case of equality, and it has the property that ifXisKtimes greater thanY, the log-ratio is equidistant from zero compared with the situation whereYisKtimes greater thanX(the log-ratios are log(K) and −log(K) in these two situations).
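This symmetry of the log-ratio is easy to check numerically (arbitrary values of K and Y chosen for illustration):

```python
import math

# If X is K times Y the log-ratio is log(K); if Y is K times X it is
# -log(K). The two cases sit at equal distance from zero.
K, Y = 3.0, 5.0
r1 = math.log((K * Y) / Y)  # X = K * Y  ->  log(K)
r2 = math.log(Y / (K * Y))  # Y = K * X  -> -log(K)
print(r1, r2)
```
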
If values are naturally restricted to be in the range 0 to 1, not including the end-points, then alogit transformationmay be appropriate: this yields values in the range (−∞,∞).
1. It is not always necessary or desirable to transform a data set to resemble a normal distribution. However, if symmetry or normality are desired, they can often be induced through one of the power transformations.
2. A linguistic power function is distributed according to theZipf-Mandelbrot law. The distribution is extremely spiky andleptokurtic; this is why researchers have often had to turn away from standard statistics to solve, e.g.,authorship attributionproblems. Nevertheless, usage of Gaussian statistics is perfectly possible by applying data transformation.[11]
3. To assess whether normality has been achieved after transformation, any of the standardnormality testsmay be used. A graphical approach is usually more informative than a formal statistical test and hence anormal quantile plotis commonly used to assess the fit of a data set to a normal population. Alternatively, rules of thumb based on the sampleskewnessandkurtosishave also been proposed.[12][13]
If we observe a set ofnvaluesX1, ...,Xnwith no ties (i.e., there arendistinct values), we can replaceXiwith the transformed valueYi=k, wherekis defined such thatXiis thekthlargest among all theXvalues. This is called therank transform,[14]and creates data with a perfect fit to auniform distribution. This approach has apopulationanalogue.
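The rank transform can be sketched in a few lines. Note that ranking from the smallest value (rank 1 = smallest) is the mirror image of the "kth largest" convention in the text; either direction yields a perfect fit to a discrete uniform distribution on {1, ..., n} when there are no ties:

```python
# Rank transform: replace each value by its rank among all values.
# Assumes no ties, so every rank 1..n appears exactly once.
def rank_transform(xs):
    order = sorted(xs)
    return [order.index(x) + 1 for x in xs]

print(rank_transform([3.2, -1.0, 7.5, 0.4]))  # [3, 1, 4, 2]
```
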
Using theprobability integral transform, ifXis anyrandom variable, andFis thecumulative distribution functionofX, then as long asFis invertible, the random variableU=F(X) follows a uniform distribution on theunit interval[0,1].
From a uniform distribution, we can transform to any distribution with an invertible cumulative distribution function. IfGis an invertible cumulative distribution function, andUis a uniformly distributed random variable, then the random variableG−1(U) hasGas its cumulative distribution function.
Putting the two together, ifXis any random variable,Fis the invertible cumulative distribution function ofX, andGis an invertible cumulative distribution function then the random variableG−1(F(X)) hasGas its cumulative distribution function.
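The composition G−1(F(X)) can be illustrated with two exponential distributions, whose CDFs have closed-form inverses (the specific distributions are chosen only for the example). Mapping through F and then G−1 carries quantiles across: the median of F maps to the median of G.

```python
import math

# F: CDF of Exponential(rate 1); G: CDF of Exponential(rate 2).
def F(x):
    return 1.0 - math.exp(-x)

def G_inv(u):
    return -math.log(1.0 - u) / 2.0

x_median = math.log(2.0)  # F(x_median) = 0.5, the median under F
y = G_inv(F(x_median))    # should be the median under G, log(2)/2
print(y)
```
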
Many types of statistical data exhibit a "variance-on-mean relationship", meaning that the variability is different for data values with differentexpected values. As an example, in comparing different populations in the world, the variance of income tends to increase with mean income. If we consider a number of small area units (e.g., counties in the United States) and obtain the mean and variance of incomes within each county, it is common that the counties with higher mean income also have higher variances.
Avariance-stabilizing transformationaims to remove a variance-on-mean relationship, so that the variance becomes constant relative to the mean. Examples of variance-stabilizing transformations are theFisher transformationfor the sample correlation coefficient, thesquare roottransformation orAnscombe transformforPoissondata (count data), theBox–Cox transformationfor regression analysis, and thearcsine square root transformationor angular transformation for proportions (binomialdata). While commonly used for statistical analysis of proportional data, the arcsine square root transformation is not recommended becauselogistic regressionor alogit transformationare more appropriate for binomial or non-binomial proportions, respectively, especially due to decreasedtype-II error.[15][3]
Univariate functions can be applied point-wise to multivariate data to modify their marginal distributions. It is also possible to modify some attributes of a multivariate distribution using an appropriately constructed transformation. For example, when working withtime seriesand other types of sequential data, it is common todifferencethe data to improvestationarity. If data generated by a random vectorXare observed as vectorsXiof observations withcovariance matrixΣ, alinear transformationcan be used todecorrelatethe data. To do this, theCholesky decompositionis used to express Σ =AA'. Then the transformed vectorYi=A−1Xihas theidentity matrixas its covariance matrix.
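The Cholesky-based decorrelation step can be checked by hand in two dimensions. This sketch (arbitrary example covariance matrix, pure Python) computes the Cholesky factor A of Σ and verifies that A−1 Σ A−T is the identity, which is exactly the statement that Yi = A−1Xi has identity covariance:

```python
import math

# An arbitrary symmetric positive-definite 2x2 covariance matrix.
Sigma = [[4.0, 2.0],
         [2.0, 3.0]]

# Cholesky factor of a 2x2 SPD matrix: A = [[a, 0], [b, c]] with A A' = Sigma.
a = math.sqrt(Sigma[0][0])
b = Sigma[1][0] / a
c = math.sqrt(Sigma[1][1] - b * b)

# Inverse of the lower-triangular factor A.
Ainv = [[1.0 / a, 0.0],
        [-b / (a * c), 1.0 / c]]
AinvT = [[Ainv[0][0], Ainv[1][0]],
         [Ainv[0][1], Ainv[1][1]]]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Covariance of the transformed vector Y = Ainv X: Ainv Sigma Ainv' = I.
I = matmul(matmul(Ainv, Sigma), AinvT)
print(I)  # approximately [[1, 0], [0, 1]]
```
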
https://en.wikipedia.org/wiki/Data_transformation_(statistics)
Inmachine learning(ML),feature learningorrepresentation learning[2]is a set of techniques that allow a system to automatically discover the representations needed forfeaturedetection or classification from raw data. This replaces manualfeature engineeringand allows a machine to both learn the features and use them to perform a specific task.
Feature learning is motivated by the fact that ML tasks such asclassificationoften require input that is mathematically and computationally convenient to process. However, real-world data, such as image, video, and sensor data, have not yielded to attempts to algorithmically define specific features. An alternative is to discover such features or representations through examination, without relying on explicit algorithms.
Feature learning can be either supervised, unsupervised, or self-supervised:
Supervisedfeature learning is learning features from labeled data. The data label allows the system to compute an error term, the degree to which the system fails to produce the label, which can then be used as feedback to correct the learning process (reduce/minimize the error). Approaches include:
Dictionary learning develops a set (dictionary) of representative elements from the input data such that each data point can be represented as a weighted sum of the representative elements. The dictionary elements and the weights may be found by minimizing the average representation error (over the input data), together withL1regularizationon the weights to enable sparsity (i.e., the representation of each data point has only a few nonzero weights).
Supervised dictionary learning exploits both the structure underlying the input data and the labels for optimizing the dictionary elements. For example, this[12]supervised dictionary learning technique applies dictionary learning on classification problems by jointly optimizing the dictionary elements, weights for representing data points, and parameters of the classifier based on the input data. In particular, a minimization problem is formulated, where the objective function consists of the classification error, the representation error, anL1regularization on the representing weights for each data point (to enable sparse representation of data), and anL2regularization on the parameters of the classifier.
Neural networksare a family of learning algorithms that use a "network" consisting of multiple layers of inter-connected nodes. It is inspired by the animal nervous system, where the nodes are viewed as neurons and edges are viewed as synapses. Each edge has an associated weight, and the network defines computational rules for passing input data from the network's input layer to the output layer. A network function associated with a neural network characterizes the relationship between input and output layers, which is parameterized by the weights. With appropriately defined network functions, various learning tasks can be performed by minimizing a cost function over the network function (weights).
Multilayerneural networkscan be used to perform feature learning, since they learn a representation of their input at the hidden layer(s) which is subsequently used for classification or regression at the output layer. A notable network architecture of this type is theSiamese network.
Unsupervised feature learning is learning features from unlabeled data. The goal of unsupervised feature learning is often to discover low-dimensional features that capture some structure underlying the high-dimensional input data. When the feature learning is performed in an unsupervised way, it enables a form ofsemisupervised learningwhere features learned from an unlabeled dataset are then employed to improve performance in a supervised setting with labeled data.[13][14]Several approaches are introduced in the following.
K-means clusteringis an approach for vector quantization. In particular, given a set ofnvectors,k-means clustering groups them into k clusters (i.e., subsets) in such a way that each vector belongs to the cluster with the closest mean. The problem is computationallyNP-hard, although suboptimalgreedy algorithmshave been developed.
K-means clustering can be used to group an unlabeled set of inputs intokclusters, and then use thecentroidsof these clusters to produce features. These features can be produced in several ways. The simplest is to addkbinary features to each sample, where each featurejhas value oneiffthejth centroid learned byk-means is the closest to the sample under consideration.[6]It is also possible to use the distances to the clusters as features, perhaps after transforming them through aradial basis function(a technique that has been used to trainRBF networks[15]). Coates andNgnote that certain variants ofk-means behave similarly tosparse codingalgorithms.[16]
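The simplest feature construction described above can be sketched end to end: run Lloyd's k-means algorithm (shown here in one dimension for brevity, on toy data), then attach to each sample a one-hot indicator of its closest centroid:

```python
# One-dimensional k-means (Lloyd's algorithm), then one-hot features
# marking the closest learned centroid for each sample. Toy data only.
def kmeans_1d(xs, centroids, iters=20):
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for x in xs:
            j = min(range(len(centroids)), key=lambda i: abs(x - centroids[i]))
            clusters[j].append(x)
        # Move each centroid to the mean of its cluster (keep it if empty).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

def one_hot_features(xs, centroids):
    feats = []
    for x in xs:
        j = min(range(len(centroids)), key=lambda i: abs(x - centroids[i]))
        feats.append([1 if i == j else 0 for i in range(len(centroids))])
    return feats

data = [0.1, 0.3, 0.2, 9.8, 10.1, 10.3]
cents = kmeans_1d(data, [0.0, 1.0])
print(sorted(cents))
print(one_hot_features(data, cents))
```

The alternative features mentioned in the text (distances to each centroid, possibly passed through a radial basis function) replace the 0/1 indicator with a real-valued vector of the same length k.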
In a comparative evaluation of unsupervised feature learning methods, Coates, Lee and Ng found thatk-means clustering with an appropriate transformation outperforms the more recently invented auto-encoders and RBMs on an image classification task.[6]K-means also improves performance in the domain ofNLP, specifically fornamed-entity recognition;[17]there, it competes withBrown clustering, as well as with distributed word representations (also known as neural word embeddings).[14]
Principal component analysis(PCA) is often used for dimension reduction. Given an unlabeled set ofninput data vectors, PCA generatesp(which is much smaller than the dimension of the input data)right singular vectorscorresponding to theplargest singular values of the data matrix, where thekth row of the data matrix is thekth input data vector shifted by thesample meanof the input (i.e., subtracting the sample mean from the data vector). Equivalently, these singular vectors are theeigenvectorscorresponding to theplargest eigenvalues of thesample covariance matrixof the input vectors. Thesepsingular vectors are the feature vectors learned from the input data, and they represent directions along which the data has the largest variations.
PCA is a linear feature learning approach since thepsingular vectors are linear functions of the data matrix. The singular vectors can be generated via a simple algorithm withpiterations. In theith iteration, the projection of the data matrix onto the previously found singular vectors is subtracted, and theith singular vector is found as the right singular vector corresponding to the largest singular value of the residual data matrix.
PCA has several limitations. First, it assumes that the directions with large variance are of most interest, which may not be the case. PCA only relies on orthogonal transformations of the original data, and it exploits only the first- and second-ordermomentsof the data, which may not well characterize the data distribution. Furthermore, PCA can effectively reduce dimension only when the input data vectors are correlated (which results in a few dominant eigenvalues).
Local linear embedding(LLE) is a nonlinear learning approach for generating low-dimensional neighbor-preserving representations from (unlabeled) high-dimension input. The approach was proposed by Roweis and Saul (2000).[18][19]The general idea of LLE is to reconstruct the original high-dimensional data using lower-dimensional points while maintaining some geometric properties of the neighborhoods in the original data set.
LLE consists of two major steps. The first step is for "neighbor-preserving", where each input data pointXiis reconstructed as a weighted sum ofKnearest neighbordata points, and the optimal weights are found by minimizing the average squared reconstruction error (i.e., difference between an input point and its reconstruction) under the constraint that the weights associated with each point sum up to one. The second step is for "dimension reduction," by looking for vectors in a lower-dimensional space that minimize the representation error using the optimized weights from the first step. Note that in the first step, the weights are optimized with fixed data, which can be solved as aleast squaresproblem. In the second step, lower-dimensional points are optimized with fixed weights, which can be solved via sparse eigenvalue decomposition.
The reconstruction weights obtained in the first step capture the "intrinsic geometric properties" of a neighborhood in the input data.[19]It is assumed that original data lie on a smooth lower-dimensionalmanifold, and the "intrinsic geometric properties" captured by the weights of the original data are also expected to be on the manifold. This is why the same weights are used in the second step of LLE. Compared with PCA, LLE is more powerful in exploiting the underlying data structure.
Independent component analysis(ICA) is a technique for forming a data representation using a weighted sum of independent non-Gaussian components.[20]The assumption of non-Gaussian is imposed since the weights cannot be uniquely determined when all the components followGaussiandistribution.
Unsupervised dictionary learning does not utilize data labels and exploits the structure underlying the data for optimizing dictionary elements. An example of unsupervised dictionary learning issparse coding, which aims to learn basis functions (dictionary elements) for data representation from unlabeled input data. Sparse coding can be applied to learn overcomplete dictionaries, where the number of dictionary elements is larger than the dimension of the input data.[21]Aharonet al. proposed algorithmK-SVDfor learning a dictionary of elements that enables sparse representation.[22]
The hierarchical architecture of the biological neural system inspiresdeep learningarchitectures for feature learning by stacking multiple layers of learning nodes.[23]These architectures are often designed based on the assumption ofdistributed representation: observed data is generated by the interactions of many different factors on multiple levels. In a deep learning architecture, the output of each intermediate layer can be viewed as a representation of the original input data. Each level uses the representation produced by the previous, lower level as input, and produces new representations as output, which are then fed to higher levels. The input at the bottom layer is raw data, and the output of the final, highest layer is the final low-dimensional feature or representation.
Restricted Boltzmann machines(RBMs) are often used as a building block for multilayer learning architectures.[6][24]An RBM can be represented by an undirectedbipartite graphconsisting of a group ofbinaryhidden variables, a group of visible variables, and edges connecting the hidden and visible nodes. It is a special case of the more generalBoltzmann machineswith the constraint of no intra-node connections. Each edge in an RBM is associated with a weight. The weights together with the connections define anenergy function, based on which ajoint distributionof visible and hidden nodes can be devised. Based on the topology of the RBM, the hidden (visible) variables are independent, conditioned on the visible (hidden) variables.[clarification needed]Such conditional independence facilitates computations.
An RBM can be viewed as a single layer architecture for unsupervised feature learning. In particular, the visible variables correspond to input data, and the hidden variables correspond to feature detectors. The weights can be trained by maximizing the probability of visible variables usingHinton'scontrastive divergence(CD) algorithm.[24]
In general, training RBMs by solving the maximization problem tends to result in non-sparse representations. Sparse RBM[25]was proposed to enable sparse representations. The idea is to add aregularizationterm in the objective function of data likelihood, which penalizes the deviation of the expected hidden variables from a small constantp{\displaystyle p}. RBMs have also been used to obtaindisentangledrepresentations of data, where interesting features map to separate hidden units.[26]
Anautoencoderconsisting of an encoder and a decoder is a paradigm for deep learning architectures. An example is provided by Hinton and Salakhutdinov[24]where the encoder uses raw data (e.g., image) as input and produces feature or representation as output and the decoder uses the extracted feature from the encoder as input and reconstructs the original input raw data as output. The encoder and decoder are constructed by stacking multiple layers of RBMs. The parameters involved in the architecture were originally trained in agreedylayer-by-layer manner: after one layer of feature detectors is learned, its outputs are fed upward as the visible variables for training the next RBM. Current approaches typically apply end-to-end training withstochastic gradient descentmethods. Training can be repeated until some stopping criteria are satisfied.
Self-supervised representation learning is learning features by training on the structure of unlabeled data rather than relying on explicit labels for an information signal. This approach has enabled the combined use of deep neural network architectures and larger unlabeled datasets to produce deep feature representations.[9] Training tasks typically fall under the classes of either contrastive, generative, or both.[27] Contrastive representation learning trains representations for associated data pairs, called positive samples, to be aligned, while pairs with no relation, called negative samples, are contrasted. A large number of negative samples is typically necessary in order to prevent catastrophic collapse, which is when all inputs are mapped to the same representation.[9] Generative representation learning tasks the model with producing the correct data to either match a restricted input or reconstruct the full input from a lower-dimensional representation.[27]
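The contrastive objective can be sketched as an InfoNCE-style loss (a common formulation, not tied to any one method cited here): the positive pair is scored against a pool of negatives, and the loss is low when the anchor is most similar to its positive.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Contrastive loss for one anchor: pull the positive close,
    push the negatives away. All embeddings are L2-normalized."""
    def unit(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    a, p, neg = unit(anchor), unit(positive), unit(negatives)
    sims = np.concatenate(([a @ p], neg @ a)) / temperature
    sims -= sims.max()                        # numerical stability
    softmax = np.exp(sims) / np.exp(sims).sum()
    return float(-np.log(softmax[0]))         # index 0 is the positive

rng = np.random.default_rng(0)
a = rng.standard_normal(16)
negs = rng.standard_normal((8, 16))
loss_aligned = info_nce(a, a + 0.05 * rng.standard_normal(16), negs)
loss_opposed = info_nce(a, -a, negs)
```

An aligned positive yields a much smaller loss than a mismatched one, which is exactly the gradient signal that pulls representations of associated pairs together.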
A common setup for self-supervised representation learning of a certain data type (e.g. text, image, audio, video) is to pretrain the model using large datasets of general-context, unlabeled data.[11] Depending on the context, the result of this is either a set of representations for common data segments (e.g. words) which new data can be broken into, or a neural network able to convert each new data point (e.g. image) into a set of lower-dimensional features.[9] In either case, the output representations can then be used as an initialization in many different problem settings where labeled data may be limited. Specialization of the model to specific tasks is typically done with supervised learning, either by fine-tuning the model / representations with the labels as the signal, or by freezing the representations and training an additional model which takes them as an input.[11]
Many self-supervised training schemes have been developed for use in representation learning of various modalities, often first showing successful application in text or image before being transferred to other data types.[9]
Word2vec is a word embedding technique which learns to represent words through self-supervision over each word and its neighboring words in a sliding window across a large corpus of text.[28] The model has two possible training schemes to produce word vector representations, one generative and one contrastive.[27] The first is word prediction given each of the neighboring words as an input.[28] The second is training on the representation similarity for neighboring words and representation dissimilarity for random pairs of words.[10] A limitation of word2vec is that only the pairwise co-occurrence structure of the data is used, and not the ordering or entire set of context words. More recent transformer-based representation learning approaches attempt to solve this with word prediction tasks.[9] GPTs pretrain on next-word prediction using prior input words as context,[29] whereas BERT masks random tokens in order to provide bidirectional context.[30]
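The contrastive scheme (skip-gram with negative sampling) can be sketched as below; the toy corpus, embedding dimension, and hyperparameters are arbitrary illustrative choices, not word2vec's defaults.

```python
import numpy as np

rng = np.random.default_rng(0)

corpus = "the cat sat on the mat the dog sat on the mat".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
dim, window, neg_k, lr = 8, 2, 3, 0.05

W_in = 0.1 * rng.standard_normal((len(vocab), dim))   # word embeddings
W_out = 0.1 * rng.standard_normal((len(vocab), dim))  # context embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(300):
    for pos, word in enumerate(corpus):
        for off in range(-window, window + 1):
            c = pos + off
            if off == 0 or not 0 <= c < len(corpus):
                continue
            center, ctx = idx[word], idx[corpus[c]]
            # One observed (positive) pair plus neg_k sampled negatives.
            targets = [(ctx, 1.0)] + [(int(rng.integers(len(vocab))), 0.0)
                                      for _ in range(neg_k)]
            for t, label in targets:
                score = sigmoid(W_in[center] @ W_out[t])
                grad = lr * (score - label)
                W_out[t], W_in[center] = (W_out[t] - grad * W_in[center],
                                          W_in[center] - grad * W_out[t])
```

After training, rows of `W_in` are the word vectors; words appearing in similar sliding-window contexts tend to end up with similar vectors.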
Other self-supervised techniques extend word embeddings by finding representations for larger text structures such as sentences or paragraphs in the input data.[9] Doc2vec extends the generative training approach in word2vec by adding an additional input to the word prediction task based on the paragraph the word is within, and is therefore intended to represent paragraph-level context.[31]
The domain of image representation learning has employed many different self-supervised training techniques, including transformation,[32]inpainting,[33]patch discrimination[34]and clustering.[35]
Examples of generative approaches are Context Encoders, which trains an AlexNet CNN architecture to generate a removed image region given the masked image as input,[33] and iGPT, which applies the GPT-2 language model architecture to images by training on pixel prediction after reducing the image resolution.[36]
Many other self-supervised methods use siamese networks, which generate different views of the image through various augmentations that are then aligned to have similar representations. The challenge is avoiding collapsing solutions where the model encodes all images to the same representation.[37] SimCLR is a contrastive approach which uses negative examples in order to generate image representations with a ResNet CNN.[34] Bootstrap Your Own Latent (BYOL) removes the need for negative samples by encoding one of the views with a slow moving average of the model parameters as they are being modified during training.[38]
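BYOL's slow-moving-average target network can be sketched as a plain exponential moving average over parameters; the decay value below is an arbitrary illustrative choice.

```python
import numpy as np

def ema_update(target, online, tau=0.99):
    """Move each target parameter a small step toward its online
    counterpart, so the target network changes slowly during training."""
    return [tau * t + (1.0 - tau) * o for t, o in zip(target, online)]

# Toy parameter lists standing in for two copies of a network.
online = [np.ones((2, 2)), np.zeros(3)]
target = [np.zeros((2, 2)), np.ones(3)]
target = ema_update(target, online)
```

Because the target lags the online network, its outputs provide a stable regression signal, which is what lets BYOL dispense with negative samples.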
The goal of many graph representation learning techniques is to produce an embedded representation of each node based on the overall network topology.[39] node2vec extends the word2vec training technique to nodes in a graph by using co-occurrence in random walks through the graph as the measure of association.[40] Another approach is to maximize mutual information, a measure of similarity, between the representations of associated structures within the graph.[9] An example is Deep Graph Infomax, which uses contrastive self-supervision based on mutual information between the representation of a "patch" around each node and a summary representation of the entire graph. Negative samples are obtained by pairing the graph representation with either representations from another graph in a multigraph training setting, or corrupted patch representations in single-graph training.[41]
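The random-walk step can be sketched as follows: walks over a toy adjacency list produce node/context co-occurrence pairs that play the same role as word/context pairs in word2vec's training data. The graph, walk length, and window are arbitrary illustrative choices.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# A small undirected graph as an adjacency list.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}

def random_walk(start, length):
    """Uniform random walk over neighbors (node2vec adds biased steps)."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(int(rng.choice(graph[walk[-1]])))
    return walk

# Count node pairs co-occurring within a window of each walk.
pairs = Counter()
for node in graph:
    for _ in range(20):
        walk = random_walk(node, 10)
        for i, u in enumerate(walk):
            for v in walk[max(0, i - 2):i]:
                pairs[(u, v)] += 1
```

Feeding `pairs` to a skip-gram-style objective then yields node embeddings in which frequently co-visited nodes are close together.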
With analogous results in masked prediction[42] and clustering,[43] video representation learning approaches are often similar to image techniques but must utilize the temporal sequence of video frames as an additional learned structure. Examples include VCP, which masks video clips and trains to choose the correct one given a set of clip options, and Xu et al., who train a 3D-CNN to identify the original order given a shuffled set of video clips.[44]
Self-supervised representation techniques have also been applied to many audio data formats, particularly for speech processing.[9] Wav2vec 2.0 discretizes the audio waveform into timesteps via temporal convolutions, and then trains a transformer on masked prediction of random timesteps using a contrastive loss.[45] This is similar to the BERT language model, except that, as in many SSL approaches to video, the model chooses among a set of options rather than over the entire word vocabulary.[30][45]
Self-supervised learning has also been used to develop joint representations of multiple data types.[9]Approaches usually rely on some natural or human-derived association between the modalities as an implicit label, for instance video clips of animals or objects with characteristic sounds,[46]or captions written to describe images.[47]CLIP produces a joint image-text representation space by training to align image and text encodings from a large dataset of image-caption pairs using a contrastive loss.[47]MERLOT Reserve trains a transformer-based encoder to jointly represent audio, subtitles and video frames from a large dataset of videos through 3 joint pretraining tasks: contrastive masked prediction of either audio or text segments given the video frames and surrounding audio and text context, along with contrastive alignment of video frames with their corresponding captions.[46]
Multimodal representation models are typically unable to assume direct correspondence of representations in the different modalities, since the precise alignment can often be noisy or ambiguous. For example, the text "dog" could be paired with many different pictures of dogs, and correspondingly a picture of a dog could be captioned with varying degrees of specificity. This limitation means that downstream tasks may require an additional generative mapping network between modalities to achieve optimal performance, such as in DALLE-2 for text-to-image generation.[48]
Dynamic representation learning methods[49][50] generate latent embeddings for dynamic systems such as dynamic networks. Since particular distance functions are invariant under particular linear transformations, different sets of embedding vectors can represent the same or similar information. Therefore, for a dynamic system, a temporal difference in its embeddings may be explained either by misalignment of embeddings due to arbitrary transformations or by actual changes in the system.[51] Generally speaking, temporal embeddings learned via dynamic representation learning methods should thus be inspected for any spurious changes and aligned before subsequent dynamic analyses.
https://en.wikipedia.org/wiki/Feature_learning
In machine learning, feature hashing, also known as the hashing trick (by analogy to the kernel trick), is a fast and space-efficient way of vectorizing features, i.e. turning arbitrary features into indices in a vector or matrix.[1][2] It works by applying a hash function to the features and using their hash values as indices directly (after a modulo operation), rather than looking the indices up in an associative array. In addition to its use for encoding non-numeric values, feature hashing can also be used for dimensionality reduction.[2]
This trick is often attributed to Weinberger et al. (2009),[2]but there exists a much earlier description of this method published by John Moody in 1989.[1]
In a typical document classification task, the input to the machine learning algorithm (both during learning and classification) is free text. From this, a bag-of-words (BOW) representation is constructed: the individual tokens are extracted and counted, and each distinct token in the training set defines a feature (independent variable) of each of the documents in both the training and test sets.
Machine learning algorithms, however, are typically defined in terms of numerical vectors. Therefore, the bags of words for a set of documents are regarded as a term-document matrix where each row is a single document and each column is a single feature/word; the entry (i, j) in such a matrix captures the frequency (or weight) of the j-th term of the vocabulary in document i. (An alternative convention swaps the rows and columns of the matrix, but this difference is immaterial.)
Typically, these vectors are extremely sparse, in accordance with Zipf's law.
The common approach is to construct, at learning time or prior to that, a dictionary representation of the vocabulary of the training set, and use that to map words to indices. Hash tables and tries are common candidates for dictionary implementation. E.g., the three documents
can be converted, using the dictionary
to the term-document matrix
(Punctuation was removed, as is usual in document classification and clustering.)
The problem with this process is that such dictionaries take up a large amount of storage space and grow in size as the training set grows.[3] Conversely, if the vocabulary is kept fixed and not increased with a growing training set, an adversary may try to invent new words or misspellings that are not in the stored vocabulary in order to circumvent a machine-learned filter. To address this challenge, Yahoo! Research attempted to use feature hashing for their spam filters.[4]
Note that the hashing trick isn't limited to text classification and similar tasks at the document level, but can be applied to any problem that involves large (perhaps unbounded) numbers of features.
Mathematically, a token is an element $t$ of a finite (or countably infinite) set $T$. If we only need to process a finite corpus, then we can put all tokens appearing in the corpus into $T$, so that $T$ is finite. However, if we want to process all possible words made of the English letters, then $T$ is countably infinite.
Most neural networks can only operate on real vector inputs, so we must construct a "dictionary" function $\phi : T \to \mathbb{R}^n$.
When $T$ is finite, of size $|T| = m \leq n$, we can use one-hot encoding to map it into $\mathbb{R}^n$. First, arbitrarily enumerate $T = \{t_1, t_2, \ldots, t_m\}$, then define $\phi(t_i) = e_i$. In other words, we assign a unique index $i$ to each token, then map the token with index $i$ to the unit basis vector $e_i$.
One-hot encoding is easy to interpret, but it requires one to maintain the arbitrary enumeration of $T$. Given a token $t \in T$, to compute $\phi(t)$ we must find the index $i$ of the token $t$. Thus, to implement $\phi$ efficiently, we need a fast-to-compute bijection $h : T \to \{1, \ldots, m\}$; then $\phi(t) = e_{h(t)}$.
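A minimal sketch of this dictionary-backed one-hot encoding, with an arbitrary three-token vocabulary standing in for $T$:

```python
import numpy as np

tokens = ["cat", "dog", "fish"]                 # an arbitrary enumeration of T
index = {t: i for i, t in enumerate(tokens)}    # the bijection h

def one_hot(t):
    """phi(t) = e_{h(t)}: the unit basis vector assigned to token t."""
    e = np.zeros(len(tokens))
    e[index[t]] = 1.0
    return e
```

Distinct tokens map to orthogonal vectors, but the `index` dictionary must be stored and kept in sync with the vocabulary, which is exactly the cost feature hashing avoids.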
In fact, we can relax the requirement slightly: it suffices to have a fast-to-compute injection $h : T \to \{1, \ldots, n\}$, and then use $\phi(t) = e_{h(t)}$.
In practice, there is no simple way to construct an efficient injection $h : T \to \{1, \ldots, n\}$. However, we do not need a strict injection, only an approximate one: when $t \neq t'$, we should probably have $h(t) \neq h(t')$, so that probably $\phi(t) \neq \phi(t')$.
At this point, we have just specified that $h$ should be a hashing function. Thus we arrive at the idea of feature hashing.
The basic feature hashing algorithm presented in (Weinberger et al. 2009)[2]is defined as follows.
First, one specifies two hash functions: the kernel hash $h : T \to \{1, 2, \ldots, n\}$, and the sign hash $\zeta : T \to \{-1, +1\}$. Next, one defines the feature hashing function
$$\phi : T \to \mathbb{R}^n, \quad \phi(t) = \zeta(t)\, e_{h(t)}$$
Finally, this feature hashing function is extended to strings of tokens by
$$\phi : T^* \to \mathbb{R}^n, \quad \phi(t_1, \ldots, t_k) = \sum_{j=1}^{k} \phi(t_j)$$
where $T^*$ is the set of all finite strings consisting of tokens in $T$.
Equivalently,
$$\phi(t_1, \ldots, t_k) = \sum_{j=1}^{k} \zeta(t_j)\, e_{h(t_j)} = \sum_{i=1}^{n} \Big(\sum_{j : h(t_j) = i} \zeta(t_j)\Big) e_i$$
We want to say something about the geometric properties of $\phi$, but $T$, by itself, is just a set of tokens; we cannot impose a geometric structure on it except the discrete topology, which is generated by the discrete metric. To make it nicer, we lift it to $\mathbb{R}^T$, and lift $\phi$ from $\phi : T \to \mathbb{R}^n$ to $\phi : \mathbb{R}^T \to \mathbb{R}^n$ by linear extension:
$$\phi\big((x_t)_{t \in T}\big) = \sum_{t \in T} x_t\, \zeta(t)\, e_{h(t)} = \sum_{i=1}^{n} \Big(\sum_{t : h(t) = i} x_t\, \zeta(t)\Big) e_i$$
There is an infinite sum here, which must be handled. There are essentially only two ways to handle infinities: one may impose a metric and take its completion, to allow well-behaved infinite sums, or one may demand that nothing is actually infinite, only potentially so. Here we take the potential-infinity route, by restricting $\mathbb{R}^T$ to contain only vectors with finite support: for every $(x_t)_{t \in T} \in \mathbb{R}^T$, only finitely many entries of $(x_t)_{t \in T}$ are nonzero.
Define an inner product on $\mathbb{R}^T$ in the obvious way:
$$\langle e_t, e_{t'} \rangle = \begin{cases} 1, & \text{if } t = t', \\ 0, & \text{else,} \end{cases} \qquad \langle x, x' \rangle = \sum_{t, t' \in T} x_t\, x'_{t'}\, \langle e_t, e_{t'} \rangle$$
As a side note, if $T$ is infinite, then the inner product space $\mathbb{R}^T$ is not complete. Taking its completion would give a Hilbert space, which allows well-behaved infinite sums.
Now we have an inner product space, with enough structure to describe the geometry of the feature hashing function $\phi : \mathbb{R}^T \to \mathbb{R}^n$.
First, we can see why $h$ is called a "kernel hash": it allows us to define a kernel $K : T \times T \to \mathbb{R}$ by
$$K(t, t') = \langle e_{h(t)}, e_{h(t')} \rangle$$
In the language of the "kernel trick", $K$ is the kernel generated by the "feature map"
$$\varphi : T \to \mathbb{R}^n, \quad \varphi(t) = e_{h(t)}$$
Note that this is not the feature map we were using, which is $\phi(t) = \zeta(t)\, e_{h(t)}$. In fact, we have been using another kernel $K_\zeta : T \times T \to \mathbb{R}$, defined by
$$K_\zeta(t, t') = \langle \zeta(t)\, e_{h(t)}, \zeta(t')\, e_{h(t')} \rangle$$
The benefit of augmenting the kernel hash $h$ with the binary hash $\zeta$ is the following theorem, which states that $\phi$ is an isometry "on average".
Theorem (intuitively stated): If the binary hash $\zeta$ is unbiased (meaning that it takes the values $-1$ and $+1$ with equal probability), then $\phi : \mathbb{R}^T \to \mathbb{R}^n$ is an isometry in expectation:
$$\mathbb{E}\big[\langle \phi(x), \phi(x') \rangle\big] = \langle x, x' \rangle.$$
By linearity of expectation,
$$\mathbb{E}\big[\langle \phi(x), \phi(x') \rangle\big] = \sum_{t, t' \in T} x_t\, x'_{t'} \cdot \mathbb{E}\big[\zeta(t)\zeta(t')\big] \cdot \langle e_{h(t)}, e_{h(t')} \rangle$$
Now, $\mathbb{E}[\zeta(t)\zeta(t')] = 1$ if $t = t'$ and $0$ if $t \neq t'$, since we assumed $\zeta$ is unbiased. So we continue:
$$\mathbb{E}\big[\langle \phi(x), \phi(x') \rangle\big] = \sum_{t \in T} x_t\, x'_t\, \langle e_{h(t)}, e_{h(t)} \rangle = \langle x, x' \rangle$$
The above statement and proof interpret the binary hash $\zeta$ not as a deterministic function of type $T \to \{-1, +1\}$, but as a random binary vector in $\{-1, +1\}^T$ with unbiased entries, meaning that $\Pr(\zeta(t) = +1) = \Pr(\zeta(t) = -1) = \tfrac{1}{2}$ for any $t \in T$.
This is a good intuitive picture, though not rigorous. For a rigorous statement and proof, see Weinberger et al. (2009).[2]
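The isometry-in-expectation property can be checked empirically with a small Monte Carlo experiment; the vocabulary size, hash width, and sparse test vectors below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, n, trials = 100, 32, 5000

# Two sparse "token count" vectors over the vocabulary.
x = np.zeros(vocab); x[[3, 17, 42]] = [2.0, 1.0, 1.0]
y = np.zeros(vocab); y[[3, 42, 77]] = [1.0, 3.0, 1.0]

dots = []
for _ in range(trials):
    h = rng.integers(0, n, size=vocab)            # random kernel hash
    zeta = rng.choice([-1.0, 1.0], size=vocab)    # unbiased sign hash
    phi_x = np.zeros(n); np.add.at(phi_x, h, zeta * x)
    phi_y = np.zeros(n); np.add.at(phi_y, h, zeta * y)
    dots.append(phi_x @ phi_y)

true_dot = float(x @ y)          # 2*1 + 1*3 = 5
avg_dot = float(np.mean(dots))   # concentrates near true_dot
```

Individual hashed inner products are distorted by collisions, but the unbiased sign hash makes the collision terms cancel in expectation, so the average over trials lands close to the true inner product.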
Instead of maintaining a dictionary, a feature vectorizer that uses the hashing trick can build a vector of a pre-defined length by applying a hash function $h$ to the features (e.g., words), then using the hash values directly as feature indices and updating the resulting vector at those indices.
For example, suppose the input feature sequence is ["cat", "dog", "cat"] and the hash function is $h(x_f) = 1$ if $x_f$ is "cat" and $2$ if $x_f$ is "dog". Taking the output feature vector dimension to be $N = 4$, the output $x$ is [0, 2, 1, 0].
It has been suggested that a second, single-bit output hash function $\xi$ be used to determine the sign of the update value, in order to counter the effect of hash collisions.[2]
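A minimal sketch of such a signed vectorizer, in the spirit of Weinberger et al.'s scheme: one bit of the digest serves as the single-bit sign hash, so colliding tokens tend to cancel rather than pile up. The use of MD5 here is an arbitrary choice of stable hash, not part of the original method.

```python
import hashlib

def hashed_vector(tokens, n):
    """Signed hashing-trick vectorizer (illustrative sketch)."""
    x = [0.0] * n
    for t in tokens:
        digest = int(hashlib.md5(t.encode("utf-8")).hexdigest(), 16)
        index = digest % n                           # h(t): feature index
        sign = 1.0 if (digest >> 64) & 1 else -1.0   # xi(t): update sign
        x[index] += sign
    return x
```

Unlike a dictionary-based vectorizer, this needs no stored vocabulary: any token, including one never seen before, maps to a fixed-length vector.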
Such a vectorizer converts each sample into a vector. An optimized version would instead only generate a stream of $(h, \zeta)$ pairs and let the learning and prediction algorithms consume such streams; a linear model can then be implemented as a single hash table representing the coefficient vector.
Feature hashing generally suffers from hash collisions, meaning that there exist pairs of distinct tokens with the same hash: $t \neq t'$ with $\phi(t) = \phi(t') = v$. A machine learning model trained on feature-hashed words then has difficulty distinguishing $t$ and $t'$, essentially because $v$ is polysemic.
If $t'$ is rare, then the performance degradation is small, as the model can in effect ignore the rare case and treat every occurrence of $v$ as meaning $t$. However, if both tokens are common, the degradation can be serious.
To handle this, one can train supervised hashing functions that avoid mapping common tokens to the same feature vectors.[5]
Ganchev and Dredze showed that in text classification applications with random hash functions and several tens of thousands of columns in the output vectors, feature hashing need not have an adverse effect on classification performance, even without the signed hash function.[3]
Weinberger et al. (2009) applied their version of feature hashing to multi-task learning, and in particular spam filtering, where the input features are pairs (user, feature), so that a single parameter vector captured per-user spam filters as well as a global filter for several hundred thousand users; they found that the accuracy of the filter went up.[2]
Chen et al. (2015) combined the idea of feature hashing with sparse matrices to construct "virtual matrices": large matrices with small storage requirements. The idea is to treat a matrix $M \in \mathbb{R}^{n \times n}$ as a dictionary, with keys in $n \times n$ and values in $\mathbb{R}$. Then, as usual with hashed dictionaries, one can use a hash function $h : \mathbb{N} \times \mathbb{N} \to m$, and thus represent a matrix as a vector in $\mathbb{R}^m$, no matter how big $n$ is. With virtual matrices they constructed HashedNets, which are large neural networks taking only small amounts of storage.[6]
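A minimal sketch of the virtual-matrix idea (the class name, sizes, and use of Python's built-in `hash` are illustrative choices, not details of HashedNets itself):

```python
import numpy as np

class HashedMatrix:
    """A 'virtual' n x n matrix backed by only m real parameters: each
    entry (i, j) is hash-mapped onto one of the m shared weights, so
    storage is O(m) regardless of how large n is."""

    def __init__(self, n, m, seed=0):
        self.n, self.m = n, m
        self.params = np.random.default_rng(seed).standard_normal(m)

    def _slot(self, i, j):
        return hash((i, j)) % self.m   # h : N x N -> {0, ..., m-1}

    def __getitem__(self, ij):
        return self.params[self._slot(*ij)]
```

Gradient updates are applied to the shared parameter at `_slot(i, j)`, so many nominal matrix entries are trained jointly; this weight sharing is what shrinks the network's storage footprint.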
Implementations of the hashing trick are present in:
https://en.wikipedia.org/wiki/Hashing_trick
In statistics, econometrics, epidemiology and related disciplines, the method of instrumental variables (IV) is used to estimate causal relationships when controlled experiments are not feasible or when a treatment is not successfully delivered to every unit in a randomized experiment.[1] Intuitively, IVs are used when an explanatory variable of interest is correlated with the error term (endogenous), in which case ordinary least squares and ANOVA give biased results. A valid instrument induces changes in the explanatory variable (is correlated with the endogenous variable) but has no independent effect on the dependent variable and is not correlated with the error term, allowing a researcher to uncover the causal effect of the explanatory variable on the dependent variable.
Instrumental variable methods allow for consistent estimation when the explanatory variables (covariates) are correlated with the error terms in a regression model. Such correlation may occur when:
Explanatory variables that suffer from one or more of these issues in the context of a regression are sometimes referred to as endogenous. In this situation, ordinary least squares produces biased and inconsistent estimates.[2] However, if an instrument is available, consistent estimates may still be obtained. An instrument is a variable that does not itself belong in the explanatory equation but is correlated with the endogenous explanatory variables, conditionally on the value of other covariates.
In linear models, there are two main requirements for using IVs:
Informally, in attempting to estimate the causal effect of some variable X ("covariate" or "explanatory variable") on another Y ("dependent variable"), an instrument is a third variable Z which affects Y only through its effect on X.
For example, suppose a researcher wishes to estimate the causal effect of smoking (X) on general health (Y).[5] Correlation between smoking and health does not imply that smoking causes poor health, because other variables, such as depression, may affect both health and smoking, or because health may affect smoking. It is not possible to conduct controlled experiments on smoking status in the general population. The researcher may attempt to estimate the causal effect of smoking on health from observational data by using the tax rate for tobacco products (Z) as an instrument for smoking. The tax rate for tobacco products is a reasonable choice for an instrument because the researcher assumes that it can only be correlated with health through its effect on smoking. If the researcher then finds tobacco taxes and state of health to be correlated, this may be viewed as evidence that smoking causes changes in health.
The first use of an instrumental variable occurred in a 1928 book by Philip G. Wright, best known for his excellent description of the production, transport and sale of vegetable and animal oils in the early 1900s in the United States.[6][7] In 1945, Olav Reiersøl applied the same approach in the context of errors-in-variables models in his dissertation, giving the method its name.[8]
Wright attempted to determine the supply and demand for butter using panel data on prices and quantities sold in the United States. The idea was that a regression analysis could produce a demand or supply curve because they are formed by the path between prices and quantities demanded or supplied. The problem was that the observational data did not form a demand or supply curve as such, but rather a cloud of point observations that took different shapes under varying market conditions. It seemed that making deductions from the data remained elusive.
The problem was that price affected both supply and demand so that a function describing only one of the two could not be constructed directly from the observational data. Wright correctly concluded that he needed a variable that correlated with either demand or supply but not both – that is, an instrumental variable.
After much deliberation, Wright decided to use regional rainfall as his instrumental variable: he concluded that rainfall affected grass production and hence milk production and ultimately butter supply, but not butter demand. In this way he was able to construct a regression equation with only the instrumental variable of price and supply.[9]
Formal definitions of instrumental variables, using counterfactuals and graphical criteria, were given by Judea Pearl in 2000.[10] Angrist and Krueger (2001) present a survey of the history and uses of instrumental variable techniques.[11] Notions of causality in econometrics, and their relationship with instrumental variables and other methods, are discussed by Heckman (2008).[12]
While the ideas behind IV extend to a broad class of models, a very common context for IV is in linear regression. Traditionally,[13] an instrumental variable is defined as a variable $Z$ that is correlated with the independent variable $X$ and uncorrelated with the "error term" $U$ in the linear equation
$$Y = X\beta + U.$$
$Y$ is a vector. $X$ is a matrix, usually with a column of ones and perhaps with additional columns for other covariates. Consider how an instrument allows $\beta$ to be recovered. Recall that OLS solves for $\widehat{\beta}$ such that $\operatorname{cov}(X, \widehat{U}) = 0$ (when we minimize the sum of squared errors, $\min_{\beta} (Y - X\beta)'(Y - X\beta)$, the first-order condition is exactly $X'(Y - X\widehat{\beta}) = X'\widehat{U} = 0$). If the true model is believed to have $\operatorname{cov}(X, U) \neq 0$ due to any of the reasons listed above (for example, if there is an omitted variable which affects both $X$ and $Y$ separately), then this OLS procedure will not yield the causal impact of $X$ on $Y$. OLS will simply pick the parameter that makes the resulting errors appear uncorrelated with $X$.
Consider for simplicity the single-variable case. Suppose we are considering a regression with one variable and a constant (perhaps no other covariates are necessary, or perhaps we have partialed out any other relevant covariates):
$$y = \alpha + \beta x + u.$$
In this case, the coefficient on the regressor of interest is given by $\widehat{\beta} = \frac{\operatorname{cov}(x, y)}{\operatorname{var}(x)}$. Substituting for $y$ gives
$$\widehat{\beta} = \beta^{*} + \frac{\operatorname{cov}(x, u)}{\operatorname{var}(x)},$$
where $\beta^{*}$ is what the estimated coefficient vector would be if $\operatorname{cov}(x, u) = 0$. In this case, it can be shown that $\beta^{*}$ is an unbiased estimator of $\beta$.
If $\operatorname{cov}(x, u) \neq 0$ in the underlying model that we believe, then OLS gives an inconsistent estimate which does not reflect the underlying causal effect of interest. IV helps to fix this problem by identifying the parameter $\beta$ not based on whether $x$ is uncorrelated with $u$, but based on whether another variable $z$ is uncorrelated with $u$. If theory suggests that $z$ is related to $x$ (the first stage) but uncorrelated with $u$ (the exclusion restriction), then IV may identify the causal parameter of interest where OLS fails. Because there are multiple specific ways of using and deriving IV estimators even in just the linear case (IV, 2SLS, GMM), we save further discussion for the Estimation section below.
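This contrast between OLS and the simple IV ratio can be demonstrated on simulated data. All coefficients and the data-generating process below are arbitrary illustrative choices; the IV estimator is the ratio-of-covariances form from the discussion above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Simulated data-generating process with an endogenous regressor:
# an unobserved confounder u raises both x and y, while the
# instrument z shifts x but has no direct path to y.
z = rng.standard_normal(n)
u = rng.standard_normal(n)
x = z + u + rng.standard_normal(n)
beta = 2.0                                  # true causal effect of x on y
y = beta * x + u + rng.standard_normal(n)

# OLS absorbs cov(x, u) and is biased upward; the IV ratio is consistent.
beta_ols = float(np.cov(x, y)[0, 1] / np.var(x, ddof=1))
beta_iv = float(np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1])
```

With these population moments, cov(x, u) = 1 and var(x) = 3, so OLS converges to roughly beta + 1/3 while the IV estimate converges to beta itself.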
IV techniques have been developed for a much broader class of non-linear models. General definitions of instrumental variables, using counterfactual and graphical formalism, were given by Pearl (2000; p. 248).[10] The graphical definition requires that $Z$ satisfy the following conditions:
where $\perp\!\!\!\perp$ stands for $d$-separation and $G_{\overline{X}}$ stands for the graph in which all arrows entering $X$ are cut off.
The counterfactual definition requires that $Z$ satisfies
where $Y_x$ stands for the value that $Y$ would attain had $X$ been $x$, and $\perp\!\!\!\perp$ stands for independence.
If there are additional covariates $W$, then the above definitions are modified so that $Z$ qualifies as an instrument if the given criteria hold conditional on $W$.
The essence of Pearl's definition is:
These conditions do not rely on the specific functional form of the equations and are therefore applicable to nonlinear equations, where U can be non-additive (see Non-parametric analysis). They are also applicable to a system of multiple equations, in which X (and other factors) affect Y through several intermediate variables. An instrumental variable need not be a cause of X; a proxy of such a cause may also be used, if it satisfies conditions 1–5.[10] The exclusion restriction (condition 4) is redundant; it follows from conditions 2 and 3.
Since U is unobserved, the requirement that Z be independent of U cannot be inferred from data and must instead be determined from the model structure, i.e., the data-generating process. Causal graphs are a representation of this structure, and the graphical definition given above can be used to quickly determine whether a variable Z qualifies as an instrumental variable given a set of covariates W. To see how, consider the following example.
Suppose that we wish to estimate the effect of a university tutoring program on grade point average (GPA). The relationship between attending the tutoring program and GPA may be confounded by a number of factors. Students who attend the tutoring program may care more about their grades or may be struggling with their work. This confounding is depicted in Figures 1–3 on the right through the bidirected arc between Tutoring Program and GPA. If students are assigned to dormitories at random, the proximity of the student's dorm to the tutoring program is a natural candidate for being an instrumental variable.
However, what if the tutoring program is located in the college library? In that case, Proximity may also cause students to spend more time at the library, which in turn improves their GPA (see Figure 1). Using the causal graph depicted in Figure 2, we see that Proximity does not qualify as an instrumental variable because it is connected to GPA through the path Proximity → Library Hours → GPA in G_X̄. However, if we control for Library Hours by adding it as a covariate, then Proximity becomes an instrumental variable, since Proximity is separated from GPA given Library Hours in G_X̄.[citation needed]
Now, suppose that we notice that a student's "natural ability" affects his or her number of hours in the library as well as his or her GPA, as in Figure 3. Using the causal graph, we see that Library Hours is a collider and conditioning on it opens the path Proximity → Library Hours ↔ GPA. As a result, Proximity cannot be used as an instrumental variable.
Finally, suppose that Library Hours does not actually affect GPA because students who do not study in the library simply study elsewhere, as in Figure 4. In this case, controlling for Library Hours still opens a spurious path from Proximity to GPA. However, if we do not control for Library Hours and remove it as a covariate, then Proximity can again be used as an instrumental variable.
We now revisit and expand upon the mechanics of IV in greater detail. Suppose the data are generated by a process of the form
where
The parameter vector β is the causal effect on y_i of a one unit change in each element of X_i, holding all other causes of y_i constant. The econometric goal is to estimate β. For simplicity's sake, assume the draws of e are uncorrelated and that they are drawn from distributions with the same variance (that is, that the errors are serially uncorrelated and homoskedastic).
Suppose also that a regression model of nominally the same form is proposed. Given a random sample of T observations from this process, the ordinary least squares estimator is
where X, y and e denote column vectors of length T. This equation is similar to the equation involving cov(X, y) in the introduction (this is the matrix version of that equation). When X and e are uncorrelated, under certain regularity conditions the second term has an expected value conditional on X of zero and converges to zero in the limit, so the estimator is unbiased and consistent. When X and the other unmeasured, causal variables collapsed into the e term are correlated, however, the OLS estimator is generally biased and inconsistent for β. In this case, it is valid to use the estimates to predict values of y given values of X, but the estimate does not recover the causal effect of X on y.
To recover the underlying parameter β, we introduce a set of variables Z that is highly correlated with each endogenous component of X but (in our underlying model) is not correlated with e. For simplicity, one might consider X to be a T × 2 matrix composed of a column of constants and one endogenous variable, and Z to be a T × 2 matrix consisting of a column of constants and one instrumental variable. However, this technique generalizes to X being a matrix of a constant and, say, 5 endogenous variables, with Z being a matrix composed of a constant and 5 instruments. In the discussion that follows, we will assume that X is a T × K matrix and leave this value K unspecified. An estimator in which X and Z are both T × K matrices is referred to as just-identified.
Suppose that the relationship between each endogenous component x_i and the instruments is given by
The most common IV specification uses the following estimator:
This specification approaches the true parameter as the sample gets large, so long as Z^T e = 0 in the true model:
As long as Z^T e = 0 in the underlying process which generates the data, the appropriate use of the IV estimator will identify this parameter. This works because IV solves for the unique parameter that satisfies Z^T e = 0, and therefore homes in on the true underlying parameter as the sample size grows.
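The contrast between OLS and IV can be illustrated on synthetic data. The sketch below (a hypothetical data-generating process; plain Python, no libraries) uses the sample-analogue form β_IV = cov(z, y)/cov(z, x) for a single endogenous regressor and a single instrument:

```python
import random

random.seed(0)
n = 5000
beta = 2.0  # true causal effect

z, x, y = [], [], []
for _ in range(n):
    u = random.gauss(0, 1)           # unobserved confounder (part of e)
    zi = random.gauss(0, 1)          # instrument: correlated with x, not with u
    xi = zi + u + random.gauss(0, 1)
    yi = beta * xi + u + random.gauss(0, 1)
    z.append(zi); x.append(xi); y.append(yi)

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

beta_ols = cov(x, y) / cov(x, x)   # biased: x is correlated with the error
beta_iv = cov(z, y) / cov(z, x)    # consistent: z is uncorrelated with the error
print(beta_ols, beta_iv)
```

Because the confounder u enters both x and y, OLS is biased upward by cov(x, e)/var(x) (about one third here), while the IV estimate stays near the true β = 2.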
Now an extension: suppose that there are more instruments than there are covariates in the equation of interest, so that Z is a T × M matrix with M > K. This is often called the over-identified case. In this case, the generalized method of moments (GMM) can be used. The GMM IV estimator is
where P_Z refers to the projection matrix P_Z = Z(Z^T Z)^{-1} Z^T.
This expression collapses to the first when the number of instruments is equal to the number of covariates in the equation of interest. The over-identified IV is therefore a generalization of the just-identified IV.
Developing the β_GMM expression:
In the just-identified case, we have as many instruments as covariates, so that the dimension of X is the same as that of Z. Hence, X^T Z, Z^T Z and Z^T X are all square matrices of the same dimension. We can expand the inverse, using the fact that, for any invertible n-by-n matrices A and B, (AB)^{-1} = B^{-1}A^{-1} (see Invertible matrix § Properties):
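Written out, the collapse proceeds step by step, using only that inverse-of-product rule on the square factors (a reconstruction of the standard derivation, consistent with the estimator definitions above):

```latex
\begin{aligned}
\beta_{\text{GMM}}
&= \left(X^{\mathrm{T}}P_{Z}X\right)^{-1} X^{\mathrm{T}}P_{Z}\,y \\
&= \left(X^{\mathrm{T}}Z\,(Z^{\mathrm{T}}Z)^{-1}Z^{\mathrm{T}}X\right)^{-1}
   X^{\mathrm{T}}Z\,(Z^{\mathrm{T}}Z)^{-1}Z^{\mathrm{T}}y \\
&= (Z^{\mathrm{T}}X)^{-1}(Z^{\mathrm{T}}Z)(X^{\mathrm{T}}Z)^{-1}\,
   X^{\mathrm{T}}Z\,(Z^{\mathrm{T}}Z)^{-1}Z^{\mathrm{T}}y \\
&= (Z^{\mathrm{T}}X)^{-1}Z^{\mathrm{T}}y
\end{aligned}
```

The middle factors cancel pairwise, leaving the just-identified IV estimator.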
Reference: see Davidson and MacKinnon (1993)[14]: 218
There is an equivalent under-identified estimator for the case where M < K. Since the parameters are the solutions to a set of linear equations, an under-identified model using the set of equations Z′v = 0 does not have a unique solution.
One computational method which can be used to calculate IV estimates is two-stage least squares (2SLS or TSLS). In the first stage, each explanatory variable that is an endogenous covariate in the equation of interest is regressed on all of the exogenous variables in the model, including both exogenous covariates in the equation of interest and the excluded instruments. The predicted values from these regressions are obtained:
Stage 1: Regress each column of X on Z (X = Zδ + errors):
and save the predicted values:
In the second stage, the regression of interest is estimated as usual, except that in this stage each endogenous covariate is replaced with the predicted values from the first stage:
Stage 2: Regress Y on the predicted values from the first stage:
which gives
This method is only valid in linear models. For categorical endogenous covariates, one might be tempted to use a different first stage than ordinary least squares, such as a probit model for the first stage followed by OLS for the second. This is commonly known in the econometric literature as the forbidden regression,[15] because second-stage IV parameter estimates are consistent only in special cases.[16]
The usual OLS estimator is: (X̂^T X̂)^{-1} X̂^T Y.
Replacing X̂ = P_Z X and noting that P_Z is a symmetric and idempotent matrix, so that P_Z^T P_Z = P_Z P_Z = P_Z:
The resulting estimator of β is numerically identical to the expression displayed above. A small correction must be made to the sum-of-squared residuals in the second-stage fitted model in order that the covariance matrix of β is calculated correctly.
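This numerical identity can be checked on a toy just-identified example with one instrument and one endogenous regressor (illustrative numbers; plain Python):

```python
# Toy data: one instrument z, one endogenous regressor x, outcome y.
z = [1.0, 2.0, 3.0, 4.0, 5.0]
x = [2.0, 1.0, 4.0, 3.0, 6.0]
y = [3.0, 4.0, 8.0, 9.0, 13.0]

def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

# Direct IV (ratio) estimator of the slope.
beta_iv = cov(z, y) / cov(z, x)

# Two-stage least squares.
# Stage 1: regress x on z (with intercept), save fitted values.
b1 = cov(z, x) / cov(z, z)
a1 = mean(x) - b1 * mean(z)
x_hat = [a1 + b1 * zi for zi in z]

# Stage 2: regress y on the fitted values x_hat (with intercept).
beta_2sls = cov(x_hat, y) / cov(x_hat, x_hat)
print(beta_iv, beta_2sls)
```

Both routes give the same slope up to floating-point rounding, because the stage-2 slope is cov(z, y)/(b1·var(z)) = cov(z, y)/cov(z, x).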
When the form of the structural equations is unknown, an instrumental variable Z can still be defined through the equations:
where f and g are two arbitrary functions and Z is independent of U. Unlike linear models, however, measurements of Z, X and Y do not allow for the identification of the average causal effect of X on Y, denoted ACE
Balke and Pearl (1997) derived tight bounds on ACE and showed that these can provide valuable information on the sign and size of ACE.[17]
In linear analysis, there is no test to falsify the assumption that Z is instrumental relative to the pair (X, Y). This is not the case when X is discrete. Pearl (2000) has shown that, for all f and g, the following constraint, called the "Instrumental Inequality", must hold whenever Z satisfies the two equations above:[10]
The exposition above assumes that the causal effect of interest does not vary across observations, that is, that β is a constant. Generally, different subjects will respond in different ways to changes in the "treatment" x. When this possibility is recognized, the average effect in the population of a change in x on y may differ from the effect in a given subpopulation. For example, the average effect of a job training program may substantially differ across the group of people who actually receive the training and the group which chooses not to receive training. For these reasons, IV methods invoke implicit assumptions on behavioral response, or more generally assumptions over the correlation between the response to treatment and propensity to receive treatment.[18]
The standard IV estimator can recover local average treatment effects (LATE) rather than average treatment effects (ATE).[1] Imbens and Angrist (1994) demonstrate that the linear IV estimate can be interpreted under weak conditions as a weighted average of local average treatment effects, where the weights depend on the elasticity of the endogenous regressor to changes in the instrumental variables. Roughly, that means that the effect of a variable is only revealed for the subpopulations affected by the observed changes in the instruments, and that subpopulations which respond most to changes in the instruments will have the largest effects on the magnitude of the IV estimate.
For example, if a researcher uses presence of a land-grant college as an instrument for college education in an earnings regression, she identifies the effect of college on earnings in the subpopulation which would obtain a college degree if a college is present but which would not obtain a degree if a college is not present. This empirical approach does not, without further assumptions, tell the researcher anything about the effect of college among people who would either always or never get a college degree regardless of whether a local college exists.
As Bound, Jaeger, and Baker (1995) note, a problem is caused by the selection of "weak" instruments, instruments that are poor predictors of the endogenous question predictor in the first-stage equation.[19] In this case, the prediction of the question predictor by the instrument will be poor and the predicted values will have very little variation. Consequently, they are unlikely to have much success in predicting the ultimate outcome when they are used to replace the question predictor in the second-stage equation.
In the context of the smoking and health example discussed above, tobacco taxes are weak instruments for smoking if smoking status is largely unresponsive to changes in taxes. If higher taxes do not induce people to quit smoking (or not start smoking), then variation in tax rates tells us nothing about the effect of smoking on health. If taxes affect health through channels other than through their effect on smoking, then the instruments are invalid and the instrumental variables approach may yield misleading results. For example, places and times with relatively health-conscious populations may both implement high tobacco taxes and exhibit better health even holding smoking rates constant, so we would observe a correlation between health and tobacco taxes even if it were the case that smoking has no effect on health. In this case, we would be mistaken to infer a causal effect of smoking on health from the observed correlation between tobacco taxes and health.
The strength of the instruments can be directly assessed because both the endogenous covariates and the instruments are observable.[20] A common rule of thumb for models with one endogenous regressor is: the F-statistic against the null that the excluded instruments are irrelevant in the first-stage regression should be larger than 10.
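With a single instrument, this first-stage F statistic is just the squared t ratio on the instrument's coefficient. A minimal sketch with illustrative numbers:

```python
# First-stage regression x = a + b*z + v with one instrument, and the
# F statistic for the null that the instrument is irrelevant (b = 0).
z = [1.0, 2.0, 3.0, 4.0, 5.0]
x = [2.1, 3.9, 6.0, 8.1, 9.9]   # x is strongly driven by z here

n = len(z)
zbar = sum(z) / n
xbar = sum(x) / n
szz = sum((zi - zbar) ** 2 for zi in z)
szx = sum((zi - zbar) * (xi - xbar) for zi, xi in zip(z, x))

b = szx / szz                    # first-stage slope
a = xbar - b * zbar
ssr = sum((xi - a - b * zi) ** 2 for zi, xi in zip(z, x))
s2 = ssr / (n - 2)               # residual variance estimate
F = b ** 2 / (s2 / szz)          # with one instrument, F = t^2
print(F)
```

Here the instrument is strong, so F is far above the rule-of-thumb threshold of 10; a weak instrument would give a slope near zero relative to its standard error and hence a small F.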
When the covariates are exogenous, the small-sample properties of the OLS estimator can be derived in a straightforward manner by calculating moments of the estimator conditional on X. When some of the covariates are endogenous so that instrumental variables estimation is implemented, simple expressions for the moments of the estimator cannot be so obtained. Generally, instrumental variables estimators only have desirable asymptotic, not finite sample, properties, and inference is based on asymptotic approximations to the sampling distribution of the estimator. Even when the instruments are uncorrelated with the error in the equation of interest and when the instruments are not weak, the finite sample properties of the instrumental variables estimator may be poor. For example, exactly identified models produce finite sample estimators with no moments, so the estimator can be said to be neither biased nor unbiased, the nominal size of test statistics may be substantially distorted, and the estimates may commonly be far away from the true value of the parameter.[21]
The assumption that the instruments are not correlated with the error term in the equation of interest is not testable in exactly identified models. If the model is overidentified, there is information available which may be used to test this assumption. The most common test of these overidentifying restrictions, called the Sargan–Hansen test, is based on the observation that the residuals should be uncorrelated with the set of exogenous variables if the instruments are truly exogenous.[22] The Sargan–Hansen test statistic can be calculated as TR² (the number of observations multiplied by the coefficient of determination) from the OLS regression of the residuals onto the set of exogenous variables. This statistic will be asymptotically chi-squared with m − k degrees of freedom under the null that the error term is uncorrelated with the instruments.
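The TR² computation can be sketched in a small simulation with one endogenous regressor and two valid instruments (hypothetical data-generating process; plain Python with hand-rolled least squares):

```python
import random

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    m = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    sol = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        sol[r] = (m[r][3] - sum(m[r][c] * sol[c] for c in range(r + 1, 3))) / m[r][r]
    return sol

def ols3(Z, t):
    """OLS of t on the three columns of Z via the normal equations Z'Z b = Z't."""
    A = [[sum(zi[j] * zi[k] for zi in Z) for k in range(3)] for j in range(3)]
    b = [sum(zi[j] * ti for zi, ti in zip(Z, t)) for j in range(3)]
    return solve3(A, b)

random.seed(1)
n = 4000
beta = 1.5
Z, x, y = [], [], []
for _ in range(n):
    u = random.gauss(0, 1)                            # error common to x and y
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)   # two valid instruments
    xi = z1 + 0.5 * z2 + u + random.gauss(0, 1)
    yi = beta * xi + u + random.gauss(0, 1)
    Z.append([1.0, z1, z2]); x.append(xi); y.append(yi)

# 2SLS: stage 1 fits x on (1, z1, z2); stage 2 fits y on (1, x_hat).
d = ols3(Z, x)
x_hat = [sum(dj * zj for dj, zj in zip(d, zi)) for zi in Z]
mx, my = sum(x_hat) / n, sum(y) / n
b2 = (sum((xh - mx) * (yi - my) for xh, yi in zip(x_hat, y))
      / sum((xh - mx) ** 2 for xh in x_hat))
a2 = my - b2 * mx
u_hat = [yi - a2 - b2 * xi for xi, yi in zip(x, y)]   # residuals use the actual x

# Sargan statistic: T * R^2 from regressing the residuals on the instruments.
g = ols3(Z, u_hat)
fitted = [sum(gj * zj for gj, zj in zip(g, zi)) for zi in Z]
mu = sum(u_hat) / n
ssr = sum((ui - fi) ** 2 for ui, fi in zip(u_hat, fitted))
sst = sum((ui - mu) ** 2 for ui in u_hat)
stat = n * (1 - ssr / sst)
print(stat)
```

The instruments are valid by construction, so the statistic should be an unremarkable draw from a χ² distribution with m − k = 2 − 1 = 1 degree of freedom; instruments correlated with the error would inflate it.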
https://en.wikipedia.org/wiki/Instrumental_variables_estimation
These datasets are used in machine learning (ML) research and have been cited in peer-reviewed academic journals. Datasets are an integral part of the field of machine learning. Major advances in this field can result from advances in learning algorithms (such as deep learning), computer hardware, and, less intuitively, the availability of high-quality training datasets.[1] High-quality labeled training datasets for supervised and semi-supervised machine learning algorithms are usually difficult and expensive to produce because of the large amount of time needed to label the data. Although they do not need to be labeled, high-quality datasets for unsupervised learning can also be difficult and costly to produce.[2][3][4]
Many organizations, including governments, publish and share their datasets. The datasets are classified, based on their licenses, as open data and non-open data.
The datasets from various governmental bodies are presented in List of open government data sites. The datasets are hosted on open data portals. They are made available for searching, depositing and accessing through interfaces like Open API. The datasets are made available as various sorted types and subtypes.
The data portal is classified based on its type of license. Data portals based on open-source licenses are known as open data portals, which are used by many government organizations and academic institutions.
The data portal sometimes lists a wide variety of subtypes of datasets pertaining to many machine learning applications.
The data portals which are suitable for a specific subtype of machine learning application are listed in the subsequent sections.
These datasets consist primarily of text for tasks such as natural language processing, sentiment analysis, translation, and cluster analysis.
These datasets consist of sounds and sound features used for tasks such as speech recognition and speech synthesis.
Datasets containing electric signal information requiring some sort of signal processing for further analysis.
Datasets from physical systems.
Datasets from biological systems.
This section includes datasets that deal with structured data.
Further details are provided in the project's GitHub repository and the respective Hugging Face dataset card.
This section includes datasets that contain multi-turn text with at least two actors, a "user" and an "agent". The user makes requests of the agent, which performs them.
Taskmaster-2: 17,289 dialogs in seven domains (restaurants, food ordering, movies, hotels, flights, music and sports).
Taskmaster-3: 23,757 movie ticketing dialogs.
Each Taskmaster-3 record contains: conversation id, utterances, vertical, scenario, and instructions.
For further details, check the project's GitHub repository or the Hugging Face dataset cards (Taskmaster-1, Taskmaster-2, Taskmaster-3).
Additionally, each task contains a task definition.
Further information is provided in the GitHub repository of the project and the Hugging Face data card.
The dataset can be downloaded here, and the rejected data here.
The scripts to process the data are available in the GitHub repo mentioned in the paper: https://github.com/google-research/FLAN/tree/main/flan.
Another FLAN GitHub repo was created as well; this is the one associated with the dataset card on Hugging Face.
As datasets come in myriad formats and can sometimes be difficult to use, there has been considerable work put into curating and standardizing the format of datasets to make them easier to use for machine learning research.
https://en.wikipedia.org/wiki/List_of_datasets_for_machine_learning_research
Scale co-occurrence matrix (SCM) is a method for image feature extraction within scale space after wavelet transformation, proposed by Wu Jun and Zhao Zhongming (Institute of Remote Sensing Application, China). In practice, we first apply a discrete wavelet transformation to a gray image and obtain sub-images at different scales. Then we construct a series of scale-based co-occurrence matrices, each matrix describing the gray-level variation between two adjacent scales. Finally, we use selected functions (such as the Harris statistical approach) to calculate measurements from the SCM and perform feature extraction and classification.
One basis of the method is the fact that the way texture information changes from one scale to another can represent that texture to some extent, and can thus be used as a criterion for feature extraction. The matrix captures the relation of features between different scales, rather than the features within a single scale space, which can better represent the scale property of texture. Also, several experiments show that it can produce more accurate results for texture classification than traditional texture classification methods.[1]
Texture can be regarded as a similarity grouping in an image. Traditional texture analysis can be divided into four major issues: feature extraction, texture discrimination, texture classification, and shape from texture (reconstructing 3D surface geometry from texture information). For traditional feature extraction, approaches are usually categorized as structural, statistical, model based, and transform based.[2] Wavelet transformation is a popular method in numerical analysis and functional analysis, which captures both frequency and location information. The gray-level co-occurrence matrix provides an important basis for SCM construction.
SCM based on discrete wavelet frame transformation makes use of both correlation and feature information, so that it combines structural and statistical benefits.
In order to compute the SCM, we first use the discrete wavelet frame (DWF) transformation to obtain a series of sub-images. The discrete wavelet frame is nearly identical to the standard wavelet transform,[3] except that one upsamples the filters rather than downsampling the image. Given an image, the DWF decomposes its channel using the same method as the wavelet transform, but without the subsampling process. This results in four filtered images with the same size as the input image. The decomposition is then continued in the LL channels only, as in the wavelet transform, but since the image is not subsampled, the filter has to be upsampled by inserting zeros in between its coefficients. The number of channels, and hence the number of features, for the DWF is given by 3 × l − 1.[4] A one-dimensional discrete wavelet frame decomposes the image in this way:
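The "upsample the filters, not the image" idea can be sketched in one dimension with hypothetical Haar-style averaging/difference filters (plain Python; circular convolution keeps every band the same length as the input):

```python
def upsample(filt, level):
    """Insert 2**level - 1 zeros between filter taps (the 'a trous' scheme)."""
    if level == 0:
        return list(filt)
    out = []
    for tap in filt:
        out.append(tap)
        out.extend([0.0] * (2 ** level - 1))
    return out[: len(out) - (2 ** level - 1)]  # drop the trailing zeros

def conv_circular(signal, filt):
    """Circular convolution; output has the same length as the input."""
    n = len(signal)
    return [sum(filt[k] * signal[(i - k) % n] for k in range(len(filt)))
            for i in range(n)]

def dwf_1d(signal, levels):
    """1-D discrete wavelet frame: no subsampling, so every band keeps full length."""
    h = [0.5, 0.5]    # low-pass (averaging) filter -- Haar-style, for illustration
    g = [0.5, -0.5]   # high-pass (difference) filter
    approx = list(signal)
    details = []
    for lev in range(levels):
        details.append(conv_circular(approx, upsample(g, lev)))
        approx = conv_circular(approx, upsample(h, lev))
    return approx, details

s = [4.0] * 8                      # constant test signal
approx, details = dwf_1d(s, 2)
```

Unlike the standard transform, all bands here stay at the input length; on a constant signal, the difference (detail) bands are identically zero and the averaged band stays constant.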
If there are two sub-images X1 and X0 from the parent image X (in practice X = X0), with X1 = [1 1; 1 2] and X0 = [1 1; 1 4], the grayscale is 4, so that we get k = 1, G = 4. X1(1,1), (1,2) and (2,1) are 1, while X0(1,1), (1,2) and (2,1) are 1, thus Φ1(1,1) = 3; similarly, Φ1(2,4) = 1.
The SCM is as follows:
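The counting in this example can be reproduced with a short sketch (plain Python; the function and variable names are illustrative):

```python
def scale_cooccurrence(child, parent, gray_levels):
    """Count, for each pair (a, b) of gray levels, how often a pixel has value a
    in the child (finer-scale) image and b at the same position in the parent."""
    phi = [[0] * gray_levels for _ in range(gray_levels)]
    for row_c, row_p in zip(child, parent):
        for a, b in zip(row_c, row_p):
            phi[a - 1][b - 1] += 1    # gray levels 1..G map to indices 0..G-1
    return phi

X1 = [[1, 1], [1, 2]]   # sub-image at one scale
X0 = [[1, 1], [1, 4]]   # parent image
phi = scale_cooccurrence(X1, X0, gray_levels=4)
```

Three pixel positions pair gray level 1 with 1, and one position pairs 2 with 4, giving Φ1(1,1) = 3 and Φ1(2,4) = 1 as in the text.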
https://en.wikipedia.org/wiki/Scale_co-occurrence_matrix
The space mapping methodology for modeling and design optimization of engineering systems was first discovered by John Bandler in 1993. It uses relevant existing knowledge to speed up model generation and design optimization of a system. The knowledge is updated with new validation information from the system when available.
The space mapping methodology employs a "quasi-global" formulation that intelligently links companion "coarse" (ideal or low-fidelity) and "fine" (practical or high-fidelity) models of different complexities. In engineering design, space mapping aligns a very fast coarse model with the expensive-to-compute fine model so as to avoid direct expensive optimization of the fine model. The alignment can be done either off-line (model enhancement) or on-the-fly with surrogate updates (e.g., aggressive space mapping).
At the core of the process is a pair of models: one very accurate but too expensive to use directly with a conventional optimization routine, and one significantly less expensive and, accordingly, less accurate. The latter (fast model) is usually referred to as the "coarse" model (coarse space). The former (slow model) is usually referred to as the "fine" model. A validation space ("reality") represents the fine model, for example, a high-fidelity physics model. The optimization space, where conventional optimization is carried out, incorporates the coarse model (or surrogate model), for example, the low-fidelity physics or "knowledge" model. In a space-mapping design optimization phase, there is a prediction or "execution" step, where the results of an optimized "mapped coarse model" (updated surrogate) are assigned to the fine model for validation. After the validation process, if the design specifications are not satisfied, relevant data is transferred to the optimization space ("feedback"), where the mapping-augmented coarse model or surrogate is updated (enhanced, realigned with the fine model) through an iterative optimization process termed "parameter extraction". The mapping formulation itself incorporates "intuition", part of the engineer's so-called "feel" for a problem.[1] In particular, the Aggressive Space Mapping (ASM) process displays key characteristics of cognition (an expert's approach to a problem), and is often illustrated in simple cognitive terms.
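This loop can be sketched on a contrived one-dimensional problem in which the fine model's response is a shifted copy of the coarse model's response (all models, sweeps, and numbers here are hypothetical stand-ins for a real simulator):

```python
T = [i * 0.5 for i in range(9)]           # response sweep points 0.0 .. 4.0

def coarse_resp(z):
    """Cheap surrogate response as a function of the design parameter z."""
    return [(t - z) ** 2 for t in T]

def fine_resp(x):
    """'Expensive' model: same response shape, but shifted in parameter space."""
    return [(t - (x - 1.0)) ** 2 for t in T]

def dist(r1, r2):
    return sum((a - b) ** 2 for a, b in zip(r1, r2))

spec = coarse_resp(2.0)                   # design target response
grid = [i * 0.01 for i in range(501)]     # candidate designs on [0, 5]

# Step 1: conventional (cheap) optimization of the coarse model against the spec.
x_c = min(grid, key=lambda z: dist(coarse_resp(z), spec))

# Aggressive space mapping iteration (unit Broyden matrix in this 1-D sketch).
x = x_c                                    # start from the coarse optimum
for _ in range(5):
    r_fine = fine_resp(x)                  # one expensive evaluation per step
    # Parameter extraction: coarse parameter whose response matches the fine one.
    p = min(grid, key=lambda z: dist(coarse_resp(z), r_fine))
    x = x - (p - x_c)                      # drive p(x) toward the coarse optimum
```

Parameter extraction reveals that the fine model behaves like the coarse model evaluated one unit lower, and the update shifts the design accordingly until the fine response meets the target.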
Following John Bandler's concept in 1993,[1][2] algorithms have utilized Broyden updates (aggressive space mapping),[3] trust regions,[4] and artificial neural networks.[5] Developments include implicit space mapping,[6] in which we allow preassigned parameters not used in the optimization process to change in the coarse model, and output space mapping, where a transformation is applied to the response of the model. A 2004 paper reviews the state of the art after the first ten years of development and implementation.[7] Tuning space mapping[8] utilizes a so-called tuning model, constructed invasively from the fine model, as well as a calibration process that translates the adjustment of the optimized tuning model parameters into relevant updates of the design variables. The space mapping concept has been extended to neural-based space mapping for large-signal statistical modeling of nonlinear microwave devices.[9][10] Space mapping is supported by sound convergence theory and is related to the defect-correction approach.[11]
A 2016 state-of-the-art review is devoted to aggressive space mapping.[12] It spans two decades of development and engineering applications. A comprehensive 2021 review paper[13] discusses space mapping in the context of radio frequency and microwave design optimization; in the context of engineering surrogate-model, feature-based, and cognition-driven design; and in the context of machine learning, intuition, and human intelligence.
The space mapping methodology can also be used to solve inverse problems. Proven techniques include the Linear Inverse Space Mapping (LISM) algorithm,[14] as well as the Space Mapping with Inverse Difference (SM-ID) method.[15]
Space mapping optimization belongs to the class of surrogate-based optimization methods,[16] that is to say, optimization methods that rely on a surrogate model.
The space mapping technique has been applied in a variety of disciplines, including microwave and electromagnetic design, civil and mechanical applications, aerospace engineering, and biomedical research. Some examples:
Various simulators can be involved in space mapping optimization and modeling processes.
Three international workshops have focused significantly on the art, the science and the technology of space mapping.
There is a wide spectrum of terminology associated with space mapping: ideal model, coarse model, coarse space, fine model, companion model, cheap model, expensive model, surrogate model, low-fidelity (resolution) model, high-fidelity (resolution) model, empirical model, simplified physics model, physics-based model, quasi-global model, physically expressive model, device under test, electromagnetics-based model, simulation model, computational model, tuning model, calibration model, surrogate update, mapped coarse model, surrogate optimization, parameter extraction, target response, optimization space, validation space, neuro-space mapping, implicit space mapping, output space mapping, port tuning, predistortion (of design specifications), manifold mapping, defect correction, model management, multi-fidelity models, variable fidelity/variable complexity, multigrid method, coarse grid, fine grid, surrogate-driven, simulation-driven, model-driven, feature-based modeling.
https://en.wikipedia.org/wiki/Space_mapping
Automated machine learning (AutoML) is the process of automating the tasks of applying machine learning to real-world problems. It is the combination of automation and ML.[1]
AutoML potentially includes every stage from beginning with a raw dataset to building a machine learning model ready for deployment. AutoML was proposed as an artificial-intelligence-based solution to the growing challenge of applying machine learning.[2][3] The high degree of automation in AutoML aims to allow non-experts to make use of machine learning models and techniques without requiring them to become experts in machine learning. Automating the process of applying machine learning end-to-end additionally offers the advantages of producing simpler solutions, faster creation of those solutions, and models that often outperform hand-designed models.[4]
Common techniques used in AutoML include hyperparameter optimization, meta-learning and neural architecture search.
In a typical machine learning application, practitioners have a set of input data points to be used for training.[5] The raw data may not be in a form to which all algorithms can be applied. To make the data amenable for machine learning, an expert may have to apply appropriate data pre-processing, feature engineering, feature extraction, and feature selection methods. After these steps, practitioners must then perform algorithm selection and hyperparameter optimization to maximize the predictive performance of their model. If deep learning is used, the architecture of the neural network must also be chosen manually by the machine learning expert.
Each of these steps may be challenging, resulting in significant hurdles to using machine learning. AutoML aims to simplify these steps for non-experts, and to make it easier for them to use machine learning techniques correctly and effectively.
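One of the simplest of these steps to automate, hyperparameter optimization, can be sketched as a validation-based search (toy data and a deliberately simple one-parameter model; all names and numbers are illustrative):

```python
# Toy training/validation split for a 1-D ridge-style model y ~ b*x,
# with the regularization strength chosen automatically.
train = [(0.0, 0.1), (1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
valid = [(0.5, 1.0), (1.5, 3.1), (2.5, 5.0)]

def fit(data, lam):
    """Closed-form ridge slope for a no-intercept 1-D model."""
    sxy = sum(x * y for x, y in data)
    sxx = sum(x * x for x, y in data)
    return sxy / (sxx + lam)

def val_mse(b):
    return sum((y - b * x) ** 2 for x, y in valid) / len(valid)

# The automated part: try each candidate hyperparameter, keep the best.
candidates = [0.0, 0.01, 0.1, 1.0, 10.0]
scores = {lam: val_mse(fit(train, lam)) for lam in candidates}
best_lam = min(scores, key=scores.get)
```

Real AutoML systems apply the same try-evaluate-select loop, but over much larger search spaces (algorithms, pre-processing pipelines, architectures) and with smarter search strategies than exhaustive enumeration.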
AutoML plays an important role within the broader approach of automating data science, which also includes challenging tasks such as data engineering, data exploration, and model interpretation and prediction.[6]
Automated machine learning can target various stages of the machine learning process.[3] Steps to automate are:
There are a number of key challenges being tackled around automated machine learning. A big issue surrounding the field is referred to as "development as a cottage industry".[8] This phrase refers to the issue in machine learning where development relies on manual decisions and biases of experts. This contrasts with the goal of machine learning, which is to create systems that can learn and improve from their own usage and analysis of the data. In essence, it is the struggle between how much experts should be involved in the learning of the systems and how much freedom the machines should be given. However, experts and developers must help create and guide these machines to prepare them for their own learning. Creating such a system requires labor-intensive work and knowledge of machine learning algorithms and system design.[9]
Additionally, some other challenges include meta-learning challenges[10] and computational resource allocation.
https://en.wikipedia.org/wiki/Automated_machine_learning
Big data primarily refers to data sets that are too large or complex to be dealt with by traditional data-processing software. Data with many entries (rows) offer greater statistical power, while data with higher complexity (more attributes or columns) may lead to a higher false discovery rate.[1]
Big data analysis challenges include capturing data, data storage, data analysis, search, sharing, transfer, visualization, querying, updating, information privacy, and data source. Big data was originally associated with three key concepts: volume, variety, and velocity.[2] The analysis of big data presents challenges in sampling, and thus previously allowed for only observations and sampling. Thus a fourth concept, veracity, refers to the quality or insightfulness of the data.[3] Without sufficient investment in expertise for big data veracity, the volume and variety of data can produce costs and risks that exceed an organization's capacity to create and capture value from big data.[4]
Current usage of the term big data tends to refer to the use of predictive analytics, user behavior analytics, or certain other advanced data analytics methods that extract value from big data, and seldom to a particular size of data set. "There is little doubt that the quantities of data now available are indeed large, but that's not the most relevant characteristic of this new data ecosystem."[5] Analysis of data sets can find new correlations to "spot business trends, prevent diseases, combat crime and so on".[6] Scientists, business executives, medical practitioners, advertising and governments alike regularly meet difficulties with large data-sets in areas including Internet searches, fintech, healthcare analytics, geographic information systems, urban informatics, and business informatics. Scientists encounter limitations in e-Science work, including meteorology, genomics,[7] connectomics, complex physics simulations, biology, and environmental research.[8]
The size and number of available data sets have grown rapidly as data is collected by devices such as mobile devices, cheap and numerous information-sensing Internet of things devices, aerial (remote sensing) equipment, software logs, cameras, microphones, radio-frequency identification (RFID) readers and wireless sensor networks.[9][10] The world's technological per-capita capacity to store information has roughly doubled every 40 months since the 1980s;[11] as of 2012[update], every day 2.5 exabytes (2.17×2⁶⁰ bytes) of data are generated.[12] Based on an IDC report prediction, the global data volume was predicted to grow exponentially from 4.4 zettabytes to 44 zettabytes between 2013 and 2020. By 2025, IDC predicts there will be 163 zettabytes of data.[13] According to IDC, global spending on big data and business analytics (BDA) solutions was estimated to reach $215.7 billion in 2021.[14][15] A Statista report forecasts the global big data market to grow to $103 billion by 2027.[16] In 2011, McKinsey & Company reported that if US healthcare were to use big data creatively and effectively to drive efficiency and quality, the sector could create more than $300 billion in value every year.[17] In the developed economies of Europe, government administrators could save more than €100 billion ($149 billion) in operational efficiency improvements alone by using big data.[17] And users of services enabled by personal-location data could capture $600 billion in consumer surplus.[17] One question for large enterprises is determining who should own big-data initiatives that affect the entire organization.[18]
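As a rough check on the figures above, the implied annual growth rates can be computed directly. This is an illustrative back-of-the-envelope sketch; the constants come only from the claims in this paragraph:

```python
# Doubling every 40 months implies an annual growth factor of 2**(12/40).
annual_factor = 2 ** (12 / 40)
print(f"Implied annual growth from 40-month doubling: {annual_factor:.2f}x")  # about 23% per year

# IDC's projection of 4.4 ZB (2013) to 44 ZB (2020) implies a faster rate:
years = 2020 - 2013
idc_factor = (44 / 4.4) ** (1 / years)
print(f"Implied annual growth from IDC projection: {idc_factor:.2f}x")  # about 39% per year
```

The two rates differ because the first describes storage capacity per capita and the second total data generated, so no contradiction is implied.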
Relational database management systems and desktop statistical software packages used to visualize data often have difficulty processing and analyzing big data. The processing and analysis of big data may require "massively parallel software running on tens, hundreds, or even thousands of servers".[19] What qualifies as "big data" varies depending on the capabilities of those analyzing it and their tools. Furthermore, expanding capabilities make big data a moving target. "For some organizations, facing hundreds of gigabytes of data for the first time may trigger a need to reconsider data management options. For others, it may take tens or hundreds of terabytes before data size becomes a significant consideration."[20]
The term big data has been in use since the 1990s, with some giving credit to John Mashey for popularizing the term.[21][22] Big data usually includes data sets with sizes beyond the ability of commonly used software tools to capture, curate, manage, and process data within a tolerable elapsed time.[23][page needed] Big data philosophy encompasses unstructured, semi-structured and structured data; however, the main focus is on unstructured data.[24] Big data "size" is a constantly moving target; as of 2012[update] it ranged from a few dozen terabytes to many zettabytes of data.[25] Big data requires a set of techniques and technologies with new forms of integration to reveal insights from data sets that are diverse, complex, and of a massive scale.[26]
"Volume", "variety", "velocity", and various other "Vs" are added by some organizations to describe it, a revision challenged by some industry authorities.[27] The Vs of big data are often referred to as the "three Vs", "four Vs", or "five Vs", representing the qualities of big data in volume, variety, velocity, veracity, and value.[3] Variability is often included as an additional quality of big data.
A 2018 definition states, "Big data is where parallel computing tools are needed to handle data", and notes, "This represents a distinct and clearly defined change in the computer science used, via parallel programming theories, and losses of some of the guarantees and capabilities made by Codd's relational model."[28]
In a comparative study of big datasets, Kitchin and McArdle found that none of the commonly considered characteristics of big data appear consistently across all of the analyzed cases.[29] For this reason, other studies identified the redefinition of power dynamics in knowledge discovery as the defining trait.[30] Instead of focusing on the intrinsic characteristics of big data, this alternative perspective pushes forward a relational understanding of the object, claiming that what matters is the way in which data is collected, stored, made available and analyzed.
The growing maturity of the concept more starkly delineates the difference between "big data" and "business intelligence":[31]
Big data can be described by the following characteristics:
Other possible characteristics of big data are:[40]
Big data repositories have existed in many forms, often built by corporations with a special need. Commercial vendors historically offered parallel database management systems for big data beginning in the 1990s. For many years, WinterCorp published the largest database report.[41][promotional source?]
Teradata Corporation in 1984 marketed the parallel processing DBC 1012 system. Teradata systems were the first to store and analyze 1 terabyte of data in 1992. Hard disk drives were 2.5 GB in 1991, so the definition of big data continuously evolves. Teradata installed the first petabyte-class RDBMS-based system in 2007. As of 2017[update], there are a few dozen petabyte-class Teradata relational databases installed, the largest of which exceeds 50 PB. Systems up until 2008 were 100% structured relational data. Since then, Teradata has added semi-structured data types including XML, JSON, and Avro.
In 2000, Seisint Inc. (now LexisNexis Risk Solutions) developed a C++-based distributed platform for data processing and querying known as the HPCC Systems platform. This system automatically partitions, distributes, stores and delivers structured, semi-structured, and unstructured data across multiple commodity servers. Users can write data processing pipelines and queries in a declarative dataflow programming language called ECL. Data analysts working in ECL are not required to define data schemas upfront and can rather focus on the particular problem at hand, reshaping data in the best possible manner as they develop the solution. In 2004, LexisNexis acquired Seisint Inc.[42] and their high-speed parallel processing platform, and successfully used this platform to integrate the data systems of ChoicePoint Inc. when they acquired that company in 2008.[43] In 2011, the HPCC systems platform was open-sourced under the Apache v2.0 License.
CERN and other physics experiments have collected big data sets for many decades, usually analyzed via high-throughput computing rather than the map-reduce architectures usually meant by the current "big data" movement.
In 2004, Google published a paper on a process called MapReduce that uses a similar architecture. The MapReduce concept provides a parallel processing model, and an associated implementation was released to process huge amounts of data. With MapReduce, queries are split and distributed across parallel nodes and processed in parallel (the "map" step). The results are then gathered and delivered (the "reduce" step). The framework was very successful,[44] so others wanted to replicate the algorithm. Therefore, an implementation of the MapReduce framework was adopted by an Apache open-source project named "Hadoop".[45] Apache Spark was developed in 2012 in response to limitations in the MapReduce paradigm, as it adds in-memory processing and the ability to set up many operations (not just map followed by reduce).
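The map and reduce steps described above can be sketched in a few lines of pure Python: a word count in which input splits are mapped to (word, 1) pairs, shuffled by key, and reduced by summing. This is an illustrative single-process stand-in for a distributed framework, not Hadoop's actual API:

```python
from collections import defaultdict

def map_phase(document):
    # "map" step: emit (word, 1) pairs for one input split
    return [(word, 1) for word in document.split()]

def shuffle(mapped):
    # group intermediate pairs by key, as the framework does between phases
    groups = defaultdict(list)
    for key, value in mapped:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # "reduce" step: aggregate each key's list of values
    return {key: sum(values) for key, values in groups.items()}

# two "splits" standing in for data distributed across nodes
splits = ["big data big analytics", "data velocity data volume"]
mapped = [pair for split in splits for pair in map_phase(split)]
counts = reduce_phase(shuffle(mapped))
print(counts)  # {'big': 2, 'data': 3, 'analytics': 1, 'velocity': 1, 'volume': 1}
```

In a real cluster, each split's map output is produced on a different node and the shuffle moves data across the network, which is exactly the step Spark's in-memory model is designed to cheapen.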
MIKE2.0 is an open approach to information management that acknowledges the need for revisions due to big data implications identified in an article titled "Big Data Solution Offering".[46] The methodology addresses handling big data in terms of useful permutations of data sources, complexity in interrelationships, and difficulty in deleting (or modifying) individual records.[47]
Studies in 2012 showed that a multiple-layer architecture was one option to address the issues that big data presents. A distributed parallel architecture distributes data across multiple servers; these parallel execution environments can dramatically improve data processing speeds. This type of architecture inserts data into a parallel DBMS, which implements the use of MapReduce and Hadoop frameworks. This type of framework looks to make the processing power transparent to the end user by using a front-end application server.[48]
The data lake allows an organization to shift its focus from centralized control to a shared model to respond to the changing dynamics of information management. This enables quick segregation of data into the data lake, thereby reducing the overhead time.[49][50]
A 2011 McKinsey Global Institute report characterizes the main components and ecosystem of big data as follows:[51]
Multidimensional big data can also be represented as OLAP data cubes or, mathematically, tensors. Array database systems have set out to provide storage and high-level query support on this data type.
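The data cube idea can be illustrated with a minimal sketch: a three-dimensional cube of sales figures keyed by (region, product, quarter), "rolled up" along one dimension. The dimension names and values here are made up for illustration; real array database systems provide this aggregation as high-level query support:

```python
from collections import defaultdict

# cube cells stored sparsely as {(region, product, quarter): measure}
cube = {
    ("EU", "widget", "Q1"): 10, ("EU", "widget", "Q2"): 12,
    ("US", "widget", "Q1"): 7,  ("US", "gadget", "Q1"): 5,
}

def roll_up(cube, axis):
    # sum the measure over one dimension, keeping the remaining ones
    result = defaultdict(int)
    for key, value in cube.items():
        reduced_key = key[:axis] + key[axis + 1:]
        result[reduced_key] += value
    return dict(result)

# roll up over quarters (axis 2) -> totals per (region, product)
print(roll_up(cube, 2))  # {('EU', 'widget'): 22, ('US', 'widget'): 7, ('US', 'gadget'): 5}
```

Viewing the same structure as a dense tensor (one axis per dimension), a roll-up is just a sum along one tensor axis, which is where the tensor-based methods cited below come in.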
Additional technologies being applied to big data include efficient tensor-based computation,[52] such as multilinear subspace learning,[53] massively parallel-processing (MPP) databases, search-based applications, data mining,[54] distributed file systems, distributed caches (e.g., burst buffer and Memcached), distributed databases, cloud and HPC-based infrastructure (applications, storage and computing resources),[55] and the Internet.[citation needed] Although many approaches and technologies have been developed, it remains difficult to carry out machine learning with big data.[56]
Some MPP relational databases have the ability to store and manage petabytes of data. Implicit is the ability to load, monitor, back up, and optimize the use of the large data tables in the RDBMS.[57][promotional source?]
DARPA's Topological Data Analysis program seeks the fundamental structure of massive data sets, and in 2008 the technology went public with the launch of a company called "Ayasdi".[58][independent source needed]
The practitioners of big data analytics processes are generally hostile to slower shared storage,[59] preferring direct-attached storage (DAS) in its various forms, from solid-state drives (SSD) to high-capacity SATA disks buried inside parallel processing nodes. The perception of shared storage architectures—storage area network (SAN) and network-attached storage (NAS)—is that they are relatively slow, complex, and expensive. These qualities are not consistent with big data analytics systems that thrive on system performance, commodity infrastructure, and low cost.
Real or near-real-time information delivery is one of the defining characteristics of big data analytics. Latency is therefore avoided whenever and wherever possible. Data in direct-attached memory or disk is good—data on memory or disk at the other end of an FC SAN connection is not. The cost of a SAN at the scale needed for analytics applications is much higher than other storage techniques.
Big data has increased the demand for information management specialists so much so that Software AG, Oracle Corporation, IBM, Microsoft, SAP, EMC, HP, and Dell have spent more than $15 billion on software firms specializing in data management and analytics. In 2010, this industry was worth more than $100 billion and was growing at almost 10 percent a year, about twice as fast as the software business as a whole.[6]
Developed economies increasingly use data-intensive technologies. There are 4.6 billion mobile-phone subscriptions worldwide, and between 1 billion and 2 billion people accessing the internet.[6] Between 1990 and 2005, more than 1 billion people worldwide entered the middle class, which means more people became more literate, which in turn led to information growth. The world's effective capacity to exchange information through telecommunication networks was 281 petabytes in 1986, 471 petabytes in 1993, 2.2 exabytes in 2000, 65 exabytes in 2007,[11] and predictions put the amount of internet traffic at 667 exabytes annually by 2014.[6] According to one estimate, one-third of the globally stored information is in the form of alphanumeric text and still image data,[60] which is the format most useful for most big data applications. This also shows the potential of yet unused data (i.e., in the form of video and audio content).
While many vendors offer off-the-shelf products for big data, experts promote the development of in-house custom-tailored systems if the company has sufficient technical capabilities.[61]
The use and adoption of big data within governmental processes allows efficiencies in terms of cost, productivity, and innovation,[62] but comes with flaws. Data analysis often requires multiple parts of government (central and local) to work in collaboration and create new and innovative processes to deliver the desired outcome. A common government organization that makes use of big data is the National Security Agency (NSA), which constantly monitors the activities of the Internet in search of potential patterns of suspicious or illegal activities its system may pick up.
Civil registration and vital statistics (CRVS) collects all certificate statuses from birth to death. CRVS is a source of big data for governments.
Research on the effective usage of information and communication technologies for development (also known as "ICT4D") suggests that big data technology can make important contributions but also present unique challenges to international development.[63][64] Advancements in big data analysis offer cost-effective opportunities to improve decision-making in critical development areas such as health care, employment, economic productivity, crime, security, and natural disaster and resource management.[65][page needed][66][67] Additionally, user-generated data offers new opportunities to give the unheard a voice.[68] However, longstanding challenges for developing regions such as inadequate technological infrastructure and economic and human resource scarcity exacerbate existing concerns with big data such as privacy, imperfect methodology, and interoperability issues.[65][page needed] The challenge of "big data for development"[65][page needed] is currently evolving toward the application of this data through machine learning, known as "artificial intelligence for development" (AI4D).[69]
A major practical application of big data for development has been "fighting poverty with data".[70] In 2015, Blumenstock and colleagues predicted poverty and wealth from mobile phone metadata,[71] and in 2016 Jean and colleagues combined satellite imagery and machine learning to predict poverty.[72] Using digital trace data to study the labor market and the digital economy in Latin America, Hilbert and colleagues[73][74] argue that digital trace data has several benefits, such as:
At the same time, working with digital trace data instead of traditional survey data does not eliminate the traditional challenges involved when working in the field of international quantitative analysis. Priorities change, but the basic discussions remain the same. Among the main challenges are:
Big data is being rapidly adopted in finance to (1) speed up processing and (2) deliver better, more informed inferences, both internally and to the clients of financial institutions.[76] The financial applications of big data range from investing decisions and trading (processing volumes of available price data, limit order books, economic data and more, all at the same time), to portfolio management (optimizing over an increasingly large array of financial instruments, potentially selected from different asset classes), to risk management (credit rating based on extended information), and any other aspect where the data inputs are large.[77] Big data has also been a typical concept within the field of alternative financial services. Some of the major areas involve crowdfunding platforms and cryptocurrency exchanges.[78]
Big data analytics has been used in healthcare in providing personalized medicine and prescriptive analytics, clinical risk intervention and predictive analytics, waste and care variability reduction, automated external and internal reporting of patient data, standardized medical terms and patient registries.[79][80][81][82] Some areas of improvement are more aspirational than actually implemented. The level of data generated within healthcare systems is not trivial. With the added adoption of mHealth, eHealth and wearable technologies, the volume of data will continue to increase. This includes electronic health record data, imaging data, patient-generated data, sensor data, and other forms of difficult-to-process data. There is now an even greater need for such environments to pay greater attention to data and information quality.[83] "Big data very often means 'dirty data' and the fraction of data inaccuracies increases with data volume growth." Human inspection at the big data scale is impossible and there is a desperate need in health service for intelligent tools for accuracy and believability control and handling of information missed.[84] While extensive information in healthcare is now electronic, it fits under the big data umbrella as most is unstructured and difficult to use.[85] The use of big data in healthcare has raised significant ethical challenges ranging from risks for individual rights, privacy and autonomy, to transparency and trust.[86]
Big data in health research is particularly promising in terms of exploratory biomedical research, as data-driven analysis can move forward more quickly than hypothesis-driven research.[87] Trends seen in data analysis can then be tested in traditional, hypothesis-driven follow-up biological research and eventually clinical research.
A related application sub-area that heavily relies on big data within the healthcare field is that of computer-aided diagnosis in medicine.[88][page needed] For instance, for epilepsy monitoring it is customary to create 5 to 10 GB of data daily.[89] Similarly, a single uncompressed image of breast tomosynthesis averages 450 MB of data.[90] These are just a few of the many examples where computer-aided diagnosis uses big data. For this reason, big data has been recognized as one of the seven key challenges that computer-aided diagnosis systems need to overcome in order to reach the next level of performance.[91]
A McKinsey Global Institute study found a shortage of 1.5 million highly trained data professionals and managers,[51] and a number of universities,[92][better source needed] including the University of Tennessee and UC Berkeley, have created masters programs to meet this demand. Private boot camps have also developed programs to meet that demand, including paid programs like The Data Incubator or General Assembly.[93] In the specific field of marketing, one of the problems stressed by Wedel and Kannan[94] is that marketing has several sub-domains (e.g., advertising, promotions, product development, branding) that all use different types of data.
To understand how the media uses big data, it is first necessary to provide some context into the mechanism used for the media process. It has been suggested by Nick Couldry and Joseph Turow that practitioners in media and advertising approach big data as many actionable points of information about millions of individuals. The industry appears to be moving away from the traditional approach of using specific media environments such as newspapers, magazines, or television shows, and instead taps into consumers with technologies that reach targeted people at optimal times in optimal locations. The ultimate aim is to serve or convey a message or content that is (statistically speaking) in line with the consumer's mindset. For example, publishing environments are increasingly tailoring messages (advertisements) and content (articles) to appeal to consumers, based on insights gleaned exclusively through various data-mining activities.[95]
Channel 4, the British public-service television broadcaster, is a leader in the field of big data and data analysis.[97]
Health insurance providers are collecting data on social "determinants of health" such as food and TV consumption, marital status, clothing size, and purchasing habits, from which they make predictions on health costs, in order to spot health issues in their clients. It is controversial whether these predictions are currently being used for pricing.[98]
Big data and the IoT work in conjunction. Data extracted from IoT devices provides a mapping of device inter-connectivity. Such mappings have been used by the media industry, companies, and governments to more accurately target their audience and increase media efficiency. The IoT is also increasingly adopted as a means of gathering sensory data, and this sensory data has been used in medical,[99] manufacturing[100] and transportation[101] contexts.
Kevin Ashton, the digital innovation expert who is credited with coining the term,[102]defines the Internet of things in this quote: "If we had computers that knew everything there was to know about things—using data they gathered without any help from us—we would be able to track and count everything, and greatly reduce waste, loss, and cost. We would know when things needed replacing, repairing, or recalling, and whether they were fresh or past their best."
Especially since 2015, big data has come to prominence within business operations as a tool to help employees work more efficiently and streamline the collection and distribution of information technology (IT). The use of big data to resolve IT and data collection issues within an enterprise is called IT operations analytics (ITOA).[103] By applying big data principles to the concepts of machine intelligence and deep computing, IT departments can predict potential issues and prevent them.[103] ITOA businesses offer platforms for systems management that bring data silos together and generate insights from the whole of the system rather than from isolated pockets of data.
Compared to survey-based data collection, big data has low cost per data point, applies analysis techniques via machine learning and data mining, and includes diverse and new data sources, e.g., registers, social media, apps, and other forms of digital data. Since 2018, survey scientists have started to examine how big data and survey science can complement each other to allow researchers and practitioners to improve the production of statistics and its quality. There have been three Big Data Meets Survey Science (BigSurv) conferences, in 2018, 2020 (virtual), and 2023, with as of 2023[update] one conference forthcoming in 2025,[104] a special issue in the Social Science Computer Review,[105] a special issue in the Journal of the Royal Statistical Society,[106] a special issue in EPJ Data Science,[107] and a book called Big Data Meets Social Sciences[108] edited by Craig Hill and five other Fellows of the American Statistical Association. In 2021, the founding members of BigSurv received the Warren J. Mitofsky Innovators Award from the American Association for Public Opinion Research.[109]
Big data is notable in marketing due to the constant "datafication"[110] of everyday consumers of the internet, in which all forms of data are tracked. The datafication of consumers can be defined as quantifying many of or all human behaviors for the purpose of marketing.[110] The increasingly digital world of rapid datafication makes this idea relevant to marketing because the amount of data constantly grows exponentially. It is predicted to increase from 44 to 163 zettabytes within the span of five years.[111] The size of big data can often be difficult to navigate for marketers.[112] As a result, adopters of big data may find themselves at a disadvantage. Algorithmic findings can be difficult to achieve with such large datasets.[113] Big data in marketing is a highly lucrative tool for large corporations; its value lies in the possibility of predicting significant trends, interests, or statistical outcomes in a consumer-based manner.[114]
There are three significant factors in the use of big data in marketing:
Examples of uses of big data in public services:
Big data can be used to improve training and to understand competitors, using sport sensors. It is also possible to predict winners in a match using big data analytics.[158] Future performance of players could be predicted as well.[159] Thus, players' value and salary are determined by data collected throughout the season.[160]
In Formula One races, race cars with hundreds of sensors generate terabytes of data. These sensors collect data points from tire pressure to fuel burn efficiency.[161] Based on the data, engineers and data analysts decide whether adjustments should be made in order to win a race. In addition, using big data, race teams try to predict the time they will finish the race beforehand, based on simulations using data collected over the season.[162]
During the COVID-19 pandemic, big data was raised as a way to minimise the impact of the disease. Significant applications of big data included minimising the spread of the virus, case identification and development of medical treatment.[168]
Governments used big data to track infected people to minimise spread. Early adopters included China, Taiwan, South Korea, and Israel.[169][170][171]
Encrypted search and cluster formation in big data were demonstrated in March 2014 at the American Society of Engineering Education. Gautam Siwach of the MIT Computer Science and Artificial Intelligence Laboratory, in Tackling the Challenges of Big Data, and Amir Esmailpour of the UNH Research Group investigated the key features of big data as the formation of clusters and their interconnections. They focused on the security of big data and the orientation of the term towards the presence of different types of data in an encrypted form at the cloud interface, by providing the raw definitions and real-time examples within the technology. Moreover, they proposed an approach for identifying the encoding technique to advance towards an expedited search over encrypted text, leading to security enhancements in big data.[172]
In March 2012, The White House announced a national "Big Data Initiative" that consisted of six federal departments and agencies committing more than $200 million to big data research projects.[173]
The initiative included a National Science Foundation "Expeditions in Computing" grant of $10 million over five years to the AMPLab[174] at the University of California, Berkeley.[175] The AMPLab also received funds from DARPA and over a dozen industrial sponsors, and uses big data to attack a wide range of problems, from predicting traffic congestion[176] to fighting cancer.[177]
The White House Big Data Initiative also included a commitment by the Department of Energy to provide $25 million in funding over five years to establish the Scalable Data Management, Analysis and Visualization (SDAV) Institute,[178] led by the Energy Department's Lawrence Berkeley National Laboratory. The SDAV Institute aims to bring together the expertise of six national laboratories and seven universities to develop new tools to help scientists manage and visualize data on the department's supercomputers.
The U.S. state of Massachusetts announced the Massachusetts Big Data Initiative in May 2012, which provides funding from the state government and private companies to a variety of research institutions.[179] The Massachusetts Institute of Technology hosts the Intel Science and Technology Center for Big Data in the MIT Computer Science and Artificial Intelligence Laboratory, combining government, corporate, and institutional funding and research efforts.[180]
The European Commission is funding the two-year-long Big Data Public Private Forum through their Seventh Framework Program to engage companies, academics and other stakeholders in discussing big data issues. The project aims to define a strategy in terms of research and innovation to guide supporting actions from the European Commission in the successful implementation of the big data economy. Outcomes of this project will be used as input for Horizon 2020, their next framework program.[181]
The British government announced in March 2014 the founding of the Alan Turing Institute, named after the computer pioneer and code-breaker, which will focus on new ways to collect and analyze large data sets.[182]
At the University of Waterloo Stratford Campus Canadian Open Data Experience (CODE) Inspiration Day, participants demonstrated how using data visualization can increase the understanding and appeal of big data sets and communicate their story to the world.[183]
Computational social sciences – Anyone can use application programming interfaces (APIs) provided by big data holders, such as Google and Twitter, to do research in the social and behavioral sciences.[184] Often these APIs are provided for free.[184] Tobias Preis et al. used Google Trends data to demonstrate that Internet users from countries with a higher per capita gross domestic product (GDP) are more likely to search for information about the future than information about the past. The findings suggest there may be a link between online behaviors and real-world economic indicators.[185][186][187] The authors of the study examined Google query logs by computing the ratio of the volume of searches for the coming year (2011) to the volume of searches for the previous year (2009), which they call the "future orientation index".[188] They compared the future orientation index to the per capita GDP of each country, and found a strong tendency for countries where Google users inquire more about the future to have a higher GDP.
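The index described above is simply a ratio of per-country search volumes, which can be sketched as follows. The country names and volumes below are fabricated for illustration; the real study used Google Trends data:

```python
# hypothetical per-country search volumes: (searches for "2011", searches for "2009")
search_volume = {
    "CountryA": (120, 80),
    "CountryB": (60, 90),
}

def future_orientation_index(volumes):
    # ratio of forward-looking to backward-looking search volume
    return {country: future / past for country, (future, past) in volumes.items()}

foi = future_orientation_index(search_volume)
print(foi)  # CountryA leans toward the future (index > 1), CountryB toward the past
```

The study's finding is then a correlation between this index and per capita GDP across countries, not a property of any single country's ratio.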
Tobias Preis and his colleagues Helen Susannah Moat and H. Eugene Stanley introduced a method to identify online precursors for stock market moves, using trading strategies based on search volume data provided by Google Trends.[189] Their analysis of Google search volume for 98 terms of varying financial relevance, published in Scientific Reports,[190] suggests that increases in search volume for financially relevant search terms tend to precede large losses in financial markets.[191][192][193][194][195][196][197]
Big data sets come with algorithmic challenges that previously did not exist. Hence, some see a need to fundamentally change the ways in which data is processed.[198]
A research question asked about big data sets is whether it is necessary to look at the full data to draw certain conclusions about the properties of the data, or whether a sample is good enough. The name big data itself contains a term related to size, and this is an important characteristic of big data. But sampling enables the selection of the right data points from within the larger data set to estimate the characteristics of the whole population. In manufacturing, different types of sensory data such as acoustics, vibration, pressure, current, voltage, and controller data are available at short time intervals. To predict downtime it may not be necessary to look at all the data; a sample may be sufficient. Big data can be broken down by various data point categories such as demographic, psychographic, behavioral, and transactional data. With large sets of data points, marketers are able to create and use more customized segments of consumers for more strategic targeting.
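The sampling argument above can be made concrete with a small sketch: estimating the mean of a large synthetic data set from a 1,000-point random sample, rather than scanning every value. All numbers here are illustrative:

```python
import random

random.seed(42)
# synthetic "population": 100,000 sensor readings centred on 100 with spread 15
population = [random.gauss(100.0, 15.0) for _ in range(100_000)]

full_mean = sum(population) / len(population)

# a 1% simple random sample
sample = random.sample(population, 1_000)
sample_mean = sum(sample) / len(sample)

print(f"full mean ≈ {full_mean:.2f}, sample mean ≈ {sample_mean:.2f}")
# the two estimates agree closely, at a fraction of the cost of a full scan
```

The standard error of a 1,000-point sample mean here is roughly 15 / √1000 ≈ 0.47, which is why such a small fraction of the data suffices for this kind of estimate.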
Critiques of the big data paradigm come in two flavors: those that question the implications of the approach itself, and those that question the way it is currently done.[199] One approach to this criticism is the field of critical data studies.
"A crucial problem is that we do not know much about the underlying empirical micro-processes that lead to the emergence of the[se] typical network characteristics of Big Data."[23][page needed] In their critique, Snijders, Matzat, and Reips point out that often very strong assumptions are made about mathematical properties that may not at all reflect what is really going on at the level of micro-processes. Mark Graham has leveled broad critiques at Chris Anderson's assertion that big data will spell the end of theory:[200] focusing in particular on the notion that big data must always be contextualized in their social, economic, and political contexts.[201] Even as companies invest eight- and nine-figure sums to derive insight from information streaming in from suppliers and customers, less than 40% of employees have sufficiently mature processes and skills to do so. To overcome this insight deficit, big data, no matter how comprehensive or well analyzed, must be complemented by "big judgment", according to an article in the Harvard Business Review.[202]
Much in the same line, it has been pointed out that decisions based on the analysis of big data are inevitably "informed by the world as it was in the past, or, at best, as it currently is".[65][page needed] Fed by a large number of data on past experiences, algorithms can predict future development if the future is similar to the past.[203] If the system's dynamics change in the future (if it is not a stationary process), the past can say little about the future. In order to make predictions in changing environments, it would be necessary to have a thorough understanding of the system's dynamics, which requires theory.[203] As a response to this critique, Alemany Oliver and Vayre suggest using "abductive reasoning as a first step in the research process in order to bring context to consumers' digital traces and make new theories emerge".[204] Additionally, it has been suggested to combine big data approaches with computer simulations, such as agent-based models[65][page needed] and complex systems. Agent-based models are increasingly getting better at predicting the outcome of social complexities of even unknown future scenarios through computer simulations that are based on a collection of mutually interdependent algorithms.[205][206] Finally, the use of multivariate methods that probe for the latent structure of the data, such as factor analysis and cluster analysis, has proven useful as an analytic approach that goes well beyond the bivariate approaches (e.g. contingency tables) typically employed with smaller data sets.
In health and biology, conventional scientific approaches are based on experimentation. For these approaches, the limiting factor is the relevant data that can confirm or refute the initial hypothesis.[207] A new postulate is now accepted in the biosciences: the information provided by the data in huge volumes (omics), without a prior hypothesis, is complementary and sometimes necessary to conventional approaches based on experimentation.[208][209] In the massive approaches, it is the formulation of a relevant hypothesis to explain the data that is the limiting factor.[210] The search logic is reversed, and the limits of induction ("Glory of Science and Philosophy scandal", C. D. Broad, 1926) are to be considered.[citation needed]
Privacy advocates are concerned about the threat to privacy represented by increasing storage and integration of personally identifiable information; expert panels have released various policy recommendations to conform practice to expectations of privacy.[211] The misuse of big data in several cases by media, companies, and even the government has eroded trust in almost every fundamental institution holding up society.[212]
Barocas and Nissenbaum argue that one way of protecting individual users is by being informed about the types of information being collected, with whom it is shared, under what constraints, and for what purposes.[213]
The "V" model of big data is problematic, as it centers on computational scalability and neglects the perceptibility and understandability of information. This led to the framework of cognitive big data, which characterizes big data applications according to:[214]
Large data sets have been analyzed by computing machines for well over a century, including the US census analytics performed by IBM's punch-card machines, which computed statistics including means and variances of populations across the whole continent. In more recent decades, science experiments such as CERN have produced data on similar scales to current commercial "big data". However, science experiments have tended to analyze their data using specialized custom-built high-performance computing (super-computing) clusters and grids, rather than clouds of cheap commodity computers as in the current commercial wave, implying a difference in both culture and technology stack.
Ulf-Dietrich Reips and Uwe Matzat wrote in 2014 that big data had become a "fad" in scientific research.[184] Researcher Danah Boyd has raised concerns about the use of big data in science neglecting principles such as choosing a representative sample by being too concerned about handling the huge amounts of data.[215] This approach may lead to results that are biased in one way or another.[216] Integration across heterogeneous data resources—some that might be considered big data and others not—presents formidable logistical as well as analytical challenges, but many researchers argue that such integrations are likely to represent the most promising new frontiers in science.[217] In the provocative article "Critical Questions for Big Data",[218] the authors call big data a part of mythology: "large data sets offer a higher form of intelligence and knowledge [...], with the aura of truth, objectivity, and accuracy". Users of big data are often "lost in the sheer volume of numbers", and "working with Big Data is still subjective, and what it quantifies does not necessarily have a closer claim on objective truth".[218] Recent developments in the BI domain, such as pro-active reporting, especially target improvements in the usability of big data through automated filtering of non-useful data and correlations.[219] Big structures are full of spurious correlations,[220] either because of non-causal coincidences (law of truly large numbers), the nature of big randomness[221] (Ramsey theory), or the existence of non-included factors, so the hope of early experimenters to make large databases of numbers "speak for themselves" and revolutionize the scientific method is questioned.[222] Catherine Tucker has pointed to "hype" around big data, writing "By itself, big data is unlikely to be valuable."
The article explains: "The many contexts where data is cheap relative to the cost of retaining talent to process it, suggests that processing skills are more important than data itself in creating value for a firm."[223]
Big data analysis is often shallow compared to analysis of smaller data sets.[224] In many big data projects, there is no large data analysis happening; the challenge is the extract, transform, load part of data pre-processing.[224]
Big data is a buzzword and a "vague term",[225][226] but at the same time an "obsession"[226] of entrepreneurs, consultants, scientists, and the media. Big data showcases such as Google Flu Trends failed to deliver good predictions in recent years, overstating flu outbreaks by a factor of two. Similarly, Academy Awards and election predictions based solely on Twitter were more often off than on target.
Big data often poses the same challenges as small data; adding more data does not solve problems of bias, but may emphasize other problems. In particular, data sources such as Twitter are not representative of the overall population, and results drawn from such sources may then lead to wrong conclusions. Google Translate—which is based on big data statistical analysis of text—does a good job at translating web pages. However, results from specialized domains may be dramatically skewed.
On the other hand, big data may also introduce new problems, such as the multiple comparisons problem: simultaneously testing a large set of hypotheses is likely to produce many false results that mistakenly appear significant.
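The multiple comparisons problem is easy to simulate: when every null hypothesis is true, p-values are uniform on [0, 1], yet many tests still fall below the significance threshold purely by chance. A minimal sketch (the number of tests and the threshold are illustrative choices):

```python
import random

random.seed(0)

alpha = 0.05
m = 10_000  # simultaneous hypothesis tests, all with a true null

# Under a true null hypothesis, the p-value is uniform on [0, 1].
p_values = [random.random() for _ in range(m)]

false_positives = sum(p < alpha for p in p_values)
print(false_positives)  # about alpha * m = 500 spurious "significant" results

# A Bonferroni correction divides the threshold by the number of tests.
corrected = sum(p < alpha / m for p in p_values)
print(corrected)  # almost always 0
```

With 10,000 tests at the 5% level, roughly 500 "discoveries" appear even though nothing is there, which is exactly the effect Ioannidis describes for fields running many experiments.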
Ioannidis argued that "most published research findings are false"[227]due to essentially the same effect: when many scientific teams and researchers each perform many experiments (i.e. process a big amount of scientific data; although not with big data technology), the likelihood of a "significant" result being false grows fast – even more so, when only positive results are published.
Furthermore, big data analytics results are only as good as the model on which they are predicated. In one example, big data was used in attempts to predict the results of the 2016 U.S. presidential election[228] with varying degrees of success.
Big data has been used in policing and surveillance by institutions like law enforcement and corporations (see: corporate surveillance and surveillance capitalism).[229] Due to the less visible nature of data-based surveillance as compared to traditional methods of policing, objections to big data policing are less likely to arise. According to Sarah Brayne's Big Data Surveillance: The Case of Policing,[230] big data policing can reproduce existing societal inequalities in three ways:
If these potential problems are not corrected or regulated, the effects of big data policing may continue to shape societal hierarchies. Brayne also notes that conscientious usage of big data policing could prevent individual-level biases from becoming institutional biases.
|
https://en.wikipedia.org/wiki/Big_data
|
Differentiable programming is a programming paradigm in which a numeric computer program can be differentiated throughout via automatic differentiation.[1][2][3][4][5] This allows for gradient-based optimization of parameters in the program, often via gradient descent, as well as other learning approaches that are based on higher-order derivative information. Differentiable programming has found use in a wide variety of areas, particularly scientific computing and machine learning.[5] One of the early proposals to adopt such a framework in a systematic fashion to improve upon learning algorithms was made by the Advanced Concepts Team at the European Space Agency in early 2016.[6]
Most differentiable programming frameworks work by constructing a graph containing the control flow and data structures in the program.[7] Attempts generally fall into two groups:
The use of just-in-time compilation has emerged recently[when?] as a possible solution to overcome some of the bottlenecks of interpreted languages. The C++ heyoka and Python package heyoka.py make large use of this technique to offer advanced differentiable programming capabilities (also at high orders). A package for the Julia programming language—Zygote—works directly on Julia's intermediate representation.[7][11][5]
A limitation of earlier approaches is that they are only able to differentiate code written in a suitable manner for the framework, limiting their interoperability with other programs. Newer approaches resolve this issue by constructing the graph from the language's syntax or IR, allowing arbitrary code to be differentiated.[7][9]
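The core mechanism these frameworks automate can be sketched in a few lines with forward-mode automatic differentiation via dual numbers: every value carries its derivative, and arithmetic operators propagate both. This is an illustrative sketch of the idea, not any particular framework's API; the `Dual` and `grad` names are our own.

```python
# Minimal forward-mode automatic differentiation via dual numbers.
class Dual:
    """A value paired with its derivative, propagated through arithmetic."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def _coerce(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._coerce(other)
        # Sum rule: (u + v)' = u' + v'
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = self._coerce(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def grad(f):
    """Return a function computing f'(x) by seeding the derivative with 1."""
    return lambda x: f(Dual(x, 1.0)).der

# f(x) = x^3 + 2x, so f'(x) = 3x^2 + 2 and f'(2) = 14.
f = lambda x: x * x * x + 2 * x
print(grad(f)(2.0))  # 14.0
```

Because the derivative flows through ordinary Python arithmetic, any program built from these operations is differentiated "throughout" — the property that gradient-descent-based frameworks exploit at scale.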
Differentiable programming has been applied in areas such as combining deep learning with physics engines in robotics,[12] solving electronic-structure problems with differentiable density functional theory,[13] differentiable ray tracing,[14] differentiable imaging,[15] image processing,[16] and probabilistic programming.[5]
Differentiable programming is making significant strides in various fields beyond its traditional applications. In healthcare and life sciences, for example, it is being used for deep learning in biophysics-based modelling of molecular mechanisms, in areas such as protein structure prediction and drug discovery. These applications demonstrate the potential of differentiable programming in contributing to significant advancements in understanding complex biological systems and improving healthcare solutions.[17]
|
https://en.wikipedia.org/wiki/Differentiable_programming
|
These datasets are used in machine learning (ML) research and have been cited in peer-reviewed academic journals. Datasets are an integral part of the field of machine learning. Major advances in this field can result from advances in learning algorithms (such as deep learning), computer hardware, and, less intuitively, the availability of high-quality training datasets.[1] High-quality labeled training datasets for supervised and semi-supervised machine learning algorithms are usually difficult and expensive to produce because of the large amount of time needed to label the data. Although they do not need to be labeled, high-quality datasets for unsupervised learning can also be difficult and costly to produce.[2][3][4]
Many organizations, including governments, publish and share their datasets. The datasets are classified, based on their licenses, as open data and non-open data.
Datasets from various governmental bodies are presented in List of open government data sites. The datasets are hosted on open data portals and made available for searching, depositing, and access through interfaces like Open API. The datasets are made available as various sorted types and subtypes.
A data portal is classified based on its type of license. Data portals based on open-source licenses are known as open data portals and are used by many government organizations and academic institutions.
https://github.com/sebneu/ckan_instances/blob/master/instances.csv
https://dataverse.org/metrics
The data portal sometimes lists a wide variety of subtypes of datasets pertaining to many machine learning applications.
The data portals which are suitable for a specific subtype of machine learning application are listed in the subsequent sections.
These datasets consist primarily of text for tasks such as natural language processing, sentiment analysis, translation, and cluster analysis.
These datasets consist of sounds and sound features used for tasks such as speech recognition and speech synthesis.
Datasets containing electric signal information requiring some sort of signal processing for further analysis.
Datasets from physical systems.
Datasets from biological systems.
This section includes datasets that deal with structured data.
Further details are provided in the project's GitHub repository and the respective Hugging Face dataset card.
This section includes datasets that contain multi-turn text with at least two actors, a "user" and an "agent". The user makes requests that the agent performs.
Taskmaster-2: 17,289 dialogs in the seven domains (restaurants, food ordering, movies, hotels, flights, music and sports).
Taskmaster-3: 23,757 movie ticketing dialogs.
Taskmaster-3: conversation id, utterances, vertical, scenario, instructions.
For further details, check the project's GitHub repository or the Hugging Face dataset cards (taskmaster-1, taskmaster-2, taskmaster-3).
Additionally, each task contains a task definition.
Further information is provided in the GitHub repository of the project and the Hugging Face data card.
The dataset can be downloaded here, and the rejected data here.
The scripts to process the data are available in the GitHub repo mentioned in the paper: https://github.com/google-research/FLAN/tree/main/flan.
Another FLAN GitHub repo was created as well. This is the one associated with the dataset card on Hugging Face.
Data files can also be downloaded here.
Data is also available here.
Alternate list of reports.
As datasets come in myriad formats and can sometimes be difficult to use, there has been considerable work put into curating and standardizing the format of datasets to make them easier to use for machine learning research.
|
https://en.wikipedia.org/wiki/List_of_datasets_for_machine-learning_research
|
In machine learning and computer vision, M-theory is a learning framework inspired by feed-forward processing in the ventral stream of the visual cortex and originally developed for recognition and classification of objects in visual scenes. M-theory was later applied to other areas, such as speech recognition. On certain image recognition tasks, algorithms based on a specific instantiation of M-theory, HMAX, achieved human-level performance.[1]
The core principle of M-theory is extracting representations invariant under various transformations of images (translation, scale, 2D and 3D rotation, and others). In contrast with other approaches using invariant representations, in M-theory they are not hardcoded into the algorithms but learned. M-theory also shares some principles with compressed sensing. The theory proposes a multilayered hierarchical learning architecture, similar to that of the visual cortex.
A great challenge in visual recognition tasks is that the same object can be seen in a variety of conditions: from different distances, from different viewpoints, under different lighting, partially occluded, and so on. In addition, for particular classes of objects, such as faces, highly complex specific transformations may be relevant, such as changing facial expressions. For learning to recognize images, it is greatly beneficial to factor out these variations. The result is a much simpler classification problem and, consequently, a great reduction in the sample complexity of the model.
A simple computational experiment illustrates this idea. Two instances of a classifier were trained to distinguish images of planes from those of cars. For training and testing of the first instance, images with arbitrary viewpoints were used. The other instance received only images seen from a particular viewpoint, which was equivalent to training and testing the system on invariant representations of the images. The second classifier performed quite well even after receiving a single example from each category, while the performance of the first classifier was close to random guessing even after seeing 20 examples.
Invariant representations have been incorporated into several learning architectures, such as neocognitrons. Most of these architectures, however, provided invariance through custom-designed features or properties of the architecture itself. While this helps to take into account some sorts of transformations, such as translations, it is very nontrivial to accommodate other sorts, such as 3D rotations and changing facial expressions. M-theory provides a framework for how such transformations can be learned. In addition to higher flexibility, the theory also suggests how the human brain may have similar capabilities.
Another core idea of M-theory is close in spirit to ideas from the field of compressed sensing. An implication of the Johnson–Lindenstrauss lemma is that a particular number of images can be embedded into a low-dimensional feature space with the same distances between images by using random projections. This result suggests that the dot product between the observed image and some other image stored in memory, called a template, can be used as a feature helping to distinguish the image from other images. The template need not be related to the image in any way; it could be chosen randomly.
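The distance-preservation claim is easy to check numerically. The sketch below projects a few high-dimensional synthetic "images" through a random Gaussian matrix and compares pairwise distances before and after; the dimensions and scaling are illustrative choices, not from the source.

```python
import math
import random

random.seed(1)

# Random projections approximately preserve pairwise distances between
# high-dimensional points (Johnson-Lindenstrauss).
d, k, n = 500, 200, 5  # original dim, projected dim, number of "images"

def rand_vec(dim):
    return [random.gauss(0.0, 1.0) for _ in range(dim)]

# Rows scaled by 1/sqrt(k) so squared distances are preserved in expectation.
R = [rand_vec(d) for _ in range(k)]

def project(x):
    return [sum(r * xi for r, xi in zip(row, x)) / math.sqrt(k) for row in R]

def dist(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

images = [rand_vec(d) for _ in range(n)]
projected = [project(x) for x in images]

# Ratio of projected distance to original distance for every pair.
ratios = [dist(projected[i], projected[j]) / dist(images[i], images[j])
          for i in range(n) for j in range(i + 1, n)]
print(min(ratios), max(ratios))  # all close to 1
```

Since distances survive the projection, the k random dot products act as a compact feature vector, which is exactly the role templates play in M-theory.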
The two ideas outlined in the previous sections can be brought together to construct a framework for learning invariant representations. The key observation is how the dot product between an image I{\displaystyle I} and a template t{\displaystyle t} behaves when the image is transformed (by such transformations as translations, rotations, scales, etc.). If the transformation g{\displaystyle g} is a member of a unitary group of transformations, then the following holds:
⟨gI,t⟩=⟨I,g−1t⟩(1){\displaystyle \langle gI,t\rangle =\langle I,g^{-1}t\rangle \qquad (1)}
In other words, the dot product of a transformed image and a template is equal to the dot product of the original image and the inversely transformed template. For instance, for an image rotated by 90 degrees, the inversely transformed template would be rotated by −90 degrees.
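Property (1) can be verified numerically with the simplest unitary group action: cyclic translation of a 1-D "image", which is a permutation and hence unitary. The toy values below are arbitrary and chosen only for illustration.

```python
# Numeric check of <gI, t> == <I, g^{-1} t> for a unitary transformation.
def shift(v, s):
    """Translate v by s positions, wrapping around (the group action g)."""
    s %= len(v)
    return v[-s:] + v[:-s]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

I = [1.0, 2.0, 3.0, 4.0]   # image
t = [0.5, -1.0, 2.0, 0.0]  # template

lhs = dot(shift(I, 1), t)   # <gI, t>: transform the image
rhs = dot(I, shift(t, -1))  # <I, g^{-1} t>: inverse-transform the template
print(lhs, rhs)  # 5.0 5.0
```

Shifting the image forward by one step or the template backward by one step pairs up exactly the same components, which is why the two dot products agree.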
Consider the set of dot products of an image I{\displaystyle I} with all possible transformations of the template: {⟨I,g′t⟩∣g′∈G}{\displaystyle \lbrace \langle I,g^{\prime }t\rangle \mid g^{\prime }\in G\rbrace }. If one applies a transformation g{\displaystyle g} to I{\displaystyle I}, the set becomes {⟨gI,g′t⟩∣g′∈G}{\displaystyle \lbrace \langle gI,g^{\prime }t\rangle \mid g^{\prime }\in G\rbrace }. But because of property (1), this is equal to {⟨I,g−1g′t⟩∣g′∈G}{\displaystyle \lbrace \langle I,g^{-1}g^{\prime }t\rangle \mid g^{\prime }\in G\rbrace }. The set {g−1g′∣g′∈G}{\displaystyle \lbrace g^{-1}g^{\prime }\mid g^{\prime }\in G\rbrace } is equal to just the set of all elements in G{\displaystyle G}. To see this, note that every g−1g′{\displaystyle g^{-1}g^{\prime }} is in G{\displaystyle G} due to the closure property of groups, and for every g′′{\displaystyle g^{\prime \prime }} in G there exists a prototype g′{\displaystyle g^{\prime }} such that g′′=g−1g′{\displaystyle g^{\prime \prime }=g^{-1}g^{\prime }} (namely, g′=gg′′{\displaystyle g^{\prime }=gg^{\prime \prime }}). Thus, {⟨I,g−1g′t⟩∣g′∈G}={⟨I,g′′t⟩∣g′′∈G}{\displaystyle \lbrace \langle I,g^{-1}g^{\prime }t\rangle \mid g^{\prime }\in G\rbrace =\lbrace \langle I,g^{\prime \prime }t\rangle \mid g^{\prime \prime }\in G\rbrace }. One can see that the set of dot products remains the same even though a transformation was applied to the image. This set by itself may serve as a (very cumbersome) invariant representation of an image. More practical representations can be derived from it.
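The invariance of this set can also be checked directly. The sketch below takes G to be the cyclic translations of a 1-D "image", collects the dot products with every translated template, and confirms the sorted set is unchanged when the image itself is translated; the toy values are arbitrary.

```python
# The set { <I, g't> : g' in G } is unchanged when I is replaced by gI.
def shift(v, s):
    s %= len(v)
    return v[-s:] + v[:-s]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def signature(I, t):
    """Sorted dot products <I, g't> over all g' in G: an orbit invariant."""
    return sorted(dot(I, shift(t, s)) for s in range(len(t)))

I = [1.0, 2.0, 3.0, 4.0]
t = [0.5, -1.0, 2.0, 0.0]
gI = shift(I, 2)  # a transformed version of the image

print(signature(I, t) == signature(gI, t))  # True
```

Transforming the image merely permutes which template transform produces which dot product, so the set of values itself is a fixed point of the group action.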
In the introductory section, it was claimed that M-theory makes it possible to learn invariant representations. This is because templates and their transformed versions can be learned from visual experience – by exposing the system to sequences of transformations of objects. It is plausible that similar visual experiences occur in the early period of human life, for instance when infants twiddle toys in their hands. Because templates may be totally unrelated to the images that the system will later try to classify, memories of these visual experiences may serve as a basis for recognizing many different kinds of objects in later life. However, as shown later, for some kinds of transformations, specific templates are needed.
To implement the ideas described in the previous sections, one needs to know how to derive a computationally efficient invariant representation of an image. Such a unique representation for each image can be characterized by a set of one-dimensional probability distributions (empirical distributions of the dot products between the image and a set of templates stored during unsupervised learning). These probability distributions can in turn be described by either histograms or a set of statistical moments, as shown below.
The orbit OI{\displaystyle O_{I}} is the set of images gI{\displaystyle gI} generated from a single image I{\displaystyle I} under the action of the group G,∀g∈G{\displaystyle G,\forall g\in G}.
In other words, images of an object and of its transformations correspond to an orbit OI{\displaystyle O_{I}}. If two orbits have a point in common, they are identical everywhere,[2] i.e. an orbit is an invariant and unique representation of an image. So, two images are called equivalent when they belong to the same orbit: I∼I′{\displaystyle I\sim I^{\prime }} if ∃g∈G{\displaystyle \exists g\in G} such that I′=gI{\displaystyle I^{\prime }=gI}. Conversely, two orbits are different if none of the images in one orbit coincides with any image in the other.[3]
A natural question arises: how can one compare two orbits? There are several possible approaches. One of them employs the fact that intuitively two empirical orbits are the same irrespective of the ordering of their points. Thus, one can consider a probability distribution PI{\displaystyle P_{I}} induced by the group's action on images I{\displaystyle I} (gI{\displaystyle gI} can be seen as a realization of a random variable).
This probability distribution PI{\displaystyle P_{I}} can be almost uniquely characterized by K{\displaystyle K} one-dimensional probability distributions P⟨I,tk⟩{\displaystyle P_{\langle I,t^{k}\rangle }} induced by the (one-dimensional) results of projections ⟨I,tk⟩{\displaystyle \langle I,t^{k}\rangle }, where tk,k=1,…,K{\displaystyle t^{k},k=1,\ldots ,K} are a set of templates (randomly chosen images), based on the Cramer–Wold theorem[4] and concentration of measures.
Consider n{\displaystyle n} images Xn∈X{\displaystyle X_{n}\in X}. Let K≥2cε2lognδ{\displaystyle K\geq {\frac {2}{c\varepsilon ^{2}}}\log {\frac {n}{\delta }}}, where c{\displaystyle c} is a universal constant. Then
with probability 1−δ2{\displaystyle 1-\delta ^{2}}, for all I,I′{\displaystyle I,I^{\prime }} ∈{\displaystyle \in } Xn{\displaystyle X_{n}}.
This result (informally) says that an approximately invariant and unique representation of an image I{\displaystyle I} can be obtained from the estimates of K{\displaystyle K} 1-D probability distributions P⟨I,tk⟩{\displaystyle P_{\langle I,t^{k}\rangle }} for k=1,…,K{\displaystyle k=1,\ldots ,K}. The number K{\displaystyle K} of projections needed to discriminate n{\displaystyle n} orbits, induced by n{\displaystyle n} images, up to precision ε{\displaystyle \varepsilon } (and with confidence 1−δ2{\displaystyle 1-\delta ^{2}}) is K≥2cε2lognδ{\displaystyle K\geq {\frac {2}{c\varepsilon ^{2}}}\log {\frac {n}{\delta }}}, where c{\displaystyle c} is a universal constant.
To classify an image, the following "recipe" can be used:
Estimates of such one-dimensional probability density functions (PDFs) P⟨I,tk⟩{\displaystyle P_{\langle I,t^{k}\rangle }} can be written in terms of histograms as μnk(I)=1/|G|∑i=1|G|ηn(⟨I,gitk⟩){\displaystyle \mu _{n}^{k}(I)=1/\left|G\right|\sum _{i=1}^{\left|G\right|}\eta _{n}(\langle I,g_{i}t^{k}\rangle )}, where ηn,n=1,…,N{\displaystyle \eta _{n},n=1,\ldots ,N} is a set of nonlinear functions. These 1-D probability distributions can be characterized with N-bin histograms or a set of statistical moments. For example, HMAX represents an architecture in which pooling is done with a max operation.
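The histogram signature can be sketched concretely: compute the simple-cell responses ⟨I, g_i t⟩ over a group of transformations, then pool them into an N-bin histogram (the complex-cell stage). Cyclic translations stand in for the group, and all numbers (bin count, range, toy image) are illustrative choices.

```python
# Pooling simple-cell responses <I, g_i t> into an N-bin histogram signature.
def shift(v, s):
    s %= len(v)
    return v[-s:] + v[:-s]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def histogram_signature(I, t, bins=4, lo=-10.0, hi=10.0):
    responses = [dot(I, shift(t, s)) for s in range(len(t))]  # simple cells
    width = (hi - lo) / bins
    hist = [0] * bins
    for r in responses:                                       # complex cell pools
        hist[min(int((r - lo) / width), bins - 1)] += 1
    return hist

I = [1.0, 2.0, 3.0, 4.0]
t = [0.5, -1.0, 2.0, 0.0]

# The histogram is unchanged when the image is translated.
print(histogram_signature(I, t) == histogram_signature(shift(I, 3), t))  # True
```

Replacing the histogram with `max(responses)` gives the HMAX-style max pooling mentioned above; any statistic of the response set that ignores ordering yields an invariant.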
In the "recipe" for image classification, groups of transformations are approximated with a finite number of transformations. Such an approximation is possible only when the group is compact.
Groups such as all translations and all scalings of the image are not compact, as they allow arbitrarily large transformations. However, they are locally compact. For locally compact groups, invariance is achievable within a certain range of transformations.[2]
Assume that G0{\displaystyle G_{0}} is a subset of transformations from G{\displaystyle G} for which the transformed patterns exist in memory. For an image I{\displaystyle I} and template tk{\displaystyle t_{k}}, assume that ⟨I,g−1tk⟩{\displaystyle \langle I,g^{-1}t_{k}\rangle } is equal to zero everywhere except on some subset of G0{\displaystyle G_{0}}. This subset is called the support of ⟨I,g−1tk⟩{\displaystyle \langle I,g^{-1}t_{k}\rangle } and denoted supp(⟨I,g−1tk⟩){\displaystyle \operatorname {supp} (\langle I,g^{-1}t_{k}\rangle )}. It can be proven that if, for a transformation g′{\displaystyle g^{\prime }}, the support set also lies within g′G0{\displaystyle g^{\prime }G_{0}}, then the signature of I{\displaystyle I} is invariant with respect to g′{\displaystyle g^{\prime }}.[2] This theorem determines the range of transformations for which invariance is guaranteed to hold.
One can see that the smaller supp(⟨I,g−1tk⟩){\displaystyle \operatorname {supp} (\langle I,g^{-1}t_{k}\rangle )} is, the larger the range of transformations for which invariance is guaranteed to hold. This means that for a group that is only locally compact, not all templates work equally well. Preferable templates are those with a reasonably small supp(⟨gI,tk⟩){\displaystyle \operatorname {supp} (\langle gI,t_{k}\rangle )} for a generic image. This property is called localization: templates are sensitive only to images within a small range of transformations. Although minimizing supp(⟨gI,tk⟩){\displaystyle \operatorname {supp} (\langle gI,t_{k}\rangle )} is not absolutely necessary for the system to work, it improves the approximation of invariance. Requiring localization simultaneously for translation and scale yields a very specific kind of template: Gabor functions.[2]
The desirability of custom templates for non-compact groups is in conflict with the principle of learning invariant representations. However, for certain kinds of regularly encountered image transformations, templates might be the result of evolutionary adaptations. Neurobiological data suggest that there is Gabor-like tuning in the first layer of the visual cortex.[5] The optimality of Gabor templates for translations and scales is a possible explanation of this phenomenon.
Many interesting transformations of images do not form groups. For instance, transformations of images associated with 3D rotation of the corresponding 3D object do not form a group, because it is impossible to define an inverse transformation (two objects may look the same from one angle but different from another). However, approximate invariance is still achievable even for non-group transformations, if the localization condition for templates holds and the transformation can be locally linearized.
As mentioned in the previous section, for the specific case of translations and scaling, the localization condition can be satisfied using generic Gabor templates. However, for general (non-group) transformations, the localization condition can be satisfied only for a specific class of objects.[2] More specifically, in order to satisfy the condition, templates must be similar to the objects one would like to recognize. For instance, to build a system that recognizes 3D-rotated faces, one needs to use other 3D-rotated faces as templates. This may explain the existence of specialized modules in the brain, such as the one responsible for face recognition.[2] Even with custom templates, a noise-like encoding of images and templates is necessary for localization. It can be naturally achieved if the non-group transformation is processed on any layer other than the first in a hierarchical recognition architecture.
The previous section suggests one motivation for hierarchical image recognition architectures. However, they have other benefits as well.
Firstly, hierarchical architectures best accomplish the goal of 'parsing' a complex visual scene with many objects consisting of many parts, whose relative positions may greatly vary. In this case, different elements of the system must react to different objects and parts. In hierarchical architectures, representations of parts at different levels of the embedding hierarchy can be stored at different layers of the hierarchy.
Secondly, hierarchical architectures which have invariant representations for parts of objects may facilitate learning of complex compositional concepts. This facilitation may happen through reuse of learned representations of parts constructed earlier in the process of learning other concepts. As a result, the sample complexity of learning compositional concepts may be greatly reduced.
Finally, hierarchical architectures have better tolerance to clutter. The clutter problem arises when the target object is in front of a non-uniform background, which functions as a distractor for the visual task. A hierarchical architecture provides signatures for parts of target objects, which do not include parts of the background and are not affected by background variations.[6]
In hierarchical architectures, one layer is not necessarily invariant to all transformations that are handled by the hierarchy as a whole. Some transformations may pass through that layer to upper layers, as in the case of the non-group transformations described in the previous section. For other transformations, an element of the layer may produce invariant representations only within a small range of transformations. For instance, elements of the lower layers in the hierarchy have a small visual field and thus can handle only a small range of translation. For such transformations, the layer should provide covariant, rather than invariant, signatures. The property of covariance can be written as distr(⟨μl(gI),μl(t)⟩)=distr(⟨μl(I),μl(g−1t)⟩){\displaystyle \operatorname {distr} (\langle \mu _{l}(gI),\mu _{l}(t)\rangle )=\operatorname {distr} (\langle \mu _{l}(I),\mu _{l}(g^{-1}t)\rangle )}, where l{\displaystyle l} is a layer, μl(I){\displaystyle \mu _{l}(I)} is the signature of the image on that layer, and distr{\displaystyle \operatorname {distr} } stands for "distribution of values of the expression for all g∈G{\displaystyle g\in G}".
M-theory is based on a quantitative theory of the ventral stream of the visual cortex.[7][8] Understanding how the visual cortex performs object recognition is still a challenging task for neuroscience. Humans and primates are able to memorize and recognize objects after seeing just a couple of examples, unlike state-of-the-art machine vision systems, which usually require a lot of data in order to recognize objects. Previously, the use of visual neuroscience in computer vision had been limited to early vision for deriving stereo algorithms (e.g.,[9]) and to justifying the use of DoG (derivative-of-Gaussian) filters and, more recently, of Gabor filters.[10][11] No real attention had been given to biologically plausible features of higher complexity. While mainstream computer vision has always been inspired and challenged by human vision, it seems to have never advanced past the very first stages of processing in the simple cells in V1 and V2. Although some of the systems inspired – to various degrees – by neuroscience have been tested on at least some natural images, neurobiological models of object recognition in cortex have not yet been extended to deal with real-world image databases.[12]
The M-theory learning framework employs a novel hypothesis about the main computational function of the ventral stream: the representation of new objects/images in terms of a signature which is invariant to transformations learned during visual experience. This allows recognition from very few labeled examples – in the limit, just one.
Neuroscience suggests that a natural functional for a neuron to compute is a high-dimensional dot product between an "image patch" and another image patch (called a template)
which is stored in terms of synaptic weights (synapses per neuron). The standard computational model of a neuron is based on a dot product and a threshold. Another important feature of the visual cortex is that it consists of simple and complex cells. This idea was originally proposed by Hubel and Wiesel.[9] M-theory employs this idea. Simple cells compute dot products of an image and transformations of templates ⟨I,gitk⟩{\displaystyle \langle I,g_{i}t^{k}\rangle } for i=1,…,|G|{\displaystyle i=1,\ldots ,|G|} (|G|{\displaystyle |G|} is the number of simple cells). Complex cells are responsible for pooling, computing empirical histograms or statistical moments of these dot products. The following formula for constructing a histogram can be computed by neurons:
where σ{\displaystyle \sigma } is a smooth version of the step function, Δ{\displaystyle \Delta } is the width of a histogram bin, and n{\displaystyle n} is the number of the bin.
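The simple-cell/complex-cell scheme can be sketched numerically. The following is a toy illustration, not the original model: the transformation group is assumed to be circular shifts of a 1-D signal, the smooth step σ is taken to be a logistic function, and all names are invented for the example.

```python
import numpy as np

def simple_cells(image, template, shifts):
    # simple cells: dot products of the image with transformed (shifted)
    # copies of the stored template, one per group element g_i
    return np.array([np.dot(image, np.roll(template, s)) for s in shifts])

def complex_cell_histogram(responses, n_bins, delta, beta=20.0):
    # complex cell: pool the simple-cell responses into a smooth empirical
    # histogram; sigma is a smooth step (logistic) function
    sigma = lambda x: 1.0 / (1.0 + np.exp(-beta * x))
    edges = delta * np.arange(n_bins + 1)
    # smoothed fraction of responses below each bin edge, differenced per bin
    cdf = np.array([sigma(e - responses).mean() for e in edges])
    return np.diff(cdf)
```

Because the shifts here range over the whole (circular) group, the histogram is the same for a shifted input image: the multiset of dot products is merely permuted, which is the invariance property the pooling stage is meant to deliver.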
In [13][14] the authors applied M-theory to unconstrained face recognition in natural photographs. Unlike the DAR (detection, alignment, and recognition) method, which handles clutter by detecting objects and cropping closely around them so that very little background remains, this approach accomplishes detection and alignment implicitly by storing transformations of training images (templates) rather than explicitly detecting and aligning or cropping faces at test time. This system is built according to the principles of a recent theory of invariance in hierarchical networks and can evade the clutter problem that is generally problematic for feedforward systems.
The resulting end-to-end system achieves a drastic improvement in the state of the art on this end-to-end task, reaching the same level of performance as the best systems operating on aligned, closely cropped images (no outside training data). It also performs well on two newer datasets, similar to LFW, but more difficult: significantly jittered (misaligned) version of LFW and SUFR-W (for example, the model's accuracy in the LFW "unaligned & no outside data used" category is 87.55±1.41% compared to state-of-the-art APEM (adaptive probabilistic elastic matching): 81.70±1.78%).
The theory was also applied to a range of recognition tasks: from invariant single-object recognition in clutter to multiclass categorization problems on publicly available data sets (CalTech5, CalTech101, MIT-CBCL) and complex (street) scene understanding tasks that require the recognition of both shape-based and texture-based objects (on the StreetScenes data set).[12] The approach performs well: it is capable of learning from only a few training examples and was shown to outperform several more complex state-of-the-art systems, such as constellation models and a hierarchical SVM-based face-detection system. A key element in the approach is a new set of scale- and position-tolerant feature detectors, which are biologically plausible and agree quantitatively with the tuning properties of cells along the ventral stream of visual cortex. These features are adaptive to the training set, though a universal feature set, learned from a set of natural images unrelated to any categorization task, likewise achieves good performance.
This theory can also be extended for the speech recognition domain.
As an example, in [15] an extension of the theory for unsupervised learning of invariant visual representations to the auditory domain was proposed and empirically evaluated for voiced speech sound classification. The authors empirically demonstrated that a single-layer, phone-level representation, extracted from base speech features, improves segment classification accuracy and decreases the number of training examples needed, in comparison with standard spectral and cepstral features, for an acoustic classification task on the TIMIT dataset.[16]
|
https://en.wikipedia.org/wiki/M-theory_(learning_framework)
|
Machine unlearning is a branch of machine learning focused on removing specific undesired elements, such as private data, outdated information, copyrighted material, harmful content, dangerous abilities, or misinformation, without needing to rebuild models from the ground up.
Large language models, like the ones powering ChatGPT, may be asked not just to remove specific elements but also to unlearn a "concept," "fact," or "knowledge," which aren't easily linked to specific examples. New terms such as "model editing," "concept editing," and "knowledge unlearning" have emerged to describe this process.[1]
Early research efforts were largely motivated by Article 17 of the GDPR, the European Union's privacy regulation commonly known as the "right to be forgotten" (RTBF), introduced in 2014.[2]
The GDPR did not anticipate that the development of large language models would make data erasure a complex task. This issue has since led to research on "machine unlearning," with a growing focus on removing copyrighted material, harmful content, dangerous capabilities, and misinformation. Just as early experiences in humans shape later ones, some concepts are more fundamental and harder to unlearn. A piece of knowledge may be so deeply embedded in the model’s knowledge graph that unlearning it could cause internal contradictions, requiring adjustments to other parts of the graph to resolve them.[citation needed]
|
https://en.wikipedia.org/wiki/Machine_unlearning
|
Solomonoff's theory of inductive inference proves that, under its common sense assumptions (axioms), the best possible scientific model is the shortest algorithm that generates the empirical data under consideration. In addition to the choice of data, other assumptions are that, to avoid the post-hoc fallacy, the programming language must be chosen prior to the data[1] and that the environment being observed is generated by an unknown algorithm. This is also called a theory of induction. Due to its basis in the dynamical (state-space model) character of Algorithmic Information Theory, it encompasses statistical as well as dynamical information criteria for model selection. It was introduced by Ray Solomonoff, based on probability theory and theoretical computer science.[2][3][4] In essence, Solomonoff's induction derives the posterior probability of any computable theory, given a sequence of observed data. This posterior probability is derived from Bayes' rule and some universal prior, that is, a prior that assigns a positive probability to any computable theory.
Solomonoff proved that this induction is incomputable (or more precisely, lower semi-computable), but noted that "this incomputability is of a very benign kind", and that it "in no way inhibits its use for practical prediction" (as it can be approximated from below more accurately with more computational resources).[3] It is only "incomputable" in the benign sense that no scientific consensus is able to prove that the best current scientific theory is the best of all possible theories. However, Solomonoff's theory does provide an objective criterion for deciding among the current scientific theories explaining a given set of observations.
Solomonoff's induction naturally formalizes Occam's razor[5][6][7][8][9] by assigning larger prior credences to theories that require a shorter algorithmic description.
The theory is based on philosophical foundations and was founded by Ray Solomonoff around 1960.[10] It is a mathematically formalized combination of Occam's razor[5][6][7][8][9] and the Principle of Multiple Explanations.[11] All computable theories which perfectly describe previous observations are used to calculate the probability of the next observation, with more weight put on the shorter computable theories. Marcus Hutter's universal artificial intelligence builds upon this to calculate the expected value of an action.
Solomonoff's induction has been argued to be the computational formalization of pure Bayesianism.[4] To understand this, recall that Bayesianism derives the posterior probability P[T|D]{\displaystyle \mathbb {P} [T|D]} of a theory T{\displaystyle T} given data D{\displaystyle D} by applying Bayes' rule, which yields
where theories A{\displaystyle A} are alternatives to theory T{\displaystyle T}. For this equation to make sense, the quantities P[D|T]{\displaystyle \mathbb {P} [D|T]} and P[D|A]{\displaystyle \mathbb {P} [D|A]} must be well-defined for all theories T{\displaystyle T} and A{\displaystyle A}. In other words, any theory must define a probability distribution over observable data D{\displaystyle D}. Solomonoff's induction essentially boils down to demanding that all such probability distributions be computable.
Interestingly, the set of computable probability distributions is a subset of the set of all programs, which is countable. Similarly, the sets of observable data considered by Solomonoff were finite. Without loss of generality, we can thus consider that any observable data is a finite bit string. As a result, Solomonoff's induction can be defined by only invoking discrete probability distributions.
Solomonoff's induction then allows one to make probabilistic predictions of future data F{\displaystyle F}, by simply obeying the laws of probability. Namely, we have P[F|D]=ET[P[F|T,D]]=∑TP[F|T,D]P[T|D]{\displaystyle \mathbb {P} [F|D]=\mathbb {E} _{T}[\mathbb {P} [F|T,D]]=\sum _{T}\mathbb {P} [F|T,D]\mathbb {P} [T|D]}. This quantity can be interpreted as the average of the predictions P[F|T,D]{\displaystyle \mathbb {P} [F|T,D]} of all theories T{\displaystyle T} given past data D{\displaystyle D}, weighted by their posterior credences P[T|D]{\displaystyle \mathbb {P} [T|D]}.
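This posterior-weighted prediction can be illustrated with a toy finite class of deterministic "theories". The theory names, description lengths, and rules below are invented for the example; real Solomonoff induction ranges over all computable distributions and is incomputable.

```python
from fractions import Fraction

# Toy "theories": each is a (description length in bits, rule) pair, where
# the rule predicts the next bit given the bits seen so far. The lengths
# are assumptions standing in for program lengths.
theories = {
    "all_zeros": (1, lambda bits: 0),
    "all_ones":  (1, lambda bits: 1),
    "alternate": (2, lambda bits: len(bits) % 2),   # 0, 1, 0, 1, ...
}

def posterior(data):
    # prior 2^-length; likelihood is 1 if the theory reproduces the data, else 0
    weights = {}
    for name, (length, rule) in theories.items():
        consistent = all(rule(data[:i]) == b for i, b in enumerate(data))
        weights[name] = Fraction(1, 2 ** length) if consistent else Fraction(0)
    total = sum(weights.values())
    return {n: w / total for n, w in weights.items()}

def predict_next(data):
    # P[next=1 | data] = sum over theories of P[next=1 | T] * P[T | data]
    post = posterior(data)
    return sum(p for n, p in post.items() if theories[n][1](data) == 1)
```

After observing 0, 1, 0 only the "alternate" rule survives, so the prediction becomes certain; after a single 0, the shorter "all zeros" theory carries more posterior weight than "alternate", exactly as the razor prescribes.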
The proof of the "razor" is based on the known mathematical properties of a probability distribution over acountable set. These properties are relevant because theinfinite setof all programs is a denumerable set. The sum S of the probabilities of all programs must be exactly equal to one (as per the definition ofprobability) thus the probabilities must roughly decrease as we enumerate the infinite set of all programs, otherwise S will be strictly greater than one. To be more precise, for everyϵ{\displaystyle \epsilon }> 0, there is some lengthlsuch that the probability of all programs longer thanlis at mostϵ{\displaystyle \epsilon }. This does not, however, preclude very long programs from having very high probability.
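The tail bound described above can be made concrete with a simple choice of prior. The prior here is an assumption made for the example: each of the 2^l binary programs of length l gets weight 2^(-2l), so the total mass over all lengths sums to exactly one.

```python
def prior(length):
    # weight assigned to EACH of the 2**length programs of this length
    # (assumed prior: 2**(-2*length), chosen so the total mass is 1)
    return 2.0 ** (-2 * length)

def mass_up_to(max_len):
    # total probability of all programs of length 1..max_len
    return sum((2 ** l) * prior(l) for l in range(1, max_len + 1))

def tail_bound(eps):
    # smallest length L such that all programs longer than L together
    # carry probability at most eps
    L = 1
    while 1.0 - mass_up_to(L) > eps:
        L += 1
    return L
```

With this prior the mass at length l is 2^-l, so the tail beyond length L is 2^-L: for any ε > 0 a suitable L exists, as the proof requires.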
Fundamental ingredients of the theory are the concepts of algorithmic probability and Kolmogorov complexity. The universal prior probability of any prefix p of a computable sequence x is the sum of the probabilities of all programs (for a universal computer) that compute something starting with p. Given some p and any computable but unknown probability distribution from which x is sampled, the universal prior and Bayes' theorem can be used to predict the yet unseen parts of x in optimal fashion.
The remarkable property of Solomonoff's induction is its completeness. In essence, the completeness theorem guarantees that the expected cumulative errors made by the predictions based on Solomonoff's induction are upper-bounded by the Kolmogorov complexity of the (stochastic) data generating process. The errors can be measured using the Kullback–Leibler divergence or the square of the difference between the induction's prediction and the probability assigned by the (stochastic) data generating process.
Unfortunately, Solomonoff also proved that Solomonoff's induction is uncomputable. In fact, he showed that computability and completeness are mutually exclusive: any complete theory must be uncomputable. The proof of this is derived from a game between the induction and the environment. Essentially, any computable induction can be tricked by a computable environment, by choosing the computable environment that negates the computable induction's prediction. This fact can be regarded as an instance of the no free lunch theorem.
Though Solomonoff's inductive inference is not computable, several AIXI-derived algorithms approximate it in order to make it run on a modern computer. The more computing power they are given, the closer their predictions are to the predictions of inductive inference (their mathematical limit is Solomonoff's inductive inference).[12][13][14]
Another direction of inductive inference is based on E. Mark Gold's model of learning in the limit from 1967, which has since developed more and more models of learning.[15] The general scenario is the following: given a class S of computable functions, is there a learner (that is, a recursive functional) which, for any input of the form (f(0), f(1), ..., f(n)), outputs a hypothesis? The hypothesis is an index e with respect to a previously agreed-on acceptable numbering of all computable functions; the indexed function may be required to be consistent with the given values of f. A learner M learns a function f if almost all its hypotheses are the same index e, which generates the function f; M learns S if M learns every f in S. Basic results are that all recursively enumerable classes of functions are learnable, while the class REC of all computable functions is not learnable.[citation needed] Many related models have been considered, and the learning of classes of recursively enumerable sets from positive data has been studied from Gold's pioneering paper in 1967 onwards. A far-reaching extension of Gold's approach is Schmidhuber's theory of generalized Kolmogorov complexities,[16] which are kinds of super-recursive algorithms.
|
https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_inductive_inference
|
A behavior tree is a mathematical model of plan execution used in computer science, robotics, control systems and video games. Behavior trees describe switchings between a finite set of tasks in a modular fashion. Their strength comes from their ability to create very complex tasks composed of simple tasks, without worrying how the simple tasks are implemented. Behavior trees present some similarities to hierarchical state machines, with the key difference that the main building block of a behavior is a task rather than a state. Their ease of human understanding makes behavior trees less error-prone and very popular in the game developer community. Behavior trees have been shown to generalize several other control architectures.[1][2]
A behavior-based control structure was initially proposed by Rodney Brooks in his paper titled 'A robust layered control system for a mobile robot'. In the initial proposal, a list of behaviors could act as alternatives to one another; later the approach was extended and generalized into a tree-like organization of behaviors, with extensive application in the game industry[citation needed] as a powerful tool to model the behavior of non-player characters (NPCs).[3][4][5][6] They have been extensively used in high-profile video games such as Halo, Bioshock, and Spore. Recent works propose behavior trees as a multi-mission control framework for UAVs, complex robots, robotic manipulation, and multi-robot systems.[7][8][9][10][11][12] Behavior trees have now reached the maturity to be treated in Game AI textbooks,[13][14] as well as generic game environments such as Unity (game engine) and Unreal Engine (see links below).
Behavior trees became popular for their development paradigm: being able to create a complex behavior by only programming the NPC's actions and then designing a tree structure (usually through drag and drop) whose leaf nodes are actions and whose inner nodes determine the NPC's decision making. Behavior trees are visually intuitive and easy to design, test, and debug, and provide more modularity, scalability, and reusability than other behavior creation methods.
Over the years, the diverse implementations of behavior trees kept improving both in efficiency and capabilities to satisfy the demands of the industry, until they evolved into event-driven behavior trees.[15][5] Event-driven behavior trees solved some scalability issues of classical behavior trees by changing how the tree internally handles its execution, and by introducing a new type of node that can react to events and abort running nodes. Nowadays, the concept of the event-driven behavior tree is standard and used in most implementations, even though they are still called "behavior trees" for simplicity.
A behavior tree is graphically represented as a directed tree in which the nodes are classified as root, control flow nodes, or execution nodes (tasks). For each pair of connected nodes, the outgoing node is called the parent and the incoming node is called the child. The root has no parents and exactly one child, the control flow nodes have one parent and at least one child, and the execution nodes have one parent and no children. Graphically, the children of a control flow node are placed below it, ordered from left to right.[16]
The execution of a behavior tree starts from the root, which sends ticks with a certain frequency to its child. A tick is an enabling signal that allows the execution of a child. When the execution of a node in the behavior tree is allowed, it returns to the parent a status of running if its execution has not finished yet, success if it has achieved its goal, or failure otherwise.
A control flow node is used to control the subtasks of which it is composed. A control flow node may be either a selector (fallback) node or a sequence node. They run each of their subtasks in turn. When a subtask is completed and returns its status (success or failure), the control flow node decides whether to execute the next subtask or not.
Fallback nodes are used to find and execute the first child that does not fail. A fallback node will return with a status code of success or running immediately when one of its children returns success or running (see Figure I and the pseudocode below). The children are ticked in order of importance, from left to right.
In pseudocode, the algorithm for a fallback composition is:
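The referenced pseudocode does not appear in this text. A minimal runnable sketch of a fallback tick, under the assumption that each child is a callable returning one of the three status strings:

```python
def tick_fallback(children):
    """Tick children left to right (in order of importance); return the
    status of the first child that does not fail."""
    for child in children:
        status = child()  # one of "SUCCESS", "FAILURE", "RUNNING"
        if status != "FAILURE":
            # return immediately on the first success or running child
            return status
    return "FAILURE"
```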
Sequence nodes are used to find and execute the first child that has not yet succeeded. A sequence node will return with a status code of failure or running immediately when one of its children returns failure or running (see Figure II and the pseudocode below). The children are ticked in order, from left to right.
In pseudocode, the algorithm for a sequence composition is:
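The referenced pseudocode does not appear in this text. A minimal runnable sketch of a sequence tick, under the assumption that each child is a callable returning one of the three status strings:

```python
def tick_sequence(children):
    """Tick children left to right; return the status of the first child
    that does not succeed."""
    for child in children:
        status = child()  # one of "SUCCESS", "FAILURE", "RUNNING"
        if status != "SUCCESS":
            # return immediately on the first failure or running child
            return status
    return "SUCCESS"
```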
In order to apply control theory tools to the analysis of behavior trees, they can be defined as a three-tuple.[17]
Ti={fi,ri,Δt},{\displaystyle T_{i}=\{f_{i},r_{i},\Delta t\},}
where i∈N{\displaystyle i\in \mathbb {N} } is the index of the tree, fi:Rn→Rn{\displaystyle f_{i}:\mathbb {R} ^{n}\rightarrow \mathbb {R} ^{n}} is a vector field representing the right-hand side of an ordinary difference equation, Δt{\displaystyle \Delta t} is a time step and ri:Rn→{Ri,Si,Fi}{\displaystyle r_{i}:\mathbb {R} ^{n}\rightarrow \{R_{i},S_{i},F_{i}\}} is the return status, which can be equal to either
Running Ri{\displaystyle R_{i}},
Success Si{\displaystyle S_{i}}, or
Failure Fi{\displaystyle F_{i}}.
Note: A task is a degenerate behavior tree with no parent and no child.
The execution of a behavior tree is described by the following standard ordinary difference equations:
xk+1(tk+1)=fi(xk(tk)){\displaystyle x_{k+1}(t_{k+1})=f_{i}(x_{k}(t_{k}))}
tk+1=tk+Δt{\displaystyle t_{k+1}=t_{k}+\Delta t}
where k∈N{\displaystyle k\in \mathbb {N} } represents the discrete time, and x∈Rn{\displaystyle x\in \mathbb {R} ^{n}} is the state space of the system modelled by the behavior tree.
Two behavior trees Ti{\displaystyle T_{i}} and Tj{\displaystyle T_{j}} can be composed into a more complex behavior tree T0{\displaystyle T_{0}} using a Sequence operator.
T0=sequence(Ti,Tj).{\displaystyle T_{0}={\mbox{sequence}}(T_{i},T_{j}).}
The return status r0{\displaystyle r_{0}} and the vector field f0{\displaystyle f_{0}} associated with T0{\displaystyle T_{0}} are defined (for S1{\displaystyle {\mathcal {S}}_{1}}[definition needed]) as follows:
r0(xk)={rj(xk)ifxk∈S1ri(xk)otherwise.{\displaystyle r_{0}(x_{k})={\begin{cases}r_{j}(x_{k})&{\text{ if }}x_{k}\in {\mathcal {S}}_{1}\\r_{i}(x_{k})&{\text{ otherwise }}.\end{cases}}}
f0(xk)={fj(xk)ifxk∈S1fi(xk)otherwise.{\displaystyle f_{0}(x_{k})={\begin{cases}f_{j}(x_{k})&{\text{ if }}x_{k}\in {\mathcal {S}}_{1}\\f_{i}(x_{k})&{\text{ otherwise }}.\end{cases}}}
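This Sequence composition can be sketched as code, under the assumption (since S1 is not defined in this text) that S1 is the success region of the first tree, i.e. the set of states where the first tree's return status is Success; all names below are illustrative:

```python
RUNNING, SUCCESS, FAILURE = "R", "S", "F"

def sequence(tree_i, tree_j):
    """Compose two behavior trees, each given as a pair (f, r) of a
    state-update map f and a return-status map r, into T0 = sequence(Ti, Tj)."""
    f_i, r_i = tree_i
    f_j, r_j = tree_j
    # run the second tree only on states where the first has succeeded
    def r0(x):
        return r_j(x) if r_i(x) == SUCCESS else r_i(x)
    def f0(x):
        return f_j(x) if r_i(x) == SUCCESS else f_i(x)
    return (f0, r0)
```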
|
https://en.wikipedia.org/wiki/Behavior_tree_(artificial_intelligence,_robotics_and_control)
|
In machine learning (ML), boosting is an ensemble metaheuristic primarily for reducing bias (as opposed to variance).[1] It can also improve the stability and accuracy of ML classification and regression algorithms. Hence, it is prevalent in supervised learning for converting weak learners to strong learners.[2]
The concept of boosting is based on the question posed by Kearns and Valiant (1988, 1989):[3][4] "Can a set of weak learners create a single strong learner?" A weak learner is defined as a classifier that is only slightly correlated with the true classification. A strong learner is a classifier that is arbitrarily well-correlated with the true classification. Robert Schapire answered the question in the affirmative in a paper published in 1990.[5] This has had significant ramifications in machine learning and statistics, most notably leading to the development of boosting.[6]
Initially, the hypothesis boosting problem simply referred to the process of turning a weak learner into a strong learner.[3] Algorithms that achieve this quickly became known as "boosting". Freund and Schapire's arcing (Adapt[at]ive Resampling and Combining),[7] as a general technique, is more or less synonymous with boosting.[8]
While boosting is not algorithmically constrained, most boosting algorithms consist of iteratively learning weak classifiers with respect to a distribution and adding them to a final strong classifier. When they are added, they are weighted in a way that is related to the weak learners' accuracy. After a weak learner is added, the data weights are readjusted, known as "re-weighting". Misclassified input data gain a higher weight and examples that are classified correctly lose weight.[note 1]Thus, future weak learners focus more on the examples that previous weak learners misclassified.
There are many boosting algorithms. The original ones, proposed by Robert Schapire (a recursive majority gate formulation)[5] and Yoav Freund (boost by majority),[9] were not adaptive and could not take full advantage of the weak learners. Schapire and Freund then developed AdaBoost, an adaptive boosting algorithm that won the prestigious Gödel Prize.
Only algorithms that are provable boosting algorithms in the probably approximately correct learning formulation can accurately be called boosting algorithms. Other algorithms that are similar in spirit[clarification needed] to boosting algorithms are sometimes called "leveraging algorithms", although they are also sometimes incorrectly called boosting algorithms.[9]
The main variation between many boosting algorithms is their method of weighting training data points and hypotheses. AdaBoost is very popular and the most significant historically, as it was the first algorithm that could adapt to the weak learners. It is often the basis of introductory coverage of boosting in university machine learning courses.[10] There are many more recent algorithms such as LPBoost, TotalBoost, BrownBoost, xgboost, MadaBoost, LogitBoost, and others. Many boosting algorithms fit into the AnyBoost framework,[9] which shows that boosting performs gradient descent in a function space using a convex cost function.
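The iterative re-weighting scheme described above can be sketched as a small AdaBoost implementation with threshold "stumps" as weak learners. This is a simplified illustration, not any particular library's API; all names are invented for the example.

```python
import numpy as np

def adaboost(X, y, n_rounds=10):
    """Train an AdaBoost ensemble of threshold stumps.
    X: (n, d) feature matrix; y: labels in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)              # example weights, readjusted each round
    ensemble = []                        # (alpha, feature, threshold, sign) per round
    for _ in range(n_rounds):
        best = None
        for j in range(d):               # exhaustively pick the lowest-error stump
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] >= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign, pred)
        err, j, thr, sign, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # vote weight from learner accuracy
        w = w * np.exp(-alpha * y * pred)       # misclassified examples gain weight
        w = w / w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    # weighted vote of all weak learners, thresholded at zero
    score = sum(a * s * np.where(X[:, j] >= t, 1, -1) for a, j, t, s in ensemble)
    return np.sign(score)
```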
Given images containing various known objects in the world, a classifier can be learned from them to automatically classify the objects in future images. Simple classifiers built based on some image feature of the object tend to be weak in categorization performance. Using boosting methods for object categorization is a way to unify the weak classifiers in a special way to boost the overall ability of categorization.[citation needed]
Object categorization is a typical task of computer vision that involves determining whether or not an image contains some specific category of object. The idea is closely related to recognition, identification, and detection. Appearance-based object categorization typically contains feature extraction, learning a classifier, and applying the classifier to new examples. There are many ways to represent a category of objects, e.g. from shape analysis, bag-of-words models, or local descriptors such as SIFT. Examples of supervised classifiers are Naive Bayes classifiers, support vector machines, mixtures of Gaussians, and neural networks. However, research[which?] has shown that object categories and their locations in images can be discovered in an unsupervised manner as well.[11]
The recognition of object categories in images is a challenging problem in computer vision, especially when the number of categories is large. This is due to high intra-class variability and the need for generalization across variations of objects within the same category. Objects within one category may look quite different. Even the same object may appear unalike under different viewpoint, scale, and illumination. Background clutter and partial occlusion add difficulties to recognition as well.[12] Humans are able to recognize thousands of object types, whereas most existing object recognition systems are trained to recognize only a few,[quantify] e.g. human faces, cars, simple objects, etc.[13][needs update?] Research has been very active in dealing with more categories and enabling incremental additions of new categories, and although the general problem remains unsolved, several multi-category object detectors (for up to hundreds or thousands of categories[14]) have been developed. One means is by feature sharing and boosting.
AdaBoost can be used for face detection as an example of binary categorization. The two categories are faces versus background. The general algorithm is as follows:
After boosting, a classifier constructed from 200 features could yield a 95% detection rate under a 10−5{\displaystyle 10^{-5}} false positive rate.[15]
Another application of boosting for binary categorization is a system that detects pedestrians using patterns of motion and appearance.[16] This work is the first to combine both motion information and appearance information as features to detect a walking person. It takes a similar approach to the Viola–Jones object detection framework.
Compared with binary categorization, multi-class categorization looks for common features that can be shared across the categories at the same time. These turn out to be more generic, edge-like features. During learning, the detectors for each category can be trained jointly. Compared with training separately, joint training generalizes better, needs less training data, and requires fewer features to achieve the same performance.
The main flow of the algorithm is similar to the binary case. What is different is that a measure of the joint training error must be defined in advance. During each iteration the algorithm chooses a classifier of a single feature (features that can be shared by more categories are encouraged). This can be done by converting multi-class classification into a binary one (a set of categories versus the rest),[17] or by introducing a penalty error from the categories that do not have the feature of the classifier.[18]
In the paper "Sharing visual features for multiclass and multiview object detection", A. Torralba et al. used GentleBoost for boosting and showed that, when training data is limited, learning via shared features does a much better job than no sharing, given the same number of boosting rounds. Also, for a given performance level, the total number of features required (and therefore the run time cost of the classifier) for the feature-sharing detectors is observed to scale approximately logarithmically with the number of classes, i.e., slower than the linear growth of the non-sharing case. Similar results are shown in the paper "Incremental learning of object detectors using a visual shape alphabet", though the authors used AdaBoost for boosting.
Boosting algorithms can be based on convex or non-convex optimization algorithms. Convex algorithms, such as AdaBoost and LogitBoost, can be "defeated" by random noise such that they cannot learn basic and learnable combinations of weak hypotheses.[19][20] This limitation was pointed out by Long & Servedio in 2008. However, by 2009, multiple authors demonstrated that boosting algorithms based on non-convex optimization, such as BrownBoost, can learn from noisy datasets and can specifically learn the underlying classifier of the Long–Servedio dataset.
|
https://en.wikipedia.org/wiki/Boosting_(machine_learning)
|
Corporate finance is an area of finance that deals with the sources of funding and the capital structure of businesses, the actions that managers take to increase the value of the firm to the shareholders, and the tools and analysis used to allocate financial resources. The primary goal of corporate finance is to maximize or increase shareholder value.[1]
Correspondingly, corporate finance comprises two main sub-disciplines.[citation needed] Capital budgeting is concerned with the setting of criteria about which value-adding projects should receive investment funding, and whether to finance that investment with equity or debt capital. Working capital management is the management of the company's monetary funds that deal with the short-term operating balance of current assets and current liabilities; the focus here is on managing cash, inventories, and short-term borrowing and lending (such as the terms on credit extended to customers).
The terms corporate finance and corporate financier are also associated with investment banking. The typical role of an investment bank is to evaluate the company's financial needs and raise the appropriate type of capital that best fits those needs. Thus, the terms "corporate finance" and "corporate financier" may be associated with transactions in which capital is raised in order to create, develop, grow or acquire businesses.[2]
Although it is in principle different from managerial finance, which studies the financial management of all firms rather than corporations alone, the main concepts in the study of corporate finance are applicable to the financial problems of all kinds of firms. Financial management overlaps with the financial function of the accounting profession. However, financial accounting is the reporting of historical financial information, while financial management is concerned with the deployment of capital resources to increase a firm's value to the shareholders.
Corporate finance for the pre-industrial world began to emerge in the Italian city-states and the Low Countries of Europe from the 15th century.
The Dutch East India Company (also known by the abbreviation "VOC" in Dutch) was the first publicly listed company ever to pay regular dividends.[3][4][5] The VOC was also the first recorded joint-stock company to have a fixed capital stock. Public markets for investment securities developed in the Dutch Republic during the 17th century.[6][7][8]
By the early 1800s, London acted as a center of corporate finance for companies around the world, which innovated new forms of lending and investment; see City of London § Economy.
The twentieth century brought the rise of managerial capitalism and common stock finance, with share capital raised through listings, in preference to other sources of capital.
Modern corporate finance, alongside investment management, developed in the second half of the 20th century, particularly driven by innovations in theory and practice in the United States and Britain.[9][10][11][12][13][14] Here, see the later sections of History of banking in the United States and of History of private equity and venture capital.
The primary goal of financial management[15] is to maximize or to continually increase shareholder value (see Fisher separation theorem).[a] Here, the three main questions that corporate finance addresses are: What long-term investments should we make? What methods should we employ to finance the investment? How do we manage our day-to-day financial activities? These three questions lead to the primary areas of concern in corporate finance: capital budgeting, capital structure, and working capital management.[19][20] This then requires that managers find an appropriate balance between investments in "projects" that increase the firm's long-term profitability, and paying excess cash in the form of dividends to shareholders; short-term considerations, such as paying back creditor-related debt, will also feature.[15][21]
Choosing between investment projects will thus be based upon several inter-related criteria.[1] (1) Corporate management seeks to maximize the value of the firm by investing in projects which yield a positive net present value when valued using an appropriate discount rate - the "hurdle rate" - in consideration of risk. (2) These projects must also be financed appropriately. (3) If no growth is possible by the company and excess cash surplus is not needed by the firm, then financial theory suggests that management should return some or all of the excess cash to shareholders (i.e., distribution via dividends).[22]
The first two criteria concern "capital budgeting", the planning of value-adding, long-term corporate financial projects relating to investments funded through and affecting the firm's capital structure, and where management must allocate the firm's limited resources between competing opportunities ("projects").[23] Capital budgeting is thus also concerned with the setting of criteria about which projects should receive investment funding to increase the value of the firm, and whether to finance that investment with equity or debt capital.[24] Investments should be made on the basis of value added to the future of the corporation. Projects that increase a firm's value may include a wide variety of different types of investments, including, but not limited to, expansion policies or mergers and acquisitions.
The third criterion relates to dividend policy.
In general, managers of growth companies (i.e. firms that earn high rates of return on invested capital) will use most of the firm's capital resources and surplus cash on investments and projects so the company can continue to expand its business operations into the future. When companies reach maturity levels within their industry (i.e. companies that earn approximately average or lower returns on invested capital), managers of these companies will use surplus cash to pay out dividends to shareholders.
Thus, when no growth or expansion is likely, and excess cash surplus exists and is not needed, then management is expected to pay out some or all of those surplus earnings in the form of cash dividends or to repurchase the company's stock through a share buyback program.[25][26]
Achieving the goals of corporate finance requires that any corporate investment be financed appropriately.[27] The sources of financing are, generically, capital self-generated by the firm and capital from external funders, obtained by issuing new debt and equity (and hybrid or convertible securities). However, as above, since both hurdle rate and cash flows (and hence the riskiness of the firm) will be affected, the financing mix will impact the valuation of the firm, and a considered decision[28] is required here.
See Balance sheet, WACC.
Finally, there is much theoretical discussion as to other considerations that management might weigh here.
Corporations, as outlined, may rely on borrowed funds (debt capital or credit) as sources of investment to sustain ongoing business operations or to fund future growth. Debt comes in several forms, such as bank loans, notes payable, or bonds issued to the public. Bonds require the corporation to make regular interest payments (interest expenses) on the borrowed capital until the debt reaches its maturity date, whereupon the firm must pay back the obligation in full. (An exception is zero-coupon bonds - or "zeros".) Debt payments can also be made in the form of a sinking fund provision, whereby the corporation pays annual installments of the borrowed debt above regular interest charges. Corporations that issue callable bonds are entitled to pay back the obligation in full whenever the company feels it is in its best interest to pay off the debt early. If interest expenses cannot be met by the corporation through cash payments, the firm may also use collateral assets as a form of repaying its debt obligations (or through the process of liquidation).
Especially re debt-funded corporations, see Bankruptcy and Financial distress.
Under some treatments (especially for valuation) leases are regarded as debt: the payments are set; they are tax deductible; failing to make them results in the loss of the asset.[29]
Corporations can alternatively sell shares of the company to investors to raise capital. Investors, or shareholders, expect the value of the company to appreciate over time, making their investment a profitable purchase. As outlined:
Shareholder value is increased when corporations invest equity capital and other funds into projects (or investments) that earn a positive rate of return for the owners. Investors then prefer to buy shares of stock in companies that will consistently earn a positive rate of return on capital (on equity) in the future, thus increasing the market value of the stock of that corporation.
Shareholder value may also be increased when corporations pay out excess cash surplus (funds that are not needed for business) in the form of dividends. Internal financing often consists of retained earnings, i.e. those remaining after dividends; per some measures, this provides the cheapest form of funding.
Preferred stock is a specialized form of financing which combines properties of common stock and debt instruments, and may then be considered a hybrid security. Preferreds are senior (i.e. higher ranking) to common stock, but subordinate to bonds in terms of claim (or rights to their share of the assets of the company).[30] Preferred stock usually carries no voting rights,[31] but may carry a dividend and may have priority over common stock in the payment of dividends and upon liquidation. Terms of the preferred stock are stated in a "Certificate of Designation".
Similar to bonds, preferred stocks are rated by the major credit-rating companies. The rating for preferreds is generally lower, since preferred dividends do not carry the same guarantees as interest payments from bonds and they are junior to all creditors.[32]Preferred stock is then a special class of shares which may have any combination of features not possessed by common stock.
The following features are usually associated with preferred stock:[33]
As outlined, the financing "mix" will impact the valuation (as well as the cash flows) of the firm, and must therefore be structured appropriately:
There are then two interrelated considerations[28] here:
The above are the primary objectives in deciding on the firm's capitalization structure. Parallel considerations will also factor into management's thinking.
The starting point for discussion here is the Modigliani–Miller theorem.
This states, through two connected Propositions, that in a "perfect market" how a firm is financed is irrelevant to its value:
(i) the value of a company is independent of its capital structure; (ii) the cost of equity will be the same for a leveraged firm and an unleveraged firm.
"Modigliani and Miller", however, is generally viewed as a theoretical result, and in practice, management will here too focus on enhancing firm value and/or reducing the cost of funding.
Re value, much of the discussion falls under the umbrella of the Trade-Off Theory, in which firms are assumed to trade off the tax benefits of debt against the bankruptcy costs of debt when choosing how to allocate the company's resources, finding an optimum re firm value.
The capital structure substitution theory hypothesizes that management manipulates the capital structure such that earnings per share (EPS) are maximized.
Re cost of funds, the Pecking Order Theory (Stewart Myers) suggests that firms avoid external financing while they have internal financing available, and avoid new equity financing while they can engage in new debt financing at reasonably low interest rates.
One of the more recent innovations in this area from a theoretical point of view is the market timing hypothesis. This hypothesis, inspired by the behavioral finance literature, states that firms look for the cheaper type of financing regardless of their current levels of internal resources, debt and equity.
The process of allocating financial resources to major investment or capital expenditure is known as capital budgeting.[38][23] Consistent with the overall goal of increasing firm value, the decisioning here focuses on whether the investment in question is worthy of funding through the firm's capitalization structures (debt, equity or retained earnings as above).
To be considered acceptable, the investment must be value additive re: (i) improved operating profit and cash flows; as combined with (ii) any new funding commitments and capital implications.
Re the latter: if the investment is large in the context of the firm as a whole, the discount rate applied by outside investors to the (private) firm's equity may be adjusted upwards to reflect the new level of risk,[39] thus impacting future financing activities and overall valuation.
More sophisticated treatments will thus produce accompanying sensitivity and risk metrics, and will incorporate any inherent contingencies.
The focus of capital budgeting is on major "projects" - often investments in other firms, or expansion into new markets or geographies - but may extend also to new plants, new or replacement machinery, new products, and research and development programs; day-to-day operational expenditure is the realm of financial management as below.
DCF valuation formula, where the value of the firm or project is the sum of its forecasted free cash flows discounted to the present using the weighted average cost of capital, i.e. cost of equity and cost of debt, with the former (often) derived using the CAPM. The final part is the terminal value, aggregating all cash flows beyond the explicit forecast period, for an appropriate long-term growth in earnings.
In general,[40] each "project's" value will be estimated using a discounted cash flow (DCF) valuation, and the opportunity with the highest value, as measured by the resultant net present value (NPV), will be selected (first applied in a corporate finance setting by Joel Dean in 1951). This requires estimating the size and timing of all of the incremental cash flows resulting from the project. Such future cash flows are then discounted to determine their present value (see Time value of money). These present values are then summed, and this sum net of the initial investment outlay is the NPV. See Financial modeling § Accounting for general discussion, and Valuation using discounted cash flows for the mechanics, with discussion re modifications for corporate finance.
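The NPV calculation just described can be sketched in Python; the cash-flow figures and the 10% discount rate below are purely illustrative assumptions, not values from the source.

```python
def npv(rate, cashflows):
    """Net present value: discount each incremental cash flow to today.

    cashflows[0] is the initial outlay at t=0 (typically negative);
    the remaining entries are forecast cash flows at t=1, 2, ...
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical project: a 1,000 outlay, then 450 per year for three
# years, discounted at a 10% hurdle rate.
project = [-1000, 450, 450, 450]
value = npv(0.10, project)
# A positive NPV means the project adds value and passes the criterion.
```

Under these assumed figures the discounted inflows exceed the outlay, so the project would be selected over any mutually exclusive alternative with a lower NPV.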
The NPV is greatly affected by the discount rate. Thus, identifying the proper discount rate – often termed the project "hurdle rate"[41] – is critical to choosing appropriate projects and investments for the firm. The hurdle rate is the minimum acceptable return on an investment – i.e., the project-appropriate discount rate. The hurdle rate should reflect the riskiness of the investment, typically measured by volatility of cash flows, and must take into account the project-relevant financing mix.[42] Managers use models such as the CAPM or the APT to estimate a discount rate appropriate for a particular project, and use the weighted average cost of capital (WACC) to reflect the financing mix selected. (A common error in choosing a discount rate for a project is to apply a WACC that applies to the entire firm. Such an approach may not be appropriate where the risk of a particular project differs markedly from that of the firm's existing portfolio of assets.)
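As a rough sketch of how CAPM and WACC combine into a hurdle rate, consider the following; all figures (risk-free rate, beta, market return, capital amounts, tax rate) are illustrative assumptions rather than recommended inputs.

```python
def capm_cost_of_equity(risk_free, beta, market_return):
    # CAPM: required return = risk-free rate + beta * market risk premium.
    return risk_free + beta * (market_return - risk_free)

def wacc(equity, debt, cost_of_equity, cost_of_debt, tax_rate):
    # Weighted average cost of capital; interest is tax-deductible,
    # so the cost of debt enters after tax.
    total = equity + debt
    return (equity / total) * cost_of_equity + \
           (debt / total) * cost_of_debt * (1 - tax_rate)

# Illustrative figures only: rf = 3%, beta = 1.2, market return = 8%.
ke = capm_cost_of_equity(0.03, 1.2, 0.08)   # 0.03 + 1.2 * 0.05
hurdle = wacc(equity=600, debt=400, cost_of_equity=ke,
              cost_of_debt=0.05, tax_rate=0.25)
```

Note that, per the caveat above, this firm-wide WACC should only serve as a project hurdle rate where the project's risk resembles that of the firm's existing assets.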
In conjunction with NPV, there are several other measures used as (secondary) selection criteria in corporate finance; see Capital budgeting § Ranked projects. These are visible from the DCF and include discounted payback period, IRR, Modified IRR, equivalent annuity, capital efficiency, and ROI.
Alternatives (complements) to the standard DCF model economic profit as opposed to free cash flow; these include residual income valuation, MVA/EVA (Joel Stern, Stern Stewart & Co) and APV (Stewart Myers). With the cost of capital correctly and correspondingly adjusted, these valuations should yield the same result as the DCF. These may, however, be considered more appropriate for projects with negative free cash flow several years out, but which are expected to generate positive cash flow thereafter (and may also be less sensitive to terminal value).
Given the uncertainty inherent in project forecasting and valuation,[43][44][45] analysts will wish to assess the sensitivity of project NPV to the various inputs (i.e. assumptions) to the DCF model. In a typical sensitivity analysis the analyst will vary one key factor while holding all other inputs constant, ceteris paribus. The sensitivity of NPV to a change in that factor is then observed, and is calculated as a "slope": ΔNPV / Δfactor. For example, the analyst will determine NPV at various growth rates in annual revenue as specified (usually at set increments, e.g. -10%, -5%, 0%, 5%...), and then determine the sensitivity using this formula. Often, several variables may be of interest, and their various combinations produce a "value-surface"[46] (or even a "value-space"), where NPV is then a function of several variables. See also Stress testing.
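The sensitivity-as-slope idea can be sketched as follows; the project figures are illustrative, and the ΔNPV / Δfactor slope is computed exactly as described above:

```python
def npv(rate, outlay, revenue, years, growth):
    """NPV of a toy project whose annual revenue grows at a fixed rate."""
    flows = sum(revenue * (1 + growth) ** t / (1 + rate) ** t
                for t in range(1, years + 1))
    return flows - outlay

# Vary the revenue growth rate at set increments, ceteris paribus,
# holding the other inputs constant.
base = dict(rate=0.10, outlay=1000, revenue=400, years=3)
npvs = {g: npv(growth=g, **base) for g in (-0.10, -0.05, 0.0, 0.05, 0.10)}

# Sensitivity as a slope: change in NPV per unit change in growth.
slope = (npvs[0.05] - npvs[0.0]) / 0.05
```

Repeating this over a second variable (e.g. unit cost) and tabulating NPV for each pair of values would produce the "value-surface" mentioned above.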
Using a related technique, analysts also run scenario-based forecasts of NPV. Here, a scenario comprises a particular outcome for economy-wide, "global" factors (demand for the product, exchange rates, commodity prices, etc.) as well as for company-specific factors (unit costs, etc.). As an example, the analyst may specify various revenue growth scenarios (e.g. -5% for "Worst Case", +5% for "Likely Case" and +15% for "Best Case"), where all key inputs are adjusted so as to be consistent with the growth assumptions, and calculate the NPV for each. Note that for scenario-based analysis, the various combinations of inputs must be internally consistent (see discussion at Financial modeling), whereas for the sensitivity approach these need not be so. An application of this methodology is to determine an "unbiased" NPV, where management determines a (subjective) probability for each scenario – the NPV for the project is then the probability-weighted average of the various scenarios; see First Chicago Method. (See also rNPV, where cash flows, as opposed to scenarios, are probability-weighted.)
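A minimal sketch of the probability-weighted "unbiased" NPV across scenarios; the probabilities and scenario NPVs below are invented for illustration, and each scenario's NPV would in practice come from a full, internally consistent DCF run.

```python
# Hypothetical scenarios with management-assigned (subjective) probabilities.
scenarios = {
    "worst":  {"prob": 0.25, "npv": -150.0},
    "likely": {"prob": 0.50, "npv":   80.0},
    "best":   {"prob": 0.25, "npv":  260.0},
}

# Probabilities must cover all scenarios considered.
assert abs(sum(s["prob"] for s in scenarios.values()) - 1.0) < 1e-9

# "Unbiased" NPV: the probability-weighted average across scenarios.
expected_npv = sum(s["prob"] * s["npv"] for s in scenarios.values())
```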
A further advancement which "overcomes the limitations of sensitivity and scenario analyses by examining the effects of all possible combinations of variables and their realizations"[47] is to construct stochastic[48] or probabilistic financial models – as opposed to the traditional static and deterministic models as above.[44] For this purpose, the most common method is to use Monte Carlo simulation to analyze the project's NPV. This method was introduced to finance by David B. Hertz in 1964, although it has only recently become common: today analysts are even able to run simulations in spreadsheet-based DCF models, typically using a risk-analysis add-in, such as @Risk or Crystal Ball. Here, the cash flow components that are (heavily) impacted by uncertainty are simulated, mathematically reflecting their "random characteristics". In contrast to the scenario approach above, the simulation produces several thousand random but possible outcomes, or trials, "covering all conceivable real world contingencies in proportion to their likelihood;"[49] see Monte Carlo Simulation versus "What If" Scenarios. The output is then a histogram of project NPV, and the average NPV of the potential investment – as well as its volatility and other sensitivities – is then observed. This histogram provides information not visible from the static DCF: for example, it allows for an estimate of the probability that a project has a net present value greater than zero (or any other value).
Continuing the above example: instead of assigning three discrete values to revenue growth, and to the other relevant variables, the analyst would assign an appropriate probability distribution to each variable (commonly triangular or beta), and, where possible, specify the observed or supposed correlation between the variables. These distributions would then be "sampled" repeatedly – incorporating this correlation – so as to generate several thousand random but possible scenarios, with corresponding valuations, which are then used to generate the NPV histogram. The resultant statistics (average NPV and standard deviation of NPV) will be a more accurate mirror of the project's "randomness" than the variance observed under the scenario-based approach. (These are often used as estimates of the underlying "spot price" and volatility for the real option valuation below; see Real options valuation § Valuation inputs.) A more robust Monte Carlo model would include the possible occurrence of risk events - e.g., a credit crunch - that drive variations in one or more of the DCF model inputs.
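A toy Monte Carlo version of the above, using only the standard library; the triangular-distribution parameters and the single-variable cash-flow model are illustrative assumptions, not a real project (a production model would simulate several correlated variables).

```python
import random

def simulate_npv(trials=10_000, rate=0.10, outlay=1000, seed=42):
    """Monte Carlo NPV: draw the revenue growth rate from a triangular
    distribution rather than taking three discrete scenario values."""
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        growth = rng.triangular(-0.10, 0.15, 0.05)  # low, high, mode
        flows = sum(400 * (1 + growth) ** t / (1 + rate) ** t
                    for t in range(1, 4))
        results.append(flows - outlay)
    return results

npvs = simulate_npv()
mean_npv = sum(npvs) / len(npvs)
# Probability that project NPV exceeds zero, read off the simulated trials.
p_positive = sum(v > 0 for v in npvs) / len(npvs)
```

Binning `npvs` would give the NPV histogram discussed above, from which the average, volatility, and tail probabilities are all observable.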
Often - for example with R&D projects - a project may open (or close) various paths of action to the company, but this reality will not (typically) be captured in a strict NPV approach.[50] Some analysts account for this uncertainty by[43] adjusting the discount rate (e.g. by increasing the cost of capital) or the cash flows (using certainty equivalents, or applying (subjective) "haircuts" to the forecast numbers; see Penalized present value).[51][52] Even when employed, however, these latter methods do not normally properly account for changes in risk over the project's lifecycle and hence fail to appropriately adapt the risk adjustment.[53][54] Management will therefore (sometimes) employ tools which place an explicit value on these options. So, whereas in a DCF valuation the most likely or average or scenario-specific cash flows are discounted, here the "flexible and staged nature" of the investment is modelled, and hence "all" potential payoffs are considered. See further under Real options valuation. The difference between the two valuations is the "value of flexibility" inherent in the project.
The two most common tools are Decision Tree Analysis (DTA)[43] and real options valuation (ROV);[55] they may often be used interchangeably:
Dividend policy is concerned with financial policies regarding the payment of a cash dividend in the present, or retaining earnings and then paying an increased dividend at a later stage.
The policy will be set based upon the type of company and what management determines is the best use of those dividend resources for the firm and its shareholders.
Practical and theoretical considerations - interacting with the above funding and investment decisioning, and re overall firm value - will inform this thinking.[56][57]
In general, whether[58] to issue dividends,[56] and what amount, is determined on the basis of the company's unappropriated profit (excess cash) and influenced by the company's long-term earning power. In all instances, as above, the appropriate dividend policy is in parallel directed by that which maximizes long-term shareholder value.
When cash surplus exists and is not needed by the firm, then management is expected to pay out some or all of those surplus earnings in the form of cash dividends or to repurchase the company's stock through a share buyback program.
Thus, if there are no NPV-positive opportunities, i.e. projects where returns exceed the hurdle rate, and excess cash surplus is not needed, then management should return (some or all of) the excess cash to shareholders as dividends.
This is the general case, however the "style" of the stock may also impact the decision. Shareholders of a "growth stock", for example, expect that the company will retain (most of) the excess cash surplus so as to fund future projects internally to help increase the value of the firm. Shareholders of value or secondary stocks, on the other hand, would prefer management to pay surplus earnings in the form of cash dividends, especially when a positive return cannot be earned through the reinvestment of undistributed earnings; a share buyback program may be accepted when the value of the stock is greater than the returns to be realized from the reinvestment of undistributed profits.
Management will also choose the form of the dividend distribution, as stated, generally as cash dividends or via a share buyback. Various factors may be taken into consideration: where shareholders must pay tax on dividends, firms may elect to retain earnings or to perform a stock buyback, in both cases increasing the value of shares outstanding. Alternatively, some companies will pay "dividends" from stock rather than in cash or via a share buyback as mentioned; see Corporate action.
As for capital structure above, there are several schools of thought on dividends, in particular re their impact on firm value.[56] A key consideration will be whether there are any tax disadvantages associated with dividends: i.e. dividends attract a higher tax rate as compared, e.g., to capital gains; see dividend tax and Retained earnings § Tax implications.
Here, per the abovementioned Modigliani–Miller theorem:
if there are no such disadvantages - and companies can raise equity finance cheaply, i.e. can issue stock at low cost - then dividend policy is value neutral;
if dividends suffer a tax disadvantage, then increasing dividends should reduce firm value.
Regardless, but particularly in the second (more realistic) case, other considerations apply.
The first set of these relates to investor preferences and behavior (see Clientele effect).
Investors are seen to prefer a "bird in the hand" - i.e. cash dividends are certain as compared to income from future capital gains - and in fact commonly employ some form of dividend valuation model in valuing shares.
Relatedly, investors will then prefer a stable or "smooth" dividend payout - as far as is reasonable given earnings prospects and sustainability - which will then positively impact share price; see Lintner model.
Cash dividends may also allow management to convey (insider) information about corporate performance; and increasing a company's dividend payout may then predict (or lead to) favorable performance of the company's stock in the future; see Dividend signaling hypothesis.
The second set relates to management's thinking re capital structure and earnings, overlapping the above.
Under a "residual dividend policy" - i.e. as contrasted with a "smoothed" payout policy - the firm will use retained profits to finance capital investments if this is cheaper than equity financing; see again Pecking order theory.
Similarly, under the Walter model, dividends are paid only if capital retained will earn a higher return than that available to investors (proxied: ROE > Ke).
Management may also want to "manipulate" the capital structure - in this context, by paying or not paying dividends - such that earnings per share are maximized; see again Capital structure substitution theory.
Managing the corporation's working capital position so as to sustain ongoing business operations is referred to as working capital management.[59][60] This entails, essentially, managing the relationship between a firm's short-term assets and its short-term liabilities, conscious of various considerations.
Here, as above, the goal of Corporate Finance is the maximization of firm value. In the context of long term, capital budgeting, firm value is enhanced through appropriately selecting and funding NPV positive investments. These investments, in turn, have implications in terms of cash flow and cost of capital.
The goal of Working Capital (i.e. short term) management is therefore to ensure that the firm is able to operate, and that it has sufficient cash flow to service long-term debt, and to satisfy both maturing short-term debt and upcoming operational expenses. In so doing, firm value is enhanced when, and if, the return on capital exceeds the cost of capital; see Economic value added (EVA). Managing short term finance along with long term finance is therefore one task of a modern CFO.
Working capital is the amount of funds that are necessary for an organization to continue its ongoing business operations, until the firm is reimbursed through payments for the goods or services it has delivered to its customers.[61] Working capital is measured through the difference between resources in cash or readily convertible into cash (Current Assets), and cash requirements (Current Liabilities). As a result, capital resource allocations relating to working capital are always current, i.e. short-term.
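The working capital measurement reduces to simple arithmetic; the balance-sheet figures below are invented for illustration:

```python
# Working capital = current assets - current liabilities.
current_assets = {"cash": 50, "receivables": 120, "inventory": 80}
current_liabilities = {"payables": 90, "short_term_debt": 60}

working_capital = (sum(current_assets.values())
                   - sum(current_liabilities.values()))

# The current ratio is a common companion liquidity measure:
# values above 1 indicate assets cover the near-term obligations.
current_ratio = (sum(current_assets.values())
                 / sum(current_liabilities.values()))
```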
In addition to time horizon, working capital management differs from capital budgeting in terms of discounting and profitability considerations; decisions here are also "reversible" to a much larger extent. (Considerations as to risk appetite and return targets remain identical, although some constraints - such as those imposed by loan covenants - may be more relevant here.)
The (short term) goals of working capital are therefore not approached on the same basis as (long term) profitability, and working capital management applies different criteria in allocating resources: the main considerations are (1) cash flow / liquidity and (2) profitability / return on capital (of which cash flow is probably the most important).
Guided by the above criteria, management will use a combination of policies and techniques for the management of working capital.[62] These policies, as outlined, aim at managing the current assets (generally cash and cash equivalents, inventories and debtors) and the short-term financing, such that cash flows and returns are acceptable.[60]
As discussed, corporate finance comprises the activities, analytical methods, and techniques that deal with the company's long-term investments, finances and capital.
Re the latter, when capital must be raised for the corporation or shareholders, the "corporate finance team" will engage[64] its investment bank.
The bank will then facilitate the required share listing (IPO or SEO) or bond issuance, as appropriate given the above analysis.
Thereafter the bank will work closely with the corporate re servicing the new securities, and managing its presence in the capital markets more generally (offering advisory, financial advisory, deal advisory, and/or transaction advisory[65] services).
Use of the term "corporate finance", correspondingly, varies considerably across the world.
In the United States, "Corporate Finance" corresponds to the first usage.
A professional here may be referred to as a "corporate finance analyst" and will typically be based in the FP&A area, reporting to the CFO.[64][66] See Financial analyst § Financial planning and analysis.
In the United Kingdom and Commonwealth countries,[65] on the other hand, "corporate finance" and "corporate financier" are associated with investment banking.
Financial risk management,[48][67] generally, is focused on measuring and managing market risk, credit risk and operational risk.
Within corporates[67] (i.e. as opposed to banks), the scope extends to preserving (and enhancing) the firm's economic value.[68] It will then overlap both corporate finance and enterprise risk management: addressing risks to the firm's overall strategic objectives, by focusing on the financial exposures and opportunities arising from business decisions, and their link to the firm's appetite for risk, as well as their impact on share price.
(In large firms, Risk Management typically exists as an independent function, with the CRO consulted on capital-investment and other strategic decisions.)
Re corporate finance, both operational and funding issues are addressed; respectively:
Broadly, corporate governance considers the mechanisms, processes, practices, and relations by which corporations are controlled and operated by their board of directors, managers, shareholders, and other stakeholders.
In the context of corporate finance,[71] a more specific concern will be that executives do not "serve their own vested interests" to the detriment of capital providers.[72] There are several considerations:
In general, here, debt may be seen as "an internal means of controlling management", which has to work hard to ensure that repayments are met,[74] balancing these interests, and also limiting the possibility of overpaying on investments.
Granting executive stock options,[75] alternatively or in parallel, is seen as a mechanism to align management with stockholder interests.
A more formal treatment is offered under agency theory,[76] where these problems and approaches can be seen, and hence analysed, as real options;[77] see Principal–agent problem § Options framework for discussion.
Source: https://en.wikipedia.org/wiki/Corporate_finance#Valuing_flexibility
A decision cycle or decision loop[1] is a sequence of steps used by an entity on a repeated basis to reach and implement decisions and to learn from the results. The "decision cycle" phrase has a history of use to broadly categorize various methods of making decisions, going upstream to the need, downstream to the outcomes, and cycling around to connect the outcomes to the needs.
A decision cycle is said to occur when an explicitly specified decision model is used to guide a decision and then the outcomes of that decision are assessed against the need for the decision. This cycle includes specification of desired results (the decision need), tracking of outcomes, and assessment of outcomes against the desired results.
Source: https://en.wikipedia.org/wiki/Decision_cycle
Decision lists are a representation for Boolean functions which can be easily learned from examples.[1] Single-term decision lists are more expressive than disjunctions and conjunctions; however, 1-term decision lists are less expressive than the general disjunctive normal form and the conjunctive normal form.
The language specified by a k-length decision list includes as a subset the language specified by a k-depth decision tree.
Learning decision lists can be used for attribute-efficient learning.[2]
A decision list (DL) of length r is of the form:
where fi is the ith formula and bi is the ith Boolean for i ∈ {1, ..., r}. The last if-then-else is the default case, which means formula fr is always equal to true. A k-DL is a decision list where all of the formulas have at most k terms. Sometimes "decision list" is used to refer to a 1-DL, where all of the formulas are either a variable or its negation.
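A decision list of this form can be evaluated by scanning the (formula, Boolean) pairs in order and returning the output of the first formula that the input satisfies; the 1-DL below is a hypothetical illustration.

```python
def evaluate(decision_list, x):
    """Evaluate a decision list: return b_i for the first formula f_i
    that x satisfies.  The final pair is the default case (f_r true)."""
    for formula, output in decision_list:
        if formula(x):
            return output
    raise ValueError("decision list must end with a default case")

# A 1-DL over three Boolean attributes x = (x0, x1, x2): each formula
# is a single literal, i.e. a variable or its negation.
dl = [
    (lambda x: bool(x[0]),     True),    # if x0 then True
    (lambda x: not x[2],       False),   # else if not x2 then False
    (lambda x: True,           True),    # else True (default)
]

evaluate(dl, (1, 0, 0))  # first rule fires -> True
evaluate(dl, (0, 0, 0))  # second rule fires -> False
```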
Source: https://en.wikipedia.org/wiki/Decision_list
A decision matrix is a list of values in rows and columns that allows an analyst to systematically identify, analyze, and rate the performance of relationships between sets of values and information. Elements of a decision matrix show decisions based on certain decision criteria. The matrix is useful for looking at large masses of decision factors and assessing each factor's relative significance by weighting them by importance.[1]
The term decision matrix is used to describe a multiple-criteria decision analysis (MCDA) problem. An MCDA problem, where there are M alternative options and each needs to be assessed on N criteria, can be described by the decision matrix which has N rows and M columns, or M × N elements, as shown in the following table. Each element, such as Xij, is either a single numerical value or a single grade, representing the performance of alternative i on criterion j. For example, if alternative i is "car i", criterion j is "engine quality" assessed by five grades {Exceptional, Good, Average, Below Average, Poor}, and "car i" is assessed to be "Good" on "engine quality", then Xij = "Good". These assessments may be replaced by scores, from 1 to 5. Sums of scores may then be compared and ranked, to show the winning proposal.[2]
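The grade-to-score-and-rank procedure can be sketched as follows. The alternatives, criteria, and weights below are invented for illustration (the source's example sums unweighted scores; criterion weights are an optional refinement mentioned earlier in the article).

```python
# Map the five grades onto scores 1..5.
GRADE = {"Poor": 1, "Below Average": 2, "Average": 3,
         "Good": 4, "Exceptional": 5}

# Hypothetical M = 2 alternatives assessed on N = 3 criteria.
criteria_weights = {"engine quality": 0.5, "price": 0.3, "comfort": 0.2}
matrix = {
    "car A": {"engine quality": "Good", "price": "Average",
              "comfort": "Exceptional"},
    "car B": {"engine quality": "Exceptional", "price": "Poor",
              "comfort": "Good"},
}

def score(alternative):
    # Weighted sum of the alternative's scores across all criteria.
    return sum(w * GRADE[matrix[alternative][c]]
               for c, w in criteria_weights.items())

# Rank alternatives by total score to find the winning proposal.
ranking = sorted(matrix, key=score, reverse=True)
```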
Similar to a decision matrix, a belief decision matrix is used to describe a multiple-criteria decision analysis (MCDA) problem in the Evidential Reasoning Approach. Instead of being a single numerical value or a single grade as in a decision matrix, each element in a belief decision matrix is a belief distribution.
For example, suppose alternative i is "car i", criterion j is "engine quality" assessed by five grades {Excellent, Good, Average, Below Average, Poor}, and "car i" is assessed to be "Excellent" on "engine quality" with a high degree of belief (e.g. 0.6) due to its low fuel consumption, low vibration and high responsiveness. At the same time, the quality is also assessed to be only "Good" with a lower degree of belief (e.g. 0.4 or less) because its quietness and starting can still be improved. If this is the case, then we have Xij = {(Excellent, 0.6), (Good, 0.4)}, or Xij = {(Excellent, 0.6), (Good, 0.4), (Average, 0), (Below Average, 0), (Poor, 0)}.
A conventional decision matrix is a special case of a belief decision matrix in which only one belief degree in a belief structure is 1 and the others are 0.
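One common way to work with a belief distribution is to collapse it into an expected score. The sketch below uses the Xij example from the text; the 1-to-5 score mapping is an assumption for illustration, not part of the Evidential Reasoning Approach itself:

```python
# A belief-matrix element is a distribution over grades; a conventional
# element is the degenerate case with all belief on a single grade.
GRADE_SCORE = {"Poor": 1, "Below Average": 2, "Average": 3, "Good": 4, "Excellent": 5}

def expected_score(belief):
    """belief: dict mapping grade -> degree of belief (degrees sum to <= 1)."""
    return sum(GRADE_SCORE[g] * d for g, d in belief.items())

x_ij = {"Excellent": 0.6, "Good": 0.4}   # the example from the text
conventional = {"Good": 1.0}             # special case: single grade, belief 1
print(expected_score(x_ij))              # 0.6*5 + 0.4*4
print(expected_score(conventional))      # 4*1.0
```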
|
https://en.wikipedia.org/wiki/Decision_matrix
|
Decision tables are a concise visual representation for specifying which actions to perform depending on given conditions. Decision table is the term used for a control table or state-transition table in the field of business process modeling; they are usually formatted as the transpose of the way they are formatted in software engineering.
Each decision corresponds to a variable, relation or predicate whose possible values are listed among the condition alternatives. Each action is a procedure or operation to perform, and the entries specify whether (or in what order) the action is to be performed for the set of condition alternatives the entry corresponds to.
To make them more concise, many decision tables include in their condition alternatives a don't-care symbol. This can be a hyphen[1][2][3] or blank,[4] although using a blank is discouraged as it may merely indicate that the decision table has not been finished.[citation needed] One of the uses of decision tables is to reveal conditions under which certain input factors are irrelevant to the actions to be taken, allowing these input tests to be skipped and thereby streamlining decision-making procedures.[5]
Aside from the basic four-quadrant structure, decision tables vary widely in the way the condition alternatives and action entries are represented.[6][7] Some decision tables use simple true/false values to represent the alternatives to a condition (similar to if-then-else), other tables may use numbered alternatives (similar to switch-case), and some tables even use fuzzy logic or probabilistic representations for condition alternatives.[8] In a similar way, action entries can simply represent whether an action is to be performed (check the actions to perform), or in more advanced decision tables, the sequencing of actions to perform (number the actions to perform).
A decision table is considered balanced[4] or complete[3] if it includes every possible combination of input variables. In other words, balanced decision tables prescribe an action in every situation where the input variables are provided.[4]
The limited-entry decision table is the simplest to describe. The condition alternatives are simple Boolean values, and the action entries are check-marks, representing which of the actions in a given column are to be performed.
The following balanced decision table is an example in which a technical support company writes a decision table to enable technical support employees to efficiently diagnose printer problems based upon symptoms described to them over the phone by their clients.
This is just a simple example, and it does not necessarily correspond to the reality of printer troubleshooting. Even so, it demonstrates how decision tables can scale to several conditions with many possibilities.
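A limited-entry, balanced table of this kind maps each combination of Boolean conditions to the actions to perform. A minimal sketch in Python; the printer-style conditions and actions here are invented for illustration and are not taken from the article's table:

```python
# A balanced limited-entry decision table: every combination of the three
# Boolean conditions (2^3 = 8 rows) maps to a list of actions to perform.
conditions = ("printer does not print", "red light flashing", "printer unrecognized")

table = {
    # (c1, c2, c3): actions "checked" in that column
    (True,  True,  True ): ["Check the power cable", "Check the printer-computer cable"],
    (True,  True,  False): ["Check/replace ink"],
    (True,  False, True ): ["Check the printer-computer cable"],
    (True,  False, False): ["Check for paper jam"],
    (False, True,  True ): ["Check/replace ink"],
    (False, True,  False): ["Check/replace ink"],
    (False, False, True ): ["Check the printer-computer cable"],
    (False, False, False): [],  # no action needed
}

def diagnose(c1, c2, c3):
    return table[(c1, c2, c3)]

print(diagnose(True, False, False))  # -> ["Check for paper jam"]
```

Because every combination is present, the table is balanced in the sense defined above: a don't-care symbol would instead let several combinations share one row.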
Decision tables, especially when coupled with the use of a domain-specific language, allow developers and policy experts to work from the same information, the decision tables themselves.
Tools to render nested if statements from traditional programming languages into decision tables can also be used as debugging tools.[9][10]
Decision tables have proven to be easier to understand and review than code, and have been used extensively and successfully to produce specifications for complex systems.[11]
In the 1960s and 1970s a range of "decision table based" languages such as Filetab were popular for business programming.
Decision tables can be, and often are, embedded within computer programs and used to "drive" the logic of the program. A simple example might be a lookup table containing a range of possible input values and a function pointer to the section of code to process that input.
Multiple conditions can be coded for in a similar manner to encapsulate the entire program logic in the form of an "executable" decision table or control table. There may be several such tables in practice, operating at different levels and often linked to each other (either by pointers or an index value).
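The lookup-table-with-function-pointers idea above can be sketched as a dispatch table. The input classes and handler functions below are invented for illustration:

```python
# An "executable" decision table: a lookup table mapping an input class to a
# handler function (the Python analogue of a function pointer).
def handle_digit(ch): return f"digit:{ch}"
def handle_alpha(ch): return f"alpha:{ch}"
def handle_other(ch): return f"other:{ch}"

dispatch = {"digit": handle_digit, "alpha": handle_alpha}

def classify(ch):
    if ch.isdigit():
        return "digit"
    if ch.isalpha():
        return "alpha"
    return "other"

def process(ch):
    # .get with a default handler covers inputs outside the table's range
    return dispatch.get(classify(ch), handle_other)(ch)

print(process("7"))  # digit:7
print(process("!"))  # other:!
```

Adding a new input class is then a table edit (one new entry) rather than another branch in the control flow, which is the point of driving program logic from a table.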
|
https://en.wikipedia.org/wiki/Decision_table
|
In computational complexity theory, the decision tree model is the model of computation in which an algorithm can be considered to be a decision tree, i.e. a sequence of queries or tests that are done adaptively, so the outcome of previous tests can influence the tests performed next.
Typically, these tests have a small number of outcomes (such as a yes–no question) and can be performed quickly (say, with unit computational cost), so the worst-case time complexity of an algorithm in the decision tree model corresponds to the depth of the corresponding tree. This notion of computational complexity of a problem or an algorithm in the decision tree model is called its decision tree complexity or query complexity.
Decision tree models are instrumental in establishing lower bounds for the complexity of certain classes of computational problems and algorithms. Several variants of decision tree models have been introduced, depending on the computational model and the type of query algorithms are allowed to perform.
For example, a decision tree argument is used to show that a comparison sort of n{\displaystyle n} items must make on the order of nlog(n){\displaystyle n\log(n)} comparisons. For comparison sorts, a query is a comparison of two items a,b{\displaystyle a,b}, with two outcomes (assuming no items are equal): either a<b{\displaystyle a<b} or a>b{\displaystyle a>b}. Comparison sorts can be expressed as decision trees in this model, since such sorting algorithms only perform these types of queries.
Decision trees are often employed to understand algorithms for sorting and other similar problems; this was first done by Ford and Johnson.[1]
For example, many sorting algorithms are comparison sorts, which means that they only gain information about an input sequence x1,x2,…,xn{\displaystyle x_{1},x_{2},\ldots ,x_{n}} via local comparisons: testing whether xi<xj{\displaystyle x_{i}<x_{j}}, xi=xj{\displaystyle x_{i}=x_{j}}, or xi>xj{\displaystyle x_{i}>x_{j}}. Assuming that the items to be sorted are all distinct and comparable, this can be rephrased as a yes-or-no question: is xi>xj{\displaystyle x_{i}>x_{j}}?
These algorithms can be modeled as binary decision trees, where the queries are comparisons: an internal node corresponds to a query, and the node's children correspond to the next query when the answer to the question is yes or no. For leaf nodes, the output corresponds to a permutation π{\displaystyle \pi } that describes how the input sequence was scrambled from the fully ordered list of items. (The inverse of this permutation, π−1{\displaystyle \pi ^{-1}}, re-orders the input sequence.)
One can show that comparison sorts must use Ω(nlog(n)){\displaystyle \Omega (n\log(n))} comparisons through a simple argument: for an algorithm to be correct, it must be able to output every possible permutation of n{\displaystyle n} elements; otherwise, the algorithm would fail for that particular permutation as input. So, its corresponding decision tree must have at least as many leaves as permutations: n!{\displaystyle n!} leaves. Any binary tree with at least n!{\displaystyle n!} leaves has depth at least log2(n!)=Ω(nlog2(n)){\displaystyle \log _{2}(n!)=\Omega (n\log _{2}(n))}, so this is a lower bound on the run time of a comparison sorting algorithm. In this case, the existence of numerous comparison-sorting algorithms having this time complexity, such as mergesort and heapsort, demonstrates that the bound is tight.[2]: 91
This argument does not use anything about the type of query, so it in fact proves a lower bound for any sorting algorithm that can be modeled as a binary decision tree. In essence, this is a rephrasing of the information-theoretic argument that a correct sorting algorithm must learn at least log2(n!){\displaystyle \log _{2}(n!)} bits of information about the input sequence. As a result, this also works for randomized decision trees.
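The bound ceil(log2(n!)) can be evaluated numerically, which makes its n·log2(n) growth concrete. A short sketch (the function name is an assumption):

```python
# Lower bound on comparisons for sorting n distinct items: ceil(log2(n!)).
# log2(n!) is computed stably via lgamma(n + 1) = ln(n!).
import math

def comparisons_lower_bound(n):
    return math.ceil(math.lgamma(n + 1) / math.log(2))

for n in (4, 16, 1024):
    # compare the exact bound with the n*log2(n) growth rate
    print(n, comparisons_lower_bound(n), round(n * math.log2(n)))
```

For n = 4 this gives 5 (since 2^4 = 16 < 24 = 4! <= 32 = 2^5), matching the decision-tree depth argument above.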
Other decision tree lower bounds do use that the query is a comparison. For example, consider the task of using only comparisons to find the smallest number among n{\displaystyle n} numbers. Before the smallest number can be determined, every number except the smallest must "lose" (compare greater) in at least one comparison. So, it takes at least n−1{\displaystyle n-1} comparisons to find the minimum. (The information-theoretic argument here only gives a lower bound of log(n){\displaystyle \log(n)}.) A similar argument works for general lower bounds for computing order statistics.[2]: 214
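The n−1 bound is met by a simple linear scan, which can be checked by counting comparisons explicitly (a minimal sketch; names are illustrative):

```python
# Find the minimum of n numbers while counting comparisons: every element
# other than the current minimum "loses" exactly one comparison.
def find_min(items):
    comparisons = 0
    smallest = items[0]
    for x in items[1:]:
        comparisons += 1
        if x < smallest:
            smallest = x
    return smallest, comparisons

values = [7, 3, 9, 1, 4]
print(find_min(values))  # (1, 4): n - 1 = 4 comparisons for n = 5
```

Since the lower-bound argument shows no algorithm can do better, the scan is optimal in the comparison model.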
Linear decision trees generalize the above comparison decision trees to computing functions that take real vectors x∈Rn{\displaystyle x\in \mathbb {R} ^{n}} as input. The tests in linear decision trees are linear functions: for a particular choice of real numbers a0,…,an{\displaystyle a_{0},\dots ,a_{n}}, output the sign of a0+∑i=1naixi{\displaystyle a_{0}+\textstyle \sum _{i=1}^{n}a_{i}x_{i}}. (Algorithms in this model can only depend on the sign of the output.) Comparison trees are linear decision trees, because the comparison between xi{\displaystyle x_{i}} and xj{\displaystyle x_{j}} corresponds to the linear function xi−xj{\displaystyle x_{i}-x_{j}}. From its definition, linear decision trees can only specify functions f{\displaystyle f} whose fibers can be constructed by taking unions and intersections of half-spaces.
Algebraic decision trees are a generalization of linear decision trees that allow the test functions to be polynomials of degree d{\displaystyle d}. Geometrically, the space is divided into semi-algebraic sets (a generalization of a hyperplane).
These decision tree models, defined by Rabin[3] and Reingold,[4] are often used for proving lower bounds in computational geometry.[5] For example, Ben-Or showed that element uniqueness (the task of computing f:Rn→{0,1}{\displaystyle f:\mathbb {R} ^{n}\to \{0,1\}}, where f(x){\displaystyle f(x)} is 0 if and only if there exist distinct coordinates i,j{\displaystyle i,j} such that xi=xj{\displaystyle x_{i}=x_{j}}) requires an algebraic decision tree of depth Ω(nlog(n)){\displaystyle \Omega (n\log(n))}.[6] This was first shown for linear decision models by Dobkin and Lipton.[7] They also showed an n2{\displaystyle n^{2}} lower bound for linear decision trees on the knapsack problem, generalized to algebraic decision trees by Steele and Yao.[8]
For Boolean decision trees, the task is to compute the value of an n-bit Boolean function f:{0,1}n→{0,1}{\displaystyle f:\{0,1\}^{n}\to \{0,1\}} for an input x∈{0,1}n{\displaystyle x\in \{0,1\}^{n}}. The queries correspond to reading a bit of the input, xi{\displaystyle x_{i}}, and the output is f(x){\displaystyle f(x)}. Each query may depend on previous queries. There are many types of computational models using decision trees that could be considered, admitting multiple complexity notions, called complexity measures.
If the output of a decision tree is f(x){\displaystyle f(x)} for all x∈{0,1}n{\displaystyle x\in \{0,1\}^{n}}, the decision tree is said to "compute" f{\displaystyle f}. The depth of a tree is the maximum number of queries that can happen before a leaf is reached and a result obtained. D(f){\displaystyle D(f)}, the deterministic decision tree complexity of f{\displaystyle f}, is the smallest depth among all deterministic decision trees that compute f{\displaystyle f}.
One way to define a randomized decision tree is to add additional nodes to the tree, each controlled by a probability pi{\displaystyle p_{i}}. Another equivalent definition is to define it as a distribution over deterministic decision trees. Based on this second definition, the complexity of the randomized tree is defined as the largest depth among all the trees in the support of the underlying distribution. R2(f){\displaystyle R_{2}(f)} is defined as the complexity of the lowest-depth randomized decision tree whose result is f(x){\displaystyle f(x)} with probability at least 2/3{\displaystyle 2/3} for all x∈{0,1}n{\displaystyle x\in \{0,1\}^{n}} (i.e., with bounded two-sided error).
R2(f){\displaystyle R_{2}(f)} is known as the Monte Carlo randomized decision-tree complexity, because the result is allowed to be incorrect with bounded two-sided error. The Las Vegas decision-tree complexity R0(f){\displaystyle R_{0}(f)} measures the expected depth of a decision tree that must be correct (i.e., has zero error). There is also a one-sided bounded-error version, which is denoted by R1(f){\displaystyle R_{1}(f)}.
The nondeterministic decision tree complexity of a function is known more commonly as the certificate complexity of that function. It measures the number of input bits that a nondeterministic algorithm would need to look at in order to evaluate the function with certainty.
Formally, the certificate complexity of f{\displaystyle f} at x{\displaystyle x} is the size of the smallest subset of indices S⊂[n]{\displaystyle S\subset [n]} such that, for all y∈{0,1}n{\displaystyle y\in \{0,1\}^{n}}, if yi=xi{\displaystyle y_{i}=x_{i}} for all i∈S{\displaystyle i\in S}, then f(y)=f(x){\displaystyle f(y)=f(x)}. The certificate complexity of f{\displaystyle f} is the maximum certificate complexity over all x{\displaystyle x}. The analogous notion where one only requires the verifier to be correct with 2/3 probability is denoted RC(f){\displaystyle RC(f)}.
The quantum decision tree complexity Q2(f){\displaystyle Q_{2}(f)} is the depth of the lowest-depth quantum decision tree that gives the result f(x){\displaystyle f(x)} with probability at least 2/3{\displaystyle 2/3} for all x∈{0,1}n{\displaystyle x\in \{0,1\}^{n}}. Another quantity, QE(f){\displaystyle Q_{E}(f)}, is defined as the depth of the lowest-depth quantum decision tree that gives the result f(x){\displaystyle f(x)} with probability 1 in all cases (i.e. computes f{\displaystyle f} exactly). Q2(f){\displaystyle Q_{2}(f)} and QE(f){\displaystyle Q_{E}(f)} are more commonly known as quantum query complexities, because the direct definition of a quantum decision tree is more complicated than in the classical case. Similar to the randomized case, we define Q0(f){\displaystyle Q_{0}(f)} and Q1(f){\displaystyle Q_{1}(f)}.
These notions are typically bounded by the notions of degree and approximate degree. The degree of f{\displaystyle f}, denoted deg(f){\displaystyle \deg(f)}, is the smallest degree of any polynomial p{\displaystyle p} satisfying f(x)=p(x){\displaystyle f(x)=p(x)} for all x∈{0,1}n{\displaystyle x\in \{0,1\}^{n}}. The approximate degree of f{\displaystyle f}, denoted deg~(f){\displaystyle {\widetilde {\deg }}(f)}, is the smallest degree of any polynomial p{\displaystyle p} satisfying p(x)∈[0,1/3]{\displaystyle p(x)\in [0,1/3]} whenever f(x)=0{\displaystyle f(x)=0} and p(x)∈[2/3,1]{\displaystyle p(x)\in [2/3,1]} whenever f(x)=1{\displaystyle f(x)=1}.
Beals et al. established that Q0(f)≥deg(f)/2{\displaystyle Q_{0}(f)\geq \deg(f)/2} and Q2(f)≥deg~(f)/2{\displaystyle Q_{2}(f)\geq {\widetilde {\deg }}(f)/2}.[9]
It follows immediately from the definitions that for all n{\displaystyle n}-bit Boolean functions f{\displaystyle f}, Q2(f)≤R2(f)≤R1(f)≤R0(f)≤D(f)≤n{\displaystyle Q_{2}(f)\leq R_{2}(f)\leq R_{1}(f)\leq R_{0}(f)\leq D(f)\leq n}, and Q2(f)≤Q0(f)≤D(f)≤n{\displaystyle Q_{2}(f)\leq Q_{0}(f)\leq D(f)\leq n}. Finding the best upper bounds in the converse direction is a major goal in the field of query complexity.
All of these types of query complexity are polynomially related. Blum and Impagliazzo,[10] Hartmanis and Hemachandra,[11] and Tardos[12] independently discovered that D(f)≤R0(f)2{\displaystyle D(f)\leq R_{0}(f)^{2}}. Noam Nisan found that the Monte Carlo randomized decision tree complexity is also polynomially related to deterministic decision tree complexity: D(f)=O(R2(f)3){\displaystyle D(f)=O(R_{2}(f)^{3})}.[13] (Nisan also showed that D(f)=O(R1(f)2){\displaystyle D(f)=O(R_{1}(f)^{2})}.) A tighter relationship is known between the Monte Carlo and Las Vegas models: R0(f)=O(R2(f)2logR2(f)){\displaystyle R_{0}(f)=O(R_{2}(f)^{2}\log R_{2}(f))}.[14] This relationship is optimal up to polylogarithmic factors.[15] As for quantum decision tree complexities, D(f)=O(Q2(f)4){\displaystyle D(f)=O(Q_{2}(f)^{4})}, and this bound is tight.[16][15] Midrijanis showed that D(f)=O(Q0(f)3){\displaystyle D(f)=O(Q_{0}(f)^{3})},[17][18] improving a quartic bound due to Beals et al.[9]
It is important to note that these polynomial relationships are valid only for total Boolean functions. For partial Boolean functions, whose domain is a subset of {0,1}n{\displaystyle \{0,1\}^{n}}, an exponential separation between Q0(f){\displaystyle Q_{0}(f)} and D(f){\displaystyle D(f)} is possible; the first example of such a problem was discovered by Deutsch and Jozsa.
For a Boolean function f:{0,1}n→{0,1}{\displaystyle f:\{0,1\}^{n}\to \{0,1\}}, the sensitivity of f{\displaystyle f} is defined to be the maximum sensitivity of f{\displaystyle f} over all x{\displaystyle x}, where the sensitivity of f{\displaystyle f} at x{\displaystyle x} is the number of single-bit changes in x{\displaystyle x} that change the value of f(x){\displaystyle f(x)}. Sensitivity is related to the notion of total influence from the analysis of Boolean functions, which is equal to average sensitivity over all x{\displaystyle x}.
The sensitivity conjecture is the conjecture that sensitivity is polynomially related to query complexity; that is, there exist exponents c,c′{\displaystyle c,c'} such that, for all f{\displaystyle f}, D(f)=O(s(f)c){\displaystyle D(f)=O(s(f)^{c})} and s(f)=O(D(f)c′){\displaystyle s(f)=O(D(f)^{c'})}. One can show through a simple argument that s(f)≤D(f){\displaystyle s(f)\leq D(f)}, so the conjecture is specifically concerned with finding a lower bound for sensitivity. Since all of the previously discussed complexity measures are polynomially related, the precise type of complexity measure is not relevant. However, this is typically phrased as the question of relating sensitivity with block sensitivity.
The block sensitivity of f{\displaystyle f}, denoted bs(f){\displaystyle bs(f)}, is defined to be the maximum block sensitivity of f{\displaystyle f} over all x{\displaystyle x}. The block sensitivity of f{\displaystyle f} at x{\displaystyle x} is the maximum number t{\displaystyle t} of disjoint subsets S1,…,St⊂[n]{\displaystyle S_{1},\ldots ,S_{t}\subset [n]} such that, for any of the subsets Si{\displaystyle S_{i}}, flipping the bits of x{\displaystyle x} corresponding to Si{\displaystyle S_{i}} changes the value of f(x){\displaystyle f(x)}.[13]
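For small n, the sensitivity definition above can be checked by brute force. A sketch, assuming the function is given as a Python predicate on bit tuples (the OR example is illustrative: at x = 0…0, flipping any single bit changes OR from 0 to 1, so s(OR_n) = n):

```python
# Brute-force sensitivity of a Boolean function on n-bit inputs.
from itertools import product

def sensitivity_at(f, x):
    """Number of single-bit flips of x that change f(x)."""
    flips = (x[:i] + (x[i] ^ 1,) + x[i + 1:] for i in range(len(x)))
    return sum(f(y) != f(x) for y in flips)

def sensitivity(f, n):
    """s(f): maximum sensitivity over all inputs x in {0,1}^n."""
    return max(sensitivity_at(f, x) for x in product((0, 1), repeat=n))

OR = lambda x: int(any(x))
print(sensitivity(OR, 4))  # 4: at x = 0000, every single-bit flip changes OR
```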
In 2019, Hao Huang proved the sensitivity conjecture, showing that bs(f)=O(s(f)4){\displaystyle bs(f)=O(s(f)^{4})}.[19][20]
|
https://en.wikipedia.org/wiki/Decision_tree_model
|
A design rationale is an explicit documentation of the reasons behind decisions made when designing a system or artifact. As initially developed by W.R. Kunz and Horst Rittel, design rationale seeks to provide argumentation-based structure to the political, collaborative process of addressing wicked problems.[1]
A design rationale is the explicit listing of decisions made during a design process, and the reasons why those decisions were made.[2] Its primary goal is to support designers by providing a means to record and communicate the argumentation and reasoning behind the design process.[3] It should therefore include:[4]
Several science areas are involved in the study of design rationales, such as computer science,[2] cognitive science,[3] artificial intelligence,[5] and knowledge management.[6] For supporting design rationale, various frameworks have been proposed, such as QOC, DRCS, IBIS, and DRL.
While argumentation formats can be traced back to Stephen Toulmin's work in the 1950s[7] on datums, claims, warrants, backings and rebuttals, the origin of design rationale can be traced back to W.R. Kunz and Horst Rittel's[1] development of the Issue-Based Information System (IBIS) notation in 1970. Several variants on IBIS have since been proposed.
The first Rationale Management System (RMS) was PROTOCOL, which supported PHI; it was followed by other PHI-based systems, MIKROPOLIS and PHIDIAS. The first system providing IBIS support was Hans Dehlinger's STIEC.[15] Rittel developed a small system in 1983 (also not published), and the better known gIBIS (graphical IBIS) was developed in 1987.[16]
Not all successful DR approaches involve structured argumentation. For example, Carroll and Rosson's Scenario-Claims Analysis approach[17] captures rationale in scenarios that describe how the system is used and how well the system features support the user goals. Carroll and Rosson's approach to design rationale is intended to help designers of computer software and hardware identify underlying design tradeoffs and make inferences about the impact of potential design interventions.[18]
There are a number of ways to characterize DR approaches. Some key distinguishing features are how the rationale is captured, how it is represented, and how it can be used.
Rationale capture is the process of acquiring rationale information into a rationale management system.
The choice of design rationale representation is very important to make sure that the rationales we capture are what we desire and can be used efficiently. According to the degree of formality, the approaches used to represent design rationale can be divided into three main categories: informal, semiformal, or formal.[4] In an informal representation, rationales can be recorded and captured using traditionally accepted methods and media, such as word processors, audio and video recordings, or even handwriting. However, these descriptions make automatic interpretation or other computer-based support difficult. In a formal representation, the rationale must be collected under a strict format so that it can be interpreted and understood by computers. However, due to the strict format of rationale defined by formal representations, the contents can hardly be understood by human beings, and the process of capturing design rationale requires more effort, and therefore becomes more intrusive.
Semiformal representations try to combine the advantages of informal and formal representations. On one hand, the information captured should be able to be processed by computers so that more computer-based support can be provided. On the other hand, the procedure and method used to capture design rationale information should not be very intrusive. In a system with a semiformal representation, the expected information is suggested, and the users can capture rationale by following the instructions to either fill out attributes according to templates or just type in natural language descriptions.[4]
Design rationale has the potential to be used in many different ways. One set of uses, defined by Burge and Brown (1998),[19] is:
DR is used by research communities in software engineering, mechanical design, artificial intelligence, civil engineering, and human-computer interaction research. In software engineering, it can be used to support designers' ideas during requirement analysis, to capture and document design meetings, and to predict possible issues arising from a new design approach.[31] In software architecture and outsourcing solution design, it can justify the outcome of architectural decisions and serve as a design guide.[32] In civil engineering, it helps to coordinate the variety of work that designers do at the same time in different areas of a construction project. It also helps designers to understand and respect each other's ideas and resolve any possible issues.[33]
The DR can also be used by project managers to keep their project plan and project status up to date. Also, project team members who missed a design meeting can refer back to the DR to learn what was discussed on a particular topic. Unresolved issues captured in the DR can be used to organize further meetings on those topics.[31]
Design rationale helps designers avoid repeating mistakes made in previous designs. This can also be helpful in avoiding duplication of work.[5] In some cases DR could save time and money when a software system is upgraded from its previous versions.[2]
There are several books and articles that provide excellent surveys of rationale approaches applied to HCI,[34] engineering design,[4] and software engineering.[35]
|
https://en.wikipedia.org/wiki/Design_rationale
|
DRAKON (Russian: Дружелюбный Русский Алгоритмический язык, Который Обеспечивает Наглядность, lit. 'Friendly Russian Algorithmic language, Which Provides Clarity') is a free and open source algorithmic visual programming and modeling language developed as part of the Soviet Union's defunct Buran space program[2] in 1986, in response to the need to increase software development productivity. The visual language provides a uniform way to represent processes in flowcharts.
There are various implementations of the language specification that may be used to draw and export actual flowcharts. Notable examples include the free and open source DRAKON Editor (September 2011).
The development of DRAKON started in 1986 to address the emerging risk of misunderstandings - and subsequent errors - between users of different programming languages in the Russian space program. Its development was directed by Vladimir Parondzhanov with the participation of the Russian Federal Space Agency (Academician Pilyugin Center, Moscow) and the Russian Academy of Sciences (Keldysh Institute of Applied Mathematics).
The language was constructed by formalization, ergonomization and nonclassical structurization of flowcharts described in the ISO 5807-85 standard and the Russian standard ГОСТ 19.701-90.[3][4]
The goal was to replace the specialized languages used in the Buran project with one universal programming language: namely PROL2 (ПРОЛ2), used for developing inflight systems software for the computer system Biser-4 (Бисер-4);[5] DIPOL (ДИПОЛЬ), used for developing software for the ground maintenance computer systems;[5] and LAKS (ЛАКС), used for modelling.
The work was finished in 1996 (3 years after the Buran project was officially closed), when an automated CASE programming system called "Grafit-Floks" was developed.[6]
This CASE system has been used since 1996 in: the international project Sea Launch,[citation needed] the Russian orbit insertion upper stage Fregat (Russian: Фрегат, frigate) for onboard control systems and tests,[7] and the upgraded heavy launch vehicle (carrier rocket) Proton-M.[citation needed]
The name DRAKON is the Russian acronym for "Дружелюбный Русский Алгоритмический [язык], Который Обеспечивает Наглядность", which translates to "Friendly Russian Algorithmic [language] that illustrates (or provides clarity)".
The word "наглядность" (pronounced approximately as "na-GLYA-dnost") refers to a concept or idea being easy to imagine and understand, and may be translated as "clarity".
Unlike UML's philosophy, DRAKON's language philosophy is based on being augmented as needed by forming a hybrid language: code snippets from the text language in use are embedded into the shapes DRAKON requires. This way, DRAKON always remains a simple visual language that can be used as an augmentation by a programmer who is interested in making their own project code easier to support, or for other long-term needs, for example improving the ergonomics of the coding process or making code easier to review and understand.
The DRAKON language can be used both as a modelling/"markup" language (which is considered a standalone "pure DRAKON" program) and as a programming language (as part of a hybrid language).
Integration of a stricter, "academic", variant of a markup language into programming, such as that provided by DRAKON, adds syntactic sugar allowing users of different programming languages to comprehend each other's contributions to the overall project and even provide commentary if needed.
DRAKON (Russian: ДРАКОН; meaning "dragon" in English) is designed with the intent of allowing for easy understanding and legibility, as usage of multiple languages in a single project can lead to confusion.
DRAKON is a family of hybrid languages, such as DRAKON-C, DRAKON-ASM, DRAKON-Java, etc. All languages of the DRAKON-family share a uniform, graphical syntax based on flowcharts. The standard graphical syntax provides similarity of drakon-charts for different hybrid languages. The text language uses its own syntax.
The basis of the graphical syntax is a graphical alphabet. Graphical elements ("letters") of the DRAKON alphabet are called icons (not symbols). DRAKON also has macroicons. Macroicons are the graphical words of the DRAKON language; they consist of icons. There are 27 icons and 21 macroicons in the DRAKON language.
Drakon-charts are constructed out of icons and macroicons.
The important parts of macroicons are valence points (in the illustration depicted as black circles). Into these points, icons or macroicons can be successively entered and arranged by the drakon-editor into columns.
DRAKON was created as an easy to learn visual language to aid the comprehension of computer programs written in different programming languages for illustrative, planning and strategy purposes.
DRAKON uses drakon-chart, which is a formalization of traditional flowcharts to depict the overall structure of the program. Code snippets of a programming language are added to the DRAKON icons. The combination of visual elements with code helps with the creation and maintenance of readable flowcharts alongside the development of the program in question.
DRAKON rules for creating diagrams are cognitively optimized for easy comprehension, making it a tool for intelligence augmentation.[3][8][9][10]
Drakon-charts of big multi-purpose programs can be complex and hard to comprehend. A set of smaller programs that together serve the same purpose is often easier to understand when depicted as drakon-charts. A similar problem exists in maintaining the code of large programs; this problem is occasionally referred to as the "rule of 30 [lines of code]" among programmers.
The full-text article containing a description of the visual syntax of the DRAKON language in English (12 pages) is free to download as a PDF.[11]
Simple example of a program in the DRAKON language
These examples are real code from an implementation of the Tetris game. The examples are in the DRAKON-JavaScript language. The icons (visual primitives) of the DRAKON language define the overall structure of the algorithms. The code snippets inside the icons (primitives) are in JavaScript.
The advanceStep function implements the core logic of the game. advanceStep is a state machine represented as a decision tree.[12] The game engine calls advanceStep periodically. This state machine has three states: "playing", "dropping", and "finished". The game takes different actions depending on the current state. For example, in the "playing" state, when there is a falling projectile and the projectile can move down, it is moved down one step.
With DRAKON, the reader of the algorithm can visually trace all possible paths in the decision tree.
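The generated JavaScript is not reproduced here. As an illustration only, the three-state decision tree described above might be sketched in Python roughly as follows; all names and the stubbed game operations are assumptions, not the game's actual code:

```python
# Hypothetical sketch of the advanceStep state machine described in the text:
# three states ("playing", "dropping", "finished"), with the branch taken
# depending on the state and on the falling projectile. Game operations are
# stubbed out; this is not the code generated from the DRAKON-chart.
class Game:
    def __init__(self):
        self.state = "playing"
        self.projectile = None       # the falling piece, if any
        self.can_move_down = False

    def advance_step(self):
        if self.state == "finished":
            return                        # game over: nothing to do
        if self.state == "dropping":
            self.move_projectile_down()   # fast drop: keep moving down
            return
        # state == "playing": the decision tree from the description
        if self.projectile is None:
            self.no_projectile()          # handled by a separate function
        elif self.can_move_down:
            self.move_projectile_down()   # move one step down per tick
        else:
            self.freeze_projectile()      # the piece has landed

    # stubs standing in for the real game operations
    def no_projectile(self): ...
    def move_projectile_down(self): ...
    def freeze_projectile(self): ...
```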
The noProjectile function handles the specific situation when there is no falling projectile. If there is a filled row, that row is removed from the grid. Otherwise, the game tries to insert a new projectile. If there is no space for the projectile, the game is lost.
The clearRow function scans the rows bottom-up until it hits a row with no gaps. In that case the row is removed from the grid, the score is increased, and the game's tempo goes up.
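The scan described above can be sketched in Python (an illustrative reimplementation, not the JavaScript generated from the drakon-chart; the score and tempo updates are left to the caller):

```python
def clear_row(grid, width):
    """Sketch of the row-clearing scan: walk the grid bottom-up and,
    on the first row with no gaps, remove it and slide everything
    above down by inserting an empty row at the top."""
    for r in range(len(grid) - 1, -1, -1):
        if all(grid[r]):                  # no gaps: every cell is filled
            del grid[r]
            grid.insert(0, [0] * width)   # rows above slide down
            return True                   # caller bumps score and tempo
    return False

grid = [[0, 0, 0],
        [1, 0, 1],
        [1, 1, 1]]                        # bottom row is full
print(clear_row(grid, 3))                 # True: a row was cleared
print(grid)                               # [[0, 0, 0], [0, 0, 0], [1, 0, 1]]
```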
The picture below illustrates the execution of the silhouette DRAKON algorithm. The algorithm execution is animated by highlighting diagram elements in running order.
The 'Fishing' silhouette consists of four trees:
The main path of each tree is shown by a highlighted thick vertical line, which is called a skewer.
The flow graph always has a path from the Headline icon to each vertex (node) of the control flow graph. Consequently, a silhouette cannot contain unreachable code under any conditions.
The DRAKON language is used at the German Aerospace Center to implement some critical functions dictated by the safety regulations of flight tests, where automation is important because the vehicle may be at maximum distance from the ground station and the process needs quick automatic execution.
The DRAKON Editor software was used to graphically program flowcharts, which were specially checked. C code was generated from the drakon-charts, for instance for a DRAKON representation of launch-detection code.[13]
The DRAKON language may be used as the language to model and visualize business processes.
"The DRAKON language was applied as the basic language for constructing models of business processes, which makes it possible to obtain a prototype of a finite-state machine when building models of business processes. The visualization of business processes in the state space allows the decision maker to improve the efficiency of the decision-making".[14]
While DRAKON is primarily designed as a tool for comprehending computer programs, drakon-charts can also be used to illustrate processes in fields not related to computing.
In the DRAKON editor, pictures can be added to the DRAKON icons. This ability is used in some fields to easily create flowchart-like infographics. In Russia the DRAKON editor is known for being used in the medical field as a tool for making instructional charts for patients or medical personnel.[15]
https://en.wikipedia.org/wiki/DRAKON
In probability theory and statistics, a Markov chain or Markov process is a stochastic process describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. Informally, this may be thought of as, "What happens next depends only on the state of affairs now." A countably infinite sequence, in which the chain moves state at discrete time steps, gives a discrete-time Markov chain (DTMC). A continuous-time process is called a continuous-time Markov chain (CTMC). Markov processes are named in honor of the Russian mathematician Andrey Markov.
Markov chains have many applications as statistical models of real-world processes.[1] They provide the basis for general stochastic simulation methods known as Markov chain Monte Carlo, which are used for simulating sampling from complex probability distributions, and have found application in areas including Bayesian statistics, biology, chemistry, economics, finance, information theory, physics, signal processing, and speech processing.[1][2][3]
The adjectives Markovian and Markov are used to describe something that is related to a Markov process.[4]
A Markov process is a stochastic process that satisfies the Markov property (sometimes characterized as "memorylessness"). In simpler terms, it is a process for which predictions can be made regarding future outcomes based solely on its present state and—most importantly—such predictions are just as good as the ones that could be made knowing the process's full history.[5] In other words, conditional on the present state of the system, its future and past states are independent.
A Markov chain is a type of Markov process that has either a discrete state space or a discrete index set (often representing time), but the precise definition of a Markov chain varies.[6] For example, it is common to define a Markov chain as a Markov process in either discrete or continuous time with a countable state space (thus regardless of the nature of time),[7][8][9][10] but it is also common to define a Markov chain as having discrete time in either a countable or continuous state space (thus regardless of the state space).[6]
The system'sstate spaceand time parameter index need to be specified. The following table gives an overview of the different instances of Markov processes for different levels of state space generality and for discrete time v. continuous time:
Note that there is no definitive agreement in the literature on the use of some of the terms that signify special cases of Markov processes. Usually the term "Markov chain" is reserved for a process with a discrete set of times, that is, a discrete-time Markov chain (DTMC),[11] but a few authors use the term "Markov process" to refer to a continuous-time Markov chain (CTMC) without explicit mention.[12][13][14] In addition, there are other extensions of Markov processes that are referred to as such but do not necessarily fall within any of these four categories (see Markov model). Moreover, the time index need not necessarily be real-valued; as with the state space, there are conceivable processes that move through index sets with other mathematical constructs. Notice that the general state-space continuous-time Markov chain is general to such a degree that it has no designated term.
While the time parameter is usually discrete, the state space of a Markov chain does not have any generally agreed-on restrictions: the term may refer to a process on an arbitrary state space.[15] However, many applications of Markov chains employ finite or countably infinite state spaces, which have a more straightforward statistical analysis. Besides time-index and state-space parameters, there are many other variations, extensions and generalizations (see Variations). For simplicity, most of this article concentrates on the discrete-time, discrete state-space case, unless mentioned otherwise.
The changes of state of the system are called transitions. The probabilities associated with various state changes are called transition probabilities. The process is characterized by a state space, atransition matrixdescribing the probabilities of particular transitions, and an initial state (or initial distribution) across the state space. By convention, we assume all possible states and transitions have been included in the definition of the process, so there is always a next state, and the process does not terminate.
A discrete-time random process involves a system which is in a certain state at each step, with the state changing randomly between steps. The steps are often thought of as moments in time, but they can equally well refer to physical distance or any other discrete measurement. Formally, the steps are the integers or natural numbers, and the random process is a mapping of these to states. The Markov property states that the conditional probability distribution for the system at the next step (and in fact at all future steps) depends only on the current state of the system, and not additionally on the state of the system at previous steps.
Since the system changes randomly, it is generally impossible to predict with certainty the state of a Markov chain at a given point in the future. However, the statistical properties of the system's future can be predicted. In many applications, it is these statistical properties that are important.
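These statistical properties are easy to probe by simulation, since the Markov property means the next state is sampled from the current state alone. A minimal sketch with an arbitrary two-state transition matrix:

```python
import random

# Illustrative two-state transition matrix (rows sum to 1); the numbers
# are arbitrary, chosen only for demonstration.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def step(state, rng):
    """Sample the next state using only the current state."""
    return 0 if rng.random() < P[state][0] else 1

rng = random.Random(0)
state, visits = 0, [0, 0]
for _ in range(100_000):
    state = step(state, rng)
    visits[state] += 1

# The empirical occupation frequencies settle near the stationary
# distribution, (5/6, 1/6) for this matrix.
print([v / 100_000 for v in visits])
```

Individual trajectories remain unpredictable, but the long-run fraction of time spent in each state is a stable statistical property.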
Andrey Markov studied Markov processes in the early 20th century, publishing his first paper on the topic in 1906.[16][17][18] Markov processes in continuous time had been discovered long before his work, in the form of the Poisson process.[19][20][21] Markov was interested in studying an extension of independent random sequences, motivated by a disagreement with Pavel Nekrasov, who claimed independence was necessary for the weak law of large numbers to hold.[22] In his first paper on Markov chains, published in 1906, Markov showed that under certain conditions the average outcomes of the Markov chain would converge to a fixed vector of values, thus proving a weak law of large numbers without the independence assumption,[16][17][18] which had been commonly regarded as a requirement for such mathematical laws to hold.[18] Markov later used Markov chains to study the distribution of vowels in Eugene Onegin, written by Alexander Pushkin, and proved a central limit theorem for such chains.[16]
In 1912 Henri Poincaré studied Markov chains on finite groups with an aim to study card shuffling. Other early uses of Markov chains include a diffusion model, introduced by Paul and Tatyana Ehrenfest in 1907, and a branching process, introduced by Francis Galton and Henry William Watson in 1873, preceding the work of Markov.[16][17] After the work of Galton and Watson, it was later revealed that their branching process had been independently discovered and studied around three decades earlier by Irénée-Jules Bienaymé.[23] Starting in 1928, Maurice Fréchet became interested in Markov chains, eventually resulting in him publishing in 1938 a detailed study on Markov chains.[16][24]
Andrey Kolmogorov developed in a 1931 paper a large part of the early theory of continuous-time Markov processes.[25][26] Kolmogorov was partly inspired by Louis Bachelier's 1900 work on fluctuations in the stock market as well as Norbert Wiener's work on Einstein's model of Brownian movement.[25][27] He introduced and studied a particular set of Markov processes known as diffusion processes, where he derived a set of differential equations describing the processes.[25][28] Independent of Kolmogorov's work, Sydney Chapman derived in a 1928 paper an equation, now called the Chapman–Kolmogorov equation, in a less mathematically rigorous way than Kolmogorov, while studying Brownian movement.[29] The differential equations are now called the Kolmogorov equations[30] or the Kolmogorov–Chapman equations.[31] Other mathematicians who contributed significantly to the foundations of Markov processes include William Feller, starting in the 1930s, and then later Eugene Dynkin, starting in the 1950s.[26]
Suppose that there is a coin purse containing five coins worth 25¢, five coins worth 10¢ and five coins worth 5¢, and one by one, coins are randomly drawn from the purse and are set on a table. If Xn{\displaystyle X_{n}} represents the total value of the coins set on the table after n draws, with X0=0{\displaystyle X_{0}=0}, then the sequence {Xn:n∈N}{\displaystyle \{X_{n}:n\in \mathbb {N} \}} is not a Markov process.
To see why this is the case, suppose that in the first six draws, all five nickels and a quarter are drawn. Thus X6=$0.50{\displaystyle X_{6}=\$0.50}. If we know not just X6{\displaystyle X_{6}}, but the earlier values as well, then we can determine which coins have been drawn, and we know that the next coin will not be a nickel; so we can determine that X7≥$0.60{\displaystyle X_{7}\geq \$0.60} with probability 1. But if we do not know the earlier values, then based only on the value X6{\displaystyle X_{6}} we might guess that we had drawn four dimes and two nickels, in which case it would certainly be possible to draw another nickel next. Thus, our guesses about X7{\displaystyle X_{7}} are impacted by our knowledge of values prior to X6{\displaystyle X_{6}}.
However, it is possible to model this scenario as a Markov process. Instead of defining Xn{\displaystyle X_{n}} to represent the total value of the coins on the table, we could define Xn{\displaystyle X_{n}} to represent the count of the various coin types on the table. For instance, X6=1,0,5{\displaystyle X_{6}=1,0,5} could be defined to represent the state where there is one quarter, zero dimes, and five nickels on the table after 6 one-by-one draws. This new model could be represented by 6×6×6=216{\displaystyle 6\times 6\times 6=216} possible states, where each state represents the number of coins of each type (from 0 to 5) that are on the table. (Not all of these states are reachable within 6 draws.) Suppose that the first draw results in state X1=0,1,0{\displaystyle X_{1}=0,1,0}. The probability of achieving X2{\displaystyle X_{2}} now depends on X1{\displaystyle X_{1}}; for example, the state X2=1,0,1{\displaystyle X_{2}=1,0,1} is not possible. After the second draw, the third draw depends on which coins have so far been drawn, but no longer only on the coins that were drawn for the first state (since probabilistically important information has since been added to the scenario). In this way, the likelihood of the Xn=i,j,k{\displaystyle X_{n}=i,j,k} state depends exclusively on the outcome of the Xn−1=ℓ,m,p{\displaystyle X_{n-1}=\ell ,m,p} state.
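The coin-purse argument can be checked directly: the two histories from the example above reach the same total value but give different probabilities for the next draw, so the total value alone is not a Markov state. (The helper function is illustrative.)

```python
from fractions import Fraction

# Two draw histories (values in cents) that both leave $0.50 on the
# table after six draws.
hist_a = [5, 5, 5, 5, 5, 25]      # five nickels and a quarter
hist_b = [10, 10, 10, 10, 5, 5]   # four dimes and two nickels

def next_nickel_prob(history):
    """P(the seventh draw is a nickel), given the full draw history."""
    remaining = {25: 5, 10: 5, 5: 5}   # the purse starts with 5 of each
    for coin in history:
        remaining[coin] -= 1
    return Fraction(remaining[5], sum(remaining.values()))

# Same total value, different transition probabilities: the total-value
# process violates the Markov property.
print(next_nickel_prob(hist_a))   # 0: no nickels remain
print(next_nickel_prob(hist_b))   # 1/3: a nickel is still possible
```

Tracking the per-denomination counts instead of the total makes the two situations distinct states, which is exactly the repair described in the text.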
A discrete-time Markov chain is a sequence of random variables X1, X2, X3, ... with the Markov property, namely that the probability of moving to the next state depends only on the present state and not on the previous states:

Pr(Xn+1=x∣X1=x1,X2=x2,…,Xn=xn)=Pr(Xn+1=x∣Xn=xn).{\displaystyle \Pr(X_{n+1}=x\mid X_{1}=x_{1},X_{2}=x_{2},\ldots ,X_{n}=x_{n})=\Pr(X_{n+1}=x\mid X_{n}=x_{n}).}
The possible values of Xi form a countable set S called the state space of the chain.
A continuous-time Markov chain (Xt)t≥ 0 is defined by a finite or countable state space S, a transition rate matrix Q with dimensions equal to that of the state space, and an initial probability distribution defined on the state space. For i ≠ j, the elements qij are non-negative and describe the rate of the process transitions from state i to state j. The elements qii are chosen such that each row of the transition rate matrix sums to zero, while the row-sums of a probability transition matrix in a (discrete) Markov chain are all equal to one.
There are three equivalent definitions of the process.[40]
Let Xt{\displaystyle X_{t}} be the random variable describing the state of the process at time t, and assume the process is in a state i at time t.
Then, knowing Xt=i{\displaystyle X_{t}=i}, Xt+h=j{\displaystyle X_{t+h}=j} is independent of previous values (Xs:s<t){\displaystyle \left(X_{s}:s<t\right)}, and as h → 0 for all j and for all t, Pr(X(t+h)=j∣X(t)=i)=δij+qijh+o(h),{\displaystyle \Pr(X(t+h)=j\mid X(t)=i)=\delta _{ij}+q_{ij}h+o(h),} where δij{\displaystyle \delta _{ij}} is the Kronecker delta, using the little-o notation.
The qij{\displaystyle q_{ij}} can be seen as measuring how quickly the transition from i to j happens.
Define a discrete-time Markov chain Yn to describe the nth jump of the process and variables S1, S2, S3, ... to describe holding times in each of the states, where Si follows the exponential distribution with rate parameter −qYiYi.
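This jump-chain definition translates directly into a simulation: hold in state i for an exponential time with rate −qii, then jump according to the embedded chain. A sketch with an arbitrary two-state rate matrix:

```python
import random

# Illustrative two-state rate matrix Q (rows sum to zero).
Q = [[-2.0,  2.0],
     [ 1.0, -1.0]]

def occupation_times(t_end, rng, state=0):
    """Simulate the CTMC up to time t_end by alternating exponential
    holding times with jumps of the embedded chain; return the time
    spent in each state."""
    t, time_in = 0.0, [0.0, 0.0]
    while True:
        hold = rng.expovariate(-Q[state][state])   # S ~ Exp(-q_ii)
        if t + hold >= t_end:
            time_in[state] += t_end - t
            return time_in
        time_in[state] += hold
        t += hold
        state = 1 - state   # with two states, the jump is deterministic

rng = random.Random(1)
occ = occupation_times(10_000.0, rng)
# Long-run occupation fractions approach the stationary distribution
# (1/3, 2/3), which solves pi Q = 0.
print([x / 10_000.0 for x in occ])
```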
For any value n = 0, 1, 2, 3, ... and times indexed up to this value of n: t0, t1, t2, ... and all states recorded at these times i0, i1, i2, i3, ... it holds that

Pr(Xtn+1=in+1∣Xt0=i0,…,Xtn=in)=pinin+1(tn+1−tn){\displaystyle \Pr(X_{t_{n+1}}=i_{n+1}\mid X_{t_{0}}=i_{0},\ldots ,X_{t_{n}}=i_{n})=p_{i_{n}i_{n+1}}(t_{n+1}-t_{n})}
where pij is the solution of the forward equation (a first-order differential equation)

P′(t)=P(t)Q{\displaystyle P'(t)=P(t)Q}
with the initial condition that P(0) is the identity matrix.
If the state space is finite, the transition probability distribution can be represented by a matrix, called the transition matrix, with the (i, j)th element of P equal to

pij=Pr(Xn+1=j∣Xn=i).{\displaystyle p_{ij}=\Pr(X_{n+1}=j\mid X_{n}=i).}
Since each row of P sums to one and all elements are non-negative, P is a right stochastic matrix.
A stationary distribution π is a (row) vector whose entries are non-negative and sum to 1, and which is unchanged by the operation of the transition matrix P on it; it is thus defined by

π=πP.{\displaystyle \pi =\pi \mathbf {P} .}
By comparing this definition with that of an eigenvector we see that the two concepts are related and that

π=e∑iei{\displaystyle \pi ={\frac {e}{\sum _{i}e_{i}}}}
is a normalized (∑iπi=1{\textstyle \sum _{i}\pi _{i}=1}) multiple of a left eigenvector e of the transition matrix P with an eigenvalue of 1. If there is more than one unit eigenvector then a weighted sum of the corresponding stationary states is also a stationary state. But for a Markov chain one is usually more interested in a stationary state that is the limit of the sequence of distributions for some initial distribution.
The values of a stationary distribution πi{\displaystyle \textstyle \pi _{i}} are associated with the state space of P, and its eigenvectors have their relative proportions preserved. Since the components of π are positive and the constraint that their sum is unity can be rewritten as ∑i1⋅πi=1{\textstyle \sum _{i}1\cdot \pi _{i}=1}, we see that the dot product of π with a vector whose components are all 1 is unity and that π lies on a simplex.
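The left-eigenvector characterization can be checked numerically. A sketch using NumPy with an arbitrary two-state transition matrix:

```python
import numpy as np

# Illustrative transition matrix; rows sum to 1.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Left eigenvectors of P are right eigenvectors of P^T.
eigvals, eigvecs = np.linalg.eig(P.T)
i = np.argmin(np.abs(eigvals - 1.0))   # the eigenvalue equal to 1
pi = np.real(eigvecs[:, i])
pi = pi / pi.sum()                     # normalize so the entries sum to 1

print(pi)          # [5/6, 1/6] for this P
print(pi @ P)      # equals pi: the distribution is stationary
```

Dividing by the sum both normalizes the eigenvector onto the probability simplex and fixes its overall sign.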
If the Markov chain is time-homogeneous, then the transition matrix P is the same after each step, so the k-step transition probability can be computed as the k-th power of the transition matrix, Pk.
If the Markov chain is irreducible and aperiodic, then there is a unique stationary distribution π.[41] Additionally, in this case Pk converges to a rank-one matrix in which each row is the stationary distribution π:

limk→∞Pk=1π{\displaystyle \lim _{k\to \infty }\mathbf {P} ^{k}=\mathbf {1} \pi }
where 1 is the column vector with all entries equal to 1. This is stated by the Perron–Frobenius theorem. If, by whatever means, limk→∞Pk{\textstyle \lim _{k\to \infty }\mathbf {P} ^{k}} is found, then the stationary distribution of the Markov chain in question can be easily determined for any starting distribution, as will be explained below.
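For an irreducible, aperiodic chain, this convergence is easy to observe by raising the transition matrix to a high power (the matrix below is illustrative):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])   # irreducible and aperiodic

Pk = np.linalg.matrix_power(P, 64)
print(Pk)
# Both rows have converged to the stationary distribution (5/6, 1/6),
# so any starting distribution x gives x @ Pk equal to that distribution.
x = np.array([0.2, 0.8])
print(x @ Pk)
```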
For some stochastic matrices P, the limit limk→∞Pk{\textstyle \lim _{k\to \infty }\mathbf {P} ^{k}} does not exist while the stationary distribution does, as shown by this example:
(This example illustrates a periodic Markov chain.)
Because there are a number of different special cases to consider, the process of finding this limit if it exists can be a lengthy task. However, there are many techniques that can assist in finding this limit. Let P be an n×n matrix, and define Q=limk→∞Pk.{\textstyle \mathbf {Q} =\lim _{k\to \infty }\mathbf {P} ^{k}.}
It is always true that

QP=Q.{\displaystyle \mathbf {Q} \mathbf {P} =\mathbf {Q} .}
Subtracting Q from both sides and factoring then yields

Q(P−In)=0n,n,{\displaystyle \mathbf {Q} (\mathbf {P} -\mathbf {I} _{n})=\mathbf {0} _{n,n},}
where In is the identity matrix of size n, and 0n,n is the zero matrix of size n×n. Multiplying together stochastic matrices always yields another stochastic matrix, so Q must be a stochastic matrix (see the definition above). It is sometimes sufficient to use the matrix equation above and the fact that Q is a stochastic matrix to solve for Q. Including the fact that the sum of each of the rows in P is 1, there are n+1 equations for determining n unknowns, so it is computationally easier if, on the one hand, one selects one row in Q and substitutes each of its elements by one, and on the other, one substitutes the corresponding element (the one in the same column) in the vector 0, and next left-multiplies this latter vector by the inverse of the transformed former matrix to find Q.
Here is one method for doing so: first, define the function f(A) to return the matrix A with its right-most column replaced with all 1's. If [f(P−In)]−1 exists then[42][41]

Q=f(0n,n)[f(P−In)]−1,{\displaystyle \mathbf {Q} =f(\mathbf {0} _{n,n})[f(\mathbf {P} -\mathbf {I} _{n})]^{-1},}

where f(0n,n) is the zero matrix with its right-most column replaced with all 1's.
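A sketch of this column-replacement method in NumPy (the transition matrix is an arbitrary irreducible, aperiodic example). Since π f(P − In) = (0, ..., 0, 1), π is simply the last row of [f(P − In)]−1, and Q has π in every row:

```python
import numpy as np

# pi solves pi (P - I) = 0 together with sum(pi) = 1; replacing the
# right-most column of (P - I) with ones folds the normalization
# constraint into one invertible linear system.
P = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.6, 0.4]])   # illustrative irreducible, aperiodic chain
n = P.shape[0]

A = P - np.eye(n)
A[:, -1] = 1.0                    # f(P - I_n)

# pi @ f(P - I) = (0, ..., 0, 1), so pi is the last row of the inverse.
pi = np.linalg.inv(A)[-1]

Q = np.tile(pi, (n, 1))           # Q = lim P^k: pi in every row
print(pi)
print(np.allclose(np.linalg.matrix_power(P, 200), Q))  # True
```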
One thing to notice is that if P has an element Pi,i on its main diagonal that is equal to 1 and the ith row or column is otherwise filled with 0's, then that row or column will remain unchanged in all of the subsequent powers Pk. Hence, the ith row or column of Q will have the 1 and the 0's in the same positions as in P.
As stated earlier, from the equation π=πP,{\displaystyle {\boldsymbol {\pi }}={\boldsymbol {\pi }}\mathbf {P} ,} (if it exists) the stationary (or steady state) distribution π is a left eigenvector of the row stochastic matrix P. Then assuming that P is diagonalizable, or equivalently that P has n linearly independent eigenvectors, the speed of convergence is elaborated as follows. (For non-diagonalizable, that is, defective matrices, one may start with the Jordan normal form of P and proceed with a somewhat more involved set of arguments in a similar way.[43])
Let U be the matrix of eigenvectors (each normalized to having an L2 norm equal to 1) where each column is a left eigenvector of P, and let Σ be the diagonal matrix of eigenvalues of P, that is, Σ = diag(λ1, λ2, λ3, ..., λn). Then by eigendecomposition
Let the eigenvalues be enumerated such that:

1=|λ1|>|λ2|≥|λ3|≥⋯≥|λn|.{\displaystyle 1=|\lambda _{1}|>|\lambda _{2}|\geq |\lambda _{3}|\geq \cdots \geq |\lambda _{n}|.}
Since P is a row stochastic matrix, its largest left eigenvalue is 1. If there is a unique stationary distribution, then the largest eigenvalue and the corresponding eigenvector is unique too (because there is no other π which solves the stationary distribution equation above). Let ui be the i-th column of the U matrix, that is, ui is the left eigenvector of P corresponding to λi. Also let x be a length n row vector that represents a valid probability distribution; since the eigenvectors ui span Rn,{\displaystyle \mathbb {R} ^{n},} we can write

x=∑i=1naiui.{\displaystyle \mathbf {x} =\sum _{i=1}^{n}a_{i}\mathbf {u} _{i}.}
If we multiply x with P from the right and continue this operation with the results, in the end we get the stationary distribution π. In other words, π = a1u1 = limk→∞ xPk. That means

π(k)=xPk=∑i=1naiλikui.{\displaystyle {\boldsymbol {\pi }}^{(k)}=\mathbf {x} \mathbf {P} ^{k}=\sum _{i=1}^{n}a_{i}\lambda _{i}^{k}\mathbf {u} _{i}.}
Since π is parallel to u1 (normalized by the L2 norm) and π(k) is a probability vector, π(k) approaches a1u1 = π as k → ∞ exponentially, with a speed on the order of λ2/λ1. This follows because |λ2|≥⋯≥|λn|,{\displaystyle |\lambda _{2}|\geq \cdots \geq |\lambda _{n}|,} hence λ2/λ1 is the dominant term. The smaller the ratio is, the faster the convergence.[44] Random noise in the state distribution π can also speed up this convergence to the stationary distribution.[45]
Many results for Markov chains with finite state space can be generalized to chains with uncountable state space through Harris chains.
The use of Markov chains in Markov chain Monte Carlo methods covers cases where the process follows a continuous state space.
"Locally interacting Markov chains" are Markov chains with an evolution that takes into account the state of other Markov chains. This corresponds to the situation when the state space has a (Cartesian-) product form.
See interacting particle system and stochastic cellular automata (probabilistic cellular automata).
See for instance Interaction of Markov Processes[46] or [47].
Two states are said to communicate with each other if both are reachable from one another by a sequence of transitions that have positive probability. This is an equivalence relation which yields a set of communicating classes. A class is closed if the probability of leaving the class is zero. A Markov chain is irreducible if there is only one communicating class: the whole state space.
A state i has period k if k is the greatest common divisor of the numbers of transitions by which i can be reached, starting from i. That is:

k=gcd{n>0:Pr(Xn=i∣X0=i)>0}{\displaystyle k=\gcd\{n>0:\Pr(X_{n}=i\mid X_{0}=i)>0\}}
The state is periodic if k>1{\displaystyle k>1}; otherwise k=1{\displaystyle k=1} and the state is aperiodic.
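Since the period depends only on which transitions have positive probability, it can be computed from adjacency lists alone. A small sketch (the two example chains are illustrative):

```python
from math import gcd

def period(adj, i, max_len=60):
    """gcd of all n <= max_len such that state i can return to itself
    in exactly n transitions of positive probability."""
    g = 0
    frontier = {i}                      # states reachable in exactly n steps
    for n in range(1, max_len + 1):
        frontier = {j for s in frontier for j in adj[s]}
        if i in frontier:
            g = gcd(g, n)
    return g

cycle = {0: [1], 1: [2], 2: [0]}        # deterministic 3-cycle
lazy = {0: [0, 1], 1: [0]}              # self-loop makes state 0 aperiodic
print(period(cycle, 0))   # 3: periodic
print(period(lazy, 0))    # 1: aperiodic
```

Capping the walk at max_len is a simplification; for a finite chain a bound on the order of the number of states suffices in practice.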
A state i is said to be transient if, starting from i, there is a non-zero probability that the chain will never return to i. It is called recurrent (or persistent) otherwise.[48] For a recurrent state i, the mean hitting time is defined as

Mi=E[Ti],{\displaystyle M_{i}=E[T_{i}],}

the expected number of steps for the chain, started at i, to return to i.
State i is positive recurrent if Mi{\displaystyle M_{i}} is finite and null recurrent otherwise. Periodicity, transience, recurrence and positive and null recurrence are class properties — that is, if one state has the property then all states in its communicating class have the property.[49]
A state i is called absorbing if there are no outgoing transitions from the state.
Since periodicity is a class property, if a Markov chain is irreducible, then all its states have the same period. In particular, if one state is aperiodic, then the whole Markov chain is aperiodic.[50]
If a finite Markov chain is irreducible, then all states are positive recurrent, and it has a unique stationary distribution given by πi=1/E[Ti]{\displaystyle \pi _{i}=1/E[T_{i}]}.
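The identity πi = 1/E[Ti] can be checked by simulating return times; the two-state matrix below is illustrative, with stationary distribution (5/6, 1/6):

```python
import random

P = [[0.9, 0.1],
     [0.5, 0.5]]   # stationary distribution (5/6, 1/6)

rng = random.Random(2)

def mean_return_time(i, trips=20_000):
    """Average number of steps to come back to state i, starting there."""
    total = 0
    for _ in range(trips):
        state, steps = i, 0
        while True:
            state = 0 if rng.random() < P[state][0] else 1
            steps += 1
            if state == i:
                break
        total += steps
    return total / trips

print(mean_return_time(0))   # near 1.2 = 1 / (5/6)
print(mean_return_time(1))   # near 6.0 = 1 / (1/6)
```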
A state i is said to be ergodic if it is aperiodic and positive recurrent. In other words, a state i is ergodic if it is recurrent, has a period of 1, and has finite mean recurrence time.
If all states in an irreducible Markov chain are ergodic, then the chain is said to be ergodic. Equivalently, there exists some integer k{\displaystyle k} such that all entries of Mk{\displaystyle M^{k}} are positive.
It can be shown that a finite state irreducible Markov chain is ergodic if it has an aperiodic state. More generally, a Markov chain is ergodic if there is a number N such that any state can be reached from any other state in any number of steps less than or equal to N. In the case of a fully connected transition matrix, where all transitions have a non-zero probability, this condition is fulfilled with N = 1.
A Markov chain with more than one state and just one out-going transition per state is either not irreducible or not aperiodic, hence cannot be ergodic.
Some authors call any irreducible, positive recurrent Markov chains ergodic, even periodic ones.[51] In fact, merely irreducible Markov chains correspond to ergodic processes, defined according to ergodic theory.[52]
Some authors call a matrix primitive if there exists some integer k{\displaystyle k} such that all entries of Mk{\displaystyle M^{k}} are positive.[53] Some authors call it regular.[54]
The index of primitivity, or exponent, of a regular matrix is the smallest k{\displaystyle k} such that all entries of Mk{\displaystyle M^{k}} are positive. The exponent is purely a graph-theoretic property, since it depends only on whether each entry of M{\displaystyle M} is zero or positive, and therefore can be found on a directed graph with sign(M){\displaystyle \mathrm {sign} (M)} as its adjacency matrix.
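The exponent can therefore be computed with boolean matrix powers over the zero pattern alone. A sketch; the 3-state digraph below (a cycle plus one extra edge) attains the classical Wielandt bound (n-1)^2 + 1 = 5:

```python
def primitivity_index(adj, max_k=100):
    """Smallest k with every entry of M^k positive, using only the
    boolean (sign) pattern of the matrix; None if not primitive
    within max_k."""
    n = len(adj)
    M = [row[:] for row in adj]
    for k in range(1, max_k + 1):
        if all(all(row) for row in M):
            return k
        # boolean product: j is reachable in k+1 steps iff it is
        # reachable from some t in k steps with an edge t -> j
        M = [[any(M[i][t] and adj[t][j] for t in range(n))
              for j in range(n)] for i in range(n)]
    return None

# A 3-cycle (0 -> 1 -> 2 -> 0) with one extra edge 2 -> 1.
A = [[False, True,  False],
     [False, False, True],
     [True,  True,  False]]
print(primitivity_index(A))   # 5
```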
There are several combinatorial results about the exponent when there are finitely many states. Let n{\displaystyle n} be the number of states, then[55]
If a Markov chain has a stationary distribution, then it can be converted to a measure-preserving dynamical system: let the probability space be Ω=ΣN{\displaystyle \Omega =\Sigma ^{\mathbb {N} }}, where Σ{\displaystyle \Sigma } is the set of all states for the Markov chain. Let the sigma-algebra on the probability space be generated by the cylinder sets. Let the probability measure be generated by the stationary distribution, and the Markov chain transition. Let T:Ω→Ω{\displaystyle T:\Omega \to \Omega } be the shift operator: T(X0,X1,…)=(X1,…){\displaystyle T(X_{0},X_{1},\dots )=(X_{1},\dots )}. Similarly we can construct such a dynamical system with Ω=ΣZ{\displaystyle \Omega =\Sigma ^{\mathbb {Z} }} instead.[57]
Since irreducible Markov chains with finite state spaces have a unique stationary distribution, the above construction is unambiguous for irreducible Markov chains.
In ergodic theory, a measure-preserving dynamical system is called ergodic if any measurable subset S{\displaystyle S} such that T−1(S)=S{\displaystyle T^{-1}(S)=S} implies S=∅{\displaystyle S=\emptyset } or Ω{\displaystyle \Omega } (up to a null set).
The terminology is inconsistent. Given a Markov chain with a stationary distribution that is strictly positive on all states, the Markov chain is irreducible if its corresponding measure-preserving dynamical system is ergodic.[52]
In some cases, apparently non-Markovian processes may still have Markovian representations, constructed by expanding the concept of the "current" and "future" states. For example, let X be a non-Markovian process. Then define a process Y, such that each state of Y represents a time-interval of states of X. Mathematically, this takes the form:
If Y has the Markov property, then it is a Markovian representation of X.
An example of a non-Markovian process with a Markovian representation is an autoregressive time series of order greater than one.[58]
The hitting time is the time, starting in a given set of states, until the chain arrives in a given state or set of states. The distribution of such a time period has a phase-type distribution. The simplest such distribution is that of a single exponentially distributed transition.
For a subset of states A ⊆ S, the vector kA of hitting times (where element kiA{\displaystyle k_{i}^{A}} represents the expected value, starting in state i, of the time until the chain enters one of the states in the set A) is the minimal non-negative solution to[59]

kiA=0 for i∈A,−∑j∈SqijkjA=1 for i∉A.{\displaystyle {\begin{aligned}k_{i}^{A}&=0&&{\text{for }}i\in A,\\-\sum _{j\in S}q_{ij}k_{j}^{A}&=1&&{\text{for }}i\notin A.\end{aligned}}}
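For a finite chain in which the target set is reachable from every state, this system is a plain linear solve. A NumPy sketch with an illustrative rate matrix and target set A = {2}:

```python
import numpy as np

# Expected hitting times of A = {2}: k_i = 0 for i in A, and
# -sum_j q_ij k_j = 1 for every state i not in A.
Q = np.array([[-1.0,  0.5,  0.5],
              [ 0.5, -1.0,  0.5],
              [ 0.0,  0.0,  0.0]])   # state 2 is absorbing
target = {2}
n = Q.shape[0]

M = np.eye(n)          # identity rows pin k_i = 0 for i in the target set
b = np.zeros(n)
for i in range(n):
    if i not in target:
        M[i] = -Q[i]   # -sum_j q_ij k_j = 1 for the other states
        b[i] = 1.0

k = np.linalg.solve(M, b)
print(k)   # [2. 2. 0.]: expected time to reach state 2 from each state
```

By symmetry of the example, both transient states take an expected time of 2 to be absorbed.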
For a CTMC Xt, the time-reversed process is defined to be X^t=XT−t{\displaystyle {\hat {X}}_{t}=X_{T-t}}. By Kelly's lemma this process has the same stationary distribution as the forward process.
A chain is said to be reversible if the reversed process is the same as the forward process. Kolmogorov's criterion states that the necessary and sufficient condition for a process to be reversible is that the product of transition rates around a closed loop must be the same in both directions.
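Equivalently, a chain with stationary distribution π is reversible exactly when the detailed-balance condition πi pij = πj pji holds for every pair of states, which is straightforward to test (both example chains are illustrative):

```python
# Detailed balance: pi_i * p_ij == pi_j * p_ji for every pair of states.
def is_reversible(P, pi, tol=1e-12):
    n = len(P)
    return all(abs(pi[i] * P[i][j] - pi[j] * P[j][i]) < tol
               for i in range(n) for j in range(n))

# Symmetric random walk on a triangle: uniform stationary distribution,
# and every loop has the same transition product in both directions.
P_sym = [[0.0, 0.5, 0.5],
         [0.5, 0.0, 0.5],
         [0.5, 0.5, 0.0]]
print(is_reversible(P_sym, [1/3, 1/3, 1/3]))   # True

# A biased cycle (always step clockwise) also has the uniform stationary
# distribution but is not reversible: the loop products differ by direction.
P_cycle = [[0.0, 1.0, 0.0],
           [0.0, 0.0, 1.0],
           [1.0, 0.0, 0.0]]
print(is_reversible(P_cycle, [1/3, 1/3, 1/3]))  # False
```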
One method of finding the stationary probability distribution, π, of an ergodic continuous-time Markov chain, Q, is by first finding its embedded Markov chain (EMC). Strictly speaking, the EMC is a regular discrete-time Markov chain, sometimes referred to as a jump process. Each element of the one-step transition probability matrix of the EMC, S, is denoted by sij, and represents the conditional probability of transitioning from state i into state j. These conditional probabilities may be found by

sij={qij∑k≠iqikif i≠j,0otherwise.{\displaystyle s_{ij}={\begin{cases}{\frac {q_{ij}}{\sum _{k\neq i}q_{ik}}}&{\text{if }}i\neq j,\\0&{\text{otherwise}}.\end{cases}}}
From this, S may be written as

S=I−(diag⁡(Q))−1Q{\displaystyle S=I-\left(\operatorname {diag} (Q)\right)^{-1}Q}
where I is the identity matrix and diag(Q) is the diagonal matrix formed by selecting the main diagonal from the matrix Q and setting all other elements to zero.
To find the stationary probability distribution vector, we must next find φ{\displaystyle \varphi } such that

φS=φ,{\displaystyle \varphi S=\varphi ,}
with φ{\displaystyle \varphi } being a row vector, such that all elements in φ{\displaystyle \varphi } are greater than 0 and ‖φ‖1{\displaystyle \|\varphi \|_{1}} = 1. From this, π may be found as

π=−φ(diag⁡(Q))−1‖φ(diag⁡(Q))−1‖1.{\displaystyle \pi ={-\varphi (\operatorname {diag} (Q))^{-1} \over \left\|\varphi (\operatorname {diag} (Q))^{-1}\right\|_{1}}.}
(S may be periodic, even if Q is not. Once π is found, it must be normalized to a unit vector.)
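The recipe can be carried out numerically end-to-end; the sketch below uses an arbitrary two-state generator, with an eigen-solve standing in for "find φ such that φS = φ":

```python
import numpy as np

# Illustrative CTMC generator; rows sum to zero.
Q = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])

D_inv = np.diag(1.0 / np.diag(Q))    # diag(Q)^{-1}
S = np.eye(2) - D_inv @ Q            # one-step matrix of the jump chain

# phi S = phi: phi is a left eigenvector of S for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(S.T)
phi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
phi = phi / phi.sum()

pi = -phi @ D_inv                    # reweight by mean holding times 1/(-q_ii)
pi = pi / pi.sum()                   # normalize to a probability vector
print(pi)        # [1/3, 2/3]
print(pi @ Q)    # approximately [0, 0]: pi is stationary for the CTMC
```

The jump chain here happens to be periodic (it alternates states), which is exactly the caveat noted above; the normalization step is still well defined.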
Another discrete-time process that may be derived from a continuous-time Markov chain is a δ-skeleton—the (discrete-time) Markov chain formed by observing X(t) at intervals of δ units of time. The random variables X(0), X(δ), X(2δ), ... give the sequence of states visited by the δ-skeleton.
Markov models are used to model changing systems. There are four main types of models that generalize Markov chains, depending on whether every sequential state is observable or not, and on whether the system is to be adjusted on the basis of observations made:
A Bernoulli scheme is a special case of a Markov chain where the transition probability matrix has identical rows, which means that the next state is independent of even the current state (in addition to being independent of the past states). A Bernoulli scheme with only two possible states is known as a Bernoulli process.
Note, however, by the Ornstein isomorphism theorem, that every aperiodic and irreducible Markov chain is isomorphic to a Bernoulli scheme;[60] thus, one might equally claim that Markov chains are a "special case" of Bernoulli schemes. The isomorphism generally requires a complicated recoding. The isomorphism theorem is even a bit stronger: it states that any stationary stochastic process is isomorphic to a Bernoulli scheme; the Markov chain is just one such example.
When the Markov matrix is replaced by the adjacency matrix of a finite graph, the resulting shift is termed a topological Markov chain or a subshift of finite type.[60] A Markov matrix that is compatible with the adjacency matrix can then provide a measure on the subshift. Many chaotic dynamical systems are isomorphic to topological Markov chains; examples include diffeomorphisms of closed manifolds, the Prouhet–Thue–Morse system, the Chacon system, sofic systems, context-free systems and block-coding systems.[60]
Markov chains have been employed in a wide range of topics across the natural and social sciences, and in technological applications. They have been used for forecasting in several areas: for example, price trends,[61] wind power,[62] stochastic terrorism,[63][64] and solar irradiance.[65] Markov chain forecasting models use a variety of settings, from discretizing the time series[62] to hidden Markov models combined with wavelets[61] and the Markov chain mixture distribution model (MCM).[65]
Markovian systems appear extensively in thermodynamics and statistical mechanics, whenever probabilities are used to represent unknown or unmodelled details of the system, if it can be assumed that the dynamics are time-invariant, and that no relevant history need be considered which is not already included in the state description.[66][67] For example, a thermodynamic state operates under a probability distribution that is difficult or expensive to acquire. Therefore, the Markov chain Monte Carlo method can be used to draw samples randomly from a black box to approximate the probability distribution of attributes over a range of objects.[67]
Markov chains are used in lattice QCD simulations.[68]
A reaction network is a chemical system involving multiple reactions and chemical species. The simplest stochastic models of such networks treat the system as a continuous time Markov chain with the state being the number of molecules of each species and with reactions modeled as possible transitions of the chain.[69] Markov chains and continuous-time Markov processes are useful in chemistry when physical systems closely approximate the Markov property. For example, imagine a large number n of molecules in solution in state A, each of which can undergo a chemical reaction to state B with a certain average rate. Perhaps the molecule is an enzyme, and the states refer to how it is folded. The state of any single enzyme follows a Markov chain, and since the molecules are essentially independent of each other, the number of molecules in state A or B at a time is n times the probability a given molecule is in that state.
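The continuous-time picture can be simulated directly; below is a minimal sketch using the standard Gillespie (stochastic simulation) algorithm for the irreversible conversion A → B, where each of n independent molecules converts at a fixed rate (function name and parameters are illustrative):

```python
import random

def gillespie_a_to_b(n, rate, seed=0):
    """Simulate n molecules converting A -> B, each at the given rate.

    Returns a list of (time, count_in_A) pairs, one per reaction event.
    The total propensity is rate * count_in_A, so waiting times between
    events are exponentially distributed with that parameter.
    """
    rng = random.Random(seed)
    t, n_a = 0.0, n
    trajectory = [(t, n_a)]
    while n_a > 0:
        propensity = rate * n_a
        t += rng.expovariate(propensity)  # exponential waiting time
        n_a -= 1                          # one A molecule becomes B
        trajectory.append((t, n_a))
    return trajectory

traj = gillespie_a_to_b(n=1000, rate=0.5)
```

Because the conversion is irreversible, every trajectory ends with all molecules in state B; averaging many trajectories recovers the deterministic exponential decay of the A-count.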
The classical model of enzyme activity, Michaelis–Menten kinetics, can be viewed as a Markov chain, where at each time step the reaction proceeds in some direction. While Michaelis–Menten kinetics is fairly straightforward, far more complicated reaction networks can also be modeled with Markov chains.[70]
An algorithm based on a Markov chain was also used to focus the fragment-based growth of chemicals in silico towards a desired class of compounds such as drugs or natural products.[71] As a molecule is grown, a fragment is selected from the nascent molecule as the "current" state. It is not aware of its past (that is, it is not aware of what is already bonded to it). It then transitions to the next state when a fragment is attached to it. The transition probabilities are trained on databases of authentic classes of compounds.[72]
Also, the growth (and composition) of copolymers may be modeled using Markov chains. Based on the reactivity ratios of the monomers that make up the growing polymer chain, the chain's composition may be calculated (for example, whether monomers tend to add in alternating fashion or in long runs of the same monomer). Due to steric effects, second-order Markov effects may also play a role in the growth of some polymer chains.
Similarly, it has been suggested that the crystallization and growth of some epitaxial superlattice oxide materials can be accurately described by Markov chains.[73]
Markov chains are used in various areas of biology.
Several theorists have proposed the idea of the Markov chain statistical test (MCST), a method of conjoining Markov chains to form a "Markov blanket", arranging these chains in several recursive layers ("wafering") and producing more efficient test sets—samples—as a replacement for exhaustive testing.[citation needed]
Solar irradiance variability assessments are useful for solar power applications. Solar irradiance variability at any location over time is mainly a consequence of the deterministic variability of the sun's path across the sky dome and the variability in cloudiness. The variability of accessible solar irradiance on Earth's surface has been modeled using Markov chains,[76][77][78][79] including modeling the two states of clear and cloudy skies as a two-state Markov chain.[80][81]
Hidden Markov models have been used in automatic speech recognition systems.[82]
Markov chains are used throughout information processing. Claude Shannon's famous 1948 paper A Mathematical Theory of Communication, which in a single step created the field of information theory, opens by introducing the concept of entropy by modeling texts in a natural language (such as English) as generated by an ergodic Markov process, where each letter may depend statistically on previous letters.[83] Such idealized models can capture many of the statistical regularities of systems. Even without describing the full structure of the system perfectly, such signal models can make possible very effective data compression through entropy encoding techniques such as arithmetic coding. They also allow effective state estimation and pattern recognition. Markov chains also play an important role in reinforcement learning.
Markov chains are also the basis for hidden Markov models, which are an important tool in such diverse fields as telephone networks (which use the Viterbi algorithm for error correction), speech recognition and bioinformatics (such as in rearrangements detection[84]).
The LZMA lossless data compression algorithm combines Markov chains with Lempel–Ziv compression to achieve very high compression ratios.
Markov chains are the basis for the analytical treatment of queues (queueing theory). Agner Krarup Erlang initiated the subject in 1917.[85] This makes them critical for optimizing the performance of telecommunications networks, where messages must often compete for limited resources (such as bandwidth).[86]
Numerous queueing models use continuous-time Markov chains. For example, an M/M/1 queue is a CTMC on the non-negative integers where upward transitions from i to i + 1 occur at rate λ according to a Poisson process and describe job arrivals, while transitions from i to i − 1 (for i ≥ 1) occur at rate μ (job service times are exponentially distributed) and describe completed services (departures) from the queue.
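When λ < μ this chain has a stationary distribution with the standard birth-death form π_i = (1 − ρ)ρ^i, where ρ = λ/μ; a small sketch computing it (function and variable names are illustrative):

```python
def mm1_stationary(lam, mu, max_states=50):
    """First max_states stationary probabilities of an M/M/1 queue.

    Detailed balance between states i and i+1 gives
    pi_i = (1 - rho) * rho**i with rho = lam/mu, valid when lam < mu.
    """
    rho = lam / mu
    assert rho < 1, "queue is unstable unless arrival rate < service rate"
    return [(1 - rho) * rho**i for i in range(max_states)]

pi = mm1_stationary(lam=2.0, mu=3.0)

# Mean number of jobs in the system; with enough states this approaches
# the closed form rho / (1 - rho), here 2.
mean_jobs = sum(i * p for i, p in enumerate(mm1_stationary(2.0, 3.0, 2000)))
```

Detailed balance (λ·π_i = μ·π_{i+1}) can be checked numerically on the returned list, which is a quick sanity test for any birth-death chain.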
The PageRank of a webpage as used by Google is defined by a Markov chain.[87][88][89] It is the probability of being at page i in the stationary distribution of the following Markov chain on all (known) webpages. If N is the number of known webpages, and a page i has k_i outgoing links, then the chain moves from page i to each page it links to with probability (1 − α)/k_i + α/N, and to each page it does not link to with probability α/N. The parameter α is taken to be about 0.15.[90]
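The stationary distribution of this chain can be approximated by power iteration; below is a toy sketch on a three-page graph (the graph, function name and parameters are illustrative, not Google's implementation):

```python
def pagerank(links, alpha=0.15, iters=100):
    """Power iteration for the PageRank chain on a small web graph.

    links maps each page to the list of pages it links to.  From a page
    with k outgoing links the chain moves to each linked page with
    probability (1 - alpha)/k + alpha/N and to every other page with
    probability alpha/N; a dangling page jumps uniformly.
    """
    pages = sorted(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: alpha / n for p in pages}        # teleportation mass
        for q in pages:
            out = links[q]
            if out:                                # follow a random outlink
                for target in out:
                    new[target] += (1 - alpha) * rank[q] / len(out)
            else:                                  # dangling: spread uniformly
                for target in pages:
                    new[target] += (1 - alpha) * rank[q] / n
        rank = new
    return rank

# toy three-page web: a -> b, c; b -> c; c -> a
ranks = pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]})
```

Page c receives links from both a and b and therefore ends up with the highest stationary probability.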
Markov models have also been used to analyze web navigation behavior of users. A user's web link transition on a particular website can be modeled using first- or second-order Markov models and can be used to make predictions regarding future navigation and to personalize the web page for an individual user.[citation needed]
Markov chain methods have also become very important for generating sequences of random numbers to accurately reflect very complicated desired probability distributions, via a process called Markov chain Monte Carlo (MCMC). In recent years this has revolutionized the practicability of Bayesian inference methods, allowing a wide range of posterior distributions to be simulated and their parameters found numerically.[citation needed]
In 1971 a Naval Postgraduate School Master's thesis proposed to model a variety of combat between adversaries as a Markov chain "with states reflecting the control, maneuver, target acquisition, and target destruction actions of a weapons system" and discussed the parallels between the resulting Markov chain and Lanchester's laws.[91]
In 1975 Duncan and Siverson remarked that Markov chains could be used to model conflict between state actors, and thought that their analysis would help understand "the behavior of social and political organizations in situations of conflict."[92]
Markov chains are used in finance and economics to model a variety of different phenomena, including the distribution of income, the size distribution of firms, asset prices and market crashes. D. G. Champernowne built a Markov chain model of the distribution of income in 1953.[93] Herbert A. Simon and co-author Charles Bonini used a Markov chain model to derive a stationary Yule distribution of firm sizes.[94] Louis Bachelier was the first to observe that stock prices followed a random walk.[95] The random walk was later seen as evidence in favor of the efficient-market hypothesis, and random walk models were popular in the literature of the 1960s.[96] Regime-switching models of business cycles were popularized by James D. Hamilton (1989), who used a Markov chain to model switches between periods of high and low GDP growth (or, alternatively, economic expansions and recessions).[97] A more recent example is the Markov switching multifractal model of Laurent E. Calvet and Adlai J. Fisher, which builds upon the convenience of earlier regime-switching models.[98][99] It uses an arbitrarily large Markov chain to drive the level of volatility of asset returns.
Dynamic macroeconomics makes heavy use of Markov chains. An example is using Markov chains to exogenously model prices of equity (stock) in a general equilibrium setting.[100]
Credit rating agencies produce annual tables of the transition probabilities for bonds of different credit ratings.[101]
Markov chains are generally used in describing path-dependent arguments, where current structural configurations condition future outcomes. An example is the reformulation of the idea, originally due to Karl Marx's Das Kapital, tying economic development to the rise of capitalism. In current research, it is common to use a Markov chain to model how once a country reaches a specific level of economic development, the configuration of structural factors, such as the size of the middle class, the ratio of urban to rural residence, the rate of political mobilization, etc., will generate a higher probability of transitioning from an authoritarian to a democratic regime.[102]
Markov chains are employed in algorithmic music composition, particularly in software such as Csound, Max, and SuperCollider. In a first-order chain, the states of the system become note or pitch values, and a probability vector for each note is constructed, completing a transition probability matrix. An algorithm is constructed to produce output note values based on the transition matrix weightings, which could be MIDI note values, frequency (Hz), or any other desirable metric.[103]
A second-order Markov chain can be introduced by considering the current state and also the previous state. Higher, nth-order chains tend to "group" particular notes together, while 'breaking off' into other patterns and sequences occasionally. These higher-order chains tend to generate results with a sense of phrasal structure, rather than the 'aimless wandering' produced by a first-order system.[104]
Markov chains can be used structurally, as in Xenakis's Analogique A and B.[105] Markov chains are also used in systems which use a Markov model to react interactively to music input.[106]
Usually musical systems need to enforce specific control constraints on the finite-length sequences they generate, but control constraints are not compatible with Markov models, since they induce long-range dependencies that violate the Markov hypothesis of limited memory. In order to overcome this limitation, a new approach has been proposed.[107]
Markov chains can be used to model many games of chance. The children's games Snakes and Ladders and "Hi Ho! Cherry-O", for example, are represented exactly by Markov chains. At each turn, the player starts in a given state (on a given square) and from there has fixed odds of moving to certain other states (squares).[citation needed]
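Such a board game can be written down as an explicit transition matrix; below is a toy sketch with one ladder and one snake (the board layout and names are invented for illustration, not the real Snakes and Ladders board):

```python
def board_game_chain(board_size=10, spins=(1, 2, 3), jumps=None):
    """Transition matrix for a toy Snakes-and-Ladders-style board.

    State i is the current square; each spin outcome is equally likely,
    moves past the last square are capped at it, and `jumps` maps a
    square to wherever its snake or ladder sends the player.
    """
    jumps = jumps or {}
    last = board_size - 1
    P = [[0.0] * board_size for _ in range(board_size)]
    for i in range(board_size):
        if i == last:                    # final square is absorbing
            P[i][i] = 1.0
            continue
        for s in spins:
            j = min(i + s, last)         # cap at the final square
            j = jumps.get(j, j)          # apply snake or ladder, if any
            P[i][j] += 1.0 / len(spins)
    return P

def finish_probability(P, turns):
    """Probability of having reached the last square within `turns` moves."""
    dist = [1.0] + [0.0] * (len(P) - 1)  # start on square 0
    for _ in range(turns):
        dist = [sum(dist[i] * P[i][j] for i in range(len(P)))
                for j in range(len(P))]
    return dist[-1]

P = board_game_chain(jumps={4: 8, 7: 2})  # one ladder (4 -> 8), one snake (7 -> 2)
```

Because the final square is absorbing, the finishing probability is non-decreasing in the number of turns and approaches 1.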
Markov chain models have been used in advanced baseball analysis since 1960, although their use is still rare. Each half-inning of a baseball game fits the Markov chain model when the number of runners and outs are considered. During any at-bat, there are 24 possible combinations of number of outs and position of the runners. Mark Pankin shows that Markov chain models can be used to evaluate runs created for both individual players and teams.[108] He also discusses various kinds of strategies and play conditions: how Markov chain models have been used to analyze statistics for game situations such as bunting and base stealing, and differences when playing on grass vs. AstroTurf.[109]
Markov processes can also be used to generate superficially real-looking text given a sample document. Markov processes are used in a variety of recreational "parody generator" software (see dissociated press, Jeff Harrison,[110] Mark V. Shaney,[111][112] and Academias Neutronium). Several open-source text generation libraries using Markov chains exist.
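A minimal sketch of such a parody generator, building a first-order word chain from a sample text (the corpus and names are illustrative):

```python
import random
from collections import defaultdict

def build_chain(text):
    """First-order word chain: maps each word to the words that follow it."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length, seed=0):
    """Random walk on the chain from `start`; stops early at a dead end."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
chain = build_chain(corpus)
sample = generate(chain, "the", 8)
```

By construction every consecutive word pair in the output is a bigram of the corpus, which is what gives such generators their superficially plausible feel.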
|
https://en.wikipedia.org/wiki/Markov_chain
|
Ordinal priority approach (OPA) is a multiple-criteria decision analysis method that aids in solving group decision-making problems based on preference relations.
Various methods have been proposed to solve multi-criteria decision-making problems.[1] The basis of methods such as the analytic hierarchy process and the analytic network process is the pairwise comparison matrix.[2] The advantages and disadvantages of the pairwise comparison matrix were discussed by Munier and Hontoria in their book.[3] In recent years, the OPA method was proposed to solve multi-criteria decision-making problems based on ordinal data instead of the pairwise comparison matrix.[4] The OPA method is a major part of Dr. Amin Mahmoudi's PhD thesis at the Southeast University of China.[4]
This method uses a linear programming approach to compute the weights of experts, criteria, and alternatives simultaneously.[5] The main reason for using ordinal data in the OPA method is the accessibility and accuracy of ordinal data compared with the exact ratios used in group decision-making problems involving humans.[6]
In real-world situations, the experts might not have enough knowledge regarding one alternative or criterion. In this case, the input data of the problem is incomplete, which needs to be incorporated into the linear programming of the OPA. To handle the incomplete input data in the OPA method, the constraints related to the criteria or alternatives should be removed from the OPA linear-programming model.[7]
Various types of data normalization methods have been employed in multi-criteria decision-making methods in recent years. Palczewski and Sałabun showed that using various data normalization methods can change the final ranks of multi-criteria decision-making methods.[8] Javed and colleagues showed that a multiple-criteria decision-making problem can be solved by avoiding data normalization.[9] There is no need to normalize the preference relations, and thus the OPA method does not require data normalization.[10]
The OPA model is a linear programming model, which can be solved using the simplex algorithm. The steps of this method are as follows:[11]
Step 1: Identifying the experts and determining the preference of the experts based on their working experience, educational qualification, etc.
Step 2: Identifying the criteria and determining the preference of the criteria by each expert.
Step 3: Identifying the alternatives and determining the preference of the alternatives in each criterion by each expert.
Step 4: Constructing the following linear programming model and solving it with appropriate optimization software such as LINGO, GAMS, MATLAB, etc.
{\displaystyle {\begin{aligned}&\max Z\\&{\text{s.t.}}\\&Z\leq r_{i}{\Big (}r_{j}{\big (}r_{k}(w_{ijk}^{r_{k}}-w_{ijk}^{r_{k}+1}){\big )}{\Big )}\quad \forall i,j{\text{ and }}r_{k}\\&Z\leq r_{i}r_{j}r_{m}w_{ijk}^{r_{m}}\quad \forall i,j{\text{ and }}r_{m}\\&\sum _{i=1}^{p}\sum _{j=1}^{n}\sum _{k=1}^{m}w_{ijk}=1\\&w_{ijk}\geq 0\quad \forall i,j{\text{ and }}k\\&Z~{\text{unrestricted in sign}}\end{aligned}}}
In the above model, r_i (i = 1, ..., p) represents the rank of expert i, r_j (j = 1, ..., n) represents the rank of criterion j, r_k (k = 1, ..., m) represents the rank of alternative k, and w_{ijk} represents the weight of alternative k in criterion j by expert i.
After solving the OPA linear programming model, the weight of each alternative is calculated by the following equation:
wk=∑i=1p∑j=1nwijk∀k{\displaystyle {\begin{aligned}&w_{k}=\sum _{i=1}^{p}\sum _{j=1}^{n}w_{ijk}\;\;\;\;\forall k\\\end{aligned}}}
The weight of each criterion is calculated by the following equation:
wj=∑i=1p∑k=1mwijk∀j{\displaystyle {\begin{aligned}&w_{j}=\sum _{i=1}^{p}\sum _{k=1}^{m}w_{ijk}\;\;\;\;\forall j\\\end{aligned}}}
And the weight of each expert is calculated by the following equation:
wi=∑j=1n∑k=1mwijk∀i{\displaystyle {\begin{aligned}&w_{i}=\sum _{j=1}^{n}\sum _{k=1}^{m}w_{ijk}\;\;\;\;\forall i\\\end{aligned}}}
Suppose that we are going to investigate the issue of buying a house. There are two experts in this decision problem, and two criteria, cost (c) and construction quality (q), for buying the house. There are three houses (h1, h2, h3) available for purchase. The first expert (x) has three years of working experience and the second expert (y) has two years of working experience.
Step 1: The first expert (x) has more experience than expert (y), hence x > y.
Step 2: Expert x ranks cost above construction quality (c > q), while expert y ranks construction quality above cost (q > c).
Step 3: In criterion c, expert x ranks the houses h1 > h3 > h2 and expert y ranks them h1 > h2 > h3; in criterion q, expert x ranks them h2 > h1 > h3 and expert y ranks them h2 > h3 > h1.
Step 4: The OPA linear programming model is formed based on the input data as follows:
MaxZS.t.Z≤1∗1∗1∗(wxch1−wxch3)Z≤1∗1∗2∗(wxch3−wxch2)Z≤1∗1∗3∗wxch2Z≤1∗2∗1∗(wxqh2−wxqh1)Z≤1∗2∗2∗(wxqh1−wxqh3)Z≤1∗2∗3∗wxqh3Z≤2∗2∗1∗(wych1−wych2)Z≤2∗2∗2∗(wych2−wych3)Z≤2∗2∗3∗wych3Z≤2∗1∗1∗(wyqh2−wyqh3)Z≤2∗1∗2∗(wyqh3−wyqh1)Z≤2∗1∗3∗wyqh1wxch1+wxch2+wxch3+wxqh1+wxqh2+wxqh3+wych1+wych2+wych3+wyqh1+wyqh2+wyqh3=1{\displaystyle {\begin{aligned}&MaxZ\\&S.t.\\&Z\leq 1*1*1*(w_{xch1}-w_{xch3})\;\;\;\;\\&Z\leq 1*1*2*(w_{xch3}-w_{xch2})\;\;\;\;\\&Z\leq 1*1*3*w_{xch2}\;\;\;\\\\&Z\leq 1*2*1*(w_{xqh2}-w_{xqh1})\;\;\;\;\\&Z\leq 1*2*2*(w_{xqh1}-w_{xqh3})\;\;\;\;\\&Z\leq 1*2*3*w_{xqh3}\;\;\;\\\\&Z\leq 2*2*1*(w_{ych1}-w_{ych2})\;\;\;\;\\&Z\leq 2*2*2*(w_{ych2}-w_{ych3})\;\;\;\;\\&Z\leq 2*2*3*w_{ych3}\;\;\;\\\\&Z\leq 2*1*1*(w_{yqh2}-w_{yqh3})\;\;\;\;\\&Z\leq 2*1*2*(w_{yqh3}-w_{yqh1})\;\;\;\;\\&Z\leq 2*1*3*w_{yqh1}\;\;\;\\\\&w_{xch1}+w_{xch2}+w_{xch3}+w_{xqh1}+w_{xqh2}+w_{xqh3}+w_{ych1}+w_{ych2}+w_{ych3}+w_{yqh1}+w_{yqh2}+w_{yqh3}=1\\\\\end{aligned}}}
After solving the above model using optimization software, the weights of experts, criteria and alternatives are obtained as follows:
wx=wxch1+wxch2+wxch3+wxqh1+wxqh2+wxqh3=0.666667wy=wych1+wych2+wych3+wyqh1+wyqh2+wyqh3=0.333333wc=wxch1+wxch2+wxch3+wych1+wych2+wych3=0.555556wq=wxqh1+wxqh2+wxqh3+wyqh1+wyqh2+wyqh3=0.444444wh1=wxch1+wxqh1+wych1+wyqh1=0.425926wh2=wxch2+wxqh2+wych2+wyqh2=0.351852wh3=wxch3+wxqh3+wych3+wyqh3=0.222222{\displaystyle {\begin{aligned}&w_{x}=w_{xch1}+w_{xch2}+w_{xch3}+w_{xqh1}+w_{xqh2}+w_{xqh3}=0.666667\\\\&w_{y}=w_{ych1}+w_{ych2}+w_{ych3}+w_{yqh1}+w_{yqh2}+w_{yqh3}=0.333333\\\\\\&w_{c}=w_{xch1}+w_{xch2}+w_{xch3}+w_{ych1}+w_{ych2}+w_{ych3}=0.555556\\\\&w_{q}=w_{xqh1}+w_{xqh2}+w_{xqh3}+w_{yqh1}+w_{yqh2}+w_{yqh3}=0.444444\\\\\\&w_{h1}=w_{xch1}+w_{xqh1}+w_{ych1}+w_{yqh1}=0.425926\\\\&w_{h2}=w_{xch2}+w_{xqh2}+w_{ych2}+w_{yqh2}=0.351852\\\\&w_{h3}=w_{xch3}+w_{xqh3}+w_{ych3}+w_{yqh3}=0.222222\\\\\end{aligned}}}
Therefore, house 1 (h1) is the best alternative. Moreover, the criterion cost (c) is more important than construction quality (q), and, based on the experts' weights, expert (x) has a higher impact on the final selection than expert (y).
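In this instance every constraint binds at the optimum, so the weights can be recovered exactly by back-substitution instead of calling an LP solver; the sketch below reproduces the published values with exact fractions (the helper name is illustrative, and the all-constraints-tight assumption is specific to this example, verified against the solver output above):

```python
from fractions import Fraction

def block_weights(prefix_factor, order):
    """Weights (as multiples of Z) for one expert-criterion block.

    prefix_factor = r_i * r_j; order lists the alternatives from best to
    worst.  Making each constraint Z = prefix_factor * rank * (difference)
    tight and back-substituting from the worst-ranked alternative upward
    gives each weight as a multiple of Z.
    """
    m = len(order)
    w, acc = {}, Fraction(0)
    for rank in range(m, 0, -1):                 # start from the worst rank
        acc += Fraction(1, prefix_factor * rank)
        w[order[rank - 1]] = acc
    return w

blocks = {                                       # (expert, criterion): ranking
    ("x", "c"): block_weights(1 * 1, ["h1", "h3", "h2"]),
    ("x", "q"): block_weights(1 * 2, ["h2", "h1", "h3"]),
    ("y", "c"): block_weights(2 * 2, ["h1", "h2", "h3"]),
    ("y", "q"): block_weights(2 * 1, ["h2", "h3", "h1"]),
}

total = sum(sum(w.values()) for w in blocks.values())   # equals 1/Z
Z = 1 / total
weights = {(e, c, h): v * Z
           for (e, c), w in blocks.items() for h, v in w.items()}

w_x = sum(v for (e, c, h), v in weights.items() if e == "x")    # 2/3
w_h1 = sum(v for (e, c, h), v in weights.items() if h == "h1")  # 23/54
```

The exact values, Z = 4/27, w_x = 2/3 ≈ 0.666667 and w_h1 = 23/54 ≈ 0.425926, match the decimal results reported above.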
The applications of the OPA method in various fields of study are summarized as follows:
Agriculture, manufacturing, services
Construction industry
Energy and environment
Healthcare
Information technology
Transportation
Several extensions of the OPA method are listed as follows:
The following non-profit tools are available to solve the MCDM problems using the OPA method:
|
https://en.wikipedia.org/wiki/Ordinal_priority_approach
|
In decision theory, the odds algorithm (or Bruss algorithm) is a mathematical method for computing optimal strategies for a class of problems that belong to the domain of optimal stopping problems. Their solution follows from the odds strategy, and the importance of the odds strategy lies in its optimality, as explained below.
The odds algorithm applies to a class of problems called last-success problems. Formally, the objective in these problems is to maximize the probability of identifying in a sequence of sequentially observed independent events the last event satisfying a specific criterion (a "specific event"). This identification must be done at the time of observation. No revisiting of preceding observations is permitted. Usually, a specific event is defined by the decision maker as an event that is of true interest in the view of "stopping" to take a well-defined action. Such problems are encountered in several situations.
Two different situations exemplify the interest in maximizing the probability of stopping on a last specific event.
Consider a sequence of n independent events. Associate with this sequence another sequence of independent events I_1, I_2, ..., I_n with values 1 or 0. Here I_k = 1, called a success, stands for the event that the kth observation is interesting (as defined by the decision maker), and I_k = 0 for non-interesting. These random variables I_1, I_2, ..., I_n are observed sequentially, and the goal is to correctly select the last success when it is observed.
Let p_k = P(I_k = 1) be the probability that the kth event is interesting. Further let q_k = 1 − p_k and r_k = p_k/q_k. Note that r_k represents the odds of the kth event turning out to be interesting, explaining the name of the odds algorithm.
The odds algorithm sums up the odds in reverse order,
r_n + r_{n−1} + r_{n−2} + ⋯,
until this sum reaches or exceeds the value 1 for the first time. If this happens at index s, it saves s and the corresponding sum
R_s = r_n + r_{n−1} + ⋯ + r_s.
If the sum of the odds does not reach 1, it sets s = 1. At the same time it computes the product
Q_s = q_s q_{s+1} ⋯ q_n.
The output is the pair (s, R_s Q_s), where s is the stopping threshold and R_s Q_s is the win probability of the odds strategy.
The odds strategy is the rule to observe the events one after the other and to stop on the first interesting event from index s onwards (if any), where s is the stopping threshold computed by the algorithm.
The importance of the odds strategy, and hence of the odds algorithm, lies in the following odds theorem.
The odds theorem states that the odds strategy is optimal, that is, it maximizes the probability of stopping on the last success, and that its win probability equals R_s Q_s; moreover, whenever R_s ≥ 1, this win probability is at least 1/e ≈ 0.368.
The odds algorithm computes the optimal strategy and the optimal win probability at the same time. Also, the number of operations of the odds algorithm is (sub)linear in n. Hence no quicker algorithm can possibly exist for all sequences, so that the odds algorithm is, at the same time, optimal as an algorithm.
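The procedure above takes only a few lines; a minimal sketch follows. The classical secretary problem (p_k = 1/k) is the standard test case, for which the threshold for n = 100 is 38 and the win probability is about 0.37 (the function assumes p_k < 1 at the indices the backward sum visits):

```python
def odds_algorithm(p):
    """Bruss's odds algorithm for the last-success problem.

    p[k-1] is the probability that the kth of n independent events is a
    success.  Returns the stopping threshold s and the win probability
    R_s * Q_s of the odds strategy (stop on the first success from index
    s onwards).
    """
    n = len(p)
    s, R = 1, 0.0
    for k in range(n, 0, -1):        # accumulate odds r_k = p_k / q_k backwards
        R += p[k - 1] / (1.0 - p[k - 1])
        if R >= 1.0:                 # first time the sum reaches 1
            s = k
            break
    Q = 1.0
    for k in range(s, n + 1):        # Q_s = q_s * q_{s+1} * ... * q_n
        Q *= 1.0 - p[k - 1]
    return s, R * Q

threshold, win_prob = odds_algorithm([1.0 / k for k in range(1, 101)])
```

If the backward sum never reaches 1, the loop falls through and s stays at 1 with R holding the full sum, as the algorithm prescribes.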
Bruss 2000 devised the odds algorithm and coined its name. It is also known as the Bruss algorithm (strategy). Free implementations can be found on the web.
Applications reach from medical questions in clinical trials over sales problems, secretary problems, portfolio selection, (one-way) search strategies, trajectory problems and the parking problem to problems in online maintenance and others.
There exists, in the same spirit, an odds theorem for continuous-time arrival processes with independent increments such as the Poisson process (Bruss 2000). In some cases, the odds are not necessarily known in advance (as in Example 2 above), so that the application of the odds algorithm is not directly possible. In this case each step can use sequential estimates of the odds. This is meaningful if the number of unknown parameters is not large compared with the number n of observations. The question of optimality is then more complicated, however, and requires additional studies. Generalizations of the odds algorithm allow for different rewards for failing to stop and wrong stops, as well as replacing independence assumptions by weaker ones (Ferguson 2008).
Bruss & Paindaveine 2000 discussed a problem of selecting the last k successes.
Tamaki 2010 proved a multiplicative odds theorem which deals with a problem of stopping at any of the last ℓ successes.
A tight lower bound of the win probability is obtained by Matsui & Ano 2014.
Matsui & Ano 2017 discussed a problem of selecting k out of the last ℓ successes and obtained a tight lower bound of the win probability. When ℓ = k = 1, the problem is equivalent to Bruss' odds problem. If ℓ = k ≥ 1, the problem is equivalent to that in Bruss & Paindaveine 2000. A problem discussed by Tamaki 2010 is obtained by setting ℓ ≥ k = 1.
A player is allowed r choices, and he wins if any choice is the last success.
For the classical secretary problem, Gilbert & Mosteller 1966 discussed the cases r = 2, 3, 4.
The odds problem with r = 2, 3 is discussed by Ano, Kakinuma & Miyoshi 2010.
For further cases of the odds problem, see Matsui & Ano 2016.
An optimal strategy for this problem belongs to the class of strategies defined by a set of threshold numbers (a_1, a_2, ..., a_r), where a_1 > a_2 > ⋯ > a_r.
Specifically, imagine that you have r letters of acceptance labelled from 1 to r. You would have r application officers, each holding one letter. You keep interviewing the candidates and rank them on a chart that every application officer can see. Now officer i would send their letter of acceptance to the first candidate that is better than all candidates 1 to a_i. (Unsent letters of acceptance are by default given to the last applicants, the same as in the standard secretary problem.)
When r = 2, Ano, Kakinuma & Miyoshi 2010 showed that the tight lower bound of the win probability is equal to e^{−1} + e^{−3/2}. For a general positive integer r, Matsui & Ano 2016 proved that the tight lower bound of the win probability is the win probability of the secretary problem variant where one must pick the top-k candidates using just k attempts.
When r = 3, 4, 5, the tight lower bounds of the win probabilities are equal to e^{−1} + e^{−3/2} + e^{−47/24}, e^{−1} + e^{−3/2} + e^{−47/24} + e^{−2761/1152}, and e^{−1} + e^{−3/2} + e^{−47/24} + e^{−2761/1152} + e^{−4162637/1474560}, respectively.
For further numerical cases for r = 6, ..., 10, and an algorithm for general cases, see Matsui & Ano 2016.
|
https://en.wikipedia.org/wiki/Odds_algorithm
|
The mathematical discipline of topological combinatorics is the application of topological and algebro-topological methods to solving problems in combinatorics.
The discipline of combinatorial topology used combinatorial concepts in topology; in the early 20th century this turned into the field of algebraic topology.
In 1978 the situation was reversed—methods from algebraic topology were used to solve a problem in combinatorics—when László Lovász proved the Kneser conjecture, thus beginning the new field of topological combinatorics. Lovász's proof used the Borsuk–Ulam theorem, and this theorem retains a prominent role in this new field. It has many equivalent versions and analogs and has been used in the study of fair division problems.
In another application of homological methods to graph theory, Lovász proved both the undirected and directed versions of a conjecture of András Frank: Given a k-connected graph G, k points v_1, ..., v_k ∈ V(G), and k positive integers n_1, n_2, ..., n_k that sum up to |V(G)|, there exists a partition {V_1, ..., V_k} of V(G) such that v_i ∈ V_i, |V_i| = n_i, and V_i spans a connected subgraph.
In 1987 the necklace splitting problem was solved by Noga Alon using the Borsuk–Ulam theorem. It has also been used to study complexity problems in linear decision tree algorithms and the Aanderaa–Karp–Rosenberg conjecture. Other areas include the topology of partially ordered sets and Bruhat orders.
Additionally, methods from differential topology now have a combinatorial analog in discrete Morse theory.
|
https://en.wikipedia.org/wiki/Topological_combinatorics
|
A truth table is a mathematical table used in logic—specifically in connection with Boolean algebra, Boolean functions, and propositional calculus—which sets out the functional values of logical expressions on each of their functional arguments, that is, for each combination of values taken by their logical variables.[1] In particular, truth tables can be used to show whether a propositional expression is true for all legitimate input values, that is, logically valid.
A truth table has one column for each input variable (for example, A and B), and one final column showing all of the possible results of the logical operation that the table represents (for example, A XOR B). Each row of the truth table contains one possible configuration of the input variables (for instance, A = true, B = false) and the result of the operation for those values.
A proposition's truth table is a graphical representation of its truth function. The truth function can be more useful for mathematical purposes, although the same information is encoded in both.
Ludwig Wittgenstein is generally credited with inventing and popularizing the truth table in his Tractatus Logico-Philosophicus, which was completed in 1918 and published in 1921.[2] Such a system was also independently proposed in 1921 by Emil Leon Post.[3]
Irving Anellis's research shows that C. S. Peirce appears to be the earliest logician (in 1883) to devise a truth table matrix.[4]
From the summary of Anellis's paper:[4]
In 1997, John Shosky discovered, on the verso of a page of the typed transcript of Bertrand Russell's 1912 lecture on "The Philosophy of Logical Atomism", truth table matrices. The matrix for negation is Russell's, alongside of which is the matrix for material implication in the hand of Ludwig Wittgenstein. It is shown that an unpublished manuscript identified as composed by Peirce in 1893 includes a truth table matrix that is equivalent to the matrix for material implication discovered by John Shosky. An unpublished manuscript by Peirce, identified as having been composed in 1883–84 in connection with the composition of Peirce's "On the Algebra of Logic: A Contribution to the Philosophy of Notation" that appeared in the American Journal of Mathematics in 1885, includes an example of an indirect truth table for the conditional.
Truth tables can be used to prove many other logical equivalences. For example, consider the following truth table:
This demonstrates the fact that p → q is logically equivalent to ¬p ∨ q.
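The equivalence can also be checked mechanically by enumerating all rows of both truth tables; a small sketch (function names are illustrative):

```python
from itertools import product

def truth_table(f, n):
    """All rows ((inputs), output) of an n-ary Boolean function f."""
    return [(row, f(*row)) for row in product([False, True], repeat=n)]

def implies(p, q):
    """Material conditional p -> q: false only when p is true and q is false."""
    return not (p and not q)

rows_impl = truth_table(implies, 2)
rows_disj = truth_table(lambda p, q: (not p) or q, 2)  # the table of ¬p ∨ q
equivalent = rows_impl == rows_disj                     # row-by-row comparison
```

Two propositional expressions are logically equivalent exactly when their truth tables agree on every row, which is what the final comparison tests.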
Here is a truth table that gives definitions of each of the 6 possible 2-input logic gate functions of two Boolean variables P and Q:
where T means true and F means false.
For binary operators, a condensed form of truth table is also used, where the row headings and the column headings specify the operands and the table cells specify the result. For example, Boolean logic uses this condensed truth table notation:
This notation is useful especially if the operations are commutative, although one can additionally specify that the rows are the first operand and the columns are the second operand. This condensed notation is particularly useful in discussing multi-valued extensions of logic, as it significantly cuts down on combinatoric explosion of the number of rows otherwise needed. It also provides for quickly recognizable characteristic "shape" of the distribution of the values in the table which can assist the reader in grasping the rules more quickly.
Truth tables are also used to specify the function ofhardware look-up tables (LUTs)indigital logic circuitry. For an n-input LUT, the truth table will have2n{\displaystyle 2^{n}}values (or rows in the above tabular format), completely specifying a Boolean function for the LUT. By representing each Boolean value as abitin abinary number, truth table values can be efficiently encoded asintegervalues inelectronic design automation (EDA)software. For example, a 32-bit integer can encode the truth table for a LUT with up to 5 inputs.
When using an integer representation of a truth table, the output value of the LUT can be obtained by calculating a bit indexkbased on the input values of the LUT, in which case the LUT's output value is thekth bit of the integer. For example, to evaluate the output value of a LUT given anarrayofnBoolean input values, the bit index of the truth table's output value can be computed as follows: if theith input is true, letVi=1{\displaystyle V_{i}=1}, else letVi=0{\displaystyle V_{i}=0}. Then thekth bit of the binary representation of the truth table is the LUT's output value, wherek=V0×20+V1×21+V2×22+⋯+Vn−1×2n−1.{\displaystyle k=V_{0}\times 2^{0}+V_{1}\times 2^{1}+V_{2}\times 2^{2}+\dots +V_{n-1}\times 2^{n-1}.}
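The bit-index lookup described above can be sketched in Python (a minimal sketch; the function name and example table are illustrative):

```python
def lut_output(truth_table_int, inputs):
    """Look up the output of an n-input LUT stored as an integer.

    truth_table_int: integer whose k-th bit is the LUT output for input index k
    inputs: list of booleans, inputs[i] is the i-th LUT input
    """
    # k = V0*2^0 + V1*2^1 + ... + V(n-1)*2^(n-1)
    k = sum(1 << i for i, v in enumerate(inputs) if v)
    return (truth_table_int >> k) & 1

# A 2-input AND gate: output is 1 only for inputs (1, 1), i.e. bit 3 -> 0b1000
AND_TABLE = 0b1000
print(lut_output(AND_TABLE, [True, True]))   # 1
print(lut_output(AND_TABLE, [True, False]))  # 0
```

The same integer encoding scales to any LUT width: a 5-input LUT needs 32 truth-table bits, which fit in one 32-bit integer.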
Truth tables are a simple and straightforward way to encode Boolean functions; however, given the exponential growth in size as the number of inputs increases, they are not suitable for functions with a large number of inputs. Other representations which are more memory efficient are text equations and binary decision diagrams.
In digital electronics and computer science (fields of applied logic engineering and mathematics), truth tables can be used to reduce basic Boolean operations to simple correlations of inputs to outputs, without the use oflogic gatesor code. For example, a binary addition can be represented with the truth table:
where A is the first operand, B is the second operand, C is the carry digit, and R is the result.
This truth table is read left to right:
This table does not describe the logic operations necessary to implement this operation, rather it simply specifies the function of inputs to output values.
With respect to the result, this example may be arithmetically viewed as modulo 2 binary addition, and as logically equivalent to the exclusive-or (exclusive disjunction) binary logic operation.
In this case it can be used for only very simple inputs and outputs, such as 1s and 0s. However, if the number of types of values one can have on the inputs increases, the size of the truth table will increase.
For instance, in an addition operation, one needs two operands, A and B. Each can have one of two values, zero or one. The number of combinations of these two values is 2×2, or four. So the result is four possible outputs of C and R. If one were to use base 3, the size would increase to 3×3, or nine possible outputs.
The first "addition" example above is called a half-adder. A full-adder is when the carry from the previous operation is provided as input to the next adder. Thus, a truth table of eight rows would be needed to describe afull adder's logic:
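The eight rows of a full adder's truth table can be generated programmatically; the following is a minimal sketch (function name illustrative):

```python
from itertools import product

def full_adder(a, b, c):
    """Return (carry_out, result) for one-bit full addition a + b + c."""
    total = a + b + c
    return total // 2, total % 2

# Eight rows: every combination of A, B and the carry-in C
for a, b, c in product([0, 1], repeat=3):
    carry, r = full_adder(a, b, c)
    print(a, b, c, '->', carry, r)
```

As the table specifies, the result R equals A XOR B XOR C, and the carry is 1 whenever at least two inputs are 1.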
Regarding theguide columns[5]to the left of a table, which representpropositional variables, different authors have different recommendations about how to fill them in, although this is of no logical significance.[6]
Lee Archie, a professor atLander University, recommends this procedure, which is commonly followed in published truth-tables:
This method results in truth-tables such as the following table forP→ (Q∨R→ (R→ ¬P)), produced byStephen Cole Kleene:[7]
Colin Howson, on the other hand, believes that "it is a good practical rule" to do the following:
to start with all Ts, then all the ways (three) two Ts can be combined with one F, then all the ways (three) one T can be combined with two Fs, and then finish with all Fs. If a compound is built up from n distinct sentence letters, its truth table will have 2^n rows, since there are two ways of assigning T or F to the first letter, and for each of these there will be two ways of assigning T or F to the second, and for each of these there will be two ways of assigning T or F to the third, and so on, giving 2·2·⋯·2, n times, which is equal to 2^n.[6]
This results in truth tables like this table "showing that(A→C)∧(B→C)and(A∨B)→Caretruth-functionallyequivalent", modeled after a table produced byHowson:[6]
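Howson's ordering can be generated by sorting the 2^n assignments by their number of Fs (a minimal Python sketch; the helper name is illustrative):

```python
from itertools import product

def howson_rows(n):
    """Rows for n sentence letters, ordered as Howson suggests:
    all Ts first, then rows grouped by increasing number of Fs,
    finishing with all Fs."""
    rows = list(product([True, False], repeat=n))
    # stable sort keeps the natural order within each group
    return sorted(rows, key=lambda row: sum(not v for v in row))

rows = howson_rows(3)
print(len(rows))   # 8, i.e. 2^3
print(rows[0])     # (True, True, True)
print(rows[-1])    # (False, False, False)
```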
If there are n input variables then there are 2^n possible combinations of their truth values. A given function may produce true or false for each combination, so the number of different functions of n variables is the double exponential 2^(2^n).
Truth tables for functions of three or more variables are rarely given.
It can be useful to have the output of a truth table expressed as a function of some variable values, instead of just a literal truth or false value. These may be called "function tables" to differentiate them from the more general "truth tables".[8]For example, one value,G, may be used with an XOR gate to conditionally invert another value,X. In other words, whenGis false, the output isX, and whenGis true, the output is¬X{\textstyle \neg X}. The function table for this would look like:
Similarly, a 4-to-1 multiplexer with select inputs S0{\displaystyle S_{0}} and S1{\displaystyle S_{1}}, data inputs A, B, C and D, and output Z (as displayed in the image) would have this function table:
Here is an extended truth table giving definitions of all sixteen possible truth functions of two Boolean variablespandq:[note 1]
where
In proposition 5.101 of theTractatus Logico-Philosophicus,[9]Wittgensteinlisted the table above as follows:
The truth table represented by each row is obtained by appending the sequence given in the Truth values row to the table.[note 3]
For example, the table
represents the truth table forMaterial implication. Logical operators can also be visualized usingVenn diagrams.
There are 2 nullary operations:
The output value is always true, because this operator has zero operands and therefore no input values.
The output value is never true: that is, always false, because this operator has zero operands and therefore no input values.
There are 2 unary operations:
Logical identityis anoperationon onelogical valuep, for which the output value remains p.
The truth table for the logical identity operator is as follows:
Logical negationis anoperationon onelogical value, typically the value of aproposition, that produces a value oftrueif its operand is false and a value offalseif its operand is true.
The truth table forNOT p(also written as¬p,Np,Fpq, or~p) is as follows:
There are 16 possible truth functions of two binary variables; each operator has its own name.
Logical conjunctionis anoperationon twological values, typically the values of twopropositions, that produces a value oftrueif both of its operands are true.
The truth table forp AND q(also written asp ∧ q,Kpq,p & q, orp⋅{\displaystyle \cdot }q) is as follows:
In ordinary language terms, if bothpandqare true, then the conjunctionp∧qis true. For all other assignments of logical values topand toqthe conjunctionp∧qis false.
It can also be said that ifp, thenp∧qisq, otherwisep∧qisp.
Logical disjunctionis anoperationon twological values, typically the values of twopropositions, that produces a value oftrueif at least one of its operands is true.
The truth table forp OR q(also written asp ∨ q,Apq,p || q, orp + q) is as follows:
Stated in English, ifp, thenp∨qisp, otherwisep∨qisq.
Logical implication and thematerial conditionalare both associated with anoperationon twological values, typically the values of twopropositions, which produces a value offalseif the first operand is true and the second operand is false, and a value oftrueotherwise.
The truth table associated with the logical implicationp implies q(symbolized asp ⇒ q, or more rarelyCpq) is as follows:
The truth table associated with the material conditionalif p then q(symbolized asp → q) is as follows:
p ⇒ qandp → qare equivalent to¬p ∨ q.
Logical equality(also known asbiconditionalorexclusive nor) is anoperationon twological values, typically the values of twopropositions, that produces a value oftrueif both operands are false or both operands are true.
The truth table forp XNOR q(also written asp ↔ q,Epq,p = q, orp ≡ q) is as follows:
So p EQ q is true if p and q have the sametruth value(both true or both false), and false if they have different truth values.
Exclusive disjunctionis anoperationon twological values, typically the values of twopropositions, that produces a value oftrueif one but not both of its operands is true.
The truth table forp XOR q(also written asJpq, orp ⊕ q) is as follows:
For two propositions,XORcan also be written as (p ∧ ¬q) ∨ (¬p ∧ q).
Thelogical NANDis anoperationon twological values, typically the values of twopropositions, that produces a value offalseif both of its operands are true. In other words, it produces a value oftrueif at least one of its operands is false.
The truth table forp NAND q(also written asp ↑ q,Dpq, orp | q) is as follows:
It is frequently useful to express a logical operation as acompound operation, that is, as an operation that is built up or composed from other operations. Many such compositions are possible, depending on the operations that are taken as basic or "primitive" and the operations that are taken as composite or "derivative".
In the case of logical NAND, it is clearly expressible as a compound of NOT and AND.
The negation of a conjunction: ¬(p∧q), and the disjunction of negations: (¬p) ∨ (¬q) can be tabulated as follows:
Thelogical NORis anoperationon twological values, typically the values of twopropositions, that produces a value oftrueif both of its operands are false. In other words, it produces a value offalseif at least one of its operands is true. ↓ is also known as thePeirce arrowafter its inventor,Charles Sanders Peirce, and is aSole sufficient operator.
The truth table forp NOR q(also written asp ↓ q, orXpq) is as follows:
The negation of a disjunction ¬(p∨q), and the conjunction of negations (¬p) ∧ (¬q) can be tabulated as follows:
Inspection of the tabular derivations for NAND and NOR, under each assignment of logical values to the functional argumentspandq, produces the identical patterns of functional values for ¬(p∧q) as for (¬p) ∨ (¬q), and for ¬(p∨q) as for (¬p) ∧ (¬q). Thus the first and second expressions in each pair are logically equivalent, and may be substituted for each other in all contexts that pertain solely to their logical values.
This equivalence is one ofDe Morgan's laws.
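The exhaustive comparison described above can be automated by checking both laws over all four assignments of truth values:

```python
from itertools import product

for p, q in product([True, False], repeat=2):
    # First law: ¬(p ∧ q) ≡ (¬p) ∨ (¬q)
    assert (not (p and q)) == ((not p) or (not q))
    # Second law: ¬(p ∨ q) ≡ (¬p) ∧ (¬q)
    assert (not (p or q)) == ((not p) and (not q))

print("De Morgan's laws hold for all four assignments")
```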
This explains why the Tractatus row in the table given here does not point to the same Truth values row as in the Tractatus.
|
https://en.wikipedia.org/wiki/Truth_table
|
In situ adaptive tabulation (ISAT) is an algorithm for the approximation of nonlinear relationships. ISAT is based on multiple linear regressions that are dynamically added as additional information is discovered. The technique is adaptive as it adds new linear regressions dynamically to a store of possible retrieval points. ISAT maintains error control by defining finer granularity in regions of increased nonlinearity. A binary tree search traverses cutting hyperplanes to locate a local linear approximation. ISAT is an alternative to artificial neural networks that is receiving increased attention for desirable characteristics, namely:
ISAT was first proposed by Stephen B. Pope for computational reduction of turbulent combustion simulation[1] and later extended to model predictive control.[2] It has been generalized to an ISAT framework that operates on any input and output data regardless of the application. An improved version of the algorithm[3] was proposed just over a decade after the original publication, including new features that improve the efficiency of the search for tabulated data as well as its error control.
|
https://en.wikipedia.org/wiki/In_situ_adaptive_tabulation
|
Inmachine learning,kernel machinesare a class of algorithms forpattern analysis, whose best known member is thesupport-vector machine(SVM). These methods involve using linear classifiers to solve nonlinear problems.[1]The general task ofpattern analysisis to find and study general types of relations (for exampleclusters,rankings,principal components,correlations,classifications) in datasets. For many algorithms that solve these tasks, the data in raw representation have to be explicitly transformed intofeature vectorrepresentations via a user-specifiedfeature map: in contrast, kernel methods require only a user-specifiedkernel, i.e., asimilarity functionover all pairs of data points computed usinginner products. The feature map in kernel machines is infinite dimensional but only requires a finite dimensional matrix from user-input according to therepresenter theorem. Kernel machines are slow to compute for datasets larger than a couple of thousand examples without parallel processing.
Kernel methods owe their name to the use ofkernel functions, which enable them to operate in a high-dimensional,implicitfeature spacewithout ever computing the coordinates of the data in that space, but rather by simply computing theinner productsbetween theimagesof all pairs of data in the feature space. This operation is often computationally cheaper than the explicit computation of the coordinates. This approach is called the "kernel trick".[2]Kernel functions have been introduced for sequence data,graphs, text, images, as well as vectors.
Algorithms capable of operating with kernels include thekernel perceptron, support-vector machines (SVM),Gaussian processes,principal components analysis(PCA),canonical correlation analysis,ridge regression,spectral clustering,linear adaptive filtersand many others.
Most kernel algorithms are based onconvex optimizationoreigenproblemsand are statistically well-founded. Typically, their statistical properties are analyzed usingstatistical learning theory(for example, usingRademacher complexity).
Kernel methods can be thought of asinstance-based learners: rather than learning some fixed set of parameters corresponding to the features of their inputs, they instead "remember" thei{\displaystyle i}-th training example(xi,yi){\displaystyle (\mathbf {x} _{i},y_{i})}and learn for it a corresponding weightwi{\displaystyle w_{i}}. Prediction for unlabeled inputs, i.e., those not in the training set, is treated by the application of asimilarity functionk{\displaystyle k}, called akernel, between the unlabeled inputx′{\displaystyle \mathbf {x'} }and each of the training inputsxi{\displaystyle \mathbf {x} _{i}}. For instance, a kernelizedbinary classifiertypically computes a weighted sum of similaritiesy^=sgn∑i=1nwiyik(xi,x′),{\displaystyle {\hat {y}}=\operatorname {sgn} \sum _{i=1}^{n}w_{i}y_{i}k(\mathbf {x} _{i},\mathbf {x'} ),}where
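A minimal sketch of such a kernelized binary classifier, assuming a Gaussian RBF kernel and pre-computed weights (all names and the toy data are illustrative, not a trained model):

```python
import numpy as np

def rbf_kernel(x, x2, gamma=1.0):
    """Gaussian RBF kernel k(x, x') = exp(-gamma * ||x - x'||^2)."""
    return np.exp(-gamma * np.sum((x - x2) ** 2))

def predict(x_new, X_train, y_train, w, kernel=rbf_kernel):
    """Kernelized prediction: sign of the weighted similarity sum."""
    s = sum(w_i * y_i * kernel(x_i, x_new)
            for x_i, y_i, w_i in zip(X_train, y_train, w))
    return 1 if s >= 0 else -1

# Toy data: two points per class, uniform weights
X = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 3.0], [3.1, 3.0]])
y = np.array([-1, -1, 1, 1])
w = np.ones(len(X))
print(predict(np.array([0.05, 0.1]), X, y, w))  # -1 (near the negative cluster)
```

Note that prediction never maps the points into a feature space; it only evaluates the kernel against each stored training example, which is what makes kernel methods instance-based learners.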
Kernel classifiers were described as early as the 1960s, with the invention of thekernel perceptron.[3]They rose to great prominence with the popularity of thesupport-vector machine(SVM) in the 1990s, when the SVM was found to be competitive withneural networkson tasks such ashandwriting recognition.
The kernel trick avoids the explicit mapping that is needed to get linearlearning algorithmsto learn a nonlinear function ordecision boundary. For allx{\displaystyle \mathbf {x} }andx′{\displaystyle \mathbf {x'} }in the input spaceX{\displaystyle {\mathcal {X}}}, certain functionsk(x,x′){\displaystyle k(\mathbf {x} ,\mathbf {x'} )}can be expressed as aninner productin another spaceV{\displaystyle {\mathcal {V}}}. The functionk:X×X→R{\displaystyle k\colon {\mathcal {X}}\times {\mathcal {X}}\to \mathbb {R} }is often referred to as akernelor akernel function. The word "kernel" is used in mathematics to denote a weighting function for a weighted sum orintegral.
Certain problems in machine learning have more structure than an arbitrary weighting functionk{\displaystyle k}. The computation is made much simpler if the kernel can be written in the form of a "feature map"φ:X→V{\displaystyle \varphi \colon {\mathcal {X}}\to {\mathcal {V}}}which satisfiesk(x,x′)=⟨φ(x),φ(x′)⟩V.{\displaystyle k(\mathbf {x} ,\mathbf {x'} )=\langle \varphi (\mathbf {x} ),\varphi (\mathbf {x'} )\rangle _{\mathcal {V}}.}The key restriction is that⟨⋅,⋅⟩V{\displaystyle \langle \cdot ,\cdot \rangle _{\mathcal {V}}}must be a proper inner product. On the other hand, an explicit representation forφ{\displaystyle \varphi }is not necessary, as long asV{\displaystyle {\mathcal {V}}}is aninner product space. The alternative follows fromMercer's theorem: an implicitly defined functionφ{\displaystyle \varphi }exists whenever the spaceX{\displaystyle {\mathcal {X}}}can be equipped with a suitablemeasureensuring the functionk{\displaystyle k}satisfiesMercer's condition.
Mercer's theorem is similar to a generalization of the result from linear algebra thatassociates an inner product to any positive-definite matrix. In fact, Mercer's condition can be reduced to this simpler case. If we choose as our measure thecounting measureμ(T)=|T|{\displaystyle \mu (T)=|T|}for allT⊂X{\displaystyle T\subset X}, which counts the number of points inside the setT{\displaystyle T}, then the integral in Mercer's theorem reduces to a summation∑i=1n∑j=1nk(xi,xj)cicj≥0.{\displaystyle \sum _{i=1}^{n}\sum _{j=1}^{n}k(\mathbf {x} _{i},\mathbf {x} _{j})c_{i}c_{j}\geq 0.}If this summation holds for all finite sequences of points(x1,…,xn){\displaystyle (\mathbf {x} _{1},\dotsc ,\mathbf {x} _{n})}inX{\displaystyle {\mathcal {X}}}and all choices ofn{\displaystyle n}real-valued coefficients(c1,…,cn){\displaystyle (c_{1},\dots ,c_{n})}(cf.positive definite kernel), then the functionk{\displaystyle k}satisfies Mercer's condition.
Some algorithms that depend on arbitrary relationships in the native spaceX{\displaystyle {\mathcal {X}}}would, in fact, have a linear interpretation in a different setting: the range space ofφ{\displaystyle \varphi }. The linear interpretation gives us insight about the algorithm. Furthermore, there is often no need to computeφ{\displaystyle \varphi }directly during computation, as is the case withsupport-vector machines. Some cite this running time shortcut as the primary benefit. Researchers also use it to justify the meanings and properties of existing algorithms.
Theoretically, aGram matrixK∈Rn×n{\displaystyle \mathbf {K} \in \mathbb {R} ^{n\times n}}with respect to{x1,…,xn}{\displaystyle \{\mathbf {x} _{1},\dotsc ,\mathbf {x} _{n}\}}(sometimes also called a "kernel matrix"[4]), whereKij=k(xi,xj){\displaystyle K_{ij}=k(\mathbf {x} _{i},\mathbf {x} _{j})}, must bepositive semi-definite (PSD).[5]Empirically, for machine learning heuristics, choices of a functionk{\displaystyle k}that do not satisfy Mercer's condition may still perform reasonably ifk{\displaystyle k}at least approximates the intuitive idea of similarity.[6]Regardless of whetherk{\displaystyle k}is a Mercer kernel,k{\displaystyle k}may still be referred to as a "kernel".
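The PSD property can be checked numerically. The sketch below builds a Gram matrix for an RBF kernel (a Mercer kernel) on random data and inspects its eigenvalues:

```python
import numpy as np

def gram_matrix(X, kernel):
    """Gram matrix K with K[i, j] = kernel(X[i], X[j])."""
    n = len(X)
    return np.array([[kernel(X[i], X[j]) for j in range(n)] for i in range(n)])

rbf = lambda x, y: np.exp(-np.sum((x - y) ** 2))

X = np.random.default_rng(0).normal(size=(10, 3))
K = gram_matrix(X, rbf)

eigvals = np.linalg.eigvalsh(K)        # K is symmetric
print("smallest eigenvalue:", eigvals.min())
assert np.all(eigvals >= -1e-10)       # PSD up to numerical tolerance
```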
If the kernel functionk{\displaystyle k}is also acovariance functionas used inGaussian processes, then the Gram matrixK{\displaystyle \mathbf {K} }can also be called acovariance matrix.[7]
Application areas of kernel methods are diverse and includegeostatistics,[8]kriging,inverse distance weighting,3D reconstruction,bioinformatics,cheminformatics,information extractionandhandwriting recognition.
|
https://en.wikipedia.org/wiki/Kernel_machines
|
Instatistical classification, theFisher kernel, named afterRonald Fisher, is a function thatmeasures the similarityof two objects on the basis of sets of measurements for each object and a statistical model. In a classification procedure, the class for a new object (whose real class is unknown) can be estimated by minimising, across classes, an average of the Fisher kernel distance from the new object to each known member of the given class.
The Fisher kernel was introduced in 1998.[1]It combines the advantages ofgenerative statistical models(like thehidden Markov model) and those ofdiscriminative methods(likesupport vector machines):
The Fisher kernel makes use of the Fisher score, defined as U_X = ∇_θ log P(X|θ),
withθbeing a set (vector) of parameters. The function takingθto log P(X|θ) is thelog-likelihoodof the probabilistic model.
The Fisher kernel is defined as K(X_i, X_j) = U_{X_i}^T I^{-1} U_{X_j},
withI{\displaystyle {\mathcal {I}}}being theFisher informationmatrix.
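As an illustration, for a one-dimensional Gaussian with known variance and the mean as the only parameter, the score and the Fisher information have closed forms, so the kernel can be computed directly (a toy sketch, not a general implementation):

```python
# Model: x ~ N(mu, sigma^2) with sigma fixed; the parameter theta is mu.
mu, sigma = 0.0, 1.0

def fisher_score(x):
    """Gradient of log N(x | mu, sigma^2) with respect to mu: (x - mu) / sigma^2."""
    return (x - mu) / sigma ** 2

# Fisher information for the mean of a Gaussian is 1 / sigma^2
I = 1.0 / sigma ** 2

def fisher_kernel(x, y):
    """K(x, y) = U_x^T I^{-1} U_y (scalar case)."""
    return fisher_score(x) * (1.0 / I) * fisher_score(y)

print(fisher_kernel(1.0, 2.0))  # 2.0
```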
The Fisher kernel is the kernel for a generative probabilistic model. As such, it constitutes a bridge between generative and probabilistic models of documents.[2]Fisher kernels exist for numerous models, notablytf–idf,[3]Naive Bayesandprobabilistic latent semantic analysis.
The Fisher kernel can also be applied to image representation for classification or retrieval problems. Currently, the most popularbag-of-visual-wordsrepresentation suffers from sparsity and high dimensionality. The Fisher kernel can result in a compact and dense representation, which is more desirable for image classification[4]and retrieval[5][6]problems.
The Fisher Vector (FV), a special, approximate, and improved case of the general Fisher kernel,[7]is an image representation obtained by pooling local imagefeatures. The FV encoding stores the mean and the covariance deviation vectors per component k of the Gaussian-Mixture-Model (GMM) and each element of the local feature descriptors together. In a systematic comparison, FV outperformed all compared encoding methods (Bag of Visual Words (BoW), Kernel Codebook encoding (KCB), Locality Constrained Linear Coding (LLC), Vector of Locally Aggregated Descriptors (VLAD)) showing that the encoding of second order information (aka codeword covariances) indeed benefits classification performance.[8]
|
https://en.wikipedia.org/wiki/Fisher_kernel
|
Inmachine learning,Platt scalingorPlatt calibrationis a way of transforming the outputs of aclassification modelinto aprobability distribution over classes. The method was invented byJohn Plattin the context ofsupport vector machines,[1]replacing an earlier method byVapnik, but can be applied to other classification models.[2]Platt scaling works by fitting alogistic regressionmodel to a classifier's scores.
Consider the problem ofbinary classification: for inputsx, we want to determine whether they belong to one of two classes, arbitrarily labeled+1and−1. We assume that the classification problem will be solved by a real-valued functionf, by predicting a class labely= sign(f(x)).[a]For many problems, it is convenient to get a probabilityP(y=1|x){\displaystyle P(y=1|x)}, i.e. a classification that not only gives an answer, but also a degree of certainty about the answer. Some classification models do not provide such a probability, or give poor probability estimates.
Platt scaling is an algorithm to solve the aforementioned problem. It produces probability estimates P(y=1|x) = 1 / (1 + exp(A·f(x) + B)),
i.e., a logistic transformation of the classifier output f(x), where A and B are two scalar parameters that are learned by the algorithm. After scaling, values can be predicted as y = 1 iff P(y=1|x) > 1/2. If B ≠ 0, the probability estimates contain a correction compared to the original decision function y = sign(f(x)).[3]
The parameters A and B are estimated using a maximum likelihood method that optimizes on the same training set as that for the original classifier f. To avoid overfitting to this set, a held-out calibration set or cross-validation can be used, but Platt additionally suggests transforming the labels y to target probabilities t+ = (N+ + 1)/(N+ + 2) for positive samples (y = 1) and t− = 1/(N− + 2) for negative samples (y = −1).
Here,N+andN−are the number of positive and negative samples, respectively. This transformation follows by applyingBayes' ruleto a model of out-of-sample data that has a uniform prior over the labels.[1]The constants 1 and 2, on the numerator and denominator respectively, are derived from the application ofLaplace smoothing.
Platt himself suggested using theLevenberg–Marquardt algorithmto optimize the parameters, but aNewton algorithmwas later proposed that should be morenumerically stable.[4]
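A minimal sketch of Platt scaling using plain gradient descent on the negative log-likelihood (a simple stand-in for the Levenberg–Marquardt or Newton optimizers mentioned above; names, learning rate, and iteration count are illustrative):

```python
import numpy as np

def platt_fit(scores, labels, lr=0.01, n_iter=5000):
    """Fit P(y=1|x) = 1 / (1 + exp(A*f(x) + B)) by gradient descent."""
    # Platt's target probabilities with Laplace smoothing
    n_pos = np.sum(labels == 1)
    n_neg = np.sum(labels == -1)
    t = np.where(labels == 1, (n_pos + 1) / (n_pos + 2), 1 / (n_neg + 2))

    A, B = 0.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(A * scores + B))
        grad = t - p                       # dNLL/d(A*f + B) per sample
        A -= lr * np.sum(grad * scores)
        B -= lr * np.sum(grad)
    return A, B

# Toy scores: positives have larger f(x)
scores = np.array([-2.0, -1.0, 1.0, 2.0])
labels = np.array([-1, -1, 1, 1])
A, B = platt_fit(scores, labels)
p = 1.0 / (1.0 + np.exp(A * scores + B))
print(p.round(3))  # probabilities increase with the score
```

With this symmetric toy data the fitted A is negative (so larger scores map to higher probabilities) and B is near zero.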
Platt scaling has been shown to be effective for SVMs as well as other types of classification models, includingboostedmodels and evennaive Bayes classifiers, which produce distorted probability distributions. It is particularly effective for max-margin methods such as SVMs and boosted trees, which show sigmoidal distortions in their predicted probabilities, but has less of an effect with well-calibratedmodels such aslogistic regression,multilayer perceptrons, andrandom forests.[2]
An alternative approach to probability calibration is to fit anisotonic regressionmodel to an ill-calibrated probability model. This has been shown to work better than Platt scaling, in particular when enough training data is available.[2]
Platt scaling can also be applied to deep neural network classifiers. For image classification tasks such as CIFAR-100, small networks like LeNet-5 have good calibration but low accuracy, while large networks like ResNet have high accuracy but are overconfident in their predictions. A 2017 paper proposed temperature scaling, which simply multiplies the output logits of a network by a constant 1/T before taking the softmax. During training, T is set to 1. After training, T is optimized on a held-out calibration set to minimize the calibration loss.[5]
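Temperature scaling itself is a one-line change to the softmax (a sketch; in practice T would be fit on a held-out calibration set):

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def temperature_scale(logits, T):
    """Divide logits by temperature T before the softmax.
    T > 1 softens the distribution (less confident), T < 1 sharpens it."""
    return softmax(logits / T)

logits = np.array([2.0, 1.0, 0.0])
print(temperature_scale(logits, 1.0).round(3))  # unscaled softmax
print(temperature_scale(logits, 2.0).round(3))  # flatter, less confident
```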
|
https://en.wikipedia.org/wiki/Platt_scaling
|
Inmachine learning, thepolynomial kernelis akernel functioncommonly used withsupport vector machines(SVMs) and otherkernelizedmodels, that represents the similarity of vectors (training samples) in a feature space over polynomials of the original variables, allowing learning of non-linear models.
Intuitively, the polynomial kernel looks not only at the given features of input samples to determine their similarity, but also combinations of these. In the context ofregression analysis, such combinations are known as interaction features. The (implicit) feature space of a polynomial kernel is equivalent to that ofpolynomial regression, but without the combinatorial blowup in the number of parameters to be learned. When the input features are binary-valued (booleans), then the features correspond tological conjunctionsof input features.[1]
For degree-d polynomials, the polynomial kernel is defined as[2] K(x, y) = (x^T y + c)^d,
wherexandyare vectors of sizenin theinput space, i.e. vectors of features computed from training or test samples andc≥ 0is a free parameter trading off the influence of higher-order versus lower-order terms in the polynomial. Whenc= 0, the kernel is called homogeneous.[3](A further generalized polykernel dividesxTyby a user-specified scalar parametera.[4])
As a kernel, K corresponds to an inner product in a feature space based on some mapping φ: K(x, y) = ⟨φ(x), φ(y)⟩.
The nature ofφcan be seen from an example. Letd= 2, so we get the special case of the quadratic kernel. After using themultinomial theorem(twice—the outermost application is thebinomial theorem) and regrouping,
From this it follows that the feature map is given by:
generalizing for(xTy+c)d{\displaystyle \left(\mathbf {x} ^{T}\mathbf {y} +c\right)^{d}},
wherex∈Rn{\displaystyle \mathbf {x} \in \mathbb {R} ^{n}},y∈Rn{\displaystyle \mathbf {y} \in \mathbb {R} ^{n}}and applying themultinomial theorem:
(xTy+c)d=∑j1+j2+⋯+jn+1=dd!j1!⋯jn!jn+1!x1j1⋯xnjncjn+1d!j1!⋯jn!jn+1!y1j1⋯ynjncjn+1=φ(x)Tφ(y){\displaystyle {\begin{alignedat}{2}\left(\mathbf {x} ^{T}\mathbf {y} +c\right)^{d}&=\sum _{j_{1}+j_{2}+\dots +j_{n+1}=d}{\frac {\sqrt {d!}}{\sqrt {j_{1}!\cdots j_{n}!j_{n+1}!}}}x_{1}^{j_{1}}\cdots x_{n}^{j_{n}}{\sqrt {c}}^{j_{n+1}}{\frac {\sqrt {d!}}{\sqrt {j_{1}!\cdots j_{n}!j_{n+1}!}}}y_{1}^{j_{1}}\cdots y_{n}^{j_{n}}{\sqrt {c}}^{j_{n+1}}\\&=\varphi (\mathbf {x} )^{T}\varphi (\mathbf {y} )\end{alignedat}}}
The last summation hasld=(n+dd){\displaystyle l_{d}={\tbinom {n+d}{d}}}elements, so that:
where l = (j_1, j_2, ..., j_n, j_{n+1}) and the components of the feature map are φ_l(x) = √(d!/(j_1!⋯j_n!j_{n+1}!)) · x_1^{j_1}⋯x_n^{j_n} · (√c)^{j_{n+1}}.
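The kernel/feature-map equivalence is easy to check numerically for d = 2 and two-dimensional inputs; the explicit map below follows the quadratic expansion derived above:

```python
import numpy as np

def poly_kernel(x, y, c=1.0, d=2):
    """K(x, y) = (x^T y + c)^d."""
    return (x @ y + c) ** d

def phi(x, c=1.0):
    """Explicit quadratic (d = 2) feature map for 2-dimensional input:
    (x1^2, x2^2, sqrt(2) x1 x2, sqrt(2c) x1, sqrt(2c) x2, c)."""
    x1, x2 = x
    s = np.sqrt(2)
    return np.array([x1**2, x2**2, s*x1*x2,
                     s*np.sqrt(c)*x1, s*np.sqrt(c)*x2, c])

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])
assert np.isclose(poly_kernel(x, y), phi(x) @ phi(y))
print(poly_kernel(x, y))  # 4.0
```

Evaluating the kernel directly costs O(n) per pair, while the explicit map has (n + d choose d) components; this gap is the point of the kernel trick.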
Although theRBF kernelis more popular in SVM classification than the polynomial kernel, the latter is quite popular innatural language processing(NLP).[1][5]The most common degree isd= 2(quadratic), since larger degrees tend tooverfiton NLP problems.
Various ways of computing the polynomial kernel (both exact and approximate) have been devised as alternatives to the usual non-linear SVM training algorithms, including:
One problem with the polynomial kernel is that it may suffer fromnumerical instability: whenxTy+c< 1,K(x,y) = (xTy+c)dtends to zero with increasingd, whereas whenxTy+c> 1,K(x,y)tends to infinity.[4]
|
https://en.wikipedia.org/wiki/Polynomial_kernel
|
Predictive analytics, orpredictive AI, encompasses a variety ofstatisticaltechniques fromdata mining,predictive modeling, andmachine learningthat analyze current and historical facts to makepredictionsabout future or otherwise unknown events.[1]
In business, predictive models exploitpatternsfound in historical and transactional data to identify risks and opportunities. Models capture relationships among many factors to allow assessment of risk or potential associated with a particular set of conditions, guidingdecision-makingfor candidate transactions.[2]
The defining functional effect of these technical approaches is that predictive analytics provides a predictive score (probability) for each individual (customer, employee, healthcare patient, product SKU, vehicle, component, machine, or other organizational unit) in order to determine, inform, or influence organizational processes that pertain across large numbers of individuals, such as in marketing, credit risk assessment, fraud detection, manufacturing, healthcare, and government operations including law enforcement.
Predictive analytics is a set of business intelligence (BI) technologies that uncovers relationships and patterns within large volumes of data that can be used to predict behavior and events. Unlike other BI technologies, predictive analytics is forward-looking, using past events to anticipate the future.[3]Predictive analytics statistical techniques includedata modeling,machine learning,AI,deep learningalgorithms anddata mining. Often the unknown event of interest is in the future, but predictive analytics can be applied to any type of unknown whether it be in the past, present or future. For example, identifying suspects after a crime has been committed, or credit card fraud as it occurs.[4]The core of predictive analytics relies on capturing relationships betweenexplanatory variablesand the predicted variables from past occurrences, and exploiting them to predict the unknown outcome. It is important to note, however, that the accuracy and usability of results will depend greatly on the level of data analysis and the quality of assumptions.[1]
Predictive analytics is often defined as predicting at a more detailed level of granularity, i.e., generating predictive scores (probabilities) for each individual organizational element. This distinguishes it fromforecasting. For example, "Predictive analytics—Technology that learns from experience (data) to predict the future behavior of individuals in order to drive better decisions."[5]In future industrial systems, the value of predictive analytics will be to predict and prevent potential issues to achieve near-zero break-down and further be integrated intoprescriptive analyticsfor decision optimization.[6]
The approaches and techniques used to conduct predictive analytics can broadly be grouped into regression techniques and machine learning techniques.
Machine learning can be defined as the ability of a machine to learn and then mimic human behavior that requires intelligence. This is accomplished through artificial intelligence, algorithms, and models.[7]
ARIMA models are a common example of time series models. These models use autoregression, meaning the model can be fitted with regression software that will use machine learning to do most of the regression analysis and smoothing. ARIMA models are known to have no overall trend, but instead have a variation around the average that has a constant amplitude, resulting in statistically similar time patterns. Through this, variables are analyzed and data is filtered in order to better understand and predict future values.[8][9]
Exponential smoothing models are closely related to ARIMA models. Exponential smoothing takes into account the difference in importance between older and newer data sets, as the more recent data is more accurate and valuable in predicting future values. To accomplish this, exponentially decaying weights give newer observations a larger influence in the calculations than older ones.[10]
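The recursion behind simple exponential smoothing can be sketched in a few lines of Python; the series and the smoothing factor `alpha` below are purely illustrative.

```python
def exponential_smoothing(series, alpha):
    """Simple exponential smoothing: newer observations get larger weight.

    s_t = alpha * x_t + (1 - alpha) * s_{t-1}, with s_0 = x_0.
    """
    smoothed = [series[0]]
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

# The one-step-ahead forecast is simply the last smoothed value.
data = [3.0, 10.0, 12.0, 13.0, 12.0, 10.0, 12.0]
print(exponential_smoothing(data, alpha=0.5)[-1])  # → 11.390625
```

Because each smoothed value folds the previous one in with weight (1 − alpha), the weight on an observation decays exponentially with its age.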
Time series models are a class of predictive methods that use a variable's past values over time to understand and forecast its future values. A time series is the sequence of a variable's values over equally spaced periods, such as years or quarters in business applications.[11] To accomplish this, the data must be smoothed: the random variance in the data is removed in order to reveal its underlying trends. There are multiple ways to accomplish this.
Single moving average methods smooth the data by averaging successive, fixed-size windows of past values rather than taking a single average of the entire data set, which reduces the error associated with a single overall average.[12]
Centered moving average methods build on single moving averages by placing each average at the midpoint of its window. Because the midpoint is ambiguous for an even-sized window, this method works better with odd-sized windows than with even ones.[13]
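Both averaging schemes can be sketched directly; the short series below is illustrative.

```python
def single_moving_average(series, window):
    """Trailing average over the most recent `window` observations."""
    return [sum(series[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(series))]

def centered_moving_average(series, window):
    """Average placed at the midpoint of each window (odd windows only)."""
    assert window % 2 == 1, "the midpoint is ambiguous for even windows"
    half = window // 2
    return [sum(series[i - half : i + half + 1]) / window
            for i in range(half, len(series) - half)]

data = [2.0, 4.0, 6.0, 8.0, 10.0]
print(single_moving_average(data, 3))    # → [4.0, 6.0, 8.0]
print(centered_moving_average(data, 3))  # → [4.0, 6.0, 8.0]
```

The two methods produce the same averages here; they differ only in which time index each average is attached to (the end of the window versus its midpoint).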
Predictive modeling is a statistical technique used to predict future behavior. It utilizes predictive models to analyze a relationship between a specific unit in a given sample and one or more features of the unit. The objective of these models is to assess the possibility that a unit in another sample will display the same pattern. Predictive model solutions can be considered a type of data mining technology. The models can analyze both historical and current data and generate a model in order to predict potential future outcomes.[14]
Regardless of the methodology used, in general, the process of creating predictive models involves the same steps. First, it is necessary to determine the project objectives and desired outcomes and translate these into predictive analytic objectives and tasks. Then, analyze the source data to determine the most appropriate data and model building approach (models are only as useful as the applicable data used to build them). Select and transform the data in order to create models. Create and test models in order to evaluate if they are valid and will be able to meet project goals and metrics. Apply the model's results to appropriate business processes (identifying patterns in the data doesn't necessarily mean a business will understand how to take advantage or capitalize on it). Afterward, manage and maintain models in order to standardize and improve performance (demand will increase for model management in order to meet new compliance regulations).[3]
Generally, regression analysis uses structural data along with the past values of independent variables and the relationship between them and the dependent variable to form predictions.[8]
In linear regression, a plot is constructed with the previous values of the dependent variable plotted on the Y-axis and the independent variable that is being analyzed plotted on the X-axis. A statistical program then constructs a regression line representing the relationship between the independent and dependent variables, which can be used to predict values of the dependent variable based only on the independent variable. Along with the regression line, the program also provides a slope-intercept equation for the line that includes an error term for the regression; the higher the value of the error term, the less precise the regression model is. To decrease the value of the error term, other independent variables are introduced to the model, and similar analyses are performed on these independent variables.[8][15]
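As a sketch, such a regression line can be fitted with NumPy's least-squares polynomial fit; the data below are hypothetical, chosen to lie roughly on y = 2x + 1.

```python
import numpy as np

# Hypothetical past observations: dependent variable y, independent variable x.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([3.1, 4.9, 7.2, 8.8, 11.0])

# Fit the regression line y = slope * x + intercept by least squares.
slope, intercept = np.polyfit(x, y, 1)

# Residuals measure the error term; large residuals mean a less precise model.
residuals = y - (slope * x + intercept)
print(slope, intercept)
```

Introducing further independent variables, as the text describes, corresponds to moving from this single-variable fit to a multiple-regression design matrix.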
An important aspect of auditing is analytical review, which assesses the reasonableness of the reported account balances under investigation. Auditors accomplish this through predictive modeling, forming predictions called conditional expectations of the balances being audited using autoregressive integrated moving average (ARIMA) methods and general regression analysis methods,[8] specifically through the Statistical Technique for Analytical Review (STAR) methods.[16]
The ARIMA method for analytical review uses time-series analysis on past audited balances in order to create the conditional expectations. These conditional expectations are then compared to the actual balances reported on the audited account in order to determine how close the reported balances are to the expectations. If the reported balances are close to the expectations, the accounts are not audited further. If the reported balances are very different from the expectations, there is a higher possibility of a material accounting error and a further audit is conducted.[16]
Regression analysis methods are deployed in a similar way, except the regression model used assumes the availability of only one independent variable. The materiality of the independent variable contributing to the audited account balances is determined using past account balances along with present structural data.[8] Materiality is the importance of an independent variable in its relationship to the dependent variable.[17] In this case, the dependent variable is the account balance. Through this the most important independent variable is used in order to create the conditional expectation and, similar to the ARIMA method, the conditional expectation is then compared to the account balance reported and a decision is made based on the closeness of the two balances.[8]
The STAR methods operate using regression analysis, and fall into two approaches. The first is the STAR monthly balance approach, in which the conditional expectations and regression analysis are tied to a single month being audited. The other is the STAR annual balance approach, which operates on a larger scale by basing the conditional expectations and regression analysis on a year being audited. Apart from the difference in the period audited, both methods operate the same way, comparing expected and reported balances to determine which accounts to investigate further.[16]
As we move into a world of technological advances where more and more data is created and stored digitally, businesses are looking for ways to take advantage of this opportunity and use this information to help generate profits. Predictive analytics can be used and is capable of providing many benefits to a wide range of businesses, including asset management firms, insurance companies, communication companies, and many other firms. In a study conducted by IDC Analyze the Future, Dan Vasset and Henry D. Morris explain how an asset management firm used predictive analytics to develop a better marketing campaign. They went from a mass marketing approach to a customer-centric approach, where instead of sending the same offer to each customer, they would personalize each offer based on their customer. Predictive analytics was used to predict the likelihood that a possible customer would accept a personalized offer. Due to the marketing campaign and predictive analytics, the firm's acceptance rate skyrocketed, with three times the number of people accepting their personalized offers.[18]
Technological advances in predictive analytics have increased its value to firms. One advancement is more powerful computers, which allow predictive analytics to create forecasts on large data sets much faster. With the increased computing power also come more data and applications, meaning a wider array of inputs to use with predictive analytics. Another advance is more user-friendly interfaces, which lower the barrier to entry and reduce the training employees need to use the software and applications effectively. Due to these advancements, many more corporations are adopting predictive analytics and seeing benefits in employee efficiency and effectiveness, as well as profits.[19]
ARIMAunivariate and multivariate models can be used in forecasting a company's futurecash flows, with its equations and calculations based on the past values of certain factors contributing to cash flows. Using time-series analysis, the values of these factors can be analyzed and extrapolated to predict the future cash flows for a company. For the univariate models, past values of cash flows are the only factor used in the prediction. Meanwhile the multivariate models use multiple factors related to accrual data, such as operating income before depreciation.[20]
Another model used in predicting cash-flows was developed in 1998 and is known as the Dechow, Kothari, and Watts model, or DKW (1998). DKW (1998) uses regression analysis in order to determine the relationship between multiple variables and cash flows. Through this method, the model found that cash-flow changes and accruals are negatively related, specifically through current earnings, and using this relationship predicts the cash flows for the next period. The DKW (1998) model derives this relationship through the relationships of accruals and cash flows to accounts payable and receivable, along with inventory.[21]
Some child welfare agencies have started using predictive analytics to flag high risk cases.[22]For example, inHillsborough County, Florida, the child welfare agency's use of a predictive modeling tool has prevented abuse-related child deaths in the target population.[23]
The predicting of the outcome ofjuridical decisionscan be done by AI programs. These programs can be used as assistive tools for professions in this industry.[24][25]
Often the focus of analysis is not the consumer but the product, portfolio, firm, industry or even the economy. For example, a retailer might be interested in predicting store-level demand for inventory management purposes. Or the Federal Reserve Board might be interested in predicting the unemployment rate for the next year. These types of problems can be addressed by predictive analytics using time series techniques (see below). They can also be addressed via machine learning approaches which transform the original time series into a feature vector space, where the learning algorithm finds patterns that have predictive power.[26][27]
Many businesses have to account for risk exposure due to their different services and determine the costs needed to cover the risk. Predictive analytics can helpunderwritethese quantities by predicting the chances of illness,default,bankruptcy, etc. Predictive analytics can streamline the process of customer acquisition by predicting the future risk behavior of a customer using application level data. Predictive analytics in the form of credit scores have reduced the amount of time it takes for loan approvals, especially in the mortgage market. Proper predictive analytics can lead to proper pricing decisions, which can help mitigate future risk of default. Predictive analytics can be used to mitigate moral hazard and prevent accidents from occurring.[28]
|
https://en.wikipedia.org/wiki/Predictive_analytics
|
Within mathematical analysis, Regularization perspectives on support-vector machines provide a way of interpreting support-vector machines (SVMs) in the context of other regularization-based machine-learning algorithms. SVM algorithms categorize binary data, with the goal of fitting the training set data in a way that minimizes the average of the hinge-loss function plus the L2 norm of the learned weights. This strategy avoids overfitting via Tikhonov regularization in the L2 norm sense, and also corresponds to balancing the bias and variance of the estimator of the weights. Estimators with lower mean squared error predict better, or generalize better, when given unseen data.
Specifically, Tikhonov regularization algorithms produce a decision boundary that minimizes the average training-set error while constraining the decision boundary not to be excessively complicated or to overfit the training data, via an L2 norm penalty on the weights. The training and test-set errors can be measured without bias and in a fair way using accuracy, precision, AUC-ROC, precision-recall, and other metrics.
Regularization perspectives on support-vector machines interpret SVM as a special case of Tikhonov regularization, specifically Tikhonov regularization with thehinge lossfor a loss function. This provides a theoretical framework with which to analyze SVM algorithms and compare them to other algorithms with the same goals: togeneralizewithoutoverfitting. SVM was first proposed in 1995 byCorinna CortesandVladimir Vapnik, and framed geometrically as a method for findinghyperplanesthat can separatemultidimensionaldata into two categories.[1]This traditional geometric interpretation of SVMs provides useful intuition about how SVMs work, but is difficult to relate to othermachine-learningtechniques for avoiding overfitting, likeregularization,early stopping,sparsityandBayesian inference. However, once it was discovered that SVM is also aspecial caseof Tikhonov regularization, regularization perspectives on SVM provided the theory necessary to fit SVM within a broader class of algorithms.[2][3][4]This has enabled detailed comparisons between SVM and other forms of Tikhonov regularization, and theoretical grounding for why it is beneficial to use SVM's loss function, the hinge loss.[5]
In thestatistical learning theoryframework, analgorithmis a strategy for choosing afunctionf:X→Y{\displaystyle f\colon \mathbf {X} \to \mathbf {Y} }given a training setS={(x1,y1),…,(xn,yn)}{\displaystyle S=\{(x_{1},y_{1}),\ldots ,(x_{n},y_{n})\}}of inputsxi{\displaystyle x_{i}}and their labelsyi{\displaystyle y_{i}}(the labels are usually±1{\displaystyle \pm 1}).Regularizationstrategies avoidoverfittingby choosing a function that fits the data, but is not too complex. Specifically:
whereH{\displaystyle {\mathcal {H}}}is ahypothesis space[6]of functions,V:Y×Y→R{\displaystyle V\colon \mathbf {Y} \times \mathbf {Y} \to \mathbb {R} }is the loss function,‖⋅‖H{\displaystyle \|\cdot \|_{\mathcal {H}}}is anormon the hypothesis space of functions, andλ∈R{\displaystyle \lambda \in \mathbb {R} }is theregularization parameter.[7]
When H{\displaystyle {\mathcal {H}}} is a reproducing kernel Hilbert space, there exists a kernel function K:X×X→R{\displaystyle K\colon \mathbf {X} \times \mathbf {X} \to \mathbb {R} } that can be written as an n×n{\displaystyle n\times n} symmetric positive-definite matrix K{\displaystyle \mathbf {K} }. By the representer theorem,[8] the minimizer can be written as a kernel expansion over the training points, f(x)=∑i=1nciK(xi,x){\displaystyle f(x)=\sum _{i=1}^{n}c_{i}K(x_{i},x)} for some coefficients ci{\displaystyle c_{i}}.
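A minimal NumPy sketch of this setup with the square loss (kernel ridge regression): the representer theorem reduces the Tikhonov problem to a linear solve in the kernel matrix, c = (K + λnI)⁻¹y. The Gaussian kernel width, λ, and the toy data are all illustrative choices.

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    """Gaussian kernel matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# Toy training set (hypothetical).
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 1.0, 0.0, -1.0])

lam, n = 0.1, len(X)
K = gaussian_kernel(X, X)
# Representer theorem: f(x) = sum_i c_i K(x_i, x); square loss gives a linear solve.
c = np.linalg.solve(K + lam * n * np.eye(n), y)

def f(x_new):
    return gaussian_kernel(np.atleast_2d(x_new), X) @ c

print(f([1.0]))
```

Swapping the square loss for the hinge loss in the same template yields the SVM, which is the point of the regularization perspective.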
The simplest and most intuitive loss function for categorization is the misclassification loss, or 0–1 loss, which is 0 iff(xi)=yi{\displaystyle f(x_{i})=y_{i}}and 1 iff(xi)≠yi{\displaystyle f(x_{i})\neq y_{i}}, i.e. theHeaviside step functionon−yif(xi){\displaystyle -y_{i}f(x_{i})}. However, this loss function is notconvex, which makes the regularization problem very difficult to minimize computationally. Therefore, we look for convex substitutes for the 0–1 loss. The hinge loss,V(yi,f(xi))=(1−yf(x))+{\displaystyle V{\big (}y_{i},f(x_{i}){\big )}={\big (}1-yf(x){\big )}_{+}}, where(s)+=max(s,0){\displaystyle (s)_{+}=\max(s,0)}, provides such aconvex relaxation. In fact, the hinge loss is the tightest convexupper boundto the 0–1 misclassification loss function,[4]and with infinite data returns theBayes-optimal solution:[5][9]
The Tikhonov regularization problem can be shown to be equivalent to traditional formulations of SVM by expressing it in terms of the hinge loss.[10]With the hinge loss
where(s)+=max(s,0){\displaystyle (s)_{+}=\max(s,0)}, the regularization problem becomes
Multiplying by1/(2λ){\displaystyle 1/(2\lambda )}yields
withC=1/(2λn){\displaystyle C=1/(2\lambda n)}, which is equivalent to the standard SVM minimization problem.
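To make the equivalence concrete, the regularized hinge-loss objective can be minimized directly by subgradient descent. The toy data, learning rate, λ, and iteration count below are illustrative only; this is a sketch of the objective, not a production SVM solver.

```python
import numpy as np

# Toy binary data, labels in {-1, +1}; linearly separable by construction.
X = np.array([[2.0, 0.0], [1.0, 1.0], [-2.0, 0.0], [-1.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

lam, lr, n = 0.01, 0.1, len(X)
w = np.zeros(2)
b = 0.0
for _ in range(500):
    margins = y * (X @ w + b)
    active = margins < 1              # points with non-zero hinge loss
    # Subgradient of (1/n) * sum hinge + lam * ||w||^2.
    grad_w = 2 * lam * w - (y[active, None] * X[active]).sum(0) / n
    grad_b = -y[active].sum() / n
    w -= lr * grad_w
    b -= lr * grad_b

print(np.sign(X @ w + b))  # matches y on this separable toy set
```

Minimizing this objective is, up to the constant C = 1/(2λn), the same problem as the standard soft-margin SVM formulation.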
|
https://en.wikipedia.org/wiki/Regularization_perspectives_on_support_vector_machines
|
In mathematics, a Relevance Vector Machine (RVM) is a machine learning technique that uses Bayesian inference to obtain parsimonious solutions for regression and probabilistic classification.[1] A greedy optimisation procedure, and thus a faster version, was subsequently developed.[2][3] The RVM has an identical functional form to the support vector machine, but provides probabilistic classification.
It is actually equivalent to aGaussian processmodel withcovariance function:
whereφ{\displaystyle \varphi }is thekernel function(usually Gaussian),αj{\displaystyle \alpha _{j}}are the variances of the prior on the weight vectorw∼N(0,α−1I){\displaystyle w\sim N(0,\alpha ^{-1}I)}, andx1,…,xN{\displaystyle \mathbf {x} _{1},\ldots ,\mathbf {x} _{N}}are the input vectors of thetraining set.[4]
Compared to that of support vector machines (SVMs), the Bayesian formulation of the RVM avoids the set of free parameters of the SVM (which usually require cross-validation-based post-optimization). However, RVMs use an expectation–maximization (EM)-like learning method and are therefore at risk of converging to local minima. This is unlike the standard sequential minimal optimization (SMO)-based algorithms employed by SVMs, which are guaranteed to find a global optimum (of the convex problem).
The relevance vector machine waspatented in the United StatesbyMicrosoft(patent expired September 4, 2019).[5]
|
https://en.wikipedia.org/wiki/Relevance_vector_machine
|
Sequential minimal optimization(SMO) is an algorithm for solving thequadratic programming(QP) problem that arises during the training ofsupport-vector machines(SVM). It was invented byJohn Plattin 1998 atMicrosoft Research.[1]SMO is widely used for training support vector machines and is implemented by the popularLIBSVMtool.[2][3]The publication of the SMO algorithm in 1998 has generated a lot of excitement in the SVM community, as previously available methods for SVM training were much more complex and required expensive third-party QP solvers.[4]
Consider abinary classificationproblem with a dataset (x1,y1), ..., (xn,yn), wherexiis an input vector andyi∈ {-1, +1}is a binary label corresponding to it. A soft-marginsupport vector machineis trained by solving a quadratic programming problem, which is expressed in thedual formas follows:
whereCis an SVM hyperparameter andK(xi,xj) is thekernel function, both supplied by the user; and the variablesαi{\displaystyle \alpha _{i}}areLagrange multipliers.
SMO is an iterative algorithm for solving the optimization problem described above. SMO breaks this problem into a series of smallest possible sub-problems, which are then solved analytically. Because of the linear equality constraint involving the Lagrange multipliersαi{\displaystyle \alpha _{i}}, the smallest possible problem involves two such multipliers. Then, for any two multipliersα1{\displaystyle \alpha _{1}}andα2{\displaystyle \alpha _{2}}, the constraints are reduced to:
and this reduced problem can be solved analytically: one needs to find a minimum of a one-dimensional quadratic function.k{\displaystyle k}is the negative of the sum over the rest of terms in the equality constraint, which is fixed in each iteration.
The algorithm proceeds as follows:
When all the Lagrange multipliers satisfy the KKT conditions (within a user-defined tolerance), the problem has been solved. Although this algorithm is guaranteed to converge, heuristics are used to choose the pair of multipliers so as to accelerate the rate of convergence. This is critical for large data sets since there aren(n−1)/2{\displaystyle n(n-1)/2}possible choices forαi{\displaystyle \alpha _{i}}andαj{\displaystyle \alpha _{j}}.
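The analytic two-multiplier step described above can be sketched directly. Given the prediction errors E1, E2 and the kernel entries, α2 receives a closed-form update, is clipped to the box implied by 0 ≤ α ≤ C, and α1 is adjusted so the equality constraint y1α1 + y2α2 = const is preserved. All numeric values here are purely illustrative.

```python
def smo_pair_update(a1, a2, y1, y2, E1, E2, K11, K12, K22, C):
    """One analytic SMO step on a pair of Lagrange multipliers.

    Assumes eta = K11 + K22 - 2*K12 > 0 (the usual case for valid kernels).
    """
    # Box bounds for a2 implied by 0 <= a1, a2 <= C and the equality constraint.
    if y1 != y2:
        L, H = max(0.0, a2 - a1), min(C, C + a2 - a1)
    else:
        L, H = max(0.0, a1 + a2 - C), min(C, a1 + a2)
    eta = K11 + K22 - 2 * K12                # curvature of the 1-D quadratic
    a2_new = a2 + y2 * (E1 - E2) / eta       # unconstrained minimizer
    a2_new = min(H, max(L, a2_new))          # clip to the feasible segment
    a1_new = a1 + y1 * y2 * (a2 - a2_new)    # keep y1*a1 + y2*a2 constant
    return a1_new, a2_new

a1_new, a2_new = smo_pair_update(0.2, 0.5, 1, -1, 0.3, -0.4, 1.0, 0.1, 1.0, 1.0)
print(a1_new, a2_new)  # → 0.0 0.3 (a2 hits its lower bound L)
```

The full algorithm wraps this step in heuristics for choosing which pair to update next, stopping when all multipliers satisfy the KKT conditions within tolerance.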
The first approach to splitting large SVM learning problems into a series of smaller optimization tasks was proposed by Bernhard Boser, Isabelle Guyon, and Vladimir Vapnik.[5] It is known as the "chunking algorithm". The algorithm starts with a random subset of the data, solves this problem, and iteratively adds examples which violate the optimality conditions. One disadvantage of this algorithm is that it is necessary to solve QP problems scaling with the number of support vectors. On real-world sparse data sets, SMO can be more than 1000 times faster than the chunking algorithm.[1]
In 1997, E. Osuna, R. Freund, and F. Girosi proved a theorem which suggests a whole new set of QP algorithms for SVMs.[6] By virtue of this theorem a large QP problem can be broken down into a series of smaller QP sub-problems. A sequence of QP sub-problems that always add at least one violator of the Karush–Kuhn–Tucker (KKT) conditions is guaranteed to converge. The chunking algorithm obeys the conditions of the theorem, and hence will converge.[1] The SMO algorithm can be considered a special case of the Osuna algorithm, where the size of the optimization is two and both Lagrange multipliers are replaced at every step with new multipliers that are chosen via good heuristics.[1]
The SMO algorithm is closely related to a family of optimization algorithms calledBregman methodsor row-action methods. These methods solve convex programming problems with linear constraints. They are iterative methods where each step projects the current primal point onto each constraint.[1]
|
https://en.wikipedia.org/wiki/Sequential_minimal_optimization
|
Thewinnow algorithm[1]is a technique frommachine learningfor learning alinear classifierfrom labeled examples. It is very similar to theperceptron algorithm. However, the perceptron algorithm uses an additive weight-update scheme, while Winnow uses amultiplicative schemethat allows it to perform much better when many dimensions are irrelevant (hence its namewinnow). It is a simple algorithm that scales well to high-dimensional data. During training, Winnow is shown a sequence of positive and negative examples. From these it learns a decisionhyperplanethat can then be used to label novel examples as positive or negative. The algorithm can also be used in theonline learningsetting, where the learning and the classification phase are not clearly separated.
The basic algorithm, Winnow1, is as follows. The instance space isX={0,1}n{\displaystyle X=\{0,1\}^{n}}, that is, each instance is described as a set ofBoolean-valuedfeatures. The algorithm maintains non-negative weightswi{\displaystyle w_{i}}fori∈{1,…,n}{\displaystyle i\in \{1,\ldots ,n\}}, which are initially set to 1, one weight for each feature. When the learner is given an example(x1,…,xn){\displaystyle (x_{1},\ldots ,x_{n})}, it applies the typical prediction rule for linear classifiers:
HereΘ{\displaystyle \Theta }is a real number that is called thethreshold. Together with the weights, the threshold defines a dividing hyperplane in the instance space. Good bounds are obtained ifΘ=n/2{\displaystyle \Theta =n/2}(see below).
For each example with which it is presented, the learner applies the following update rule:
A typical value for α is 2.
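The update rule above can be sketched for Winnow1: promote (multiply by α) the weights of active features on a false negative, and eliminate (set to 0) those weights on a false positive. The target concept here is a hypothetical monotone disjunction x1 ∨ x2 over n = 4 Boolean features, with Θ = n/2 and α = 2 as suggested above.

```python
from itertools import product

n, alpha, theta = 4, 2.0, 4 / 2
w = [1.0] * n  # all weights initially 1

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > theta else 0

def target(x):
    """Hypothetical concept to learn: the monotone disjunction x1 OR x2."""
    return 1 if (x[0] or x[1]) else 0

# Online training: promote on false negatives, eliminate on false positives.
for _ in range(3):  # a few passes over all 2^4 instances
    for x in product([0, 1], repeat=n):
        y = target(x)
        if predict(x) != y:
            for i in range(n):
                if x[i] == 1:
                    w[i] = w[i] * alpha if y == 1 else 0.0

print(w)  # → [4.0, 4.0, 1.0, 1.0]
```

Only the relevant weights get promoted past the threshold: a relevant feature is never active on a negative example, so its weight can never be eliminated, which is the intuition behind the mistake bound below.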
There are many variations to this basic approach.Winnow2[1]is similar except that in the demotion step the weights are divided byαinstead of being set to 0.Balanced Winnowmaintains two sets of weights, and thus two hyperplanes. This can then be generalized formulti-label classification.
In certain circumstances, it can be shown that the number of mistakes Winnow makes as it learns has anupper boundthat is independent of the number of instances with which it is presented. If the Winnow1 algorithm usesα>1{\displaystyle \alpha >1}andΘ≥1/α{\displaystyle \Theta \geq 1/\alpha }on a target function that is ak{\displaystyle k}-literal monotone disjunction given byf(x1,…,xn)=xi1∪⋯∪xik{\displaystyle f(x_{1},\ldots ,x_{n})=x_{i_{1}}\cup \cdots \cup x_{i_{k}}}, then for any sequence of instances the total number of mistakes is bounded by:αk(logαΘ+1)+nΘ{\displaystyle \alpha k(\log _{\alpha }\Theta +1)+{\frac {n}{\Theta }}}.[2]
|
https://en.wikipedia.org/wiki/Winnow_(algorithm)
|
In the field ofmathematical modeling, aradial basis function networkis anartificial neural networkthat usesradial basis functionsasactivation functions. The output of the network is alinear combinationof radial basis functions of the inputs and neuron parameters. Radial basis function networks have many uses, includingfunction approximation,time series prediction,classification, and systemcontrol. They were first formulated in a 1988 paper by Broomhead and Lowe, both researchers at theRoyal Signals and Radar Establishment.[1][2][3]
Radial basis function (RBF) networks typically have three layers: an input layer, a hidden layer with a non-linear RBF activation function and a linear output layer. The input can be modeled as a vector of real numbersx∈Rn{\displaystyle \mathbf {x} \in \mathbb {R} ^{n}}. The output of the network is then a scalar function of the input vector,φ:Rn→R{\displaystyle \varphi :\mathbb {R} ^{n}\to \mathbb {R} }, and is given by
where N{\displaystyle N} is the number of neurons in the hidden layer, ci{\displaystyle \mathbf {c} _{i}} is the center vector for neuron i{\displaystyle i}, and ai{\displaystyle a_{i}} is the weight of neuron i{\displaystyle i} in the linear output neuron. Functions that depend only on the distance from a center vector are radially symmetric about that vector, hence the name radial basis function. In the basic form, all inputs are connected to each hidden neuron. The norm is typically taken to be the Euclidean distance (although the Mahalanobis distance appears to perform better with pattern recognition[4][5]) and the radial basis function is commonly taken to be Gaussian
The Gaussian basis functions are local to the center vector in the sense that
i.e. changing parameters of one neuron has only a small effect for input values that are far away from the center of that neuron.
Given certain mild conditions on the shape of the activation function, RBF networks areuniversal approximatorson acompactsubset ofRn{\displaystyle \mathbb {R} ^{n}}.[6]This means that an RBF network with enough hidden neurons can approximate any continuous function on a closed, bounded set with arbitrary precision.
The parametersai{\displaystyle a_{i}},ci{\displaystyle \mathbf {c} _{i}}, andβi{\displaystyle \beta _{i}}are determined in a manner that optimizes the fit betweenφ{\displaystyle \varphi }and the data.
In addition to the aboveunnormalizedarchitecture, RBF networks can benormalized. In this case the mapping is
where
is known as anormalized radial basis function.
There is theoretical justification for this architecture in the case of stochastic data flow. Assume astochastic kernelapproximation for the joint probability density
where the weightsci{\displaystyle \mathbf {c} _{i}}andei{\displaystyle e_{i}}are exemplars from the data and we require the kernels to be normalized
and
The probability densities in the input and output spaces are
and
The expectation of y given an inputx{\displaystyle \mathbf {x} }is
where
is the conditional probability of y givenx{\displaystyle \mathbf {x} }.
The conditional probability is related to the joint probability throughBayes' theorem
which yields
This becomes
when the integrations are performed.
It is sometimes convenient to expand the architecture to includelocal linearmodels. In that case the architectures become, to first order,
and
in the unnormalized and normalized cases, respectively. Herebi{\displaystyle \mathbf {b} _{i}}are weights to be determined. Higher order linear terms are also possible.
This result can be written
where
and
in the unnormalized case and in the normalized case.
Hereδij{\displaystyle \delta _{ij}}is aKronecker delta functiondefined as
RBF networks are typically trained from pairs of input and target valuesx(t),y(t){\displaystyle \mathbf {x} (t),y(t)},t=1,…,T{\displaystyle t=1,\dots ,T}by a two-step algorithm.
In the first step, the center vectorsci{\displaystyle \mathbf {c} _{i}}of the RBF functions in the hidden layer are chosen. This step can be performed in several ways; centers can be randomly sampled from some set of examples, or they can be determined usingk-means clustering. Note that this step isunsupervised.
The second step simply fits a linear model with coefficientswi{\displaystyle w_{i}}to the hidden layer's outputs with respect to some objective function. A common objective function, at least for regression/function estimation, is the least squares function:
where
We have explicitly included the dependence on the weights. Minimization of the least squares objective function by optimal choice of weights optimizes accuracy of fit.
There are occasions in which multiple objectives, such as smoothness as well as accuracy, must be optimized. In that case it is useful to optimize a regularized objective function such as
where
and
where optimization of S maximizes smoothness andλ{\displaystyle \lambda }is known as aregularizationparameter.
A third optionalbackpropagationstep can be performed to fine-tune all of the RBF net's parameters.[3]
RBF networks can be used to interpolate a functiony:Rn→R{\displaystyle y:\mathbb {R} ^{n}\to \mathbb {R} }when the values of that function are known on finite number of points:y(xi)=bi,i=1,…,N{\displaystyle y(\mathbf {x} _{i})=b_{i},i=1,\ldots ,N}. Taking the known pointsxi{\displaystyle \mathbf {x} _{i}}to be the centers of the radial basis functions and evaluating the values of the basis functions at the same pointsgij=ρ(||xj−xi||){\displaystyle g_{ij}=\rho (||\mathbf {x} _{j}-\mathbf {x} _{i}||)}the weights can be solved from the equation
It can be shown that the interpolation matrix in the above equation is non-singular, if the pointsxi{\displaystyle \mathbf {x} _{i}}are distinct, and thus the weightsw{\displaystyle w}can be solved by simple linear algebra:
whereG=(gij){\displaystyle G=(g_{ij})}.
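The interpolation setup can be sketched in NumPy: the known points become the centers, the matrix G collects the basis function values, and the weights come from a single linear solve. The sample points and the Gaussian width are illustrative.

```python
import numpy as np

# Known values of a function at distinct points; these become the RBF centers.
x = np.array([0.0, 1.0, 2.0, 3.0])
b = np.array([1.0, 3.0, 2.0, 5.0])

rho = lambda r: np.exp(-r ** 2)           # Gaussian basis, unit width (illustrative)
G = rho(np.abs(x[:, None] - x[None, :]))  # g_ij = rho(||x_j - x_i||)
w = np.linalg.solve(G, b)                 # non-singular for distinct centers

def interpolant(t):
    return rho(np.abs(t - x)) @ w

print([float(interpolant(t)) for t in x])  # reproduces b exactly at the centers
```

Between the centers the interpolant blends the Gaussians smoothly, which is what makes RBF interpolation attractive for scattered data.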
If the purpose is not to perform strict interpolation but instead more general function approximation or classification, the optimization is somewhat more complex because there is no obvious choice for the centers. The training is typically done in two phases: first fixing the widths and centers, and then the weights. This can be justified by considering the different nature of the non-linear hidden neurons versus the linear output neuron.
Basis function centers can be randomly sampled among the input instances, obtained by an orthogonal least squares learning algorithm, or found by clustering the samples and choosing the cluster means as the centers.
The RBF widths are usually all fixed to the same value, which is proportional to the maximum distance between the chosen centers.
After the centersci{\displaystyle c_{i}}have been fixed, the weights that minimize the error at the output can be computed with a linearpseudoinversesolution:
where the entries ofGare the values of the radial basis functions evaluated at the pointsxi{\displaystyle x_{i}}:gji=ρ(||xj−ci||){\displaystyle g_{ji}=\rho (||x_{j}-c_{i}||)}.
The existence of this linear solution means that unlike multi-layer perceptron (MLP) networks, RBF networks have an explicit minimizer (when the centers are fixed).
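For approximation rather than interpolation, with fewer centers than data points, the same linear step uses the pseudoinverse. A sketch of the two-phase procedure (the target function, centers, and width are all illustrative; the centers are fixed by hand here, though k-means would also do):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x)                        # hypothetical target function

centers = np.array([0.1, 0.3, 0.5, 0.7, 0.9])    # phase 1: fix the centers
beta = 20.0                                      # shared width parameter
G = np.exp(-beta * (x[:, None] - centers[None, :]) ** 2)  # g_ji = rho(|x_j - c_i|)

# Phase 2: least-squares fit of the output weights via the pseudoinverse.
a = np.linalg.pinv(G) @ y
rms = np.sqrt(np.mean((G @ a - y) ** 2))
print(rms)
```

Because the second phase is an ordinary linear least-squares problem, the fit it produces is the exact minimizer for the fixed centers, with no iterative search.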
Another possible training algorithm isgradient descent. In gradient descent training, the weights are adjusted at each time step by moving them in a direction opposite from the gradient of the objective function (thus allowing the minimum of the objective function to be found),
whereν{\displaystyle \nu }is a "learning parameter."
For the case of training the linear weights,ai{\displaystyle a_{i}}, the algorithm becomes
in the unnormalized case and
in the normalized case.
For local-linear-architectures gradient-descent training is
For the case of training the linear weights,ai{\displaystyle a_{i}}andeij{\displaystyle e_{ij}}, the algorithm becomes
in the unnormalized case and
in the normalized case and
in the local-linear case.
For one basis function, projection operator training reduces toNewton's method.
The basic properties of radial basis functions can be illustrated with a simple mathematical map, the logistic map, which maps the unit interval onto itself. It can be used to generate a convenient prototype data stream. The logistic map can be used to explore function approximation, time series prediction, and control theory. The map originated in the field of population dynamics and became the prototype for chaotic time series. In the fully chaotic regime, the map is given by
$$x(t+1) = 4\,x(t)\,\big[1 - x(t)\big],\qquad x(t) \in [0,1],$$
where $t$ is a time index. The value of $x$ at time $t+1$ is a parabolic function of $x$ at time $t$. This equation represents the underlying geometry of the chaotic time series generated by the logistic map.
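Generating the prototype data stream is a one-liner per step. A minimal sketch, assuming the fully chaotic regime $x(t+1) = 4x(t)(1-x(t))$ (the function name is illustrative):

```python
def logistic_map_series(x0, n):
    """Generate n values of the fully chaotic logistic map
    x(t+1) = 4 * x(t) * (1 - x(t)), starting from x(0) = x0."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

# Usage: a 100-point chaotic time series on the unit interval
series = logistic_map_series(0.3, 100)
```

Since the map sends the unit interval onto itself, every generated value stays in [0, 1].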
Generation of the time series from this equation is the forward problem. The examples here illustrate the inverse problem: identification of the underlying dynamics, or fundamental equation, of the logistic map from exemplars of the time series. The goal is to find an estimate
$$x(t+1) = \hat{f}\big[x(t)\big]$$
for $f$.
The architecture is
$$\varphi(\mathbf{x}) = \sum_{i=1}^{N} a_i\,\rho\big(\|\mathbf{x} - \mathbf{c}_i\|\big),$$
where
$$\rho\big(\|\mathbf{x} - \mathbf{c}_i\|\big) = \exp\big[-\beta\,(x(t) - c_i)^2\big].$$
Since the input is a scalar rather than a vector, the input dimension is one. We choose the number of basis functions as N = 5 and the size of the training set to be 100 exemplars generated by the chaotic time series. The weight $\beta$ is taken to be a constant equal to 5. The weights $c_i$ are five exemplars from the time series. The weights $a_i$ are trained with projection operator training:
where the learning rate $\nu$ is taken to be 0.3. The training is performed with one pass through the 100 training points. The rms error is 0.15.
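The experiment above can be sketched in numpy. This is a hedged reconstruction: it assumes projection operator training is the normalized-LMS-style update $a \leftarrow a + \nu\,e\,\rho/(\rho\cdot\rho)$, with the constants the text gives (five centers taken as exemplars, $\beta=5$, $\nu=0.3$, one pass over 100 points); function names are illustrative:

```python
import numpy as np

def train_rbf_one_pass(series, centers, beta=5.0, nu=0.3):
    """One pass of projection-operator training for the unnormalized
    RBF model phi(x) = sum_i a_i * exp(-beta * (x - c_i)**2).
    Each update projects the weight vector toward the hyperplane
    that exactly fits the current exemplar."""
    a = np.zeros(len(centers))
    for t in range(len(series) - 1):
        x, y = series[t], series[t + 1]
        rho = np.exp(-beta * (x - centers) ** 2)   # basis activations
        a += nu * (y - a @ rho) * rho / (rho @ rho)
    return a

# 100 training pairs from the chaotic logistic map x(t+1) = 4 x (1 - x)
xs = [0.3]
for _ in range(100):
    xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
centers = np.array(xs[:5])           # five exemplars serve as centers
a = train_rbf_one_pass(xs, centers, beta=5.0, nu=0.3)

# rms error of the trained model over the training pairs
preds = [np.exp(-5.0 * (x - centers) ** 2) @ a for x in xs[:-1]]
rms = float(np.sqrt(np.mean((np.array(xs[1:]) - preds) ** 2)))
```

The exact rms value depends on the initial condition and which exemplars become centers, so it will not reproduce the quoted 0.15 exactly.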
The normalized RBF architecture is
$$\varphi(\mathbf{x}) = \frac{\sum_{i=1}^{N} a_i\,\rho\big(\|\mathbf{x} - \mathbf{c}_i\|\big)}{\sum_{i=1}^{N} \rho\big(\|\mathbf{x} - \mathbf{c}_i\|\big)},$$
where again
$$\rho\big(\|\mathbf{x} - \mathbf{c}_i\|\big) = \exp\big[-\beta\,(x(t) - c_i)^2\big].$$
Again, we choose the number of basis functions as five and the size of the training set to be 100 exemplars generated by the chaotic time series. The weight $\beta$ is taken to be a constant equal to 6. The weights $c_i$ are five exemplars from the time series. The weights $a_i$ are trained with projection operator training:
where the learning rate $\nu$ is again taken to be 0.3. The training is performed with one pass through the 100 training points. The rms error on a test set of 100 exemplars is 0.084, smaller than the unnormalized error, so normalization yields an accuracy improvement. Typically, the accuracy advantage of normalized basis functions over unnormalized ones grows as the input dimensionality increases.
Once the underlying geometry of the time series is estimated as in the previous examples, a prediction for the time series can be made by iteration:
$$\hat{x}(t+1) = \hat{f}\big[\hat{x}(t)\big].$$
A comparison of the actual and estimated time series is displayed in the figure. The estimated time series starts out at time zero with exact knowledge of x(0). It then uses the estimate of the dynamics to update the time series estimate for several time steps.
Note that the estimate is accurate for only a few time steps. This is a general characteristic of chaotic time series, a consequence of the sensitive dependence on initial conditions common to such series: a small initial error is amplified with time. A measure of the divergence of time series with nearly identical initial conditions is known as the Lyapunov exponent.
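The amplification of a small initial error can be demonstrated directly on the logistic map. A minimal sketch (the 1e-10 perturbation and step counts are illustrative choices, not from the source):

```python
def step(x):
    """One iteration of the fully chaotic logistic map."""
    return 4.0 * x * (1.0 - x)

# Two nearly identical initial conditions
x_a, x_b = 0.3, 0.3 + 1e-10
gaps = []
for _ in range(50):
    x_a, x_b = step(x_a), step(x_b)
    gaps.append(abs(x_a - x_b))

early_gap = gaps[4]        # after 5 steps: still tiny
late_gap = max(gaps[30:])  # after ~30 steps: order one
```

The separation grows roughly exponentially (for this map the Lyapunov exponent is ln 2), so a 1e-10 difference becomes macroscopic within a few dozen steps.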
We assume the output of the logistic map can be manipulated through a control parameter $c[x(t), t]$ such that
$$x(t+1) = 4\,x(t)\,\big[1 - x(t)\big] + c[x(t), t].$$
The goal is to choose the control parameter in such a way as to drive the time series to a desired output $d(t)$. This can be done if we choose the control parameter to be
where $\hat{f}[x(t)]$ is an approximation to the underlying natural dynamics of the system.
The learning algorithm is given by a gradient-based update of the same form as above.
|
https://en.wikipedia.org/wiki/Radial_basis_function_network
|
k-medoids is a classical partitioning technique of clustering that splits the data set of n objects into k clusters, where the number k of clusters is assumed known a priori (which implies that the programmer must specify k before the execution of a k-medoids algorithm). The "goodness" of a given value of k can be assessed with methods such as the silhouette method. The name of the clustering method was coined by Leonard Kaufman and Peter J. Rousseeuw with their PAM (Partitioning Around Medoids) algorithm.[1]
The medoid of a cluster is defined as the object in the cluster whose sum (and, equivalently, the average) of dissimilarities to all the objects in the cluster is minimal, that is, it is a most centrally located point in the cluster. Unlike certain objects used by other algorithms, the medoid is an actual point in the cluster.
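The definition translates directly into code: the medoid is the member with the smallest total dissimilarity to the other members. A minimal numpy sketch (Euclidean distance is an illustrative choice; any dissimilarity works):

```python
import numpy as np

def medoid(points):
    """Return the point in `points` whose total distance to all
    other points is minimal (the medoid of the set)."""
    # Pairwise Euclidean distances between all points
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return points[d.sum(axis=1).argmin()]

# Usage: the medoid of collinear points is the member closest to all others
pts = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0], [10.0, 0.0]])
m = medoid(pts)
```

Unlike the centroid (the mean, here (3.2, 0)), the medoid is always one of the input points.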
In general, the k-medoids problem is NP-hard to solve exactly.[2] As such, multiple heuristics to optimize this problem exist.
PAM[3]uses a greedy search which may not find the optimum solution, but it is faster than exhaustive search. It works as follows:
The runtime complexity of the original PAM algorithm per iteration of (3) is $O(k(n-k)^2)$, by only computing the change in cost. A naive implementation recomputing the entire cost function every time will be in $O(n^2 k^2)$. This runtime can be further reduced to $O(n^2)$ by splitting the cost change into three parts such that computations can be shared or avoided (FastPAM). The runtime can be further reduced by eagerly performing swaps (FasterPAM),[4] at which point a random initialization becomes a viable alternative to BUILD.
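The BUILD and SWAP phases can be sketched compactly on a precomputed dissimilarity matrix. This is a naive $O(n^2k^2)$-style sketch (it recomputes the full cost per candidate swap rather than the incremental cost the paragraph describes); names are illustrative:

```python
import numpy as np

def pam(d, k, max_iter=100):
    """Minimal PAM sketch on a precomputed dissimilarity matrix d.
    BUILD greedily selects initial medoids; SWAP repeatedly applies
    improving (medoid, non-medoid) exchanges until none remains."""
    n = len(d)

    def cost(ms):
        # Total dissimilarity of every object to its nearest medoid
        return d[:, ms].min(axis=1).sum()

    # BUILD: start with the most central object, then add greedily
    medoids = [int(d.sum(axis=1).argmin())]
    while len(medoids) < k:
        cur = d[:, medoids].min(axis=1)
        gain = np.maximum(cur[:, None] - d, 0.0).sum(axis=0)
        gain[medoids] = -1.0
        medoids.append(int(gain.argmax()))

    best = cost(medoids)
    # SWAP: accept any cost-decreasing exchange until convergence
    for _ in range(max_iter):
        improved = False
        for i in range(k):
            for h in range(n):
                if h in medoids:
                    continue
                cand = medoids.copy()
                cand[i] = h
                c = cost(cand)
                if c < best - 1e-12:
                    medoids, best, improved = cand, c, True
        if not improved:
            break
    return medoids, d[:, medoids].argmin(axis=1), best

# Usage: two well-separated groups on a line
x = np.array([0.0, 0.1, 0.2, 10.0, 10.1, 10.2])
d = np.abs(x[:, None] - x[None, :])
medoids, labels, total = pam(d, 2)
```

On this toy data the swaps settle on the middle element of each group, the configuration with minimal total dissimilarity.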
Algorithms other than PAM have also been suggested in the literature, including the following Voronoi iteration method, known as the "Alternating" heuristic because it alternates between two optimization steps:[5][6][7]
k-means-style Voronoi iteration tends to produce worse results and exhibit "erratic behavior".[8]: 957 Because it does not allow re-assigning points to other clusters while updating means, it only explores a smaller search space. It can be shown that even in simple cases this heuristic finds inferior solutions that the swap-based methods can solve.[4]
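The two alternating steps can be sketched as follows (a minimal illustration on a precomputed dissimilarity matrix; names are illustrative):

```python
import numpy as np

def alternate_kmedoids(d, medoids, max_iter=100):
    """Voronoi-iteration ('Alternating') heuristic: repeat
    (1) assign each object to its nearest medoid, and
    (2) recompute the medoid of each cluster, until stable."""
    medoids = list(medoids)
    for _ in range(max_iter):
        labels = d[:, medoids].argmin(axis=1)
        new = []
        for j in range(len(medoids)):
            members = np.where(labels == j)[0]
            # Medoid of the cluster: member with minimal total distance
            sub = d[np.ix_(members, members)]
            new.append(int(members[sub.sum(axis=1).argmin()]))
        if new == medoids:
            break
        medoids = new
    return medoids, d[:, medoids].argmin(axis=1)

# Usage: two groups on a line, deliberately poor initial medoids
x = np.array([0.0, 0.1, 0.2, 10.0, 10.1, 10.2])
d = np.abs(x[:, None] - x[None, :])
medoids, labels = alternate_kmedoids(d, [0, 5])
```

Note the contrast with PAM's SWAP step: here a medoid can only move within its current cluster, which is exactly the smaller search space the paragraph describes.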
Multiple variants of hierarchical clustering with a "medoid linkage" have been proposed. The Minimum Sum linkage criterion[9] directly uses the objective of medoids, but the Minimum Sum Increase linkage was shown to produce better results (similar to how Ward linkage uses the increase in squared error). Earlier approaches simply used the distance between the medoids of the previous clusters as the linkage measure,[10][11] but this tends to result in worse solutions, as the distance of two medoids does not ensure there exists a good medoid for the combination. These approaches have a runtime complexity of $O(n^3)$, and when the dendrogram is cut at a particular number of clusters k, the results will typically be worse than the results found by PAM.[9] Hence these methods are primarily of interest when a hierarchical tree structure is desired.
Other approximate algorithms such as CLARA and CLARANS trade quality for runtime. CLARA applies PAM on multiple subsamples, keeping the best result. By setting the sample size to $O(\sqrt{N})$, a linear runtime (just as for k-means) can be achieved. CLARANS works on the entire data set, but only explores a subset of the possible swaps of medoids and non-medoids using sampling. BanditPAM uses the concept of multi-armed bandits to choose candidate swaps instead of uniform sampling as in CLARANS.[12]
The k-medoids problem is a clustering problem similar to k-means. Both the k-means and k-medoids algorithms are partitional (breaking the dataset up into groups) and attempt to minimize the distance between points labeled to be in a cluster and a point designated as the center of that cluster. In contrast to the k-means algorithm, k-medoids chooses actual data points as centers (medoids or exemplars), and thereby allows for greater interpretability of the cluster centers than in k-means, where the center of a cluster is not necessarily one of the input data points (it is the average of the points in the cluster, known as the centroid). Furthermore, k-medoids can be used with arbitrary dissimilarity measures, whereas k-means generally requires Euclidean distance for efficient solutions. Because k-medoids minimizes a sum of pairwise dissimilarities instead of a sum of squared Euclidean distances, it is more robust to noise and outliers than k-means.
Despite these advantages, the results of k-medoids lack consistency, since the results of the algorithm may vary from run to run: the initial medoids are chosen at random during the execution of the algorithm. k-medoids is also not well suited for clustering objects that are not spherical, and may work inefficiently when dealing with large datasets, depending on how it is implemented. Meanwhile, k-means is suitable for well-distributed and isotropic clusters and can handle larger datasets.[13] Like k-medoids, however, k-means also uses random initial points, which varies the results the algorithm finds.
Several software packages provide implementations of k-medoids clustering algorithms. These implementations vary in their algorithmic approaches and computational efficiency.
The scikit-learn-extra[16] package includes a KMedoids class that implements k-medoids clustering with a Scikit-learn-compatible interface. It offers two algorithm choices:
Parameters include:
Example Python usage:
The python-kmedoids[17] package provides optimized implementations of PAM and related algorithms:
This package requires precomputed dissimilarity matrices and includes silhouette-based methods for evaluating clusters.
Example Python usage:
|
https://en.wikipedia.org/wiki/K-medoids
|
The BFR algorithm, named after its inventors Bradley, Fayyad and Reina, is a variant of the k-means algorithm that is designed to cluster data in a high-dimensional Euclidean space. It makes a very strong assumption about the shape of clusters: they must be normally distributed about a centroid. The mean and standard deviation for a cluster may differ for different dimensions, but the dimensions must be independent.[1] In other words, the data must take the shape of axis-aligned ellipses.
|
https://en.wikipedia.org/wiki/BFR_algorithm
|
In geometry, a centroidal Voronoi tessellation (CVT) is a special type of Voronoi tessellation in which the generating point of each Voronoi cell is also its centroid (center of mass). It can be viewed as an optimal partition corresponding to an optimal distribution of generators. A number of algorithms can be used to generate centroidal Voronoi tessellations, including Lloyd's algorithm for k-means clustering or quasi-Newton methods like BFGS.[1]
Gersho's conjecture, proven for one and two dimensions, says that "asymptotically speaking, all cells of the optimal CVT, while forming a tessellation, are congruent to a basic cell which depends on the dimension."[2]
In two dimensions, the basic cell for the optimal CVT is a regular hexagon, which corresponds to the densest packing of circles in 2D Euclidean space.
Its three-dimensional equivalent is the rhombic dodecahedral honeycomb, derived from the densest packing of spheres in 3D Euclidean space.
Centroidal Voronoi tessellations are useful in data compression, optimal quadrature, optimal quantization, clustering, and optimal mesh generation.[3]
A weighted centroidal Voronoi diagram is a CVT in which each centroid is weighted according to a certain function. For example, a grayscale image can be used as a density function to weight the points of a CVT, as a way to create digital stippling.[4]
Many patterns seen in nature are closely approximated by a centroidal Voronoi tessellation. Examples include the Giant's Causeway, the cells of the cornea,[5] and the breeding pits of the male tilapia.[3]
|
https://en.wikipedia.org/wiki/Centroidal_Voronoi_tessellation
|
Cluster analysis or clustering is the task of grouping a set of objects in such a way that objects in the same group (called a cluster) are more similar (in some specific sense defined by the analyst) to each other than to those in other groups (clusters). It is a main task of exploratory data analysis, and a common technique for statistical data analysis, used in many fields, including pattern recognition, image analysis, information retrieval, bioinformatics, data compression, computer graphics and machine learning.
Cluster analysis refers to a family of algorithms and tasks rather than one specific algorithm. It can be achieved by various algorithms that differ significantly in their understanding of what constitutes a cluster and how to efficiently find them. Popular notions of clusters include groups with small distances between cluster members, dense areas of the data space, intervals or particular statistical distributions. Clustering can therefore be formulated as a multi-objective optimization problem. The appropriate clustering algorithm and parameter settings (including parameters such as the distance function to use, a density threshold or the number of expected clusters) depend on the individual data set and intended use of the results. Cluster analysis as such is not an automatic task, but an iterative process of knowledge discovery or interactive multi-objective optimization that involves trial and error. It is often necessary to modify data preprocessing and model parameters until the result achieves the desired properties.
Besides the term clustering, there are a number of terms with similar meanings, including automatic classification, numerical taxonomy, botryology (from Greek: βότρυς 'grape'), typological analysis, and community detection. The subtle differences are often in the use of the results: while in data mining the resulting groups are the matter of interest, in automatic classification the resulting discriminative power is of interest.
Cluster analysis originated in anthropology with Driver and Kroeber in 1932,[1] was introduced to psychology by Joseph Zubin in 1938[2] and Robert Tryon in 1939,[3] and was famously used by Cattell beginning in 1943[4] for trait theory classification in personality psychology.
The notion of a "cluster" cannot be precisely defined, which is one of the reasons why there are so many clustering algorithms.[5]There is a common denominator: a group of data objects. However, different researchers employ different cluster models, and for each of these cluster models again different algorithms can be given. The notion of a cluster, as found by different algorithms, varies significantly in its properties. Understanding these "cluster models" is key to understanding the differences between the various algorithms. Typical cluster models include:
A "clustering" is essentially a set of such clusters, usually containing all objects in the data set. Additionally, it may specify the relationship of the clusters to each other, for example, a hierarchy of clusters embedded in each other. Clusterings can be roughly distinguished as:
There are also finer distinctions possible, for example:
As listed above, clustering algorithms can be categorized based on their cluster model. The following overview will only list the most prominent examples of clustering algorithms, as there are possibly over 100 published clustering algorithms. Not all provide models for their clusters and thus cannot easily be categorized. An overview of algorithms explained in Wikipedia can be found in the list of statistics algorithms.
There is no objectively "correct" clustering algorithm, but as it was noted, "clustering is in the eye of the beholder."[5] In fact, an axiomatic approach to clustering demonstrates that it is impossible for any clustering method to meet three fundamental properties simultaneously: scale invariance (results remain unchanged under proportional scaling of distances), richness (all possible partitions of the data can be achieved), and consistency between distances and the clustering structure.[7] The most appropriate clustering algorithm for a particular problem often needs to be chosen experimentally, unless there is a mathematical reason to prefer one cluster model over another. An algorithm that is designed for one kind of model will generally fail on a data set that contains a radically different kind of model.[5] For example, k-means cannot find non-convex clusters.[5] Most traditional clustering methods assume the clusters exhibit a spherical, elliptical or convex shape.[8]
Connectivity-based clustering, also known as hierarchical clustering, is based on the core idea of objects being more related to nearby objects than to objects farther away. These algorithms connect "objects" to form "clusters" based on their distance. A cluster can be described largely by the maximum distance needed to connect parts of the cluster. At different distances, different clusters will form, which can be represented using a dendrogram; this explains where the common name "hierarchical clustering" comes from: these algorithms do not provide a single partitioning of the data set, but instead provide an extensive hierarchy of clusters that merge with each other at certain distances. In a dendrogram, the y-axis marks the distance at which the clusters merge, while the objects are placed along the x-axis such that the clusters don't mix.
Connectivity-based clustering is a whole family of methods that differ by the way distances are computed. Apart from the usual choice of distance functions, the user also needs to decide on the linkage criterion (since a cluster consists of multiple objects, there are multiple candidates to compute the distance) to use. Popular choices are known as single-linkage clustering (the minimum of object distances), complete-linkage clustering (the maximum of object distances), and UPGMA or WPGMA ("Unweighted or Weighted Pair Group Method with Arithmetic Mean", also known as average linkage clustering). Furthermore, hierarchical clustering can be agglomerative (starting with single elements and aggregating them into clusters) or divisive (starting with the complete data set and dividing it into partitions).
These methods will not produce a unique partitioning of the data set, but a hierarchy from which the user still needs to choose appropriate clusters. They are not very robust towards outliers, which will either show up as additional clusters or even cause other clusters to merge (known as the "chaining phenomenon", in particular with single-linkage clustering). In the general case, the complexity is $O(n^3)$ for agglomerative clustering and $O(2^{n-1})$ for divisive clustering,[9] which makes them too slow for large data sets. For some special cases, optimal efficient methods (of complexity $O(n^2)$) are known: SLINK[10] for single-linkage and CLINK[11] for complete-linkage clustering.
In centroid-based clustering, each cluster is represented by a central vector, which is not necessarily a member of the data set. When the number of clusters is fixed to k, k-means clustering gives a formal definition as an optimization problem: find the k cluster centers and assign the objects to the nearest cluster center, such that the squared distances from the cluster are minimized.
The optimization problem itself is known to be NP-hard, and thus the common approach is to search only for approximate solutions. A particularly well-known approximate method is Lloyd's algorithm,[12] often just referred to as the "k-means algorithm" (although another algorithm introduced this name). It does however only find a local optimum, and is commonly run multiple times with different random initializations. Variations of k-means often include such optimizations as choosing the best of multiple runs, but also restricting the centroids to members of the data set (k-medoids), choosing medians (k-medians clustering), choosing the initial centers less randomly (k-means++) or allowing a fuzzy cluster assignment (fuzzy c-means).
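Lloyd's algorithm itself is short: it alternates nearest-center assignment with centroid recomputation. A minimal numpy sketch (single random initialization; names are illustrative):

```python
import numpy as np

def lloyd_kmeans(X, k, iters=100, seed=0):
    """Minimal Lloyd's algorithm: alternate (1) assigning each point
    to its nearest center and (2) recomputing centers as the mean of
    their assigned points, until the assignment stabilizes."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.full(len(X), -1)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1)
        new_labels = d.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break                      # local optimum reached
        labels = new_labels
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers, labels

# Usage: two well-separated groups
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.0],
              [10.0, 0.0], [10.1, 0.0], [10.2, 0.0]])
centers, labels = lloyd_kmeans(X, 2)
```

Because Lloyd's algorithm only finds a local optimum, practical implementations run it several times with different seeds and keep the lowest-cost result, as the paragraph notes.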
Most k-means-type algorithms require the number of clusters, k, to be specified in advance, which is considered to be one of the biggest drawbacks of these algorithms. Furthermore, the algorithms prefer clusters of approximately similar size, as they will always assign an object to the nearest centroid, often yielding improperly cut borders of clusters. This happens primarily because the algorithm optimizes cluster centers, not cluster borders. Steps involved in the centroid-based clustering algorithm are: (1) choose k initial cluster centers; (2) assign each object to its nearest center; (3) recompute each center from the objects assigned to it; (4) repeat steps 2 and 3 until the assignments no longer change.
K-means has a number of interesting theoretical properties. First, it partitions the data space into a structure known as a Voronoi diagram. Second, it is conceptually close to nearest-neighbor classification, and as such is popular in machine learning. Third, it can be seen as a variation of model-based clustering, and Lloyd's algorithm as a variation of the expectation-maximization algorithm for this model, discussed below.
Centroid-based clustering problems such as k-means and k-medoids are special cases of the uncapacitated, metric facility location problem, a canonical problem in the operations research and computational geometry communities. In a basic facility location problem (of which there are numerous variants that model more elaborate settings), the task is to find the best warehouse locations to optimally service a given set of consumers. One may view "warehouses" as cluster centroids and "consumer locations" as the data to be clustered. This makes it possible to apply the well-developed algorithmic solutions from the facility location literature to the presently considered centroid-based clustering problem.
The clustering framework most closely related to statistics is model-based clustering, which is based on distribution models. This approach models the data as arising from a mixture of probability distributions. It has the advantages of providing principled statistical answers to questions such as how many clusters there are, what clustering method or model to use, and how to detect and deal with outliers.
While the theoretical foundation of these methods is excellent, they suffer from overfitting unless constraints are put on the model complexity. A more complex model will usually be able to explain the data better, which makes choosing the appropriate model complexity inherently difficult. Standard model-based clustering methods include more parsimonious models based on the eigenvalue decomposition of the covariance matrices, which provide a balance between overfitting and fidelity to the data.
One prominent method is known as Gaussian mixture models (using the expectation-maximization algorithm). Here, the data set is usually modeled with a fixed (to avoid overfitting) number of Gaussian distributions that are initialized randomly and whose parameters are iteratively optimized to better fit the data set. This will converge to a local optimum, so multiple runs may produce different results. In order to obtain a hard clustering, objects are often then assigned to the Gaussian distribution they most likely belong to; for soft clusterings, this is not necessary.
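The hard-versus-soft distinction is visible directly in scikit-learn's Gaussian mixture interface. A short sketch, assuming scikit-learn is available (the synthetic two-blob data is illustrative):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Two well-separated Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, size=(100, 2)),
               rng.normal(5.0, 0.5, size=(100, 2))])

# n_init runs EM from several random initializations and keeps the best,
# mitigating convergence to a poor local optimum
gmm = GaussianMixture(n_components=2, n_init=3, random_state=0).fit(X)
hard = gmm.predict(X)         # hard clustering: most likely component
soft = gmm.predict_proba(X)   # soft clustering: membership probabilities
```

Each row of `soft` sums to one; the hard assignment is just the argmax of that row.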
Distribution-based clustering produces complex models for clusters that can capture correlation and dependence between attributes. However, these algorithms put an extra burden on the user: for many real data sets, there may be no concisely defined mathematical model (e.g. assuming Gaussian distributions is a rather strong assumption on the data).
In density-based clustering,[13]clusters are defined as areas of higher density than the remainder of the data set. Objects in sparse areas – that are required to separate clusters – are usually considered to be noise and border points.
The most popular[14] density-based clustering method is DBSCAN.[15] In contrast to many newer methods, it features a well-defined cluster model called "density-reachability". Similar to linkage-based clustering, it is based on connecting points within certain distance thresholds. However, it only connects points that satisfy a density criterion, in the original variant defined as a minimum number of other objects within this radius. A cluster consists of all density-connected objects (which can form a cluster of an arbitrary shape, in contrast to many other methods) plus all objects that are within these objects' range. Another interesting property of DBSCAN is that its complexity is fairly low (it requires a linear number of range queries on the database) and that it will discover essentially the same results (it is deterministic for core and noise points, but not for border points) in each run, so there is no need to run it multiple times. OPTICS[16] is a generalization of DBSCAN that removes the need to choose an appropriate value for the range parameter $\varepsilon$, and produces a hierarchical result related to that of linkage clustering. DeLi-Clu,[17] Density-Link-Clustering, combines ideas from single-linkage clustering and OPTICS, eliminating the $\varepsilon$ parameter entirely and offering performance improvements over OPTICS by using an R-tree index.
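DBSCAN's two parameters, the radius $\varepsilon$ and the minimum neighborhood size, map directly onto scikit-learn's interface. A short sketch, assuming scikit-learn is available (data and parameter values are illustrative):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two dense blobs plus one isolated point
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.2, size=(50, 2)),
               rng.normal(5.0, 0.2, size=(50, 2)),
               [[2.5, 2.5]]])

# eps is the connection radius; min_samples is the density criterion
labels = DBSCAN(eps=1.0, min_samples=5).fit_predict(X)
# labels >= 0 identify clusters; -1 marks noise points
```

The isolated point has no eps-neighbors, fails the density criterion, and is labeled noise rather than forced into a cluster, which is exactly the behavior the paragraph describes.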
The key drawback of DBSCAN and OPTICS is that they expect some kind of density drop to detect cluster borders. On data sets with, for example, overlapping Gaussian distributions (a common use case in artificial data) the cluster borders produced by these algorithms will often look arbitrary, because the cluster density decreases continuously. On a data set consisting of mixtures of Gaussians, these algorithms are nearly always outperformed by methods such as EM clustering that are able to precisely model this kind of data.
Mean-shift is a clustering approach where each object is moved to the densest area in its vicinity, based on kernel density estimation. Eventually, objects converge to local maxima of density. Similar to k-means clustering, these "density attractors" can serve as representatives for the data set, but mean-shift can detect arbitrary-shaped clusters similar to DBSCAN. Due to the expensive iterative procedure and density estimation, mean-shift is usually slower than DBSCAN or k-means. Besides that, the applicability of the mean-shift algorithm to multidimensional data is hindered by the unsmooth behaviour of the kernel density estimate, which results in over-fragmentation of cluster tails.[17]
The grid-based technique is used for a multi-dimensional data set.[18] In this technique, we create a grid structure, and the comparison is performed on grids (also known as cells). The grid-based technique is fast and has low computational complexity. Two types of grid-based clustering methods are STING and CLIQUE. Steps involved in the grid-based clustering algorithm are: (1) divide the data space into a finite number of cells; (2) compute the density of each cell; (3) discard cells whose density is below a threshold; (4) form clusters from contiguous groups of dense cells.
In recent years, considerable effort has been put into improving the performance of existing algorithms.[19][20] Among them are CLARANS[21] and BIRCH.[22] With the recent need to process larger and larger data sets (also known as big data), the willingness to trade semantic meaning of the generated clusters for performance has been increasing. This led to the development of pre-clustering methods such as canopy clustering, which can process huge data sets efficiently, but the resulting "clusters" are merely a rough pre-partitioning of the data set, to then analyze the partitions with existing slower methods such as k-means clustering.
For high-dimensional data, many of the existing methods fail due to the curse of dimensionality, which renders particular distance functions problematic in high-dimensional spaces. This led to new clustering algorithms for high-dimensional data that focus on subspace clustering (where only some attributes are used, and cluster models include the relevant attributes for the cluster) and correlation clustering, which also looks for arbitrarily rotated ("correlated") subspace clusters that can be modeled by giving a correlation of their attributes.[23] Examples of such clustering algorithms are CLIQUE[24] and SUBCLU.[25]
Ideas from density-based clustering methods (in particular the DBSCAN/OPTICS family of algorithms) have been adapted to subspace clustering (HiSC,[26] hierarchical subspace clustering, and DiSH[27]) and correlation clustering (HiCO,[28] hierarchical correlation clustering; 4C,[29] using "correlation connectivity"; and ERiC,[30] exploring hierarchical density-based correlation clusters).
Several different clustering systems based on mutual information have been proposed. One is Marina Meilă's variation of information metric;[31] another provides hierarchical clustering.[32] Using genetic algorithms, a wide range of different fit functions can be optimized, including mutual information.[33] Also belief propagation, a recent development in computer science and statistical physics, has led to the creation of new types of clustering algorithms.[34]
Evaluation (or "validation") of clustering results is as difficult as the clustering itself.[35]Popular approaches involve "internal" evaluation, where the clustering is summarized to a single quality score, "external" evaluation, where the clustering is compared to an existing "ground truth" classification, "manual" evaluation by a human expert, and "indirect" evaluation by evaluating the utility of the clustering in its intended application.[36]
Internal evaluation measures suffer from the problem that they represent functions that themselves can be seen as a clustering objective. For example, one could cluster the data set by the Silhouette coefficient; except that there is no known efficient algorithm for this. By using such an internal measure for evaluation, one rather compares the similarity of the optimization problems,[36]and not necessarily how useful the clustering is.
External evaluation has similar problems: if we have such "ground truth" labels, then we would not need to cluster; and in practical applications we usually do not have such labels. On the other hand, the labels only reflect one possible partitioning of the data set, which does not imply that there does not exist a different, and maybe even better, clustering.
Neither of these approaches can therefore ultimately judge the actual quality of a clustering; that requires human evaluation,[36] which is highly subjective. Nevertheless, such statistics can be quite informative in identifying bad clusterings,[37] but one should not dismiss subjective human evaluation.[37]
When a clustering result is evaluated based on the data that was clustered itself, this is called internal evaluation. These methods usually assign the best score to the algorithm that produces clusters with high similarity within a cluster and low similarity between clusters. One drawback of using internal criteria in cluster evaluation is that high scores on an internal measure do not necessarily result in effective information retrieval applications.[38]Additionally, this evaluation is biased towards algorithms that use the same cluster model. For example, k-means clustering naturally optimizes object distances, and a distance-based internal criterion will likely overrate the resulting clustering.
Therefore, the internal evaluation measures are best suited for getting some insight into situations where one algorithm performs better than another, but this shall not imply that one algorithm produces more valid results than another.[5] Validity as measured by such an index depends on the claim that this kind of structure exists in the data set. An algorithm designed for some kind of models has no chance if the data set contains a radically different set of models, or if the evaluation measures a radically different criterion.[5] For example, k-means clustering can only find convex clusters, and many evaluation indexes assume convex clusters. On a data set with non-convex clusters, neither the use of k-means nor of an evaluation criterion that assumes convexity is sound.
More than a dozen internal evaluation measures exist, usually based on the intuition that items in the same cluster should be more similar than items in different clusters.[39]: 115–121 For example, the following methods can be used to assess the quality of clustering algorithms based on internal criteria:
The Davies–Bouldin index can be calculated by the following formula:
$$DB = \frac{1}{n} \sum_{i=1}^{n} \max_{j \neq i} \left( \frac{\sigma_i + \sigma_j}{d(c_i, c_j)} \right),$$
where n is the number of clusters, $c_i$ is the centroid of cluster i, $\sigma_i$ is the average distance of all elements in cluster i to centroid $c_i$, and $d(c_i, c_j)$ is the distance between centroids $c_i$ and $c_j$. Since algorithms that produce clusters with low intra-cluster distances (high intra-cluster similarity) and high inter-cluster distances (low inter-cluster similarity) will have a low Davies–Bouldin index, the clustering algorithm that produces a collection of clusters with the smallest Davies–Bouldin index is considered the best algorithm based on this criterion.
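The formula translates directly into a few lines of numpy. A minimal sketch (Euclidean distances; function name is illustrative):

```python
import numpy as np

def davies_bouldin(X, labels):
    """Davies-Bouldin index: DB = (1/n) * sum_i max_{j != i}
    (sigma_i + sigma_j) / d(c_i, c_j), with c_i the cluster centroids
    and sigma_i the mean distance of cluster i's points to c_i."""
    ks = np.unique(labels)
    cents = np.array([X[labels == k].mean(axis=0) for k in ks])
    sig = np.array([np.linalg.norm(X[labels == k] - cents[i], axis=1).mean()
                    for i, k in enumerate(ks)])
    n = len(ks)
    total = 0.0
    for i in range(n):
        total += max((sig[i] + sig[j]) / np.linalg.norm(cents[i] - cents[j])
                     for j in range(n) if j != i)
    return total / n

# Usage: two tight, well-separated clusters give a small index
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 0.0], [10.0, 1.0]])
db = davies_bouldin(X, np.array([0, 0, 1, 1]))
```

Here each cluster has $\sigma = 0.5$ and the centroids are 10 apart, so the index is $(0.5+0.5)/10 = 0.1$, consistent with "tighter and farther apart means lower DB".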
The Dunn index aims to identify dense and well-separated clusters. It is defined as the ratio of the minimal inter-cluster distance to the maximal intra-cluster distance. For each cluster partition, the Dunn index can be calculated by the following formula:[40]
where d(i,j) represents the distance between clusters i and j, and d'(k) measures the intra-cluster distance of cluster k. The inter-cluster distance d(i,j) between two clusters may be any of a number of distance measures, such as the distance between the centroids of the clusters. Similarly, the intra-cluster distance d'(k) may be measured in a variety of ways, such as the maximal distance between any pair of elements in cluster k. Since internal criteria seek clusters with high intra-cluster similarity and low inter-cluster similarity, algorithms that produce clusters with a high Dunn index are more desirable.
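A minimal sketch of this computation, using centroid distance for d(i,j) and the maximal pairwise distance for d'(k) (both are just one of the admissible choices mentioned above; names are illustrative):

```python
import numpy as np
from itertools import combinations

def dunn_index(points, labels):
    """Dunn index: minimal inter-cluster distance / maximal intra-cluster distance.
    Here d(i, j) is the distance between centroids and d'(k) is the maximal
    pairwise distance within cluster k."""
    points = np.asarray(points, dtype=float)
    labels = np.asarray(labels)
    ids = sorted(set(labels.tolist()))
    centroids = {i: points[labels == i].mean(axis=0) for i in ids}
    min_inter = min(np.linalg.norm(centroids[i] - centroids[j])
                    for i, j in combinations(ids, 2))
    max_intra = max(np.linalg.norm(p - q)
                    for i in ids
                    for p, q in combinations(points[labels == i], 2))
    return min_inter / max_intra
```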
The silhouette coefficient contrasts the average distance to elements in the same cluster with the average distance to elements in other clusters. Objects with a high silhouette value are considered well clustered, while objects with a low value may be outliers. This index works well with k-means clustering, and is also used to determine the optimal number of clusters.[41]
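The mean silhouette can be sketched as follows, assuming every cluster has at least two members (function and variable names are illustrative):

```python
import numpy as np

def mean_silhouette(points, labels):
    """Mean silhouette coefficient from pairwise Euclidean distances.
    a: mean distance to the other members of the point's own cluster;
    b: lowest mean distance to the members of any other cluster."""
    X = np.asarray(points, dtype=float)
    labels = np.asarray(labels)
    scores = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        same = labels == labels[i]
        a = d[same].sum() / (same.sum() - 1)  # exclude the point itself
        b = min(d[labels == l].mean()
                for l in set(labels.tolist()) if l != labels[i])
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))
```

Values close to 1 indicate well-separated clusters; values near 0 or below indicate overlapping or misassigned points.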
In external evaluation, clustering results are evaluated based on data that was not used for clustering, such as known class labels and external benchmarks. Such benchmarks consist of a set of pre-classified items, often created by (expert) humans. Thus, the benchmark sets can be thought of as a gold standard for evaluation.[35] These types of evaluation methods measure how close the clustering is to the predetermined benchmark classes. However, it has recently been discussed whether this is adequate for real data, or only for synthetic data sets with a factual ground truth, since classes can contain internal structure, the attributes present may not allow separation of clusters, or the classes may contain anomalies.[42] Additionally, from a knowledge discovery point of view, the reproduction of known knowledge may not necessarily be the intended result.[42] In the special scenario of constrained clustering, where meta information (such as class labels) is used already in the clustering process, the hold-out of information for evaluation purposes is non-trivial.[43]
A number of measures are adapted from variants used to evaluate classification tasks. In place of counting the number of times a class was correctly assigned to a single data point (known as true positives), such pair counting metrics assess whether each pair of data points that is truly in the same cluster is predicted to be in the same cluster.[35]
As with internal evaluation, several external evaluation measures exist,[39]: 125–129 for example:
Purity is a measure of the extent to which clusters contain a single class.[38] Its calculation can be thought of as follows: for each cluster, count the number of data points from the most common class in said cluster; then take the sum over all clusters and divide by the total number of data points. Formally, given some set of clusters {\displaystyle M} and some set of classes {\displaystyle D}, both partitioning {\displaystyle N} data points, purity can be defined as: {\displaystyle {\frac {1}{N}}\sum _{m\in M}\max _{d\in D}{|m\cap d|}}
This measure doesn't penalize having many clusters, and more clusters will make it easier to produce a high purity. A purity score of 1 is always possible by putting each data point in its own cluster. Also, purity doesn't work well for imbalanced data, where even poorly performing clustering algorithms will give a high purity value. For example, if a dataset of size 1000 consists of two classes, one containing 999 points and the other containing 1 point, then every possible partition will have a purity of at least 99.9%.
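The counting procedure described above can be sketched directly (function and variable names are illustrative):

```python
from collections import Counter

def purity(clusters, classes):
    """For each cluster, count the most common true class among its members;
    sum these counts and divide by the total number of data points."""
    per_cluster = {}
    for m, d in zip(clusters, classes):
        per_cluster.setdefault(m, Counter())[d] += 1
    return sum(c.most_common(1)[0][1] for c in per_cluster.values()) / len(classes)
```

The imbalanced example from the text is reproduced directly: with 999 points of one class and 1 of another, even a single all-in-one cluster scores 0.999.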
The Rand index[44] computes how similar the clusters (returned by the clustering algorithm) are to the benchmark classifications. It can be computed using the following formula: {\displaystyle RI={\frac {TP+TN}{TP+TN+FP+FN}}}
where {\displaystyle TP} is the number of true positives, {\displaystyle TN} is the number of true negatives, {\displaystyle FP} is the number of false positives, and {\displaystyle FN} is the number of false negatives. The instances being counted here are the numbers of correct pairwise assignments. That is, {\displaystyle TP} is the number of pairs of points that are clustered together in the predicted partition and in the ground truth partition, {\displaystyle FP} is the number of pairs of points that are clustered together in the predicted partition but not in the ground truth partition, etc. If the dataset is of size N, then {\displaystyle TP+TN+FP+FN={\binom {N}{2}}}.
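A minimal sketch of the pairwise counting and the resulting Rand index (names are illustrative):

```python
from itertools import combinations

def pair_confusion(truth, pred):
    """Classify every pair of points by whether the two points share a cluster
    in the ground truth and/or in the predicted partition."""
    tp = tn = fp = fn = 0
    for i, j in combinations(range(len(truth)), 2):
        same_truth = truth[i] == truth[j]
        same_pred = pred[i] == pred[j]
        if same_truth and same_pred:
            tp += 1
        elif not same_truth and not same_pred:
            tn += 1
        elif same_pred:  # together in the prediction only
            fp += 1
        else:            # together in the ground truth only
            fn += 1
    return tp, tn, fp, fn

def rand_index(truth, pred):
    tp, tn, fp, fn = pair_confusion(truth, pred)
    return (tp + tn) / (tp + tn + fp + fn)
```

The four counts always sum to N choose 2, as stated above.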
One issue with the Rand index is that false positives and false negatives are equally weighted. This may be an undesirable characteristic for some clustering applications. The F-measure addresses this concern,[citation needed] as does the chance-corrected adjusted Rand index.
The F-measure can be used to balance the contribution of false negatives by weighting recall through a parameter {\displaystyle \beta \geq 0}. Let precision and recall (both external evaluation measures in themselves) be defined as follows: {\displaystyle P={\frac {TP}{TP+FP}}} and {\displaystyle R={\frac {TP}{TP+FN}}}, where {\displaystyle P} is the precision rate and {\displaystyle R} is the recall rate. We can calculate the F-measure by using the following formula:[38] {\displaystyle F_{\beta }={\frac {(\beta ^{2}+1)\cdot P\cdot R}{\beta ^{2}\cdot P+R}}} When {\displaystyle \beta =0}, {\displaystyle F_{0}=P}. In other words, recall has no impact on the F-measure when {\displaystyle \beta =0}, and increasing {\displaystyle \beta } allocates an increasing amount of weight to recall in the final F-measure.
Also, {\displaystyle TN} is not taken into account and can vary from 0 upward without bound.
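Given the pairwise counts, the formula above is a one-liner; this is an illustrative helper, not a reference implementation:

```python
def f_measure(tp, fp, fn, beta=1.0):
    """F_beta over pairwise decisions: (beta^2 + 1) P R / (beta^2 P + R)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return (beta ** 2 + 1) * precision * recall / (beta ** 2 * precision + recall)
```

With beta = 0 the expression reduces to the precision, matching the limiting case discussed above.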
The Jaccard index is used to quantify the similarity between two datasets. The Jaccard index takes on a value between 0 and 1. An index of 1 means that the two datasets are identical, and an index of 0 indicates that the datasets have no common elements. The Jaccard index is defined by the following formula: {\displaystyle J(A,B)={\frac {|A\cap B|}{|A\cup B|}}={\frac {TP}{TP+FP+FN}}} This is simply the number of unique elements common to both sets divided by the total number of unique elements in both sets.
Note that {\displaystyle TN} is not taken into account.
The Dice symmetric measure doubles the weight on {\displaystyle TP} while still ignoring {\displaystyle TN}: {\displaystyle DSC={\frac {2TP}{2TP+FP+FN}}}
The Fowlkes–Mallows index[45] computes the similarity between the clusters returned by the clustering algorithm and the benchmark classifications. The higher the value of the Fowlkes–Mallows index, the more similar the clusters and the benchmark classifications are. It can be computed using the following formula: {\displaystyle FM={\sqrt {{\frac {TP}{TP+FP}}\cdot {\frac {TP}{TP+FN}}}}} where {\displaystyle TP} is the number of true positives, {\displaystyle FP} is the number of false positives, and {\displaystyle FN} is the number of false negatives. The {\displaystyle FM} index is the geometric mean of the precision and recall {\displaystyle P} and {\displaystyle R}, and is thus also known as the G-measure, while the F-measure is their harmonic mean.[46][47] Moreover, precision and recall are also known as Wallace's indices {\displaystyle B^{I}} and {\displaystyle B^{II}}.[48] Chance-normalized versions of recall, precision and G-measure correspond to Informedness, Markedness and Matthews Correlation, and relate strongly to Kappa.[49]
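The pair-counting measures above (Jaccard, Dice, Fowlkes–Mallows) share the same ingredients and can be sketched together as illustrative helpers over the pairwise counts TP, FP, FN:

```python
import math

def jaccard(tp, fp, fn):
    return tp / (tp + fp + fn)

def dice(tp, fp, fn):
    return 2 * tp / (2 * tp + fp + fn)

def fowlkes_mallows(tp, fp, fn):
    # geometric mean of pairwise precision and recall
    return math.sqrt((tp / (tp + fp)) * (tp / (tp + fn)))
```

None of the three uses TN, which is what distinguishes them from the Rand index.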
The Chi index[50] is an external validation index that measures clustering results by applying the chi-squared statistic. This index rewards clusterings in which the labels are as sparse as possible across the clusters, i.e., in which each cluster has as few different labels as possible. The higher the value of the Chi index, the greater the relationship between the resulting clusters and the labels used.
Mutual information is an information-theoretic measure of how much information is shared between a clustering and a ground-truth classification; it can detect a non-linear similarity between two clusterings. Normalized mutual information is a family of corrected-for-chance variants of this that has a reduced bias for varying cluster numbers.[35]
A confusion matrix can be used to quickly visualize the results of a classification (or clustering) algorithm. It shows how different a cluster is from the gold standard cluster.
The validity measure (V-measure for short) is a combined metric for homogeneity and completeness of the clusters.[51]
Measuring cluster tendency means measuring to what degree clusters exist in the data to be clustered; it may be performed as an initial test, before attempting clustering. One way to do this is to compare the data against random data. On average, random data should not have clusters[verification needed].
|
https://en.wikipedia.org/wiki/Cluster_analysis
|
Head/tail breaks is a clustering algorithm for data with a heavy-tailed distribution such as power laws and lognormal distributions. The heavy-tailed distribution can be simply described as the scaling pattern of far more small things than large ones, or alternatively numerous smallest, a very few largest, and some in between the smallest and largest. The classification is done by dividing things into large (called the head) and small (called the tail) things around the arithmetic mean or average, and then recursively continuing the division process for the large things (the head) until the notion of far more small things than large ones is no longer valid, or until only more or less similar things are left.[1] Head/tail breaks is not just for classification, but also for visualization of big data by keeping the head, since the head is self-similar to the whole. Head/tail breaks can be applied not only to vector data such as points, lines and polygons, but also to raster data like digital elevation models (DEM).
Head/tail breaks is motivated by the inability of conventional classification methods such as equal intervals, quantiles, geometric progressions, standard deviation, and natural breaks - commonly known as Jenks natural breaks optimization or k-means clustering - to reveal the underlying scaling or living structure with an inherent hierarchy (or heterogeneity) characterized by the recurring notion of far more small things than large ones.[2][3] Note that the notion of far more small things than large ones refers not only to geometric properties, but also to topological and semantic ones. In this connection, the notion should be interpreted as far more unpopular (or less-connected) things than popular (or well-connected) ones, or far more meaningless things than meaningful ones. Head/tail breaks uses the mean or average to dichotomize a dataset into small and large values, rather than characterizing classes by average values, unlike k-means clustering or natural breaks. Through head/tail breaks, a dataset is seen as a living structure with an inherent hierarchy of far more smalls than larges, or recursively perceived as the head of the head of the head and so on. It opens up new avenues of analyzing data from a holistic and organic point of view while considering different types of scales and scaling in spatial analysis.[4]
Given some variable X that demonstrates a heavy-tailed distribution, there are far more small x than large ones. Take the average of all xi and obtain the first mean m1. Then calculate the second mean for those xi greater than m1 and obtain m2. In the same recursive way we can get m3, and so on, until the ending condition of no longer far more small x than large ones is met. For simplicity, we assume there are three means, m1, m2, and m3. This classification leads to four classes: [minimum, m1], (m1, m2], (m2, m3], (m3, maximum]. In general, it can be represented as a recursive function as follows:
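The recursion just described can be sketched minimally, with the stopping condition expressed as a head-proportion threshold (40% in head/tail breaks 1.0, relaxable to 50% for many geographic features; function and variable names are illustrative):

```python
def head_tail_breaks(data, threshold=0.4):
    """Recursively split at the arithmetic mean, keeping only the head,
    while the head remains a minority (head fraction <= threshold)."""
    breaks = []
    head = list(data)
    while len(head) > 1:
        m = sum(head) / len(head)
        new_head = [x for x in head if x > m]
        if not new_head or len(new_head) / len(head) > threshold:
            break
        breaks.append(m)
        head = new_head
    return breaks  # class boundaries; ht-index = len(breaks) + 1
```

With the relaxed 50% threshold, the array (19, 8, 7, 6, 2, 1, 1, 1, 0) yields break values 5 and 10, i.e. three classes and an ht-index of 3, matching the hierarchy example given later in the article.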
The resulting number of classes is referred to as the ht-index, an alternative to fractal dimension for characterizing the complexity of fractals or geographic features: the higher the ht-index, the more complex the fractals.[5]
The criterion to stop the iterative classification process using the head/tail breaks method is that the remaining data (i.e., the head part) are not heavy-tailed, or simply that the head part is no longer a minority (i.e., the proportion of the head part is no longer less than a threshold such as 40%). This threshold is suggested to be 40% by Jiang et al. (2013),[6] i.e., length(head)/length(data) ≤ 40%. This process is called head/tail breaks 1.0. But sometimes a larger threshold, for example 50% or more, can be used, as Jiang and Yin (2014)[5] noted in another article: "this condition can be relaxed for many geographic features, such as 50 percent or even more". However, all heads' percentages on average must be smaller than 40% (or 41%, 42%), indicating far more small things than large ones. Much real-world data cannot be fitted to a perfect long-tailed distribution, so the threshold can be relaxed structurally. In head/tail breaks 2.0 the threshold only applies to the overall heads' percentage.[7] This means that the percentages of all heads relative to their tails should be around 40% on average. Individual classes can have any percentage split around the average, as long as this averages out as a whole. For example, if there is data distributed in such a way that it has a clearly defined head and tail during the first and second iteration (length(head)/length(data) < 20%) but a much less well defined long-tailed distribution for the third iteration (60% in the head), head/tail breaks 2.0 allows the iteration to continue into the fourth iteration, which can be distributed 30% head - 70% tail again, and so on. As long as the overall threshold is not surpassed, the head/tail breaks classification holds.
A good tool to display the scaling pattern, or the heavy-tailed distribution, is the rank-size plot, which is a scatter plot that displays a set of values according to their ranks. With this tool, a new index[8] termed the ratio of areas (RA) in a rank-size plot was defined to characterize the scaling pattern. The RA index has been successfully used in the estimation of traffic conditions. However, the RA index can only be used as a complementary method to the ht-index, because it is ineffective at capturing the scaling structure of geographic features.
In addition to the ht-index, the following indices are also derived with the head/tail breaks.
Instead of more or less similar things, there are far more small things than large ones surrounding us. Given the ubiquity of the scaling pattern, head/tail breaks is found to be of use to statistical mapping, map generalization, cognitive mapping and even perception of beauty.[6][12][13] It helps visualize big data, since big data are likely to show the scaling property of far more small things than large ones. Essentially, geographic phenomena can be scaleful or scale-free. Scaleful phenomena can be explained by conventional mathematical or geographical operations, but scale-free phenomena cannot. Head/tail breaks can be used to characterize the scale-free phenomena, which are in the majority.[14] The visualization strategy is to recursively drop out the tail parts until the head parts are clear or visible enough.[15][16] In addition, it helps delineate cities - or natural cities, to be more precise - from various geographic information such as street networks, social media geolocation data, and nighttime images.
As the head/tail breaks method can be used iteratively to obtain head parts of a data set, this method actually captures the underlying hierarchy of the data set. For example, if we divide the array (19, 8, 7, 6, 2, 1, 1, 1, 0) with the head/tail breaks method, we can get two head parts, i.e., the first head part (19, 8, 7, 6) and the second head part (19). These two head parts as well as the original array form a three-level hierarchy:
The number of levels of the above-mentioned hierarchy is actually a characterization of the imbalance of the example array, and this number of levels has been termed the ht-index.[5] With the ht-index, we are able to compare the degrees of imbalance of two data sets. For example, the ht-index of the example array (19, 8, 7, 6, 2, 1, 1, 1, 0) is 3, and the ht-index of another array (19, 8, 8, 8, 8, 8, 8, 8, 8) is 2. Therefore, the degree of imbalance of the former array is higher than that of the latter.
The use of fractals in modelling human geography has long been seen as useful for measuring the spatial distribution of human settlements.[17] Head/tail breaks can be used to do just that with a concept called natural cities. The term ‘natural cities’ refers to the human settlements or human activities in general on Earth's surface that are naturally or objectively defined and delineated from massive geographic information based on the head/tail division rule, a non-recursive form of head/tail breaks.[18][19] Such geographic information could come from various sources, such as massive numbers of street junctions[19] and street ends, a massive number of street blocks, nighttime imagery and social media users’ locations. Based on these, the different urban forms and configurations detected in cities can be derived.[20] Distinct from conventional cities, the adjective ‘natural’ is explained not only by the sources of natural cities, but also by the approach used to derive them.[1] Natural cities are derived from a meaningful cutoff averaged from a massive number of units extracted from geographic information.[15] Those units vary according to the kind of geographic information; for example, the units could be area units for street blocks and pixel values for nighttime images.[21] A natural cities model has been created using the ArcGIS model builder;[22] it follows the same process of deriving natural cities from location-based social media,[18] namely, building up a huge triangular irregular network (TIN) based on the point features (street nodes in this case) and regarding the triangles which are smaller than a mean value as the natural cities.
These natural cities can also be created from other open-access information like OpenStreetMap and further be used as an alternative delineation of administrative boundaries.[23] The scaling law can at the same time be correctly identified, and administrative borders can be drawn to respect it via the delineation of the natural cities.[24][25] This type of methodology can help urban geographers and planners by correctly identifying the effective urban territorial scope of the areas they work in.[26]
Natural cities can vary depending on the scale on which they are delineated, which is why optimally they should be based on data from the whole world. Since that is computationally impractical, a country or county scale is suggested as an alternative.[27] Due to the scale-free nature of natural cities and the data they are based on, there are also possibilities to use the natural cities method for further measurements. One of the main advantages of natural cities is that they are derived bottom-up instead of top-down: the borders are determined by the data of something physical rather than by an administrative government or administration.[28] For example, by calculating the natural cities of a natural city recursively, the dense areas within a natural city are identified. These can be seen as city centers, for example. By using the natural cities method in this way, further border delineations can be made depending on the scale the natural cities were generated from.[29] Natural cities derived from smaller regional areas will provide less accurate but still usable results in certain analyses, for example determining urban expansion over time.[30] As mentioned before, though, natural cities should optimally be based on a massive amount of, for example, street intersections for an entire country or even the world. This is because natural cities are based on the wisdom of crowds thinking, which needs the biggest set of available data for the best results. Also note that the structure of natural cities can be considered to be fractal in nature.[31]
It is important, when head/tail breaks are being used to generate natural cities, that the data is not aggregated afterwards. For example, the number of generated natural cities can only be known after they are generated. It is not possible to use a pre-defined number of cities for an area or country and aggregate the results of the natural cities to administratively determined city borders. Naturally, natural cities should follow Zipf's law; if they do not, the area is most likely too small, or the data has probably been processed wrongly. An example of this is seen in a study where head/tail breaks were used to extract natural cities, but they were aggregated to administrative borders, which led to the conclusion that the cities do not follow Zipf's law.[32] This happens more often in science, where papers produce results which are actually false.[33]
Current color renderings for DEMs or density maps are essentially based on conventional classifications such as natural breaks or equal intervals, so they disproportionately exaggerate high elevations or high densities. As a matter of fact, there are not so many high elevations or high-density locations.[34] It was found that coloring based on head/tail breaks is more favorable than coloring based on other classifications.[35][36][2]
The pattern of far more small things than large ones frequently recurs in geographical data. A spiral layout inspired by the golden ratio or Fibonacci sequence can help visualize this recursive notion of scaling hierarchy and the different levels of scale.[37][38]In other words, from the smallest to the largest scale, a map can be seen as a map of a map of a map, and so on.
Other applications of Head/tail breaks:
The following implementations are available under Free/Open Source Software licenses.
|
https://en.wikipedia.org/wiki/Head/tail_breaks
|
In data mining and machine learning, the k q-flats algorithm[1][2] is an iterative method which aims to partition m observations into k clusters where each cluster is close to a q-flat, where q is a given integer.
It is a generalization of the k-means algorithm. In the k-means algorithm, clusters are formed so that each cluster is close to one point, which is a 0-flat. The k q-flats algorithm gives better clustering results than the k-means algorithm for some data sets.
Given a set A of m observations {\displaystyle (a_{1},a_{2},\dots ,a_{m})} where each observation {\displaystyle a_{i}} is an n-dimensional real vector, the k q-flats algorithm aims to partition the m observation points by generating k q-flats that minimize the sum of the squares of the distances of each observation to its nearest q-flat.
A q-flat is a subset of {\displaystyle \mathbb {R} ^{n}} that is congruent to {\displaystyle \mathbb {R} ^{q}}. For example, a 0-flat is a point; a 1-flat is a line; a 2-flat is a plane; an {\displaystyle (n-1)}-flat is a hyperplane. A q-flat can be characterized by the solution set of a linear system of equations: {\displaystyle F=\left\{x\mid x\in \mathbb {R} ^{n},W'x=\gamma \right\}}, where {\displaystyle W\in \mathbb {R} ^{n\times (n-q)}} and {\displaystyle \gamma \in \mathbb {R} ^{(n-q)\times 1}}.
Denote a partition of {\displaystyle \{1,2,\dots ,m\}} as {\displaystyle S=(S_{1},S_{2},\dots ,S_{k})}. The problem can be formulated as
{\displaystyle \min _{F_{1},\dots ,F_{k},\,S}\sum _{l=1}^{k}\sum _{j\in S_{l}}\left\|a_{j}-P_{F_{l}}(a_{j})\right\|^{2}} (P1)
where {\displaystyle P_{F_{l}}(a_{j})} is the projection of {\displaystyle a_{j}} onto {\displaystyle F_{l}}. Note that {\displaystyle \|a_{j}-P_{F_{l}}(a_{j})\|=\operatorname {dist} (a_{j},F_{l})} is the distance from {\displaystyle a_{j}} to {\displaystyle F_{l}}.
The algorithm is similar to the k-means algorithm (i.e., Lloyd's algorithm) in that it alternates between cluster assignment and cluster update. Specifically, the algorithm starts with an initial set of q-flats {\displaystyle F_{l}^{(0)}=\left\{x\in R^{n}\mid \left(W_{l}^{(0)}\right)'x=\gamma _{l}^{(0)}\right\},l=1,\dots ,k}, and proceeds by alternating between the following two steps: cluster assignment, in which each point is assigned to its nearest q-flat, and cluster update, in which each q-flat is refitted to minimize the sum of squared distances to the points assigned to it.
Stop whenever the assignments no longer change.
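The two alternating steps can be sketched with each flat represented by a centroid plus an orthonormal basis obtained from the SVD (equivalently, PCA of the cluster's points). This is an illustrative sketch, not the reference implementation, and it assumes no cluster becomes empty during the iterations:

```python
import numpy as np

def fit_flat(points, q):
    # q-flat through the centroid, spanned by the top-q right singular vectors
    c = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - c, full_matrices=False)
    return c, vt[:q]  # (centroid, q x n orthonormal basis)

def dist_to_flat(x, flat):
    c, B = flat
    r = x - c
    return np.linalg.norm(r - B.T @ (B @ r))  # norm of residual after projection

def k_q_flats(A, k, q, init_assign, iters=100):
    assign = np.asarray(init_assign)
    for _ in range(iters):
        # cluster update: refit each flat to its currently assigned points
        flats = [fit_flat(A[assign == l], q) for l in range(k)]
        # cluster assignment: move each point to its nearest flat
        new = np.array([np.argmin([dist_to_flat(x, f) for f in flats]) for x in A])
        if np.array_equal(new, assign):
            break  # assignments stable: stop
        assign = new
    return assign, flats
```

For points lying exactly on two lines in the plane, a 2 1-flats run starting from the correct split converges immediately with zero residual, illustrating the finite-termination behavior discussed below.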
The cluster assignment step uses the following fact: given a q-flat {\displaystyle F_{l}=\{x\mid W'x=\gamma \}} and a vector a, where {\displaystyle W'W=I}, the distance from a to the q-flat {\displaystyle F_{l}} is {\displaystyle \operatorname {dist} (a,F_{l})=\min _{x:W'x=\gamma }\left\|x-a\right\|^{2}=\left\|W(W'W)^{-1}(W'a-\gamma )\right\|^{2}=\left\|W'a-\gamma \right\|^{2}.}
The key part of this algorithm is how to update the cluster, i.e., given m points, how to find a q-flat that minimizes the sum of squared distances of each point to the q-flat. Mathematically, this problem is: given {\displaystyle A\in R^{m\times n}}, solve the quadratic optimization problem {\displaystyle \min _{W,\gamma }\left\|AW-e\gamma '\right\|_{F}^{2},\quad {\text{subject to }}W'W=I,} (P2)
whereA∈Rm×n{\displaystyle A\in \mathbb {R} ^{m\times n}}is given, ande=(1,…,1)′∈Rm×1{\displaystyle e=(1,\dots ,1)'\in \mathbb {R} ^{m\times 1}}.
The problem can be solved using Lagrangian multiplier method and the solution is as given in the cluster update step.
It can be shown that the algorithm will terminate in a finite number of iterations (no more than the total number of possible assignments, which is bounded by {\displaystyle k^{m}}). In addition, the algorithm will terminate at a point where the overall objective cannot be decreased either by a different assignment or by defining new cluster planes for these clusters (such a point is called "locally optimal" in the references).
This convergence result is a consequence of the fact that problem (P2) can be solved exactly.
The same convergence result holds for the k-means algorithm because the cluster update problem can be solved exactly.
The k q-flats algorithm is a generalization of the k-means algorithm; in fact, the k-means algorithm is the k 0-flats algorithm, since a point is a 0-flat. Despite their connection, they should be used in different scenarios. The k q-flats algorithm is suited to cases where the data lie in a few low-dimensional spaces, whereas the k-means algorithm is preferable when the clusters are of the ambient dimension. For example, if all observations lie on two lines, the k q-flats algorithm with {\displaystyle q=1} may be used; if the observations are two Gaussian clouds, the k-means algorithm may be used.
Natural signals lie in a high-dimensional space. For example, the dimension of a 1024-by-1024 image is about 10^6, which is far too high for most signal processing algorithms. One way to get rid of the high dimensionality is to find a set of basis functions such that the high-dimensional signal can be represented by only a few of them. In other words, the coefficients of the signal representation lie in a low-dimensional space, where signal processing algorithms are easier to apply. In the literature, the wavelet transform is usually used in image processing, and the Fourier transform is usually used in audio processing. The set of basis functions is usually called a dictionary.
However, it is not clear what the best dictionary is for a given signal data set. One popular approach is to learn a dictionary from a given data set using the idea of sparse dictionary learning. It aims to find a dictionary such that the signal can be sparsely represented by the dictionary. The optimization problem minimizes the reconstruction error {\displaystyle \|X-BR\|_{F}^{2}} subject to a sparsity constraint on the columns of R, where X is the matrix whose columns are the signals, B is the dictionary, and R is the matrix of representation coefficients.
The idea of the k q-flats algorithm is similar to sparse dictionary learning in nature. If we restrict the q-flat to a q-dimensional subspace, then the k q-flats algorithm simply finds the closest q-dimensional subspace to a given signal. Sparse dictionary learning does the same thing, except for an additional constraint on the sparsity of the representation. Mathematically, it is possible to show that the k q-flats algorithm is of the form of sparse dictionary learning with an additional block structure on R.
Let {\displaystyle B_{k}} be a {\displaystyle d\times q} matrix, where the columns of {\displaystyle B_{k}} are a basis of the k-th flat. Then the projection of the signal x onto the k-th flat is {\displaystyle B_{k}r_{k}}, where {\displaystyle r_{k}} is a q-dimensional coefficient vector. Let {\displaystyle B=[B_{1},\cdots ,B_{K}]} denote the concatenation of the bases of the K flats; it is easy to show that the k q-flats algorithm is the same as the following.
The block structure of R refers to the fact that each signal is labeled to only one flat. Comparing the two formulations, k q-flats is the same as sparse dictionary modeling when {\displaystyle l=K\times q} and with an additional block structure on R. Users may refer to Szlam's paper[3] for more discussion about the relationship between the two concepts.
Classification is a procedure that classifies an input signal into different classes. One example is to classify an email into spam or non-spam classes. Classification algorithms usually require a supervised learning stage. In the supervised learning stage, training data for each class are used by the algorithm to learn the characteristics of the class. In the classification stage, a new observation is classified into a class by using the characteristics that were already trained.
The k q-flats algorithm can be used for classification. Suppose there are m classes in total. For each class, k flats are trained a priori on a training data set. When a new data point arrives, find the flat closest to it; the new point is then associated with the class of that flat.
However, classification performance can be further improved if we impose some structure on the flats. One possible choice is to require that flats from different classes be sufficiently far apart. Some researchers[4] have used this idea to develop a discriminative k q-flats algorithm.
Source:[3]
In the k q-flats algorithm, {\displaystyle \|x-P_{F}(x)\|^{2}} is used to measure the representation error, where {\displaystyle P_{F}(x)} denotes the projection of x onto the flat F. If the data lie in a q-dimensional flat, then a single q-flat can represent the data very well. On the contrary, if the data lie in a very high-dimensional space but near a common center, then the k-means algorithm is a better way than the k q-flats algorithm to represent the data, because the k-means algorithm uses {\displaystyle \|x-x_{c}\|^{2}} to measure the error, where {\displaystyle x_{c}} denotes the center. K-metrics is a generalization that uses both the idea of flats and means. In k-metrics, the error is measured by the following Mahalanobis metric.
‖x−y‖A2=(x−y)TA(x−y){\displaystyle \left\|x-y\right\|_{A}^{2}=(x-y)^{\mathsf {T}}A(x-y)}
where A is a positive semi-definite matrix.
If A is the identity matrix, then the Mahalanobis metric is exactly the error measure used in k-means. If A is not the identity matrix, then {\displaystyle \|x-y\|_{A}^{2}} will favor certain directions, as the k q-flats error measure does.
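The metric itself is a one-liner; a minimal sketch (the helper name is illustrative):

```python
import numpy as np

def mahalanobis_sq(x, y, A):
    """Squared Mahalanobis distance (x - y)^T A (x - y) for a PSD matrix A."""
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(d @ A @ d)
```

With A equal to the identity this reduces to the squared Euclidean distance, i.e. the k-means error measure; a non-identity A re-weights directions.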
|
https://en.wikipedia.org/wiki/K_q-flats
|
In data mining, k-means++[1][2] is an algorithm for choosing the initial values (or "seeds") for the k-means clustering algorithm. It was proposed in 2007 by David Arthur and Sergei Vassilvitskii, as an approximation algorithm for the NP-hard k-means problem - a way of avoiding the sometimes poor clusterings found by the standard k-means algorithm. It is similar to the first of three seeding methods proposed, in independent work, in 2006[3] by Rafail Ostrovsky, Yuval Rabani, Leonard Schulman and Chaitanya Swamy. (The distribution of the first seed is different.)
Thek-means problem is to find cluster centers that minimize the intra-class variance, i.e. the sum of squared distances from each data point being clustered to its cluster center (the center that is closest to it).
Although finding an exact solution to the k-means problem for arbitrary input is NP-hard,[4] the standard approach to finding an approximate solution (often called Lloyd's algorithm or the k-means algorithm) is used widely and frequently finds reasonable solutions quickly.
However, the k-means algorithm has at least two major theoretical shortcomings: its worst-case running time is superpolynomial in the input size, and the approximation it finds can be arbitrarily bad with respect to the objective function compared to the optimal clustering.
The k-means++ algorithm addresses the second of these obstacles by specifying a procedure to initialize the cluster centers before proceeding with the standard k-means optimization iterations.
With the k-means++ initialization, the algorithm is guaranteed to find a solution that is O(log k)-competitive with the optimal k-means solution.
To illustrate the potential of the k-means algorithm to perform arbitrarily poorly with respect to the objective function of minimizing the sum of squared distances of cluster points to the centroid of their assigned clusters, consider the example of four points in R2{\displaystyle \mathbb {R} ^{2}} that form an axis-aligned rectangle whose width is greater than its height.
If k=2{\displaystyle k=2} and the two initial cluster centers lie at the midpoints of the top and bottom line segments of the rectangle formed by the four data points, the k-means algorithm converges immediately, without moving these cluster centers. Consequently, the two bottom data points are clustered together and the two data points forming the top of the rectangle are clustered together—a suboptimal clustering because the width of the rectangle is greater than its height.
Consider now extending the rectangle in a horizontal direction to any desired width. The standard k-means algorithm will continue to cluster the points suboptimally, and by increasing the horizontal distance between the two data points in each cluster, we can make the algorithm perform arbitrarily poorly with respect to the k-means objective function.
The intuition behind this approach is that spreading out the k initial cluster centers is a good thing: the first cluster center is chosen uniformly at random from the data points that are being clustered, after which each subsequent cluster center is chosen from the remaining data points with probability proportional to its squared distance from the point's closest existing cluster center.
The exact algorithm is as follows: choose the first center uniformly at random from the data points; for each remaining point x, compute D(x), the distance to the nearest center already chosen; choose the next center at random from the data points, with probability proportional to the square of D(x); repeat until k centers have been chosen; then proceed with standard k-means using these centers as the initial seeds.
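The seeding procedure above can be sketched as follows. This is an illustrative implementation, not the authors' reference code; the function name and the toy data are invented for the example:

```python
import numpy as np

def kmeans_pp_seeds(X, k, seed=None):
    """k-means++ seeding: the first center is chosen uniformly at random;
    each further center is drawn with probability proportional to its
    squared distance to the nearest already-chosen center."""
    rng = np.random.default_rng(seed)
    n = len(X)
    centers = [X[rng.integers(n)]]
    for _ in range(k - 1):
        # squared distance of every point to its closest chosen center
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(n, p=d2 / d2.sum())])
    return np.array(centers)

# Two tight, well-separated groups: the second seed lands in the far group,
# because points in the first group have squared distance 0 to the first seed.
X = np.vstack([np.zeros((10, 2)), np.full((10, 2), 10.0)])
seeds = kmeans_pp_seeds(X, 2, seed=0)
```

After seeding, the returned centers would be passed to an ordinary Lloyd iteration.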
This seeding method yields considerable improvement in the final error of k-means. Although the initial selection in the algorithm takes extra time, the k-means part itself converges very quickly after this seeding, and thus the algorithm actually lowers the computation time. The authors tested their method with real and synthetic datasets and obtained typically 2-fold improvements in speed, and for certain datasets, close to 1000-fold improvements in error. In these simulations the new method almost always performed at least as well as vanilla k-means in both speed and error.
Additionally, the authors calculate an approximation ratio for their algorithm. The k-means++ algorithm guarantees an approximation ratio O(log k) in expectation (over the randomness of the algorithm), where k{\displaystyle k} is the number of clusters used. This is in contrast to vanilla k-means, which can generate clusterings arbitrarily worse than the optimum.[6] A generalization of the performance of k-means++ with respect to any arbitrary distance is provided in [7].
The k-means++ approach has been applied since its initial proposal. In a review by Shindler,[8] which includes many types of clustering algorithms, the method is said to successfully overcome some of the problems associated with other ways of defining initial cluster centres for k-means clustering. Lee et al.[9] report an application of k-means++ to create geographical clusters of photographs based on the latitude and longitude information attached to the photos. An application to financial diversification is reported by Howard and Johansen.[10] Other support for the method and ongoing discussion is also available online.[11] Since the k-means++ initialization needs k passes over the data, it does not scale very well to large data sets. Bahmani et al. have proposed a scalable variant of k-means++ called k-means|| (read as "k-means parallel") which provides the same theoretical guarantees and yet is highly scalable.[12]
|
https://en.wikipedia.org/wiki/K-means%2B%2B
|
The Linde–Buzo–Gray algorithm (named after its creators Yoseph Linde, Andrés Buzo and Robert M. Gray, who designed it in 1980)[1] is an iterative vector quantization algorithm to improve a small set of vectors (codebook) to represent a larger set of vectors (training set), such that it will be locally optimal. It combines Lloyd's algorithm with a splitting technique in which larger codebooks are built from smaller codebooks by splitting each code vector in two. The core idea of the algorithm is that by splitting the codebook such that all code vectors from the previous codebook are present, the new codebook must be as good as the previous one or better.[2]: 361–362
The Linde–Buzo–Gray algorithm may be implemented as follows:
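A minimal sketch of the splitting-plus-Lloyd structure might look like the following. The perturbation size eps, the fixed Lloyd iteration count, and the toy data are illustrative choices (a production implementation would iterate Lloyd until the distortion stops decreasing), and for simplicity the target codebook size is assumed to be a power of two:

```python
import numpy as np

def lloyd(X, codebook, iters=20):
    """Lloyd refinement: assign each training vector to its nearest code
    vector, then move each code vector to the centroid of its cell."""
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - codebook[None, :, :], axis=2)
        nearest = d.argmin(axis=1)
        for j in range(len(codebook)):
            cell = X[nearest == j]
            if len(cell):
                codebook[j] = cell.mean(axis=0)
    return codebook

def lbg(X, size, eps=0.1):
    """Linde-Buzo-Gray: start from the global centroid, then repeatedly
    split every code vector into a perturbed pair and refine with Lloyd."""
    codebook = X.mean(axis=0, keepdims=True)
    while len(codebook) < size:
        # keep both a slightly shifted copy and its mirror, so the old
        # code vectors are (approximately) still represented after the split
        codebook = np.vstack([codebook + eps, codebook - eps])
        codebook = lloyd(X, codebook)
    return codebook

# Two tight clusters around (0, 0) and (5, 5).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)), rng.normal(5.0, 0.1, (50, 2))])
cb = lbg(X, 2)
```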
|
https://en.wikipedia.org/wiki/Linde%E2%80%93Buzo%E2%80%93Gray_algorithm
|
A self-organizing map (SOM) or self-organizing feature map (SOFM) is an unsupervised machine learning technique used to produce a low-dimensional (typically two-dimensional) representation of a higher-dimensional data set while preserving the topological structure of the data. For example, a data set with p{\displaystyle p} variables measured in n{\displaystyle n} observations could be represented as clusters of observations with similar values for the variables. These clusters then could be visualized as a two-dimensional "map" such that observations in proximal clusters have more similar values than observations in distal clusters. This can make high-dimensional data easier to visualize and analyze.
An SOM is a type of artificial neural network but is trained using competitive learning rather than the error-correction learning (e.g., backpropagation with gradient descent) used by other artificial neural networks. The SOM was introduced by the Finnish professor Teuvo Kohonen in the 1980s and therefore is sometimes called a Kohonen map or Kohonen network.[1][2] The Kohonen map or network is a computationally convenient abstraction building on biological models of neural systems from the 1970s[3] and morphogenesis models dating back to Alan Turing in the 1950s.[4] SOMs create internal representations reminiscent of the cortical homunculus[citation needed], a distorted representation of the human body based on a neurological "map" of the areas and proportions of the human brain dedicated to processing sensory functions for different parts of the body.
Self-organizing maps, like most artificial neural networks, operate in two modes: training and mapping. First, training uses an input data set (the "input space") to generate a lower-dimensional representation of the input data (the "map space"). Second, mapping classifies additional input data using the generated map.
In most cases, the goal of training is to represent an input space with p dimensions as a map space with two dimensions. Specifically, an input space with p variables is said to have p dimensions. A map space consists of components called "nodes" or "neurons", which are arranged as a hexagonal or rectangular grid with two dimensions.[5] The number of nodes and their arrangement are specified beforehand based on the larger goals of the analysis and exploration of the data.
Each node in the map space is associated with a "weight" vector, which is the position of the node in the input space. While nodes in the map space stay fixed, training consists of moving weight vectors toward the input data (reducing a distance metric such as Euclidean distance) without spoiling the topology induced from the map space. After training, the map can be used to classify additional observations from the input space by finding the node with the closest weight vector (smallest distance metric) to the input-space vector.
The goal of learning in the self-organizing map is to cause different parts of the network to respond similarly to certain input patterns. This is partly motivated by how visual, auditory or other sensory information is handled in separate parts of the cerebral cortex in the human brain.[6]
The weights of the neurons are initialized either to small random values or sampled evenly from the subspace spanned by the two largest principal component eigenvectors. With the latter alternative, learning is much faster because the initial weights already give a good approximation of SOM weights.[7]
The network must be fed a large number of example vectors that represent, as closely as possible, the kinds of vectors expected during mapping. The examples are usually presented several times, as iterations.
The training utilizes competitive learning. When a training example is fed to the network, its Euclidean distance to all weight vectors is computed. The neuron whose weight vector is most similar to the input is called the best matching unit (BMU). The weights of the BMU and neurons close to it in the SOM grid are adjusted towards the input vector. The magnitude of the change decreases with time and with the grid-distance from the BMU. The update formula for a neuron v with weight vector Wv(s) is

Wv(s+1)=Wv(s)+θ(u,v,s)⋅α(s)⋅(D(t)−Wv(s)){\displaystyle W_{v}(s+1)=W_{v}(s)+\theta (u,v,s)\cdot \alpha (s)\cdot (D(t)-W_{v}(s))}
where s is the step index, t is an index into the training sample, u is the index of the BMU for the input vector D(t), α(s) is a monotonically decreasing learning coefficient, and θ(u, v, s) is the neighborhood function which gives the distance between the neuron u and the neuron v in step s.[8] Depending on the implementation, t can scan the training data set systematically (t is 0, 1, 2...T−1, then repeat, T being the training sample's size), be randomly drawn from the data set (bootstrap sampling), or implement some other sampling method (such as jackknifing).
The neighborhood function θ(u, v, s) (also called the function of lateral interaction) depends on the grid-distance between the BMU (neuron u) and neuron v. In the simplest form, it is 1 for all neurons close enough to the BMU and 0 for others, but the Gaussian and Mexican-hat[9] functions are common choices, too. Regardless of the functional form, the neighborhood function shrinks with time.[6] At the beginning, when the neighborhood is broad, the self-organizing takes place on the global scale. When the neighborhood has shrunk to just a couple of neurons, the weights converge to local estimates. In some implementations, the learning coefficient α and the neighborhood function θ decrease steadily with increasing s; in others (in particular those where t scans the training data set) they decrease in step-wise fashion, once every T steps.
This process is repeated for each input vector for a (usually large) number of cycles λ. The network winds up associating output nodes with groups or patterns in the input data set. If these patterns can be named, the names can be attached to the associated nodes in the trained net.
During mapping, there will be one single winning neuron: the neuron whose weight vector lies closest to the input vector. This can be simply determined by calculating the Euclidean distance between the input vector and the weight vector.
While representing input data as vectors has been emphasized in this article, any kind of object which can be represented digitally, which has an appropriate distance measure associated with it, and in which the necessary operations for training are possible can be used to construct a self-organizing map. This includes matrices, continuous functions or even other self-organizing maps.
The variable names, with vectors in bold, are as follows: s is the current iteration; λ is the iteration limit; t is the index of the target input data vector in the input data set; D(t) is a target input data vector; v is the index of the node in the map; Wv is the current weight vector of node v; u is the index of the best matching unit (BMU) in the map; θ(u, v, s) is the neighborhood function; and α(s) is the learning rate schedule.
The key design choices are the shape of the SOM, the neighbourhood function, and the learning rate schedule. The idea of the neighborhood function is to make it such that the BMU is updated the most, its immediate neighbors are updated a little less, and so on. The idea of the learning rate schedule is to make it so that the map updates are large at the start, and gradually stop updating.
For example, if we want to learn a SOM using a square grid, we can index it using(i,j){\displaystyle (i,j)}where bothi,j∈1:N{\displaystyle i,j\in 1:N}. The neighborhood function can make it so that the BMU updates in full, the nearest neighbors update in half, and their neighbors update in half again, etc.θ((i,j),(i′,j′),s)=12|i−i′|+|j−j′|={1ifi=i′,j=j′1/2if|i−i′|+|j−j′|=11/4if|i−i′|+|j−j′|=2⋯⋯{\displaystyle \theta ((i,j),(i',j'),s)={\frac {1}{2^{|i-i'|+|j-j'|}}}={\begin{cases}1&{\text{if }}i=i',j=j'\\1/2&{\text{if }}|i-i'|+|j-j'|=1\\1/4&{\text{if }}|i-i'|+|j-j'|=2\\\cdots &\cdots \end{cases}}}And we can use a simple linear learning rate scheduleα(s)=1−s/λ{\displaystyle \alpha (s)=1-s/\lambda }.
Notice in particular that the update rate does not depend on where the point is in the Euclidean space, only on where it is in the SOM itself. For example, the points (1,1),(1,2){\displaystyle (1,1),(1,2)} are close on the SOM, so they will always update in similar ways, even when they are far apart in the Euclidean space. In contrast, even if the points (1,1),(1,100){\displaystyle (1,1),(1,100)} end up overlapping each other (such as if the SOM looks like a folded towel), they still do not update in similar ways.
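The square-grid example above, with the Manhattan-distance neighborhood θ = 2^−(|i−i′|+|j−j′|) and the linear schedule α(s) = 1 − s/λ, can be sketched as follows. This is a hedged illustration, not a reference implementation: the grid size N, the iteration count, the random initialization, and the bootstrap sampling of training vectors are choices made here for the example (and the s-dependence of θ is absent, matching the example neighborhood function in the text):

```python
import numpy as np

def train_som(X, N=4, lam=200, seed=0):
    """Minimal SOM on an N x N square grid, using the neighborhood function
    theta = 2 ** -(grid Manhattan distance to the BMU) and the linear
    learning-rate schedule alpha(s) = 1 - s/lambda from the text."""
    rng = np.random.default_rng(seed)
    W = rng.random((N, N, X.shape[1]))      # one weight vector per grid node
    ii, jj = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    for s in range(lam):
        alpha = 1.0 - s / lam
        x = X[rng.integers(len(X))]         # bootstrap-sample a training vector
        # best matching unit: the node whose weight vector is closest to x
        d = np.linalg.norm(W - x, axis=2)
        u = np.unravel_index(d.argmin(), d.shape)
        # the update shrinks with grid distance to the BMU,
        # not with Euclidean distance in the input space
        theta = 0.5 ** (np.abs(ii - u[0]) + np.abs(jj - u[1]))
        W += alpha * theta[:, :, None] * (x - W)
    return W

W = train_som(np.random.default_rng(1).random((50, 2)))
```

Each update is a convex combination of the old weight and the input vector, so the trained weights remain inside the convex hull of the initial weights and the data.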
Selection of initial weights as good approximations of the final weights is a well-known problem for all iterative methods of artificial neural networks, including self-organizing maps. Kohonen originally proposed random initialization of weights.[10] (This approach is reflected by the algorithms described above.) More recently, principal component initialization, in which initial map weights are chosen from the space of the first principal components, has become popular due to the exact reproducibility of the results.[11]
A careful comparison of random initialization to principal component initialization for a one-dimensional map, however, found that the advantages of principal component initialization are not universal. The best initialization method depends on the geometry of the specific dataset. Principal component initialization was preferable (for a one-dimensional map) when the principal curve approximating the dataset could be univalently and linearly projected onto the first principal component (quasilinear sets). For nonlinear datasets, however, random initialization performed better.[12]
There are two ways to interpret a SOM. Because in the training phase weights of the whole neighborhood are moved in the same direction, similar items tend to excite adjacent neurons. Therefore, SOM forms a semantic map where similar samples are mapped close together and dissimilar ones apart. This may be visualized by a U-Matrix (Euclidean distance between weight vectors of neighboring cells) of the SOM.[14][15][16]
The other way is to think of neuronal weights as pointers to the input space. They form a discrete approximation of the distribution of training samples. More neurons point to regions with high training sample concentration and fewer where the samples are scarce.
SOM may be considered a nonlinear generalization of principal components analysis (PCA).[17] It has been shown, using both artificial and real geophysical data, that SOM has many advantages[18][19] over conventional feature extraction methods such as empirical orthogonal functions (EOF) or PCA. Additionally, researchers have found that clustering and PCA reflect different facets of the same local feedback circuit of the human brain, with the SOM providing the shared learning rules that guide both processes. In other words, clustering and PCA synergize via SOM.[20]
Originally, SOM was not formulated as a solution to an optimisation problem. Nevertheless, there have been several attempts to modify the definition of SOM and to formulate an optimisation problem which gives similar results.[21] For example, elastic maps use the mechanical metaphor of elasticity to approximate principal manifolds:[22] the analogy is an elastic membrane and plate.
|
https://en.wikipedia.org/wiki/Self-organizing_map
|
In (unconstrained) mathematical optimization, a backtracking line search is a line search method to determine the amount to move along a given search direction. Its use requires that the objective function is differentiable and that its gradient is known.
The method involves starting with a relatively large estimate of thestep sizefor movement along the line search direction, and iteratively shrinking the step size (i.e., "backtracking") until a decrease of the objective function is observed that adequately corresponds to the amount of decrease that is expected, based on the step size and the local gradient of the objective function. A common stopping criterion is theArmijo–Goldstein condition.
Backtracking line search is typically used for gradient descent (GD), but it can also be used in other contexts. For example, it can be used with Newton's method if the Hessian matrix is positive definite.
Given a starting positionx{\displaystyle \mathbf {x} }and a search directionp{\displaystyle \mathbf {p} }, the task of a line search is to determine a step sizeα>0{\displaystyle \alpha >0}that adequately reduces the objective functionf:Rn→R{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} }(assumedC1{\displaystyle C^{1}}i.e. continuously differentiable), i.e., to find a value ofα{\displaystyle \alpha }that reducesf(x+αp){\displaystyle f(\mathbf {x} +\alpha \,\mathbf {p} )}relative tof(x){\displaystyle f(\mathbf {x} )}. However, it is usually undesirable to devote substantial resources to finding a value ofα{\displaystyle \alpha }to precisely minimizef{\displaystyle f}. This is because the computing resources needed to find a more precise minimum along one particular direction could instead be employed to identify a better search direction. Once an improved starting point has been identified by the line search, another subsequent line search will ordinarily be performed in a new direction. The goal, then, is just to identify a value ofα{\displaystyle \alpha }that provides a reasonable amount of improvement in the objective function, rather than to find the actual minimizing value ofα{\displaystyle \alpha }.
The backtracking line search starts with a large estimate ofα{\displaystyle \alpha }and iteratively shrinks it. The shrinking continues until a value is found that is small enough to provide a decrease in the objective function that adequately matches the decrease that is expected to be achieved, based on the local function gradient∇f(x).{\displaystyle \nabla f(\mathbf {x} )\,.}
Define the local slope of the function ofα{\displaystyle \alpha }along the search directionp{\displaystyle \mathbf {p} }asm=∇f(x)Tp=⟨∇f(x),p⟩{\displaystyle m=\nabla f(\mathbf {x} )^{\mathrm {T} }\,\mathbf {p} =\langle \nabla f(\mathbf {x} ),\mathbf {p} \rangle }(where⟨⋅,⋅⟩{\displaystyle \langle \cdot ,\cdot \rangle }denotes thedot product). It is assumed thatp{\displaystyle \mathbf {p} }is a vector for which some local decrease is possible, i.e., it is assumed thatm<0{\displaystyle m<0}.
Based on a selected control parameterc∈(0,1){\displaystyle c\,\in \,(0,1)}, the Armijo–Goldstein condition tests whether a step-wise movement from a current positionx{\displaystyle \mathbf {x} }to a modified positionx+αp{\displaystyle \mathbf {x} +\alpha \,\mathbf {p} }achieves an adequately corresponding decrease in the objective function. The condition is fulfilled, seeArmijo (1966), iff(x+αp)≤f(x)+αcm.{\displaystyle f(\mathbf {x} +\alpha \,\mathbf {p} )\leq f(\mathbf {x} )+\alpha \,c\,m\,.}
This condition, when used appropriately as part of a line search, can ensure that the step size is not excessively large. However, this condition is not sufficient on its own to ensure that the step size is nearly optimal, since any value ofα{\displaystyle \displaystyle \alpha }that is sufficiently small will satisfy the condition.
Thus, the backtracking line search strategy starts with a relatively large step size, and repeatedly shrinks it by a factorτ∈(0,1){\displaystyle \tau \,\in \,(0,1)}until the Armijo–Goldstein condition is fulfilled.
The search will terminate after a finite number of steps for any positive values ofc{\displaystyle c}andτ{\displaystyle \tau }that are less than 1. For example, Armijo used1⁄2for bothc{\displaystyle c}andτ{\displaystyle \tau }inArmijo (1966).
This condition is from Armijo (1966). Starting with a maximum candidate step size value α0>0{\displaystyle \alpha _{0}>0\,}, using search control parameters τ∈(0,1){\displaystyle \tau \,\in \,(0,1)} and c∈(0,1){\displaystyle c\,\in \,(0,1)}, the backtracking line search algorithm can be expressed as follows:
In other words, reduceα0{\displaystyle \alpha _{0}}by a factor ofτ{\displaystyle \tau \,}in each iteration until the Armijo–Goldstein condition is fulfilled.
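The shrinking loop described above can be sketched as follows; the quadratic test function and the parameter values are illustrative, not from the source:

```python
import numpy as np

def backtracking(f, grad_f, x, p, alpha0=1.0, tau=0.5, c=0.5):
    """Shrink the step size by a factor tau until the Armijo-Goldstein
    condition f(x + alpha*p) <= f(x) + alpha*c*m holds, where
    m = grad_f(x) . p is the local slope along p (assumed negative)."""
    m = float(np.dot(grad_f(x), p))
    alpha = alpha0
    while f(x + alpha * p) > f(x) + alpha * c * m:
        alpha *= tau
    return alpha

f = lambda v: float(np.dot(v, v))    # f(x) = ||x||^2
grad = lambda v: 2.0 * v
x = np.array([1.0, 1.0])
p = -grad(x)                         # gradient-descent direction
alpha = backtracking(f, grad, x, p)  # halves alpha until Armijo holds
```

Termination is guaranteed for any c, τ in (0, 1) because m < 0 and f is continuously differentiable, as noted above.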
In practice, the above algorithm is typically iterated to produce a sequence xn{\displaystyle \mathbf {x} _{n}}, n=1,2,...{\displaystyle n=1,2,...}, that converges to a minimum, provided such a minimum exists and pn{\displaystyle \mathbf {p} _{n}} is selected appropriately in each step. For gradient descent, pn{\displaystyle \mathbf {p} _{n}} is selected as −∇f(xn){\displaystyle -\nabla f(\mathbf {x} _{n})}.
The value ofαj{\displaystyle \alpha _{j}}for thej{\displaystyle j}that fulfills the Armijo–Goldstein condition depends onx{\displaystyle \mathbf {x} }andp{\displaystyle \mathbf {p} }, and is thus denoted below byα(x,p){\displaystyle \alpha (\mathbf {x} ,\mathbf {p} )}. It also depends onf{\displaystyle f},α0{\displaystyle \alpha _{0}},τ{\displaystyle \tau }andc{\displaystyle c}of course, although these dependencies can be left implicit if they are assumed to be fixed with respect to the optimization problem.
The detailed steps are thus, see Armijo (1966), Bertsekas (2016):
To assure good behavior, some conditions must be satisfied by pn{\displaystyle \mathbf {p} _{n}}. Roughly speaking, pn{\displaystyle \mathbf {p} _{n}} should not be too far away from ∇f(xn){\displaystyle \nabla f(\mathbf {x} _{n})}. A precise version is as follows (see e.g. Bertsekas (2016)). There are constants C1,C2>0{\displaystyle C_{1},C_{2}>0} so that the following two conditions are satisfied:
This addresses the question of whether there is a systematic way to find a positive number β(x,p){\displaystyle \beta (\mathbf {x} ,\mathbf {p} )}, depending on the function f, the point x{\displaystyle \mathbf {x} } and the descent direction p{\displaystyle \mathbf {p} }, so that all learning rates α≤β(x,p){\displaystyle \alpha \leq \beta (\mathbf {x} ,\mathbf {p} )} satisfy Armijo's condition. When p=−∇f(x){\displaystyle \mathbf {p} =-\nabla f(\mathbf {x} )}, we can choose β(x,p){\displaystyle \beta (\mathbf {x} ,\mathbf {p} )} on the order of 1/L(x){\displaystyle 1/L(\mathbf {x} )\,}, where L(x){\displaystyle L(\mathbf {x} )\,} is a local Lipschitz constant for the gradient ∇f{\displaystyle \nabla f\,} near the point x{\displaystyle \mathbf {x} } (see Lipschitz continuity). If the function is C2{\displaystyle C^{2}}, then L(x){\displaystyle L(\mathbf {x} )\,} is close to the norm of the Hessian of the function at the point x{\displaystyle \mathbf {x} }. See Armijo (1966) for more detail.
In the same situation where p=−∇f(x){\displaystyle \mathbf {p} =-\nabla f(\mathbf {x} )}, an interesting question is how large the learning rates can be chosen in Armijo's condition (that is, when one has no limit on α0{\displaystyle \alpha _{0}} as defined in the section "Function minimization using backtracking line search in practice"), since larger learning rates when xn{\displaystyle \mathbf {x} _{n}} is closer to the limit point (if it exists) can make convergence faster. For example, in the Wolfe conditions, there is no mention of α0{\displaystyle \alpha _{0}} but another condition called the curvature condition is introduced.
An upper bound for learning rates is shown to exist if one wants the constructed sequence xn{\displaystyle \mathbf {x} _{n}} to converge to a non-degenerate critical point, see Truong & Nguyen (2020): the learning rates must be bounded from above roughly by ||H||×||H−1||2{\displaystyle ||H||\times ||H^{-1}||^{2}}. Here H is the Hessian of the function at the limit point, H−1{\displaystyle H^{-1}} is its inverse, and ||.||{\displaystyle ||.||} is the norm of a linear operator. Thus, this result applies for example when one uses backtracking line search for Morse functions. Note that in dimension 1, H{\displaystyle H} is a number and hence this upper bound is of the same size as the lower bound in the section "Lower bound for learning rates".
On the other hand, if the limit point is degenerate, then learning rates can be unbounded. For example, a modification of backtracking line search known as unbounded backtracking gradient descent (seeTruong & Nguyen (2020)) allows the learning rate to be half the size||∇f(xn)||−γ{\displaystyle ||\nabla f(\mathbf {x} _{n})||^{-\gamma }}, where1>γ>0{\displaystyle 1>\gamma >0}is a constant. Experiments with simple functions such asf(x,y)=x4+y4{\displaystyle f(x,y)=x^{4}+y^{4}}show that unbounded backtracking gradient descent converges much faster than the basic version described in the section "Function minimization using backtracking line search in practice".
An argument against the use of backtracking line search, in particular in large-scale optimisation, is that satisfying Armijo's condition is expensive. There is a way around this (so-called two-way backtracking), which has good theoretical guarantees and has been tested with good results on deep neural networks, see Truong & Nguyen (2020). (There, one can also find good/stable implementations of Armijo's condition and its combination with some popular algorithms such as Momentum and NAG, on datasets such as Cifar10 and Cifar100.) One observes that if the sequence xn{\displaystyle \mathbf {x} _{n}} converges (as desired when one makes use of an iterative optimisation method), then the sequence of learning rates αn{\displaystyle \alpha _{n}} should vary little when n is large enough. Therefore, in the search for αn{\displaystyle \alpha _{n}}, if one always starts from α0{\displaystyle \alpha _{0}}, one wastes a lot of time if it turns out that the sequence αn{\displaystyle \alpha _{n}} stays far away from α0{\displaystyle \alpha _{0}}. Instead, one should search for αn{\displaystyle \alpha _{n}} by starting from αn−1{\displaystyle \alpha _{n-1}}. The second observation is that αn{\displaystyle \alpha _{n}} could be larger than αn−1{\displaystyle \alpha _{n-1}}, and hence one should allow the learning rate to increase (and not just decrease, as in the section Algorithm). Here is the detailed algorithm for two-way backtracking: At step n
(In Nocedal & Wright (2000) one can find a description of an algorithm with 1), 3) and 4) above, which was not tested in deep neural networks before the cited paper.)
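The warm-start idea described in this section can be sketched as follows. This is a hedged illustration of the two observations above (start from α_{n−1}, and allow the rate to grow); the growth factor 1/τ and the cap at α_0 are assumptions of this sketch, not details taken from the paper:

```python
import numpy as np

def two_way_backtracking(f, grad_f, x, alpha_prev, alpha0=1.0, tau=0.5, c=0.5):
    """Sketch of two-way backtracking: warm-start the search at the previous
    step's learning rate alpha_prev.  If Armijo's condition already holds,
    try growing the rate by 1/tau (capped at alpha0, an assumption of this
    sketch); otherwise shrink by tau as in ordinary backtracking."""
    p = -grad_f(x)                      # gradient-descent direction
    m = float(np.dot(grad_f(x), p))
    armijo = lambda a: f(x + a * p) <= f(x) + a * c * m
    alpha = alpha_prev
    if armijo(alpha):
        # enlarge while the condition still holds and we stay below alpha0
        while alpha / tau <= alpha0 and armijo(alpha / tau):
            alpha /= tau
    else:
        while not armijo(alpha):
            alpha *= tau
    return alpha
```

When the iterates settle down, each call typically performs only a handful of Armijo checks, which is the saving this variant aims for.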
One can save further time by a hybrid mixture of two-way backtracking and the basic standard gradient descent algorithm. This procedure also has good theoretical guarantees and good test performance. Roughly speaking, we run two-way backtracking a few times, then use the learning rate we obtain unchanged, except when the function value increases. Here is precisely how it is done. One chooses in advance a number N{\displaystyle N} and a number m≤N{\displaystyle m\leq N}.
Compared with Wolfe's conditions, which are more complicated, Armijo's condition has a better theoretical guarantee. Indeed, so far backtracking line search and its modifications are the most theoretically guaranteed methods among all numerical optimization algorithms concerning convergence to critical points and avoidance of saddle points; see below.
Critical points are points where the gradient of the objective function is 0. Local minima are critical points, but there are critical points which are not local minima. An example is saddle points. Saddle points are critical points at which there is at least one direction in which the function is a (local) maximum. Therefore, these points are far from being local minima. For example, if a function has at least one saddle point, then it cannot be convex. The relevance of saddle points to optimisation algorithms is that in large-scale (i.e. high-dimensional) optimisation, one likely sees more saddle points than minima, see Bray & Dean (2007). Hence, a good optimisation algorithm should be able to avoid saddle points. In the setting of deep learning, saddle points are also prevalent, see Dauphin et al. (2014). Thus, to apply in deep learning, one needs results for non-convex functions.
For convergence to critical points: for example, if the cost function is a real analytic function, then it is shown in Absil, Mahony & Andrews (2005) that convergence is guaranteed. The main idea is to use the Łojasiewicz inequality, which is enjoyed by a real analytic function. For non-smooth functions satisfying the Łojasiewicz inequality, the above convergence guarantee is extended, see Attouch, Bolte & Svaiter (2011). In Bertsekas (2016), there is a proof that for every sequence constructed by backtracking line search, a cluster point (i.e. the limit of one subsequence, if the subsequence converges) is a critical point. For the case of a function with at most countably many critical points (such as a Morse function) and compact sublevels, as well as with Lipschitz continuous gradient where one uses standard GD with learning rate <1/L (see the section "Stochastic gradient descent"), convergence is guaranteed, see for example Chapter 12 in Lange (2013). Here the assumption about compact sublevels is to make sure that one deals with compact sets of the Euclidean space only. In the general case, where f{\displaystyle f} is only assumed to be C1{\displaystyle C^{1}} and have at most countably many critical points, convergence is guaranteed, see Truong & Nguyen (2020). In the same reference, convergence is similarly guaranteed for other modifications of backtracking line search (such as unbounded backtracking gradient descent mentioned in the section "Upper bound for learning rates"), and even if the function has uncountably many critical points one can still deduce some non-trivial facts about convergence behaviour.
In the stochastic setting, under the same assumption that the gradient is Lipschitz continuous and one uses a more restrictive version (requiring in addition that the sum of learning rates is infinite and the sum of squares of learning rates is finite) of the diminishing learning rate scheme (see section "Stochastic gradient descent"), and moreover the function is strictly convex, then convergence is established in the well-known result Robbins & Monro (1951); see Bertsekas & Tsitsiklis (2006) for generalisations to less restrictive versions of a diminishing learning rate scheme. None of these results (for non-convex functions) have been proven for any other optimization algorithm so far.[citation needed]
For avoidance of saddle points: for example, if the gradient of the cost function is Lipschitz continuous and one chooses standard GD with learning rate <1/L, then with a random choice of initial point x0{\displaystyle \mathbf {x} _{0}} (more precisely, outside a set of Lebesgue measure zero), the sequence constructed will not converge to a non-degenerate saddle point (proven in Lee et al. (2016)), and more generally it is also true that the sequence constructed will not converge to a degenerate saddle point (proven in Panageas & Piliouras (2017)). Under the same assumption that the gradient is Lipschitz continuous and one uses a diminishing learning rate scheme (see the section "Stochastic gradient descent"), avoidance of saddle points is established in Panageas, Piliouras & Wang (2019).
While it is trivial to mention, if the gradient of a cost function is Lipschitz continuous, with Lipschitz constant L, then choosing the learning rate to be constant and of size 1/L{\displaystyle 1/L} gives a special case of backtracking line search (for gradient descent). This has been used at least since Armijo (1966). This scheme, however, requires a good estimate for L; otherwise, if the learning rate is too big (relative to 1/L), the scheme has no convergence guarantee. One can see what will go wrong if the cost function is a smoothing (near the point 0) of the function f(t) = |t|. Such a good estimate is, however, difficult and laborious in large dimensions. Also, if the gradient of the function is not globally Lipschitz continuous, then this scheme has no convergence guarantee. For example, this is similar to an exercise in Bertsekas (2016): for the cost function f(t)=|t|1.5{\displaystyle f(t)=|t|^{1.5}\,}, and for whatever constant learning rate one chooses, with a random initial point the sequence constructed by this special scheme does not converge to the global minimum 0.
If one drops the requirement that the learning rate be bounded by 1/L, then this special scheme is much older, used at least since 1847 byCauchy; it can be called standard GD (not to be confused with stochastic gradient descent, which is abbreviated herein as SGD). In the stochastic setting (such as the mini-batch setting in deep learning), standard GD is calledstochastic gradient descent, or SGD.
Even if the cost function has a globally continuous gradient, a good estimate of the Lipschitz constant for the cost functions in deep learning may not be feasible or desirable, given the very high dimensions ofdeep neural networks. Hence, there is a technique of fine-tuning learning rates when applying standard GD or SGD. One way is to choose many learning rates from a grid search, with the hope that some of them give good results. (However, if the loss function does not have a globally Lipschitz continuous gradient, then the example withf(t)=|t|1.5{\displaystyle f(t)=|t|^{1.5}\,}above shows that grid search cannot help.) Another way is the so-called adaptive standard GD or SGD; some representatives are Adam, Adadelta, RMSProp and so on; see the article onStochastic gradient descent. In adaptive standard GD or SGD, learning rates are allowed to vary at each iterate step n, but in a different manner from backtracking line search for gradient descent. Backtracking line search is apparently more expensive, since one needs to do a loop search until Armijo's condition is satisfied, while for adaptive standard GD or SGD no loop search is needed. Most of these adaptive standard GD or SGD methods do not have the descent propertyf(xn+1)≤f(xn){\displaystyle f(x_{n+1})\leq f(x_{n})}for all n, which backtracking line search for gradient descent has. Only a few have this property together with good theoretical properties, and they turn out to be special cases of backtracking line search or, more generally, of Armijo's conditionArmijo (1966). The first is when one chooses the learning rate to be a constant <1/L, as mentioned above, if one has a good estimate of L. The second is the so-called diminishing learning rate, used in the well-known paper byRobbins & Monro (1951), if again the function has a globally Lipschitz continuous gradient (though the Lipschitz constant may be unknown) and the learning rates converge to 0.
In summary, backtracking line search (and its modifications) is a method which is easy to implement, is applicable to very general functions, has very good theoretical guarantees (for both convergence to critical points and avoidance of saddle points), and works well in practice. Several other methods with good theoretical guarantees, such as diminishing learning rates or standard GD with learning rate <1/L – both of which require the gradient of the objective function to be Lipschitz continuous – turn out to be special cases of backtracking line search, or to satisfy Armijo's condition. Even thougha priorione needs the cost function to be continuously differentiable to apply this method, in practice one can apply it successfully also to functions which are continuously differentiable on a dense open subset, such asf(t)=|t|{\displaystyle f(t)=|t|}orf(t)=ReLu(t)=max{t,0}{\displaystyle f(t)=ReLu(t)=\max\{t,0\}}.
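As a concrete illustration of the method just summarized, here is a minimal sketch of gradient descent with backtracking line search under Armijo's condition. Function names and the constants (initial step, shrink factor, Armijo constant) are illustrative choices, not prescribed by the source.

```python
import numpy as np

def backtracking_gd(f, grad, x0, alpha0=1.0, c=1e-4, tau=0.5, tol=1e-10, max_iter=5000):
    """Gradient descent with backtracking line search (Armijo's condition).

    Starting from the trial step alpha0, the step is shrunk by the factor tau
    until f(x - alpha*g) <= f(x) - c*alpha*||g||^2, which guarantees the
    descent property f(x_{n+1}) <= f(x_n).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        alpha = alpha0
        fx = f(x)
        # Backtracking loop: shrink alpha until Armijo's condition holds.
        while f(x - alpha * g) > fx - c * alpha * np.dot(g, g):
            alpha *= tau
        x = x - alpha * g
    return x

# Example: minimize the quadratic f(x) = 0.5 x^T A x - b^T x,
# whose unique minimizer is the solution of A x = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_min = backtracking_gd(lambda x: 0.5 * x @ A @ x - b @ x,
                        lambda x: A @ x - b, [0.0, 0.0])
```

Note that no estimate of the Lipschitz constant L is needed: the backtracking loop finds an acceptable step size automatically.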
|
https://en.wikipedia.org/wiki/Backtracking_line_search
|
Inmathematics, theconjugate gradient methodis analgorithmfor thenumerical solutionof particularsystems of linear equations, namely those whose matrix ispositive-semidefinite. The conjugate gradient method is often implemented as aniterative algorithm, applicable tosparsesystems that are too large to be handled by a direct implementation or other direct methods such as theCholesky decomposition. Large sparse systems often arise when numerically solvingpartial differential equationsor optimization problems.
The conjugate gradient method can also be used to solve unconstrainedoptimizationproblems such asenergy minimization. It is commonly attributed toMagnus HestenesandEduard Stiefel,[1][2]who programmed it on theZ4,[3]and extensively researched it.[4][5]
Thebiconjugate gradient methodprovides a generalization to non-symmetric matrices. Variousnonlinear conjugate gradient methodsseek minima of nonlinear optimization problems.
Suppose we want to solve thesystem of linear equations
for the vectorx{\displaystyle \mathbf {x} }, where the knownn×n{\displaystyle n\times n}matrixA{\displaystyle \mathbf {A} }issymmetric(i.e.,AT=A{\displaystyle \mathbf {A} ^{\mathsf {T}}=\mathbf {A} }),positive-definite(i.e.xTAx>0{\displaystyle \mathbf {x} ^{\mathsf {T}}\mathbf {Ax} >0}for all non-zero vectorsx{\displaystyle \mathbf {x} }inRn{\displaystyle \mathbb {R} ^{n}}), andreal, andb{\displaystyle \mathbf {b} }is known as well. We denote the unique solution of this system byx∗{\displaystyle \mathbf {x} _{*}}.
The conjugate gradient method can be derived from several different perspectives, including specialization of the conjugate direction method for optimization, and variation of theArnoldi/Lanczositeration foreigenvalueproblems. Despite differences in their approaches, these derivations share a common topic—proving the orthogonality of the residuals and conjugacy of the search directions. These two properties are crucial to developing the well-known succinct formulation of the method.
We say that two non-zero vectorsu{\displaystyle \mathbf {u} }andv{\displaystyle \mathbf {v} }are conjugate (with respect toA{\displaystyle \mathbf {A} }) if
SinceA{\displaystyle \mathbf {A} }is symmetric and positive-definite, the left-hand side defines aninner product
Two vectors are conjugate if and only if they are orthogonal with respect to this inner product. Being conjugate is a symmetric relation: ifu{\displaystyle \mathbf {u} }is conjugate tov{\displaystyle \mathbf {v} }, thenv{\displaystyle \mathbf {v} }is conjugate tou{\displaystyle \mathbf {u} }. Suppose that
is a set ofn{\displaystyle n}mutually conjugate vectors with respect toA{\displaystyle \mathbf {A} }, i.e.piTApj=0{\displaystyle \mathbf {p} _{i}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{j}=0}for alli≠j{\displaystyle i\neq j}.
ThenP{\displaystyle P}forms abasisforRn{\displaystyle \mathbb {R} ^{n}}, and we may express the solutionx∗{\displaystyle \mathbf {x} _{*}}ofAx=b{\displaystyle \mathbf {Ax} =\mathbf {b} }in this basis:
Left-multiplying the problemAx=b{\displaystyle \mathbf {Ax} =\mathbf {b} }with the vectorpkT{\displaystyle \mathbf {p} _{k}^{\mathsf {T}}}yields
and so
This gives the following method[4]for solving the equationAx=b{\displaystyle \mathbf {Ax} =\mathbf {b} }: find a sequence ofn{\displaystyle n}conjugate directions, and then compute the coefficientsαk{\displaystyle \alpha _{k}}.
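The direct method just described can be sketched as follows, building a conjugate set by Gram–Schmidt orthogonalization in the A-inner product (an illustrative construction starting from the standard basis; any mutually conjugate basis would do):

```python
import numpy as np

def conjugate_directions_solve(A, b):
    """Solve A x = b by expanding x in a basis of mutually A-conjugate vectors.

    The directions are built from the standard basis by Gram-Schmidt
    orthogonalization with respect to the inner product <u, v>_A = u^T A v.
    Then x = sum_k alpha_k p_k with alpha_k = p_k^T b / (p_k^T A p_k).
    """
    n = len(b)
    directions = []
    for k in range(n):
        p = np.zeros(n)
        p[k] = 1.0  # start from the k-th standard basis vector
        # Remove the A-components along all previously built directions.
        for q in directions:
            p -= (q @ A @ p) / (q @ A @ q) * q
        directions.append(p)
    x = np.zeros(n)
    for p in directions:
        x += (p @ b) / (p @ A @ p) * p
    return x

# Illustrative symmetric positive-definite system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_directions_solve(A, b)
```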
If we choose the conjugate vectorspk{\displaystyle \mathbf {p} _{k}}carefully, then we may not need all of them to obtain a good approximation to the solutionx∗{\displaystyle \mathbf {x} _{*}}. So, we want to regard the conjugate gradient method as an iterative method. This also allows us to approximately solve systems wheren{\displaystyle n}is so large that the direct method would take too much time.
We denote the initial guess forx∗{\displaystyle \mathbf {x} _{*}}byx0{\displaystyle \mathbf {x} _{0}}(we can assume without loss of generality thatx0=0{\displaystyle \mathbf {x} _{0}=\mathbf {0} }, otherwise consider the systemAz=b−Ax0{\displaystyle \mathbf {Az} =\mathbf {b} -\mathbf {Ax} _{0}}instead). Starting withx0{\displaystyle \mathbf {x} _{0}}we search for the solution and in each iteration we need a metric to tell us whether we are closer to the solutionx∗{\displaystyle \mathbf {x} _{*}}(that is unknown to us). This metric comes from the fact that the solutionx∗{\displaystyle \mathbf {x} _{*}}is also the unique minimizer of the followingquadratic function
The existence of a unique minimizer is apparent as itsHessian matrixof second derivatives is symmetric positive-definite
and that the minimizer (useDf(x)=0{\displaystyle Df(\mathbf {x} )=0}) solves the initial problem follows from its first derivative
This suggests taking the first basis vectorp0{\displaystyle \mathbf {p} _{0}}to be the negative of the gradient off{\displaystyle f}atx=x0{\displaystyle \mathbf {x} =\mathbf {x} _{0}}. The gradient off{\displaystyle f}equalsAx−b{\displaystyle \mathbf {Ax} -\mathbf {b} }. Starting with an initial guessx0{\displaystyle \mathbf {x} _{0}}, this means we takep0=b−Ax0{\displaystyle \mathbf {p} _{0}=\mathbf {b} -\mathbf {Ax} _{0}}. The other vectors in the basis will be conjugate to the gradient, hence the nameconjugate gradient method. Note thatp0{\displaystyle \mathbf {p} _{0}}is also theresidualprovided by this initial step of the algorithm.
Letrk{\displaystyle \mathbf {r} _{k}}be theresidualat thek{\displaystyle k}th step:
As observed above,rk{\displaystyle \mathbf {r} _{k}}is the negative gradient off{\displaystyle f}atxk{\displaystyle \mathbf {x} _{k}}, so thegradient descentmethod would require moving in the directionrk. Here, however, we insist that the directionspk{\displaystyle \mathbf {p} _{k}}must be conjugate to each other. A practical way to enforce this is to require that the next search direction be built out of the current residual and all previous search directions. The conjugation constraint is an orthonormality-type constraint, so the algorithm can be viewed as an example ofGram-Schmidt orthonormalization. This gives the following expression:
(see the picture at the top of the article for the effect of the conjugacy constraint on convergence). Following this direction, the next optimal location is given by
with
where the last equality follows from the definition ofrk{\displaystyle \mathbf {r} _{k}}.
The expression forαk{\displaystyle \alpha _{k}}can be derived by substituting the expression forxk+1intofand minimizing it with respect toαk{\displaystyle \alpha _{k}}
The above algorithm gives the most straightforward explanation of the conjugate gradient method. Seemingly, the algorithm as stated requires storage of all previous searching directions and residual vectors, as well as many matrix–vector multiplications, and thus can be computationally expensive. However, a closer analysis of the algorithm shows thatri{\displaystyle \mathbf {r} _{i}}is orthogonal torj{\displaystyle \mathbf {r} _{j}}, i.e.riTrj=0{\displaystyle \mathbf {r} _{i}^{\mathsf {T}}\mathbf {r} _{j}=0}, fori≠j{\displaystyle i\neq j}. Andpi{\displaystyle \mathbf {p} _{i}}isA{\displaystyle \mathbf {A} }-orthogonal topj{\displaystyle \mathbf {p} _{j}}, i.e.piTApj=0{\displaystyle \mathbf {p} _{i}^{\mathsf {T}}\mathbf {A} \mathbf {p} _{j}=0}, fori≠j{\displaystyle i\neq j}. This can be interpreted as follows: as the algorithm progresses,pi{\displaystyle \mathbf {p} _{i}}andri{\displaystyle \mathbf {r} _{i}}span the sameKrylov subspace, whereri{\displaystyle \mathbf {r} _{i}}form the orthogonal basis with respect to the standard inner product, andpi{\displaystyle \mathbf {p} _{i}}form the orthogonal basis with respect to the inner product induced byA{\displaystyle \mathbf {A} }. Therefore,xk{\displaystyle \mathbf {x} _{k}}can be regarded as the projection ofx{\displaystyle \mathbf {x} }on the Krylov subspace.
That is, if the CG method starts withx0=0{\displaystyle \mathbf {x} _{0}=0}, then[6]xk=argminy∈Rn{(x−y)⊤A(x−y):y∈span{b,Ab,…,Ak−1b}}{\displaystyle x_{k}=\mathrm {argmin} _{y\in \mathbb {R} ^{n}}{\left\{(x-y)^{\top }A(x-y):y\in \operatorname {span} \left\{b,Ab,\ldots ,A^{k-1}b\right\}\right\}}}The algorithm is detailed below for solvingAx=b{\displaystyle \mathbf {A} \mathbf {x} =\mathbf {b} }whereA{\displaystyle \mathbf {A} }is a real, symmetric, positive-definite matrix. The input vectorx0{\displaystyle \mathbf {x} _{0}}can be an approximate initial solution or0{\displaystyle \mathbf {0} }. It is a different formulation of the exact procedure described above.
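The algorithm box from the source is not reproduced in this text; a minimal sketch of the resulting procedure, under the assumptions above (A real, symmetric, positive-definite), is:

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Conjugate gradient for A x = b with A real symmetric positive-definite.

    Uses the recurrence r_{k+1} = r_k - alpha_k A p_k and the
    Fletcher-Reeves-style beta_k = (r_{k+1}^T r_{k+1}) / (r_k^T r_k).
    """
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    r = b - A @ x          # initial residual
    p = r.copy()           # first search direction is the residual
    rs_old = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap  # implicit residual update, saves one A-multiply
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # next direction, conjugate to the previous ones
        rs_old = rs_new
    return x

# Illustrative symmetric positive-definite system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```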
This is the most commonly used algorithm. The same formula forβk{\displaystyle \beta _{k}}is also used in the Fletcher–Reevesnonlinear conjugate gradient method.
We note thatx1{\displaystyle \mathbf {x} _{1}}is computed by thegradient descentmethod applied tox0{\displaystyle \mathbf {x} _{0}}. Settingβk=0{\displaystyle \beta _{k}=0}would similarly makexk+1{\displaystyle \mathbf {x} _{k+1}}computed by thegradient descentmethod fromxk{\displaystyle \mathbf {x} _{k}}; i.e., it can be used as a simple implementation of a restart of the conjugate gradient iterations.[4]Restarts could slow down convergence, but may improve stability if the conjugate gradient method misbehaves, e.g., due toround-off error.
The formulasxk+1:=xk+αkpk{\displaystyle \mathbf {x} _{k+1}:=\mathbf {x} _{k}+\alpha _{k}\mathbf {p} _{k}}andrk:=b−Axk{\displaystyle \mathbf {r} _{k}:=\mathbf {b} -\mathbf {Ax} _{k}}, which both hold in exact arithmetic, make the formulasrk+1:=rk−αkApk{\displaystyle \mathbf {r} _{k+1}:=\mathbf {r} _{k}-\alpha _{k}\mathbf {Ap} _{k}}andrk+1:=b−Axk+1{\displaystyle \mathbf {r} _{k+1}:=\mathbf {b} -\mathbf {Ax} _{k+1}}mathematically equivalent. The former is used in the algorithm to avoid an extra multiplication byA{\displaystyle \mathbf {A} }since the vectorApk{\displaystyle \mathbf {Ap} _{k}}is already computed to evaluateαk{\displaystyle \alpha _{k}}. The latter may be more accurate, substituting the explicit calculationrk+1:=b−Axk+1{\displaystyle \mathbf {r} _{k+1}:=\mathbf {b} -\mathbf {Ax} _{k+1}}for the implicit one by the recursion subject toround-off erroraccumulation, and is thus recommended for an occasional evaluation.[7]
A norm of the residual is typically used for stopping criteria. The norm of the explicit residualrk+1:=b−Axk+1{\displaystyle \mathbf {r} _{k+1}:=\mathbf {b} -\mathbf {Ax} _{k+1}}provides a guaranteed level of accuracy both in exact arithmetic and in the presence of therounding errors, where convergence naturally stagnates. In contrast, the implicit residualrk+1:=rk−αkApk{\displaystyle \mathbf {r} _{k+1}:=\mathbf {r} _{k}-\alpha _{k}\mathbf {Ap} _{k}}is known to keep getting smaller in amplitude well below the level ofrounding errorsand thus cannot be used to determine the stagnation of convergence.
In the algorithm,αk{\displaystyle \alpha _{k}}is chosen such thatrk+1{\displaystyle \mathbf {r} _{k+1}}is orthogonal tork{\displaystyle \mathbf {r} _{k}}. The denominator is simplified from
sincerk+1=pk+1−βkpk{\displaystyle \mathbf {r} _{k+1}=\mathbf {p} _{k+1}-\mathbf {\beta } _{k}\mathbf {p} _{k}}. Theβk{\displaystyle \beta _{k}}is chosen such thatpk+1{\displaystyle \mathbf {p} _{k+1}}is conjugate topk{\displaystyle \mathbf {p} _{k}}. Initially,βk{\displaystyle \beta _{k}}is
using
and equivalently
Apk=1αk(rk−rk+1),{\displaystyle \mathbf {A} \mathbf {p} _{k}={\frac {1}{\alpha _{k}}}(\mathbf {r} _{k}-\mathbf {r} _{k+1}),}
the numerator ofβk{\displaystyle \beta _{k}}is rewritten as
becauserk+1{\displaystyle \mathbf {r} _{k+1}}andrk{\displaystyle \mathbf {r} _{k}}are orthogonal by design. The denominator is rewritten as
using that the search directionspk{\displaystyle \mathbf {p} _{k}}are conjugated and again that the residuals are orthogonal. This gives theβ{\displaystyle \beta }in the algorithm after cancellingαk{\displaystyle \alpha _{k}}.
Consider the linear systemAx=bgiven by
we will perform two steps of the conjugate gradient method beginning with the initial guess
in order to find an approximate solution to the system.
For reference, the exact solution is
Our first step is to calculate the residual vectorr0associated withx0. This residual is computed from the formular0=b-Ax0, and in our case is equal to
Since this is the first iteration, we will use the residual vectorr0as our initial search directionp0; the method of selectingpkwill change in further iterations.
We now compute the scalarα0using the relationship
We can now computex1using the formula
This result completes the first iteration, the result being an "improved" approximate solution to the system,x1. We may now move on and compute the next residual vectorr1using the formula
Our next step in the process is to compute the scalarβ0that will eventually be used to determine the next search directionp1.
Now, using this scalarβ0, we can compute the next search directionp1using the relationship
We now compute the scalarα1using our newly acquiredp1using the same method as that used forα0.
Finally, we findx2using the same method as that used to findx1.
The result,x2, is a "better" approximation to the system's solution thanx1andx0. If exact arithmetic were to be used in this example instead of limited-precision, then the exact solution would theoretically have been reached aftern= 2 iterations (nbeing the order of the system).
Under exact arithmetic, the number of iterations required is no more than the order of the matrix. This behavior is known as thefinite termination propertyof the conjugate gradient method. It refers to the method's ability to reach the exact solution of a linear system in a finite number of steps—at most equal to the dimension of the system—when exact arithmetic is used. This property arises from the fact that, at each iteration, the method generates a residual vector that is orthogonal to all previous residuals. These residuals form a mutually orthogonal set.
In ann{\displaystyle n}-dimensional space, it is impossible to construct more thann{\displaystyle n}linearly independent and mutually orthogonal vectors unless one of them is the zero vector. Therefore, once a zero residual appears, the method has reached the solution and must terminate. This ensures that the conjugate gradient method converges in at mostn{\displaystyle n}steps.
To demonstrate this, consider the system:
A=[3−2−24],b=[11]{\displaystyle A={\begin{bmatrix}3&-2\\-2&4\end{bmatrix}},\quad \mathbf {b} ={\begin{bmatrix}1\\1\end{bmatrix}}}
We start from an initial guessx0=[12]{\displaystyle \mathbf {x} _{0}={\begin{bmatrix}1\\2\end{bmatrix}}}. SinceA{\displaystyle A}is symmetric positive-definite and the system is 2-dimensional, the conjugate gradient method should find the exact solution in no more than 2 steps. A short program can demonstrate this behavior numerically.
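In place of the original MATLAB listing, a Python/NumPy sketch performing two conjugate gradient iterations on this system is:

```python
import numpy as np

A = np.array([[3.0, -2.0], [-2.0, 4.0]])
b = np.array([1.0, 1.0])
x = np.array([1.0, 2.0])   # initial guess x0

r = b - A @ x
p = r.copy()
for k in range(2):         # at most n = 2 iterations for a 2x2 SPD system
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)
    x = x + alpha * p
    r_new = r - alpha * Ap
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new

# After two iterations the residual b - A x vanishes (up to round-off),
# illustrating the finite termination property.
```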
The output confirms that the method reaches the exact solution after two iterations, consistent with the theoretical prediction. This example illustrates how the conjugate gradient method behaves as a direct method under idealized conditions.
The finite termination property also has practical implications in solving large sparse systems, which frequently arise in scientific and engineering applications. For instance, discretizing the two-dimensional Laplace equation∇2u=0{\displaystyle \nabla ^{2}u=0}using finite differences on a uniform grid leads to a sparse linear systemAx=b{\displaystyle A\mathbf {x} =\mathbf {b} }, whereA{\displaystyle A}is symmetric and positive definite.
Using a5×5{\displaystyle 5\times 5}interior grid yields a25×25{\displaystyle 25\times 25}system, and the coefficient matrixA{\displaystyle A}has a five-point stencil pattern. Each row ofA{\displaystyle A}contains at most five nonzero entries corresponding to the central point and its immediate neighbors. For example, the matrix generated from such a grid may look like:
A=[4−10⋯−10⋯−14−1⋯00⋯0−14−100⋯⋮⋮⋱⋱⋱⋮−10⋯−14−1⋯00⋯0−14⋯⋮⋮⋯⋯⋯⋱]{\displaystyle A={\begin{bmatrix}4&-1&0&\cdots &-1&0&\cdots \\-1&4&-1&\cdots &0&0&\cdots \\0&-1&4&-1&0&0&\cdots \\\vdots &\vdots &\ddots &\ddots &\ddots &\vdots \\-1&0&\cdots &-1&4&-1&\cdots \\0&0&\cdots &0&-1&4&\cdots \\\vdots &\vdots &\cdots &\cdots &\cdots &\ddots \end{bmatrix}}}
Although the system dimension is 25, the conjugate gradient method is theoretically guaranteed to terminate in at most 25 iterations under exact arithmetic. In practice, convergence often occurs in far fewer steps due to the matrix's spectral properties. This efficiency makes CGM particularly attractive for solving large-scale systems arising from partial differential equations, such as those found in heat conduction, fluid dynamics, and electrostatics.
The conjugate gradient method can theoretically be viewed as a direct method, as in the absence ofround-off errorit produces the exact solution after a finite number of iterations, which is not larger than the size of the matrix. In practice, the exact solution is never obtained since the conjugate gradient method is unstable with respect to even small perturbations, e.g., most directions are not in practice conjugate, due to a degenerative nature of generating the Krylov subspaces.
As aniterative method, the conjugate gradient method monotonically (in the energy norm) improves approximationsxk{\displaystyle \mathbf {x} _{k}}to the exact solution and may reach the required tolerance after a relatively small (compared to the problem size) number of iterations. The improvement is typically linear and its speed is determined by thecondition numberκ(A){\displaystyle \kappa (A)}of the system matrixA{\displaystyle A}: the largerκ(A){\displaystyle \kappa (A)}is, the slower the improvement.[8]
However, an interesting case appears when the eigenvalues are spaced logarithmically for a large symmetric matrix. For example, letA=QDQT{\displaystyle A=QDQ^{T}}whereQ{\displaystyle Q}is a random orthogonal matrix andD{\displaystyle D}is a diagonal matrix with eigenvalues ranging fromλn=1{\displaystyle \lambda _{n}=1}toλ1=106{\displaystyle \lambda _{1}=10^{6}}, spaced logarithmically. Despite the finite termination property of CGM, where the exact solution should theoretically be reached in at mostn{\displaystyle n}steps, the method may exhibit stagnation in convergence. In such a scenario, even after many more iterations—e.g., ten times the matrix size—the error may only decrease modestly (e.g., to10−5{\displaystyle 10^{-5}}). Moreover, the iterative error may oscillate significantly, making it unreliable as a stopping condition. This poor convergence is not explained by the condition number alone (e.g.,κ2(A)=106{\displaystyle \kappa _{2}(A)=10^{6}}), but rather by the eigenvalue distribution itself. When the eigenvalues are more evenly spaced or randomly distributed, such convergence issues are typically absent, highlighting that CGM performance depends not only onκ(A){\displaystyle \kappa (A)}but also on how the eigenvalues are distributed.[9]
Ifκ(A){\displaystyle \kappa (A)}is large,preconditioningis commonly used to replace the original systemAx−b=0{\displaystyle \mathbf {Ax} -\mathbf {b} =0}withM−1(Ax−b)=0{\displaystyle \mathbf {M} ^{-1}(\mathbf {Ax} -\mathbf {b} )=0}such thatκ(M−1A){\displaystyle \kappa (\mathbf {M} ^{-1}\mathbf {A} )}is smaller thanκ(A){\displaystyle \kappa (\mathbf {A} )}, see below.
Define a subset of polynomials as
whereΠk{\displaystyle \Pi _{k}}is the set ofpolynomialsof maximal degreek{\displaystyle k}.
Let(xk)k{\displaystyle \left(\mathbf {x} _{k}\right)_{k}}be the iterative approximations of the exact solutionx∗{\displaystyle \mathbf {x} _{*}}, and define the errors asek:=xk−x∗{\displaystyle \mathbf {e} _{k}:=\mathbf {x} _{k}-\mathbf {x} _{*}}.
Now, the rate of convergence can be approximated as[4][10]
whereσ(A){\displaystyle \sigma (\mathbf {A} )}denotes thespectrum, andκ(A){\displaystyle \kappa (\mathbf {A} )}denotes thecondition number.
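The bound referred to here is the standard one; written out in the A-norm, with the polynomial set defined above (this is the well-known Chebyshev-type estimate, restated here since the displayed formula is not reproduced in this text):

```latex
\left\| \mathbf{e}_k \right\|_{\mathbf{A}}
  \;\le\; \min_{\substack{p \in \Pi_k \\ p(0)=1}}\;
          \max_{\lambda \in \sigma(\mathbf{A})} |p(\lambda)|\;
          \left\| \mathbf{e}_0 \right\|_{\mathbf{A}}
  \;\le\; 2 \left( \frac{\sqrt{\kappa(\mathbf{A})} - 1}
                        {\sqrt{\kappa(\mathbf{A})} + 1} \right)^{\!k}
          \left\| \mathbf{e}_0 \right\|_{\mathbf{A}}
```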
This shows thatk=12κ(A)log(‖e0‖Aε−1){\displaystyle k={\tfrac {1}{2}}{\sqrt {\kappa (\mathbf {A} )}}\log \left(\left\|\mathbf {e} _{0}\right\|_{\mathbf {A} }\varepsilon ^{-1}\right)}iterations suffice to reduce the error to2ε{\displaystyle 2\varepsilon }for anyε>0{\displaystyle \varepsilon >0}.
Note, the important limit whenκ(A){\displaystyle \kappa (\mathbf {A} )}tends to∞{\displaystyle \infty }
This limit shows a faster convergence rate compared to the iterative methods ofJacobiorGauss–Seidelwhich scale as≈1−2κ(A){\displaystyle \approx 1-{\frac {2}{\kappa (\mathbf {A} )}}}.
Noround-off erroris assumed in the convergence theorem, but the convergence bound is commonly valid in practice as theoretically explained[5]byAnne Greenbaum.
If initialized randomly, the first stage of iterations is often the fastest, as the error is eliminated within the Krylov subspace that initially reflects a smaller effective condition number. The second stage of convergence is typically well defined by the theoretical convergence bound withκ(A){\textstyle {\sqrt {\kappa (\mathbf {A} )}}}, but may be super-linear, depending on a distribution of the spectrum of the matrixA{\displaystyle A}and the spectral distribution of the error.[5]In the last stage, the smallest attainable accuracy is reached and the convergence stalls or the method may even start diverging. In typical scientific computing applications indouble-precision floating-point formatfor matrices of large sizes, the conjugate gradient method uses a stopping criterion with a tolerance that terminates the iterations during the first or second stage.
In most cases,preconditioningis necessary to ensure fast convergence of the conjugate gradient method. IfM−1{\displaystyle \mathbf {M} ^{-1}}is symmetric positive-definite andM−1A{\displaystyle \mathbf {M} ^{-1}\mathbf {A} }has a better condition number thanA,{\displaystyle \mathbf {A} ,}a preconditioned conjugate gradient method can be used. It takes the following form:[11]
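The algorithm box is not reproduced in this text; a minimal sketch of the preconditioned method, with a Jacobi (diagonal) preconditioner as an illustrative choice, is:

```python
import numpy as np

def preconditioned_cg(A, b, M_inv, tol=1e-10, max_iter=None):
    """Preconditioned conjugate gradient for A x = b.

    M_inv is a function applying the preconditioner, z = M^{-1} r;
    M must be symmetric positive-definite and fixed across iterations.
    """
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz_old = r @ z
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rz_old / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz_old) * p   # Fletcher-Reeves-type beta
        rz_old = rz_new
    return x

# Jacobi (diagonal) preconditioner: z = r / diag(A), an illustrative choice.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = preconditioned_cg(A, b, lambda r: r / np.diag(A))
```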
The above formulation is equivalent to applying the regular conjugate gradient method to the preconditioned system[12]
where
The Cholesky decomposition of the preconditioner must be used to keep the symmetry (and positive definiteness) of the system. However, this decomposition does not need to be computed, and it is sufficient to knowM−1{\displaystyle \mathbf {M} ^{-1}}. It can be shown thatE−1A(E−1)T{\displaystyle \mathbf {E} ^{-1}\mathbf {A} (\mathbf {E} ^{-1})^{\mathsf {T}}}has the same spectrum asM−1A{\displaystyle \mathbf {M} ^{-1}\mathbf {A} }.
The preconditioner matrixMhas to be symmetric positive-definite and fixed, i.e., cannot change from iteration to iteration.
If any of these assumptions on the preconditioner is violated, the behavior of the preconditioned conjugate gradient method may become unpredictable.
An example of a commonly usedpreconditioneris theincomplete Cholesky factorization.[13]
It is important to keep in mind that we do not want to invert the matrixM{\displaystyle \mathbf {M} }explicitly in order to obtainM−1{\displaystyle \mathbf {M} ^{-1}}for use in the process, since invertingM{\displaystyle \mathbf {M} }would take more time/computational resources than solving the conjugate gradient algorithm itself. As an example, suppose we are using a preconditioner coming from incomplete Cholesky factorization. The resulting matrix is the lower triangular matrixL{\displaystyle \mathbf {L} }, and the preconditioner matrix is:
M=LLT{\displaystyle \mathbf {M} =\mathbf {LL} ^{\mathsf {T}}}
Then we have to solve:
Mz=r{\displaystyle \mathbf {Mz} =\mathbf {r} }
z=M−1r{\displaystyle \mathbf {z} =\mathbf {M} ^{-1}\mathbf {r} }
But:
M−1=(L−1)TL−1{\displaystyle \mathbf {M} ^{-1}=(\mathbf {L} ^{-1})^{\mathsf {T}}\mathbf {L} ^{-1}}
Then:
z=(L−1)TL−1r{\displaystyle \mathbf {z} =(\mathbf {L} ^{-1})^{\mathsf {T}}\mathbf {L} ^{-1}\mathbf {r} }
Let's take an intermediary vectora{\displaystyle \mathbf {a} }:
a=L−1r{\displaystyle \mathbf {a} =\mathbf {L} ^{-1}\mathbf {r} }
r=La{\displaystyle \mathbf {r} =\mathbf {L} \mathbf {a} }
Sincer{\displaystyle \mathbf {r} }andL{\displaystyle \mathbf {L} }are known, andL{\displaystyle \mathbf {L} }is lower triangular, solving fora{\displaystyle \mathbf {a} }is easy and computationally cheap usingforward substitution. Then, we substitutea{\displaystyle \mathbf {a} }in the original equation:
z=(L−1)Ta{\displaystyle \mathbf {z} =(\mathbf {L} ^{-1})^{\mathsf {T}}\mathbf {a} }
a=LTz{\displaystyle \mathbf {a} =\mathbf {L} ^{\mathsf {T}}\mathbf {z} }
Sincea{\displaystyle \mathbf {a} }andLT{\displaystyle \mathbf {L} ^{\mathsf {T}}}are known, andLT{\displaystyle \mathbf {L} ^{\mathsf {T}}}is upper triangular, solving forz{\displaystyle \mathbf {z} }is easy and computationally cheap by usingbackward substitution.
Using this method, there is no need to invertM{\displaystyle \mathbf {M} }orL{\displaystyle \mathbf {L} }explicitly at all, and we still obtainz{\displaystyle \mathbf {z} }.
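The two triangular solves can be sketched as follows (for illustration the exact Cholesky factor of A is used as L, so that M = A; in practice an incomplete factor would be used):

```python
import numpy as np

def apply_ic_preconditioner(L, r):
    """Compute z = M^{-1} r for the preconditioner M = L L^T
    without ever forming M^{-1} explicitly."""
    n = len(r)
    # Forward substitution: solve L a = r (L is lower triangular).
    a = np.zeros(n)
    for i in range(n):
        a[i] = (r[i] - L[i, :i] @ a[:i]) / L[i, i]
    # Backward substitution: solve L^T z = a (L^T is upper triangular).
    z = np.zeros(n)
    for i in range(n - 1, -1, -1):
        z[i] = (a[i] - L[i + 1:, i] @ z[i + 1:]) / L[i, i]
    return z

# Check against the exact Cholesky factor, for which M = L L^T = A,
# so z should equal A^{-1} r.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
L = np.linalg.cholesky(A)
r = np.array([1.0, 2.0])
z = apply_ic_preconditioner(L, r)
```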
In numerically challenging applications, sophisticated preconditioners are used, which may lead to variable preconditioning, changing between iterations. Even if the preconditioner is symmetric positive-definite on every iteration, the fact that it may change makes the arguments above invalid, and in practical tests leads to a significant slow down of the convergence of the algorithm presented above. Using thePolak–Ribièreformula
instead of theFletcher–Reevesformula
may dramatically improve the convergence in this case.[14]This version of the preconditioned conjugate gradient method can be called[15]flexible, as it allows for variable preconditioning.
The flexible version is also shown[16]to be robust even if the preconditioner is not symmetric positive definite (SPD).
The implementation of the flexible version requires storing an extra vector. For a fixed SPD preconditioner,rk+1Tzk=0,{\displaystyle \mathbf {r} _{k+1}^{\mathsf {T}}\mathbf {z} _{k}=0,}so both formulas forβkare equivalent in exact arithmetic, i.e., without theround-off error.
The mathematical explanation of the better convergence behavior of the method with thePolak–Ribièreformula is that the method islocally optimalin this case, in particular, it does not converge slower than the locally optimal steepest descent method.[17]
In both the original and the preconditioned conjugate gradient methods one only needs to setβk:=0{\displaystyle \beta _{k}:=0}in order to make them locally optimal, turning them into line-searchsteepest descentmethods. With this substitution, vectorspare always the same as vectorsz, so there is no need to store vectorsp. Thus, every iteration of thesesteepest descentmethods is a bit cheaper than an iteration of the conjugate gradient methods. However, the latter converge faster, unless a (highly) variable and/or non-SPDpreconditioneris used, see above.
The conjugate gradient method can also be derived usingoptimal control theory.[18]In this approach, the conjugate gradient method falls out as anoptimal feedback controller,u=k(x,v):=−γa∇f(x)−γbv{\displaystyle u=k(x,v):=-\gamma _{a}\nabla f(x)-\gamma _{b}v}for thedouble integrator system,x˙=v,v˙=u{\displaystyle {\dot {x}}=v,\quad {\dot {v}}=u}The quantitiesγa{\displaystyle \gamma _{a}}andγb{\displaystyle \gamma _{b}}are variable feedback gains.[18]
The conjugate gradient method can be applied to an arbitraryn-by-mmatrix by applying it to thenormal equations, i.e. to the matrixATAwith right-hand side vectorATb, sinceATAis a symmetricpositive-semidefinitematrix for anyA. The result isconjugate gradient on the normal equations(CGNorCGNR).
As an iterative method, it is not necessary to formATAexplicitly in memory but only to perform the matrix–vector and transpose matrix–vector multiplications. Therefore, CGNR is particularly useful whenAis asparse matrixsince these operations are usually extremely efficient. However the downside of forming the normal equations is that thecondition numberκ(ATA) is equal to κ2(A) and so the rate of convergence of CGNR may be slow and the quality of the approximate solution may be sensitive to roundoff errors. Finding a goodpreconditioneris often an important part of using the CGNR method.
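A sketch of CGNR using only products with A and AT (never forming ATA explicitly) might look like the following; the least-squares example at the end is illustrative:

```python
import numpy as np

def cgnr(A, b, tol=1e-12, max_iter=None):
    """Conjugate gradient on the normal equations A^T A x = A^T b.

    A^T A is never formed explicitly; only products with A and A^T
    are used, which is cheap when A is sparse.
    """
    m, n = A.shape
    x = np.zeros(n)
    r = b - A @ x              # residual of the original system
    s = A.T @ r                # residual of the normal equations
    p = s.copy()
    ss_old = s @ s
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = ss_old / (Ap @ Ap)   # note: p^T (A^T A) p = ||A p||^2
        x = x + alpha * p
        r = r - alpha * Ap
        s = A.T @ r
        ss_new = s @ s
        if np.sqrt(ss_new) < tol:
            break
        p = s + (ss_new / ss_old) * p
        ss_old = ss_new
    return x

# Overdetermined least-squares example: 3 equations, 2 unknowns.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
x = cgnr(A, b)
```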
Several algorithms have been proposed (e.g., CGLS, LSQR). TheLSQRalgorithm purportedly has the best numerical stability whenAis ill-conditioned, i.e.,Ahas a largecondition number.
The conjugate gradient method with a trivial modification is extendable to solving, given a complex-valued matrix A and vector b, the system of linear equationsAx=b{\displaystyle \mathbf {A} \mathbf {x} =\mathbf {b} }for the complex-valued vector x, where A is aHermitian(i.e., A' = A)positive-definite matrix, and the symbol ' denotes theconjugate transpose. The trivial modification is simply substituting theconjugate transposefor the realtransposeeverywhere.
The advantages and disadvantages of the conjugate gradient methods are summarized in the lecture notes by Nemirovsky and BenTal.[19]: Sec.7.3
This example is from [20]. Let t∈(0,1){\textstyle t\in (0,1)}, and define the tridiagonal matrix W={\displaystyle W={\begin{bmatrix}t&{\sqrt {t}}&&&\\{\sqrt {t}}&1+t&{\sqrt {t}}&&\\&{\sqrt {t}}&1+t&\ddots &\\&&\ddots &\ddots &{\sqrt {t}}\\&&&{\sqrt {t}}&1+t\end{bmatrix}},\quad b={\begin{bmatrix}1\\0\\\vdots \\0\end{bmatrix}}} Since W{\displaystyle W} is invertible, there exists a unique solution to Wx=b{\textstyle Wx=b}. Solving it by the conjugate gradient method gives rather bad convergence: ‖b−Wxk‖2=(1/t)k,‖b−Wxn‖2=0{\displaystyle \|b-Wx_{k}\|^{2}=(1/t)^{k},\quad \|b-Wx_{n}\|^{2}=0} In words, during the CG process the error grows exponentially, until it suddenly becomes zero as the unique solution is found.
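The behaviour can be reproduced numerically with a textbook CG implementation. The following sketch assumes NumPy; the values t = 0.5 and n = 8 are our own illustrative choices. In exact arithmetic the residual vanishes at step n, after the intermediate growth described above.

```python
import numpy as np

def cg(A, b, iters):
    """Plain conjugate gradient, recording the residual norm history."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    hist = [np.linalg.norm(r)]
    rs = r @ r
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        hist.append(np.sqrt(rs_new))
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, hist

n, t = 8, 0.5
W = np.zeros((n, n))
W[0, 0] = t                             # first diagonal entry is t
for i in range(1, n):
    W[i, i] = 1 + t                     # remaining diagonal entries
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = np.sqrt(t)   # off-diagonals
b = np.zeros(n)
b[0] = 1.0
x, hist = cg(W, b, n)
```

The list `hist` records the residual norms; inspecting it shows the growth-then-collapse pattern, with the final residual at machine-precision level.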
|
https://en.wikipedia.org/wiki/Conjugate_gradient_method
|
Rprop, short for resilient backpropagation, is a learning heuristic for supervised learning in feedforward artificial neural networks. It is a first-order optimization algorithm, created by Martin Riedmiller and Heinrich Braun in 1992.[1]
Similarly to the Manhattan update rule, Rprop takes into account only the sign of the partial derivative over all patterns (not the magnitude), and acts independently on each weight. For each weight, if the partial derivative of the total error function changed sign compared to the last iteration, the update value for that weight is multiplied by a factor η−, where η− < 1. If the last iteration produced the same sign, the update value is multiplied by a factor of η+, where η+ > 1. The update values are calculated for each weight in this manner, and finally each weight is changed by its own update value, in the opposite direction of that weight's partial derivative, so as to minimise the total error function. η+ is empirically set to 1.2 and η− to 0.5.[citation needed]
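The sign-based update can be sketched as follows. This is a minimal illustration, not the authors' original implementation: the function name `rprop_step`, the toy quadratic objective, and the initial step size of 0.1 are our own choices, while the η± factors and step-size clipping bounds follow commonly quoted defaults.

```python
import math

def rprop_step(grads, state, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One Rprop update. `state` holds per-weight step sizes and the
    previous gradients; each weight moves opposite the sign of its
    current gradient by its own individual step size."""
    for i, g in enumerate(grads):
        if state['prev_grad'][i] * g > 0:       # same sign: accelerate
            state['step'][i] = min(state['step'][i] * eta_plus, step_max)
        elif state['prev_grad'][i] * g < 0:     # sign change: back off
            state['step'][i] = max(state['step'][i] * eta_minus, step_min)
        if g != 0:
            state['weights'][i] -= math.copysign(state['step'][i], g)
        state['prev_grad'][i] = g
    return state

# Minimise f(w) = w0^2 + (w1 - 3)^2 by following gradient signs only
state = {'weights': [5.0, -4.0], 'step': [0.1, 0.1], 'prev_grad': [0.0, 0.0]}
for _ in range(100):
    w = state['weights']
    grads = [2 * w[0], 2 * (w[1] - 3)]
    rprop_step(grads, state)
```

After a hundred iterations the weights have homed in on the minimiser (0, 3): the steps grow while the gradient sign is stable and halve on each overshoot.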
Rprop can result in very large weight increments or decrements if the gradients are large, which is a problem when using mini-batches as opposed to full batches. RMSprop addresses this problem by keeping a moving average of the squared gradient for each weight and dividing the gradient by the square root of this mean square.[citation needed]
RPROP is a batch update algorithm. Next to the cascade correlation algorithm and the Levenberg–Marquardt algorithm, Rprop is one of the fastest weight update mechanisms.[citation needed]
Martin Riedmiller developed three algorithms, all named RPROP. Igel and Hüsken assigned names to them and added a new variant:[2][3]
|
https://en.wikipedia.org/wiki/Rprop
|
In machine learning, the delta rule is a gradient descent learning rule for updating the weights of the inputs to artificial neurons in a single-layer neural network.[1] It can be derived as the backpropagation algorithm for a single-layer neural network with a mean-square error loss function.
For a neuron j{\displaystyle j} with activation function g(x){\displaystyle g(x)}, the delta rule for neuron j{\displaystyle j}'s i{\displaystyle i}-th weight wji{\displaystyle w_{ji}} is given by
Δwji=α(tj−yj)g′(hj)xi,{\displaystyle \Delta w_{ji}=\alpha (t_{j}-y_{j})g'(h_{j})x_{i},}
where
It holds that hj=∑ixiwji{\textstyle h_{j}=\sum _{i}x_{i}w_{ji}} and yj=g(hj){\displaystyle y_{j}=g(h_{j})}.
The delta rule is commonly stated in simplified form for a neuron with a linear activation function as Δwji=α(tj−yj)xi{\displaystyle \Delta w_{ji}=\alpha \left(t_{j}-y_{j}\right)x_{i}}
While the delta rule is similar to the perceptron's update rule, the derivation is different. The perceptron uses the Heaviside step function as the activation function g(h){\displaystyle g(h)}, whose derivative g′(h){\displaystyle g'(h)} does not exist at zero and is equal to zero elsewhere, which makes the direct application of the delta rule impossible.
The delta rule is derived by attempting to minimize the error in the output of the neural network through gradient descent. The error for a neural network with j{\displaystyle j} outputs can be measured as E=∑j12(tj−yj)2.{\displaystyle E=\sum _{j}{\tfrac {1}{2}}\left(t_{j}-y_{j}\right)^{2}.}
In this case, we wish to move through the "weight space" of the neuron (the space of all possible values of all of the neuron's weights) in proportion to the gradient of the error function with respect to each weight. In order to do that, we calculate the partial derivative of the error with respect to each weight. For the i{\displaystyle i}-th weight, this derivative can be written as ∂E∂wji.{\displaystyle {\frac {\partial E}{\partial w_{ji}}}.}
Because we are only concerning ourselves with the j{\displaystyle j}-th neuron, we can substitute the error formula above while omitting the summation: ∂E∂wji=∂∂wji[12(tj−yj)2]{\displaystyle {\frac {\partial E}{\partial w_{ji}}}={\frac {\partial }{\partial w_{ji}}}\left[{\frac {1}{2}}\left(t_{j}-y_{j}\right)^{2}\right]}
Next we use the chain rule to split this into two derivatives: ∂E∂wji=∂(12(tj−yj)2)∂yj∂yj∂wji{\displaystyle {\frac {\partial E}{\partial w_{ji}}}={\frac {\partial \left({\frac {1}{2}}\left(t_{j}-y_{j}\right)^{2}\right)}{\partial y_{j}}}{\frac {\partial y_{j}}{\partial w_{ji}}}}
To find the left derivative, we simply apply the power rule and the chain rule: ∂E∂wji=−(tj−yj)∂yj∂wji{\displaystyle {\frac {\partial E}{\partial w_{ji}}}=-\left(t_{j}-y_{j}\right){\frac {\partial y_{j}}{\partial w_{ji}}}}
To find the right derivative, we again apply the chain rule, this time differentiating with respect to the total input to j{\displaystyle j}, hj{\displaystyle h_{j}}: ∂E∂wji=−(tj−yj)∂yj∂hj∂hj∂wji{\displaystyle {\frac {\partial E}{\partial w_{ji}}}=-\left(t_{j}-y_{j}\right){\frac {\partial y_{j}}{\partial h_{j}}}{\frac {\partial h_{j}}{\partial w_{ji}}}}
Note that the output of the j{\displaystyle j}-th neuron, yj{\displaystyle y_{j}}, is just the neuron's activation function g{\displaystyle g} applied to the neuron's input hj{\displaystyle h_{j}}. We can therefore write the derivative of yj{\displaystyle y_{j}} with respect to hj{\displaystyle h_{j}} simply as g{\displaystyle g}'s first derivative: ∂E∂wji=−(tj−yj)g′(hj)∂hj∂wji{\displaystyle {\frac {\partial E}{\partial w_{ji}}}=-\left(t_{j}-y_{j}\right)g'(h_{j}){\frac {\partial h_{j}}{\partial w_{ji}}}}
Next we rewrite hj{\displaystyle h_{j}} in the last term as the sum over all i{\displaystyle i} of each weight wji{\displaystyle w_{ji}} times its corresponding input xi{\displaystyle x_{i}}: ∂E∂wji=−(tj−yj)g′(hj)∂∂wji[∑ixiwji]{\displaystyle {\frac {\partial E}{\partial w_{ji}}}=-\left(t_{j}-y_{j}\right)g'(h_{j})\;{\frac {\partial }{\partial w_{ji}}}\!\!\left[\sum _{i}x_{i}w_{ji}\right]}
Because we are only concerned with the i{\displaystyle i}-th weight, the only term of the summation that is relevant is xiwji{\displaystyle x_{i}w_{ji}}. Clearly, ∂(xiwji)∂wji=xi,{\displaystyle {\frac {\partial (x_{i}w_{ji})}{\partial w_{ji}}}=x_{i}.} giving us our final equation for the gradient: ∂E∂wji=−(tj−yj)g′(hj)xi{\displaystyle {\frac {\partial E}{\partial w_{ji}}}=-\left(t_{j}-y_{j}\right)g'(h_{j})x_{i}}
As noted above, gradient descent tells us that our change for each weight should be proportional to the gradient. Choosing a proportionality constant α{\displaystyle \alpha } and eliminating the minus sign, so that we move the weight in the negative direction of the gradient to minimize error, we arrive at our target equation: Δwji=α(tj−yj)g′(hj)xi.{\displaystyle \Delta w_{ji}=\alpha (t_{j}-y_{j})g'(h_{j})x_{i}.}
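The target equation can be exercised directly. The following is a minimal sketch assuming a single sigmoid neuron with a bias input; the toy AND-like dataset, the learning rate α = 0.5, and the function names are our own illustrative choices.

```python
import math

def sigmoid(h):
    return 1.0 / (1.0 + math.exp(-h))

def delta_rule_step(w, x, t, alpha=0.5):
    """One delta-rule update: w_i += alpha * (t - y) * g'(h) * x_i."""
    h = sum(wi * xi for wi, xi in zip(w, x))
    y = sigmoid(h)
    gprime = y * (1.0 - y)               # g'(h) for the sigmoid
    return [wi + alpha * (t - y) * gprime * xi for wi, xi in zip(w, x)]

# Toy task: learn AND on two inputs; x[2] = 1 acts as a bias input
data = [([0, 0, 1], 0), ([0, 1, 1], 0), ([1, 0, 1], 0), ([1, 1, 1], 1)]
w = [0.0, 0.0, 0.0]
for _ in range(10000):
    for x, t in data:
        w = delta_rule_step(w, x, t)

outs = [sigmoid(sum(wi * xi for wi, xi in zip(w, x))) for x, _ in data]
```

Since AND is linearly separable, repeated delta-rule updates drive the output for (1, 1) toward 1 and the other three outputs toward 0.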
|
https://en.wikipedia.org/wiki/Delta_rule
|
In the unconstrained minimization problem, the Wolfe conditions are a set of inequalities for performing inexact line search, especially in quasi-Newton methods, first published by Philip Wolfe in 1969.[1][2]
In these methods the idea is to find minxf(x){\displaystyle \min _{x}f(\mathbf {x} )} for some smooth f:Rn→R{\displaystyle f\colon \mathbb {R} ^{n}\to \mathbb {R} }. Each step often involves approximately solving the subproblem minαf(xk+αpk){\displaystyle \min _{\alpha }f(\mathbf {x} _{k}+\alpha \mathbf {p} _{k})} where xk{\displaystyle \mathbf {x} _{k}} is the current best guess, pk∈Rn{\displaystyle \mathbf {p} _{k}\in \mathbb {R} ^{n}} is a search direction, and α∈R{\displaystyle \alpha \in \mathbb {R} } is the step length.
Inexact line searches provide an efficient way of computing an acceptable step length α{\displaystyle \alpha } that reduces the objective function 'sufficiently', rather than minimizing the objective function over α∈R+{\displaystyle \alpha \in \mathbb {R} ^{+}} exactly. A line search algorithm can use the Wolfe conditions as a requirement for any guessed α{\displaystyle \alpha }, before finding a new search direction pk{\displaystyle \mathbf {p} _{k}}.
A step length αk{\displaystyle \alpha _{k}} is said to satisfy the Wolfe conditions, restricted to the direction pk{\displaystyle \mathbf {p} _{k}}, if the following two inequalities hold:
with 0<c1<c2<1{\displaystyle 0<c_{1}<c_{2}<1}. (In examining condition (ii), recall that to ensure that pk{\displaystyle \mathbf {p} _{k}} is a descent direction, we have pkT∇f(xk)<0{\displaystyle \mathbf {p} _{k}^{\mathrm {T} }\nabla f(\mathbf {x} _{k})<0}, as in the case of gradient descent, where pk=−∇f(xk){\displaystyle \mathbf {p} _{k}=-\nabla f(\mathbf {x} _{k})}, or Newton–Raphson, where pk=−H−1∇f(xk){\displaystyle \mathbf {p} _{k}=-\mathbf {H} ^{-1}\nabla f(\mathbf {x} _{k})} with H{\displaystyle \mathbf {H} } positive definite.)
c1{\displaystyle c_{1}} is usually chosen to be quite small while c2{\displaystyle c_{2}} is much larger; Nocedal and Wright give example values of c1=10−4{\displaystyle c_{1}=10^{-4}} and c2=0.9{\displaystyle c_{2}=0.9} for Newton or quasi-Newton methods and c2=0.1{\displaystyle c_{2}=0.1} for the nonlinear conjugate gradient method.[3] Inequality i) is known as the Armijo rule[4] and ii) as the curvature condition; i) ensures that the step length αk{\displaystyle \alpha _{k}} decreases f{\displaystyle f} 'sufficiently', and ii) ensures that the slope has been reduced sufficiently. Conditions i) and ii) can be interpreted as respectively providing an upper and lower bound on the admissible step length values.
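The two conditions can be checked directly. Below is a minimal one-dimensional sketch in Python; the function name `wolfe` and the quadratic test function are our own, while the default c1 and c2 follow the Nocedal–Wright values quoted above.

```python
def wolfe(f, grad, x, p, alpha, c1=1e-4, c2=0.9):
    """Check the (weak) Wolfe conditions for step length alpha along p.

    i)  sufficient decrease (Armijo): f(x + a p) <= f(x) + c1 a p f'(x)
    ii) curvature:                    p f'(x + a p) >= c2 p f'(x)
    Scalars only here; for vectors the products become dot products.
    """
    slope0 = p * grad(x)                 # must be negative (descent direction)
    armijo = f(x + alpha * p) <= f(x) + c1 * alpha * slope0
    curvature = p * grad(x + alpha * p) >= c2 * slope0
    return armijo and curvature

# f(x) = x^2, start at x = 1, descent direction p = -f'(1) = -2
f = lambda x: x * x
grad = lambda x: 2 * x
ok_small = wolfe(f, grad, 1.0, -2.0, 0.4)   # lands near the minimiser
ok_large = wolfe(f, grad, 1.0, -2.0, 1.0)   # overshoots past the minimum
```

For α = 0.4 the step lands at x = 0.2 and both conditions hold; for α = 1.0 the step overshoots to x = −1, where f has not decreased, so the Armijo condition rejects it.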
Denote a univariate function φ{\displaystyle \varphi } restricted to the direction pk{\displaystyle \mathbf {p} _{k}} as φ(α)=f(xk+αpk){\displaystyle \varphi (\alpha )=f(\mathbf {x} _{k}+\alpha \mathbf {p} _{k})}. The Wolfe conditions can result in a value for the step length that is not close to a minimizer of φ{\displaystyle \varphi }. If we modify the curvature condition to the following,
then i) and iii) together form the so-called strong Wolfe conditions, and force αk{\displaystyle \alpha _{k}} to lie close to a critical point of φ{\displaystyle \varphi }.
The principal reason for imposing the Wolfe conditions in an optimization algorithm where xk+1=xk+αpk{\displaystyle \mathbf {x} _{k+1}=\mathbf {x} _{k}+\alpha \mathbf {p} _{k}} is to ensure convergence of the gradient to zero. In particular, if the cosine of the angle between pk{\displaystyle \mathbf {p} _{k}} and the gradient, cosθk=∇f(xk)Tpk‖∇f(xk)‖‖pk‖{\displaystyle \cos \theta _{k}={\frac {\nabla f(\mathbf {x} _{k})^{\mathrm {T} }\mathbf {p} _{k}}{\|\nabla f(\mathbf {x} _{k})\|\|\mathbf {p} _{k}\|}}} is bounded away from zero and conditions i) and ii) hold, then ∇f(xk)→0{\displaystyle \nabla f(\mathbf {x} _{k})\rightarrow 0}.
An additional motivation, in the case of a quasi-Newton method, is that if pk=−Bk−1∇f(xk){\displaystyle \mathbf {p} _{k}=-B_{k}^{-1}\nabla f(\mathbf {x} _{k})}, where the matrix Bk{\displaystyle B_{k}} is updated by the BFGS or DFP formula, then if Bk{\displaystyle B_{k}} is positive definite, ii) implies Bk+1{\displaystyle B_{k+1}} is also positive definite.
Wolfe's conditions are more complicated than Armijo's condition, and a gradient descent algorithm based on Armijo's condition has a better theoretical guarantee than one based on the Wolfe conditions (see the sections on "Upper bound for learning rates" and "Theoretical guarantee" in the Backtracking line search article).
|
https://en.wikipedia.org/wiki/Wolfe_conditions
|
In mathematics, preconditioning is the application of a transformation, called the preconditioner, that conditions a given problem into a form more suitable for numerical solution methods. Preconditioning is typically related to reducing a condition number of the problem. The preconditioned problem is then usually solved by an iterative method.
In linear algebra and numerical analysis, a preconditioner P{\displaystyle P} of a matrix A{\displaystyle A} is a matrix such that P−1A{\displaystyle P^{-1}A} has a smaller condition number than A{\displaystyle A}. It is also common to call T=P−1{\displaystyle T=P^{-1}} the preconditioner, rather than P{\displaystyle P}, since P{\displaystyle P} itself is rarely explicitly available. In modern preconditioning, the application of T=P−1{\displaystyle T=P^{-1}}, i.e., multiplication of a column vector, or a block of column vectors, by T=P−1{\displaystyle T=P^{-1}}, is commonly performed in a matrix-free fashion, i.e., where neither P{\displaystyle P} nor T=P−1{\displaystyle T=P^{-1}} (and often not even A{\displaystyle A}) are explicitly available in matrix form.
Preconditioners are useful in iterative methods to solve a linear system Ax=b{\displaystyle Ax=b} for x{\displaystyle x}, since the rate of convergence for most iterative linear solvers increases as the condition number of the matrix decreases as a result of preconditioning. Preconditioned iterative solvers typically outperform direct solvers, e.g., Gaussian elimination, for large, and especially for sparse, matrices. Iterative solvers can be used as matrix-free methods, i.e., they become the only choice if the coefficient matrix A{\displaystyle A} is not stored explicitly but is accessed by evaluating matrix-vector products.
Instead of solving the original linear system Ax=b{\displaystyle Ax=b} for x{\displaystyle x}, one may consider the right preconditioned system AP−1(Px)=b{\displaystyle AP^{-1}(Px)=b} and solve AP−1y=b{\displaystyle AP^{-1}y=b} for y{\displaystyle y} and Px=y{\displaystyle Px=y} for x{\displaystyle x}.
Alternatively, one may solve the left preconditioned system P−1(Ax−b)=0.{\displaystyle P^{-1}(Ax-b)=0.}
Both systems give the same solution as the original system as long as the preconditioner matrix P{\displaystyle P} is nonsingular. The left preconditioning is more traditional.
The two-sided preconditioned system QAP−1(Px)=Qb{\displaystyle QAP^{-1}(Px)=Qb} may be beneficial, e.g., to preserve the matrix symmetry: if the original matrix A{\displaystyle A} is real symmetric and real preconditioners Q{\displaystyle Q} and P{\displaystyle P} satisfy QT=P−1{\displaystyle Q^{T}=P^{-1}}, then the preconditioned matrix QAP−1{\displaystyle QAP^{-1}} is also symmetric. Two-sided preconditioning is common for diagonal scaling, where the preconditioners Q{\displaystyle Q} and P{\displaystyle P} are diagonal and scaling is applied both to the columns and rows of the original matrix A{\displaystyle A}, e.g., in order to decrease the dynamic range of the entries of the matrix.
The goal of preconditioning is to reduce the condition number, e.g., of the left or right preconditioned system matrix P−1A{\displaystyle P^{-1}A} or AP−1{\displaystyle AP^{-1}}. Small condition numbers benefit fast convergence of iterative solvers and improve stability of the solution with respect to perturbations in the system matrix and the right-hand side, e.g., allowing for more aggressive quantization of the matrix entries using lower computer precision.
The preconditioned matrix P−1A{\displaystyle P^{-1}A} or AP−1{\displaystyle AP^{-1}} is rarely explicitly formed. Only the action of applying the preconditioner solve operation P−1{\displaystyle P^{-1}} to a given vector may need to be computed.
Typically there is a trade-off in the choice of P{\displaystyle P}. Since the operator P−1{\displaystyle P^{-1}} must be applied at each step of the iterative linear solver, it should have a small cost (computing time) of applying the P−1{\displaystyle P^{-1}} operation. The cheapest preconditioner would therefore be P=I{\displaystyle P=I}, since then P−1=I.{\displaystyle P^{-1}=I.} Clearly, this results in the original linear system, and the preconditioner does nothing. At the other extreme, the choice P=A{\displaystyle P=A} gives P−1A=AP−1=I,{\displaystyle P^{-1}A=AP^{-1}=I,} which has the optimal condition number of 1, requiring a single iteration for convergence; however, in this case P−1=A−1,{\displaystyle P^{-1}=A^{-1},} and applying the preconditioner is as difficult as solving the original system. One therefore chooses P{\displaystyle P} somewhere between these two extremes, in an attempt to achieve a minimal number of linear iterations while keeping the operator P−1{\displaystyle P^{-1}} as simple as possible. Some examples of typical preconditioning approaches are detailed below.
Preconditioned iterative methods for Ax−b=0{\displaystyle Ax-b=0} are, in most cases, mathematically equivalent to standard iterative methods applied to the preconditioned system P−1(Ax−b)=0.{\displaystyle P^{-1}(Ax-b)=0.} For example, the standard Richardson iteration for solving Ax−b=0{\displaystyle Ax-b=0} is xn+1=xn−γn(Axn−b),n≥0.{\displaystyle \mathbf {x} _{n+1}=\mathbf {x} _{n}-\gamma _{n}(A\mathbf {x} _{n}-\mathbf {b} ),\ n\geq 0.}
Applied to the preconditioned system P−1(Ax−b)=0,{\displaystyle P^{-1}(Ax-b)=0,} it turns into a preconditioned method xn+1=xn−γnP−1(Axn−b),n≥0.{\displaystyle \mathbf {x} _{n+1}=\mathbf {x} _{n}-\gamma _{n}P^{-1}(A\mathbf {x} _{n}-\mathbf {b} ),\ n\geq 0.}
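The preconditioned Richardson iteration can be sketched in a few lines. This is a minimal numerical illustration assuming NumPy; the 2-by-2 SPD system, the damping γ = 0.9, and the helper name `richardson` are our own choices. The preconditioner is applied matrix-free, as a function on the residual.

```python
import numpy as np

def richardson(A, b, apply_P_inv, gamma=1.0, iters=50):
    """Preconditioned Richardson iteration:
    x_{n+1} = x_n - gamma * P^{-1} (A x_n - b)."""
    x = np.zeros_like(b)
    for _ in range(iters):
        x = x - gamma * apply_P_inv(A @ x - b)
    return x

# Small SPD system with a Jacobi (diagonal) preconditioner
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
d = np.diag(A)                          # P = diag(A), so P^{-1} r = r / d
x = richardson(A, b, lambda r: r / d, gamma=0.9)
```

Here the iteration matrix I − γ D⁻¹A has spectral radius well below 1, so fifty iterations drive the residual to negligible size.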
Examples of popular preconditioned iterative methods for linear systems include the preconditioned conjugate gradient method, the biconjugate gradient method, and the generalized minimal residual method. Iterative methods that use scalar products to compute the iterative parameters require corresponding changes in the scalar product together with substituting P−1(Ax−b)=0{\displaystyle P^{-1}(Ax-b)=0} for Ax−b=0.{\displaystyle Ax-b=0.}
A stationary iterative method is determined by the matrix splitting A=M−N{\displaystyle A=M-N} and the iteration matrix C=I−M−1A{\displaystyle C=I-M^{-1}A}. Assuming that
the condition number κ(M−1A){\displaystyle \kappa (M^{-1}A)} is bounded above by κ(M−1A)≤1+ρ(C)1−ρ(C).{\displaystyle \kappa (M^{-1}A)\leq {\frac {1+\rho (C)}{1-\rho (C)}}\,.}
For a symmetric positive definite matrix A{\displaystyle A} the preconditioner P{\displaystyle P} is typically chosen to be symmetric positive definite as well. The preconditioned operator P−1A{\displaystyle P^{-1}A} is then also symmetric positive definite, but with respect to the P{\displaystyle P}-based scalar product. In this case, the desired effect in applying a preconditioner is to make the quadratic form of the preconditioned operator P−1A{\displaystyle P^{-1}A} with respect to the P{\displaystyle P}-based scalar product nearly spherical.[1]
Denoting T=P−1{\displaystyle T=P^{-1}}, we highlight that preconditioning is practically implemented as multiplying some vector r{\displaystyle r} by T{\displaystyle T}, i.e., computing the product Tr.{\displaystyle Tr.} In many applications, T{\displaystyle T} is not given as a matrix, but rather as an operator T(r){\displaystyle T(r)} acting on the vector r{\displaystyle r}. Some popular preconditioners, however, change with r{\displaystyle r}, and the dependence on r{\displaystyle r} may not be linear. Typical examples involve using non-linear iterative methods, e.g., the conjugate gradient method, as a part of the preconditioner construction. Such preconditioners may be practically very efficient; however, their behavior is hard to predict theoretically.
One interesting particular case of variable preconditioning is random preconditioning, e.g., multigrid preconditioning on random coarse grids.[2] If used in gradient descent methods, random preconditioning can be viewed as an implementation of stochastic gradient descent and can lead to faster convergence compared to fixed preconditioning, since it breaks the asymptotic "zig-zag" pattern of gradient descent.
The most common use of preconditioning is for the iterative solution of linear systems resulting from approximations of partial differential equations. The better the approximation quality, the larger the matrix size. In such a case, the goal of optimal preconditioning is, on the one hand, to make the spectral condition number of P−1A{\displaystyle P^{-1}A} bounded from above by a constant independent of the matrix size, which is called spectrally equivalent preconditioning by D'yakonov. On the other hand, the cost of applying P−1{\displaystyle P^{-1}} should ideally be proportional (also independent of the matrix size) to the cost of multiplying A{\displaystyle A} by a vector.
The Jacobi preconditioner is one of the simplest forms of preconditioning, in which the preconditioner is chosen to be the diagonal of the matrix: P=diag(A).{\displaystyle P=\mathrm {diag} (A).} Assuming Aii≠0,∀i{\displaystyle A_{ii}\neq 0,\forall i}, we get Pij−1=δijAii.{\displaystyle P_{ij}^{-1}={\frac {\delta _{ij}}{A_{ii}}}.} It is efficient for diagonally dominant matrices A{\displaystyle A}. It is used in analysis software for beam problems or 1-D problems (e.g., STAAD.Pro).
The sparse approximate inverse preconditioner minimises ‖AT−I‖F,{\displaystyle \|AT-I\|_{F},} where ‖⋅‖F{\displaystyle \|\cdot \|_{F}} is the Frobenius norm and T=P−1{\displaystyle T=P^{-1}} is from some suitably constrained set of sparse matrices. Under the Frobenius norm, this reduces to solving numerous independent least-squares problems (one for every column). The entries in T{\displaystyle T} must be restricted to some sparsity pattern, or the problem remains as difficult and time-consuming as finding the exact inverse of A{\displaystyle A}. The method was introduced by M. J. Grote and T. Huckle together with an approach to selecting sparsity patterns.[3]
Eigenvalue problems can be framed in several alternative ways, each leading to its own preconditioning. The traditional preconditioning is based on so-called spectral transformations. Knowing (approximately) the targeted eigenvalue, one can compute the corresponding eigenvector by solving the related homogeneous linear system, thus allowing the use of preconditioning for linear systems. Finally, formulating the eigenvalue problem as optimization of the Rayleigh quotient brings preconditioned optimization techniques to the scene.[4]
By analogy with linear systems, for an eigenvalue problem Ax=λx{\displaystyle Ax=\lambda x} one may be tempted to replace the matrix A{\displaystyle A} with the matrix P−1A{\displaystyle P^{-1}A} using a preconditioner P{\displaystyle P}. However, this makes sense only if the sought eigenvectors of A{\displaystyle A} and P−1A{\displaystyle P^{-1}A} are the same. This is the case for spectral transformations.
The most popular spectral transformation is the so-called shift-and-invert transformation, where for a given scalar α{\displaystyle \alpha }, called the shift, the original eigenvalue problem Ax=λx{\displaystyle Ax=\lambda x} is replaced with the shift-and-invert problem (A−αI)−1x=μx{\displaystyle (A-\alpha I)^{-1}x=\mu x}. The eigenvectors are preserved, and one can solve the shift-and-invert problem by an iterative solver, e.g., the power iteration. This gives the inverse iteration, which normally converges to the eigenvector corresponding to the eigenvalue closest to the shift α{\displaystyle \alpha }. The Rayleigh quotient iteration is a shift-and-invert method with a variable shift.
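Inverse iteration is short enough to sketch directly. The following assumes NumPy; the helper name `inverse_iteration` and the 2-by-2 test matrix are our own illustrative choices. Each step applies the shift-and-invert operator by solving a linear system rather than forming the inverse.

```python
import numpy as np

def inverse_iteration(A, shift, iters=50):
    """Power iteration applied to (A - shift*I)^{-1}: converges to the
    eigenvector whose eigenvalue is closest to `shift`."""
    M = A - shift * np.eye(A.shape[0])
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = np.linalg.solve(M, x)       # apply the shift-and-invert operator
        x = x / np.linalg.norm(x)
    return x @ A @ x, x                 # Rayleigh quotient, eigenvector

A = np.array([[2.0, 1.0], [1.0, 3.0]])
lam, v = inverse_iteration(A, shift=1.0)
```

The eigenvalues of this A are (5 ± √5)/2 ≈ 1.382 and 3.618; with shift 1.0 the iteration converges to the smaller one, the eigenvalue closest to the shift.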
Spectral transformations are specific for eigenvalue problems and have no analogs for linear systems. They require accurate numerical calculation of the transformation involved, which becomes the main bottleneck for large problems.
To make a close connection to linear systems, let us suppose that the targeted eigenvalue λ⋆{\displaystyle \lambda _{\star }} is known (approximately). Then one can compute the corresponding eigenvector from the homogeneous linear system (A−λ⋆I)x=0{\displaystyle (A-\lambda _{\star }I)x=0}. Using the concept of left preconditioning for linear systems, we obtain T(A−λ⋆I)x=0{\displaystyle T(A-\lambda _{\star }I)x=0}, where T{\displaystyle T} is the preconditioner, which we can try to solve using the Richardson iteration
xn+1=xn−γnT(A−λ⋆I)xn,n≥0.{\displaystyle \mathbf {x} _{n+1}=\mathbf {x} _{n}-\gamma _{n}T(A-\lambda _{\star }I)\mathbf {x} _{n},\ n\geq 0.}
The Moore–Penrose pseudoinverse T=(A−λ⋆I)+{\displaystyle T=(A-\lambda _{\star }I)^{+}} is the preconditioner which makes the Richardson iteration above converge in one step with γn=1{\displaystyle \gamma _{n}=1}, since I−(A−λ⋆I)+(A−λ⋆I){\displaystyle I-(A-\lambda _{\star }I)^{+}(A-\lambda _{\star }I)}, denoted by P⋆{\displaystyle P_{\star }}, is the orthogonal projector onto the eigenspace corresponding to λ⋆{\displaystyle \lambda _{\star }}. The choice T=(A−λ⋆I)+{\displaystyle T=(A-\lambda _{\star }I)^{+}} is impractical for three independent reasons. First, λ⋆{\displaystyle \lambda _{\star }} is actually not known, although it can be replaced with its approximation λ~⋆{\displaystyle {\tilde {\lambda }}_{\star }}. Second, the exact Moore–Penrose pseudoinverse requires knowledge of the eigenvector, which we are trying to find. This can be somewhat circumvented by the use of the Jacobi–Davidson preconditioner T=(I−P~⋆)(A−λ~⋆I)−1(I−P~⋆){\displaystyle T=(I-{\tilde {P}}_{\star })(A-{\tilde {\lambda }}_{\star }I)^{-1}(I-{\tilde {P}}_{\star })}, where P~⋆{\displaystyle {\tilde {P}}_{\star }} approximates P⋆{\displaystyle P_{\star }}. Last, but not least, this approach requires accurate numerical solution of a linear system with the system matrix (A−λ~⋆I){\displaystyle (A-{\tilde {\lambda }}_{\star }I)}, which becomes as expensive for large problems as the shift-and-invert method above. If the solution is not accurate enough, step two may be redundant.[4]
Let us first replace the theoretical value λ⋆{\displaystyle \lambda _{\star }} in the Richardson iteration above with its current approximation λn{\displaystyle \lambda _{n}} to obtain a practical algorithm xn+1=xn−γnT(A−λnI)xn,n≥0.{\displaystyle \mathbf {x} _{n+1}=\mathbf {x} _{n}-\gamma _{n}T(A-\lambda _{n}I)\mathbf {x} _{n},\ n\geq 0.}
A popular choice is λn=ρ(xn){\displaystyle \lambda _{n}=\rho (x_{n})} using the Rayleigh quotient function ρ(⋅){\displaystyle \rho (\cdot )}. Practical preconditioning may be as trivial as just using T=(diag(A))−1{\displaystyle T=(\operatorname {diag} (A))^{-1}} or T=(diag(A−λnI))−1.{\displaystyle T=(\operatorname {diag} (A-\lambda _{n}I))^{-1}.} For some classes of eigenvalue problems the efficiency of T≈A−1{\displaystyle T\approx A^{-1}} has been demonstrated, both numerically and theoretically. The choice T≈A−1{\displaystyle T\approx A^{-1}} allows one to easily utilize for eigenvalue problems the vast variety of preconditioners developed for linear systems.
Due to the changing value λn{\displaystyle \lambda _{n}}, a comprehensive theoretical convergence analysis is much more difficult compared to the linear systems case, even for the simplest methods, such as the Richardson iteration.
In optimization, preconditioning is typically used to accelerate first-order optimization algorithms.
For example, to find a local minimum of a real-valued function F(x){\displaystyle F(\mathbf {x} )} using gradient descent, one takes steps proportional to the negative of the gradient −∇F(a){\displaystyle -\nabla F(\mathbf {a} )} (or of the approximate gradient) of the function at the current point: xn+1=xn−γn∇F(xn),n≥0.{\displaystyle \mathbf {x} _{n+1}=\mathbf {x} _{n}-\gamma _{n}\nabla F(\mathbf {x} _{n}),\ n\geq 0.}
The preconditioner is applied to the gradient:xn+1=xn−γnP−1∇F(xn),n≥0.{\displaystyle \mathbf {x} _{n+1}=\mathbf {x} _{n}-\gamma _{n}P^{-1}\nabla F(\mathbf {x} _{n}),\ n\geq 0.}
Preconditioning here can be viewed as changing the geometry of the vector space with the goal of making the level sets look like circles.[5] In this case the preconditioned gradient points closer to the extremum, which speeds up the convergence.
The minimum of a quadratic function F(x)=12xTAx−xTb,{\displaystyle F(\mathbf {x} )={\tfrac {1}{2}}\mathbf {x} ^{T}A\mathbf {x} -\mathbf {x} ^{T}\mathbf {b} ,} where x{\displaystyle \mathbf {x} } and b{\displaystyle \mathbf {b} } are real column-vectors and A{\displaystyle A} is a real symmetric positive-definite matrix, is exactly the solution of the linear equation Ax=b{\displaystyle A\mathbf {x} =\mathbf {b} }. Since ∇F(x)=Ax−b{\displaystyle \nabla F(\mathbf {x} )=A\mathbf {x} -\mathbf {b} }, the preconditioned gradient descent method of minimizing F(x){\displaystyle F(\mathbf {x} )} is xn+1=xn−γnP−1(Axn−b),n≥0.{\displaystyle \mathbf {x} _{n+1}=\mathbf {x} _{n}-\gamma _{n}P^{-1}(A\mathbf {x} _{n}-\mathbf {b} ),\ n\geq 0.}
This is the preconditioned Richardson iteration for solving a system of linear equations.
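The effect of preconditioning on such a quadratic can be demonstrated numerically. This is a sketch assuming NumPy; the artificial diagonal system with condition number 100, the step sizes, and the helper name `grad_descent` are our own choices made so that the comparison is easy to verify.

```python
import numpy as np

def grad_descent(A, b, apply_P_inv, gamma, tol=1e-8, max_iter=10000):
    """Preconditioned gradient descent on F(x) = x^T A x / 2 - x^T b.
    Returns the iterate and the number of iterations performed."""
    x = np.zeros_like(b)
    for k in range(max_iter):
        g = A @ x - b                   # gradient of the quadratic
        if np.linalg.norm(g) < tol:
            return x, k
        x = x - gamma * apply_P_inv(g)
    return x, max_iter

# Badly scaled SPD system: condition number 100 without preconditioning
A = np.diag([1.0, 100.0])
b = np.array([1.0, 1.0])
_, plain = grad_descent(A, b, lambda g: g, gamma=2.0 / 101.0)
_, jacobi = grad_descent(A, b, lambda g: g / np.diag(A), gamma=1.0)
```

Since A is diagonal here, the Jacobi preconditioner is exact and the preconditioned method converges in a single step, while unpreconditioned descent needs several hundred iterations at the optimal fixed step size.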
The minimum of the Rayleigh quotient ρ(x)=xTAxxTx,{\displaystyle \rho (\mathbf {x} )={\frac {\mathbf {x} ^{T}A\mathbf {x} }{\mathbf {x} ^{T}\mathbf {x} }},} where x{\displaystyle \mathbf {x} } is a real non-zero column-vector and A{\displaystyle A} is a real symmetric positive-definite matrix, is the smallest eigenvalue of A{\displaystyle A}, while the minimizer is the corresponding eigenvector. Since ∇ρ(x){\displaystyle \nabla \rho (\mathbf {x} )} is proportional to Ax−ρ(x)x{\displaystyle A\mathbf {x} -\rho (\mathbf {x} )\mathbf {x} }, the preconditioned gradient descent method of minimizing ρ(x){\displaystyle \rho (\mathbf {x} )} is xn+1=xn−γnP−1(Axn−ρ(xn)xn),n≥0.{\displaystyle \mathbf {x} _{n+1}=\mathbf {x} _{n}-\gamma _{n}P^{-1}(A\mathbf {x} _{n}-\rho (\mathbf {x_{n}} )\mathbf {x_{n}} ),\ n\geq 0.}
This is an analog of the preconditioned Richardson iteration for solving eigenvalue problems.
In many cases, it may be beneficial to change the preconditioner at some or even every step of an iterative algorithm, in order to accommodate a changing shape of the level sets, as in xn+1=xn−γnPn−1∇F(xn),n≥0.{\displaystyle \mathbf {x} _{n+1}=\mathbf {x} _{n}-\gamma _{n}P_{n}^{-1}\nabla F(\mathbf {x} _{n}),\ n\geq 0.}
One should keep in mind, however, that constructing an efficient preconditioner is very often computationally expensive. The increased cost of updating the preconditioner can easily override the positive effect of faster convergence. If Pn−1=Hn{\displaystyle P_{n}^{-1}=H_{n}}, a BFGS approximation of the inverse Hessian matrix, this method is referred to as a quasi-Newton method.
|
https://en.wikipedia.org/wiki/Preconditioning
|
In numerical optimization, the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm is an iterative method for solving unconstrained nonlinear optimization problems.[1] Like the related Davidon–Fletcher–Powell method, BFGS determines the descent direction by preconditioning the gradient with curvature information. It does so by gradually improving an approximation to the Hessian matrix of the loss function, obtained only from gradient evaluations (or approximate gradient evaluations) via a generalized secant method.[2]
Since the updates of the BFGS curvature matrix do not require matrix inversion, its computational complexity is only O(n2){\displaystyle {\mathcal {O}}(n^{2})}, compared to O(n3){\displaystyle {\mathcal {O}}(n^{3})} in Newton's method. Also in common use is L-BFGS, a limited-memory version of BFGS particularly suited to problems with very large numbers of variables (e.g., >1000). The BFGS-B variant handles simple box constraints.[3] The BFGS matrix also admits a compact representation, which makes it better suited for large constrained problems.
The algorithm is named after Charles George Broyden, Roger Fletcher, Donald Goldfarb and David Shanno.[4][5][6][7]
The optimization problem is to minimize f(x){\displaystyle f(\mathbf {x} )}, where x{\displaystyle \mathbf {x} } is a vector in Rn{\displaystyle \mathbb {R} ^{n}} and f{\displaystyle f} is a differentiable scalar function. There are no constraints on the values that x{\displaystyle \mathbf {x} } can take.
The algorithm begins at an initial estimate x0{\displaystyle \mathbf {x} _{0}} for the optimal value and proceeds iteratively to get a better estimate at each stage.
The search direction pk at stage k is given by the solution of the analogue of the Newton equation:
where Bk{\displaystyle B_{k}} is an approximation to the Hessian matrix at xk{\displaystyle \mathbf {x} _{k}}, which is updated iteratively at each stage, and ∇f(xk){\displaystyle \nabla f(\mathbf {x} _{k})} is the gradient of the function evaluated at xk. A line search in the direction pk is then used to find the next point xk+1 by minimizing f(xk+γpk){\displaystyle f(\mathbf {x} _{k}+\gamma \mathbf {p} _{k})} over the scalar γ>0.{\displaystyle \gamma >0.}
The quasi-Newton condition imposed on the update ofBk{\displaystyle B_{k}}is
Letyk=∇f(xk+1)−∇f(xk){\displaystyle \mathbf {y} _{k}=\nabla f(\mathbf {x} _{k+1})-\nabla f(\mathbf {x} _{k})}andsk=xk+1−xk{\displaystyle \mathbf {s} _{k}=\mathbf {x} _{k+1}-\mathbf {x} _{k}}, thenBk+1{\displaystyle B_{k+1}}satisfies
which is the secant equation.
The curvature conditionsk⊤yk>0{\displaystyle \mathbf {s} _{k}^{\top }\mathbf {y} _{k}>0}must be satisfied forBk+1{\displaystyle B_{k+1}}to be positive definite, as can be verified by pre-multiplying the secant equation withskT{\displaystyle \mathbf {s} _{k}^{T}}. If the function is notstrongly convex, then the condition has to be enforced explicitly, e.g. by using a line search to find a pointxk+1satisfying theWolfe conditions, which entail the curvature condition.
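This check translates directly into code. A minimal sketch in Python; the tolerance `eps` is an illustrative choice:

```python
def curvature_ok(s, y, eps=1e-12):
    """Check the curvature condition s^T y > 0 before a BFGS update,
    where s = x_{k+1} - x_k and y = grad f(x_{k+1}) - grad f(x_k)."""
    return sum(si * yi for si, yi in zip(s, y)) > eps

# For the strongly convex f(x) = x^2 (gradient 2x), stepping from
# x = 1 to x = 0.5 gives s = -0.5 and y = -1, so s*y = 0.5 > 0.
```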
Instead of requiring the full Hessian matrix at the pointxk+1{\displaystyle \mathbf {x} _{k+1}}to be computed asBk+1{\displaystyle B_{k+1}}, the approximate Hessian at stagekis updated by the addition of two matrices:
BothUk{\displaystyle U_{k}}andVk{\displaystyle V_{k}}are symmetric rank-one matrices, but their sum is a rank-two update matrix. Both the BFGS andDFPupdate matrices differ from their predecessor by a rank-two matrix. A simpler rank-one alternative, thesymmetric rank-onemethod, does not guaranteepositive definiteness. In order to maintain the symmetry and positive definiteness ofBk+1{\displaystyle B_{k+1}}, the update form can be chosen asBk+1=Bk+αuu⊤+βvv⊤{\displaystyle B_{k+1}=B_{k}+\alpha \mathbf {u} \mathbf {u} ^{\top }+\beta \mathbf {v} \mathbf {v} ^{\top }}. Imposing the secant condition,Bk+1sk=yk{\displaystyle B_{k+1}\mathbf {s} _{k}=\mathbf {y} _{k}}, and choosingu=yk{\displaystyle \mathbf {u} =\mathbf {y} _{k}}andv=Bksk{\displaystyle \mathbf {v} =B_{k}\mathbf {s} _{k}}, we obtain:[8]
Finally, we substituteα{\displaystyle \alpha }andβ{\displaystyle \beta }intoBk+1=Bk+αuu⊤+βvv⊤{\displaystyle B_{k+1}=B_{k}+\alpha \mathbf {u} \mathbf {u} ^{\top }+\beta \mathbf {v} \mathbf {v} ^{\top }}and get the update equation ofBk+1{\displaystyle B_{k+1}}:
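The resulting update can be written as a small routine. The sketch below stores matrices as nested Python lists (an illustrative choice) and assumes the curvature condition holds:

```python
def bfgs_update(B, s, y):
    """One BFGS update: B + y y^T / (y^T s) - B s s^T B / (s^T B s)."""
    n = len(s)
    Bs = [sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]
    sBs = sum(s[i] * Bs[i] for i in range(n))   # s^T B s
    ys = sum(y[i] * s[i] for i in range(n))     # y^T s, must be > 0
    return [[B[i][j] + y[i] * y[j] / ys - Bs[i] * Bs[j] / sBs
             for j in range(n)] for i in range(n)]
```

By construction, the new matrix satisfies the secant equation: applying the updated matrix to sk reproduces yk.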
Consider the following unconstrained optimization problemminimizex∈Rnf(x),{\displaystyle {\begin{aligned}{\underset {\mathbf {x} \in \mathbb {R} ^{n}}{\text{minimize}}}\quad &f(\mathbf {x} ),\end{aligned}}}wheref:Rn→R{\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} }is a nonlinear objective function.
From an initial guessx0∈Rn{\displaystyle \mathbf {x} _{0}\in \mathbb {R} ^{n}}and an initial guess of the Hessian matrixB0∈Rn×n{\displaystyle B_{0}\in \mathbb {R} ^{n\times n}}the following steps are repeated asxk{\displaystyle \mathbf {x} _{k}}converges to the solution:
Convergence can be determined by observing the norm of the gradient; given someϵ>0{\displaystyle \epsilon >0}, one may stop the algorithm when||∇f(xk)||≤ϵ.{\displaystyle ||\nabla f(\mathbf {x} _{k})||\leq \epsilon .}IfB0{\displaystyle B_{0}}is initialized withB0=I{\displaystyle B_{0}=I}, the first step will be equivalent to agradient descent, but further steps are more and more refined byBk{\displaystyle B_{k}}, the approximation to the Hessian.
The first step of the algorithm is carried out using the inverse of the matrixBk{\displaystyle B_{k}}, which can be obtained efficiently by applying theSherman–Morrison formulato step 5 of the algorithm, giving
This can be computed efficiently without temporary matrices, recognizing thatBk−1{\displaystyle B_{k}^{-1}}is symmetric,
and thatykTBk−1yk{\displaystyle \mathbf {y} _{k}^{\mathrm {T} }B_{k}^{-1}\mathbf {y} _{k}}andskTyk{\displaystyle \mathbf {s} _{k}^{\mathrm {T} }\mathbf {y} _{k}}are scalars, using an expansion such as
Therefore, in order to avoid any matrix inversion, theinverseof the Hessian can be approximated instead of the Hessian itself:Hk=defBk−1.{\displaystyle H_{k}{\overset {\operatorname {def} }{=}}B_{k}^{-1}.}[9]
From an initial guessx0{\displaystyle \mathbf {x} _{0}}and an approximateinvertedHessian matrixH0{\displaystyle H_{0}}the following steps are repeated asxk{\displaystyle \mathbf {x} _{k}}converges to the solution:
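The inverse-Hessian iteration can be sketched as a short Python routine. The objective, the Armijo backtracking line search, and the tolerances are illustrative assumptions, not part of the algorithm's specification:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def f(x):      # toy convex objective (an assumption for illustration)
    return (x[0] - 3.0) ** 2 + 2.0 * (x[1] + 1.0) ** 2

def grad(x):
    return [2.0 * (x[0] - 3.0), 4.0 * (x[1] + 1.0)]

def bfgs(x, max_iter=100, tol=1e-8):
    n = len(x)
    H = [[float(i == j) for j in range(n)] for i in range(n)]  # H0 = I
    g = grad(x)
    for _ in range(max_iter):
        if max(abs(gi) for gi in g) < tol:
            break
        p = [-dot(H[i], g) for i in range(n)]        # search direction
        a = 1.0                                      # Armijo backtracking
        while f([xi + a * pi for xi, pi in zip(x, p)]) > f(x) + 1e-4 * a * dot(g, p):
            a *= 0.5
        x_new = [xi + a * pi for xi, pi in zip(x, p)]
        g_new = grad(x_new)
        s = [b - c for b, c in zip(x_new, x)]
        y = [b - c for b, c in zip(g_new, g)]
        sy = dot(s, y)
        if sy > 1e-12:                               # curvature condition
            rho = 1.0 / sy
            Hy = [dot(H[i], y) for i in range(n)]
            yHy = dot(y, Hy)
            # H <- (I - rho s y^T) H (I - rho y s^T) + rho s s^T
            H = [[H[i][j] - rho * (s[i] * Hy[j] + Hy[i] * s[j])
                  + (rho * rho * yHy + rho) * s[i] * s[j]
                  for j in range(n)] for i in range(n)]
        x, g = x_new, g_new
    return x
```

Skipping the update when the curvature s<sup>T</sup>y is not positive keeps H positive definite, so each p remains a descent direction.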
In statistical estimation problems (such asmaximum likelihoodor Bayesian inference),credible intervalsorconfidence intervalsfor the solution can be estimated from theinverseof the final Hessian matrix[citation needed]. However, these quantities are technically defined by the true Hessian matrix, and the BFGS approximation may not converge to the true Hessian matrix.[10]
The BFGS update formula heavily relies on the curvaturesk⊤yk{\displaystyle \mathbf {s} _{k}^{\top }\mathbf {y} _{k}}being strictly positive and bounded away from zero.
This condition is satisfied when we perform a line search with Wolfe conditions on a convex target.
However, some real-life applications (like Sequential Quadratic Programming methods) routinely produce negative or nearly zero curvatures.
This can occur when optimizing a nonconvex target or when employing a trust-region approach instead of a line search.
It is also possible to produce spurious values due to noise in the target.
In such cases, one of the so-called damped BFGS updates can be used (see[11]), which modifysk{\displaystyle \mathbf {s} _{k}}and/oryk{\displaystyle \mathbf {y} _{k}}in order to obtain a more robust update.
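One common choice is Powell's damping, which replaces yk by a convex combination of yk and Bk sk whenever the curvature is too small. The threshold 0.2 follows Powell's proposal; the nested-list matrix storage is an illustrative choice:

```python
def powell_damped_y(B, s, y, c=0.2):
    """Return a damped y so that s^T y_damped >= c * s^T B s > 0."""
    n = len(s)
    Bs = [sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]
    sBs = sum(s[i] * Bs[i] for i in range(n))
    sy = sum(si * yi for si, yi in zip(s, y))
    if sy >= c * sBs:
        theta = 1.0                      # curvature fine: keep y unchanged
    else:
        theta = (1.0 - c) * sBs / (sBs - sy)
    return [theta * yi + (1.0 - theta) * bsi for yi, bsi in zip(y, Bs)]
```

The damped vector is then used in place of yk in the usual BFGS update, guaranteeing positive curvature.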
Notable open source implementations are:
Notable proprietary implementations include:
https://en.wikipedia.org/wiki/Broyden%E2%80%93Fletcher%E2%80%93Goldfarb%E2%80%93Shanno_algorithm
TheDavidon–Fletcher–Powell formula(orDFP; named afterWilliam C. Davidon,Roger Fletcher, andMichael J. D. Powell) finds the solution to the secant equation that is closest to the current estimate and satisfies the curvature condition. It was the firstquasi-Newton methodto generalize thesecant methodto a multidimensional problem. This update maintains the symmetry and positive definiteness of theHessian matrix.
Given a functionf(x){\displaystyle f(x)}, itsgradient(∇f{\displaystyle \nabla f}), andpositive-definiteHessian matrixB{\displaystyle B}, theTaylor seriesis
and theTaylor seriesof the gradient itself (secant equation)
is used to updateB{\displaystyle B}.
The DFP formula finds a solution that is symmetric, positive-definite and closest to the current approximate value ofBk{\displaystyle B_{k}}:
where
andBk{\displaystyle B_{k}}is a symmetric andpositive-definite matrix.
The corresponding update to the inverse Hessian approximationHk=Bk−1{\displaystyle H_{k}=B_{k}^{-1}}is given by
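The inverse update can be written directly. This sketch uses nested lists for matrices (an illustrative choice) and assumes the curvature condition holds:

```python
def dfp_inverse_update(H, s, y):
    """One DFP inverse-Hessian update:
    H + s s^T / (s^T y) - H y y^T H / (y^T H y)."""
    n = len(s)
    Hy = [sum(H[i][j] * y[j] for j in range(n)) for i in range(n)]
    yHy = sum(y[i] * Hy[i] for i in range(n))   # y^T H y
    sy = sum(si * yi for si, yi in zip(s, y))   # s^T y, must be > 0
    return [[H[i][j] + s[i] * s[j] / sy - Hy[i] * Hy[j] / yHy
             for j in range(n)] for i in range(n)]
```

By construction, the updated matrix satisfies the secant equation in inverse form: applying it to yk reproduces sk.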
B{\displaystyle B}is assumed to be positive-definite, and the vectorssk{\displaystyle s_{k}}andyk{\displaystyle y_{k}}must satisfy the curvature condition
The DFP formula is quite effective, but it was soon superseded by theBroyden–Fletcher–Goldfarb–Shanno formula, which is itsdual(interchanging the roles ofyands).[1]
By unwinding the matrix recurrence forBk{\displaystyle B_{k}}, the DFP formula can be expressed
as acompact matrix representation. Specifically, defining
Sk=[s0s1…sk−1],{\displaystyle S_{k}={\begin{bmatrix}s_{0}&s_{1}&\ldots &s_{k-1}\end{bmatrix}},}Yk=[y0y1…yk−1],{\displaystyle Y_{k}={\begin{bmatrix}y_{0}&y_{1}&\ldots &y_{k-1}\end{bmatrix}},}
and upper triangular and diagonal matrices
(Rk)ij:=(RkSY)ij=si−1Tyj−1,(RkYS)ij=yi−1Tsj−1,(Dk)ii:=(DkSY)ii=si−1Tyi−1for1≤i≤j≤k{\displaystyle {\big (}R_{k}{\big )}_{ij}:={\big (}R_{k}^{\text{SY}}{\big )}_{ij}=s_{i-1}^{T}y_{j-1},\quad {\big (}R_{k}^{\text{YS}}{\big )}_{ij}=y_{i-1}^{T}s_{j-1},\quad (D_{k})_{ii}:={\big (}D_{k}^{\text{SY}}{\big )}_{ii}=s_{i-1}^{T}y_{i-1}\quad \quad {\text{ for }}1\leq i\leq j\leq k}
the DFP matrix has the equivalent formula
Bk=B0+JkNk−1JkT,{\displaystyle B_{k}=B_{0}+J_{k}N_{k}^{-1}J_{k}^{T},}
Jk=[YkYk−B0Sk]{\displaystyle J_{k}={\begin{bmatrix}Y_{k}&Y_{k}-B_{0}S_{k}\end{bmatrix}}}
Nk=[0k×kRkYS(RkYS)TRk+RkT−(Dk+SkTB0Sk)]{\displaystyle N_{k}={\begin{bmatrix}0_{k\times k}&R_{k}^{\text{YS}}\\{\big (}R_{k}^{\text{YS}}{\big )}^{T}&R_{k}+R_{k}^{T}-(D_{k}+S_{k}^{T}B_{0}S_{k})\end{bmatrix}}}
The inverse compact representation can be found by applying theSherman-Morrison-Woodbury inversetoBk{\displaystyle B_{k}}. The compact representation is particularly useful for limited-memory and constrained problems.[2]
https://en.wikipedia.org/wiki/Davidon%E2%80%93Fletcher%E2%80%93Powell_formula
TheNelder–Mead method(alsodownhill simplex method,amoeba method, orpolytope method) is anumerical methodused to find the minimum or maximum of anobjective functionin a multidimensional space. It is adirect searchmethod (based on function comparison) and is often applied to nonlinearoptimizationproblems for which derivatives may not be known. However, the Nelder–Mead technique is aheuristicsearch method that can converge tonon-stationary points[1]on problems that can be solved by alternative methods.[2]
The Nelder–Mead technique was proposed byJohn NelderandRoger Meadin 1965,[3]as a development of the method of Spendley et al.[4]
The method uses the concept of asimplex, which is a specialpolytopeofn+ 1 vertices inndimensions. Examples of simplices include a line segment in one-dimensional space, a triangle in two-dimensional space, atetrahedronin three-dimensional space, and so forth.
The method approximates a local optimum of a problem withnvariables when the objective function varies smoothly and isunimodal. Typical implementations minimize functions, and we maximizef(x){\displaystyle f(\mathbf {x} )}by minimizing−f(x){\displaystyle -f(\mathbf {x} )}.
For example, a suspension bridge engineer has to choose how thick each strut, cable, and pier must be. These elements are interdependent, but it is not easy to visualize the impact of changing any specific element. Simulation of such complicated structures is often extremely computationally expensive, possibly taking upwards of hours per execution. The Nelder–Mead method requires, in the original variant, no more than two evaluations per iteration, except for theshrinkoperation described later, which is attractive compared to some other direct-search optimization methods. However, the overall number of iterations to the proposed optimum may be high.
Nelder–Mead inndimensions maintains a set ofn+ 1 test points arranged as asimplex. It then extrapolates the behavior of the objective function measured at each test point in order to find a new test point and to replace one of the old test points with the new one, and so the technique progresses. The simplest approach is to replace the worst point with a point reflected through thecentroidof the remainingnpoints. If this point is better than the best current point, then we can try stretching exponentially out along this line. On the other hand, if this new point isn't much better than the previous value, then we are stepping across a valley, so we shrink the simplex towards a better point. An intuitive explanation of the algorithm from "Numerical Recipes":[5]
The downhill simplex method now takes a series of steps, most steps just moving the point of the simplex where the function is largest (“highest point”) through the opposite face of the simplex to a lower point. These steps are called reflections, and they are constructed to conserve the volume of the simplex (and hence maintain its nondegeneracy). When it can do so, the method expands the simplex in one or another direction to take larger steps. When it reaches a “valley floor”, the method contracts itself in the transverse direction and tries to ooze down the valley. If there is a situation where the simplex is trying to “pass through the eye of a needle”, it contracts itself in all directions, pulling itself in around its lowest (best) point.
Unlike modern optimization methods, the Nelder–Mead heuristic can converge to a non-stationary point, unless the problem satisfies stronger conditions than are necessary for modern methods.[1]Modern improvements over the Nelder–Mead heuristic have been known since 1979.[2]
Many variations exist depending on the actual nature of the problem being solved. A common variant uses a constant-size, small simplex that roughly follows the gradient direction (which givessteepest descent). Visualize a small triangle on an elevation map flip-flopping its way down a valley to a local bottom. This method is also known as theflexible polyhedron method. This, however, tends to perform poorly against the method described in this article because it makes small, unnecessary steps in areas of little interest.
(This approximates the procedure in the original Nelder–Mead article.)
We are trying to minimize the functionf(x){\displaystyle f(\mathbf {x} )}, wherex∈Rn{\displaystyle \mathbf {x} \in \mathbb {R} ^{n}}. Our current test points arex1,…,xn+1{\displaystyle \mathbf {x} _{1},\ldots ,\mathbf {x} _{n+1}}.
Note:α{\displaystyle \alpha },γ{\displaystyle \gamma },ρ{\displaystyle \rho }andσ{\displaystyle \sigma }are respectively the reflection, expansion, contraction and shrink coefficients. Standard values areα=1{\displaystyle \alpha =1},γ=2{\displaystyle \gamma =2},ρ=1/2{\displaystyle \rho =1/2}andσ=1/2{\displaystyle \sigma =1/2}.
For thereflection, sincexn+1{\displaystyle \mathbf {x} _{n+1}}is the vertex with the highest associated value among the vertices, we can expect to find a lower value at the reflection ofxn+1{\displaystyle \mathbf {x} _{n+1}}in the opposite face formed by all verticesxi{\displaystyle \mathbf {x} _{i}}exceptxn+1{\displaystyle \mathbf {x} _{n+1}}.
For theexpansion, if the reflection pointxr{\displaystyle \mathbf {x} _{r}}is the new minimum along the vertices, we can expect to find interesting values along the direction fromxo{\displaystyle \mathbf {x} _{o}}toxr{\displaystyle \mathbf {x} _{r}}.
Concerning thecontraction, iff(xr)>f(xn){\displaystyle f(\mathbf {x} _{r})>f(\mathbf {x} _{n})}, we can expect that a better value will be inside the simplex formed by all the verticesxi{\displaystyle \mathbf {x} _{i}}.
Finally, theshrinkhandles the rare case that contracting away from the largest point increasesf{\displaystyle f}, something that cannot happen sufficiently close to a non-singular minimum. In that case we contract towards the lowest point in the expectation of finding a simpler landscape. However, Nash notes that finite-precision arithmetic can sometimes fail to actually shrink the simplex, and implemented a check that the size is actually reduced.[6]
The initial simplex is important. A too-small initial simplex can lead to a purely local search, so the method can get stuck more easily. This simplex should therefore depend on the nature of the problem. However, the original article suggested a simplex where an initial point is given asx1{\displaystyle \mathbf {x} _{1}}, with the others generated with a fixed step along each dimension in turn. Thus the method is sensitive to the scaling of the variables that make upx{\displaystyle \mathbf {x} }.
Criteria are needed to break the iterative cycle. Nelder and Mead used the sample standard deviation of the function values of the current simplex. If these fall below some tolerance, then the cycle is stopped and the lowest point in the simplex returned as a proposed optimum. Note that a very "flat" function may have almost equal function values over a large domain, so that the solution will be sensitive to the tolerance. Nash adds the test for shrinkage as another termination criterion.[6]Note that programs terminate, while iterations may converge.
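The operations and termination test above can be combined into a compact sketch. This follows the simplified variant with a single (inside) contraction step; the objective, initial step size, and tolerances used below are illustrative assumptions:

```python
import math

def nelder_mead(f, x0, step=0.5, tol=1e-8, max_iter=500,
                alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
    """Minimize f over R^n with the Nelder-Mead simplex method."""
    n = len(x0)
    # initial simplex: x0 plus a fixed step along each coordinate
    simplex = [list(x0)] + [
        [x0[j] + (step if j == i else 0.0) for j in range(n)]
        for i in range(n)]
    for _ in range(max_iter):
        simplex.sort(key=f)
        fs = [f(p) for p in simplex]
        mean = sum(fs) / len(fs)
        # terminate when the sample std. dev. of f-values is small
        if math.sqrt(sum((v - mean) ** 2 for v in fs) / len(fs)) < tol:
            break
        # centroid of all points except the worst
        xo = [sum(p[j] for p in simplex[:-1]) / n for j in range(n)]
        worst = simplex[-1]
        xr = [xo[j] + alpha * (xo[j] - worst[j]) for j in range(n)]
        if fs[0] <= f(xr) < fs[-2]:                 # reflection
            simplex[-1] = xr
        elif f(xr) < fs[0]:                         # expansion
            xe = [xo[j] + gamma * (xr[j] - xo[j]) for j in range(n)]
            simplex[-1] = xe if f(xe) < f(xr) else xr
        else:                                       # contraction
            xc = [xo[j] + rho * (worst[j] - xo[j]) for j in range(n)]
            if f(xc) < fs[-1]:
                simplex[-1] = xc
            else:                                   # shrink toward the best
                best = simplex[0]
                simplex = [best] + [
                    [best[j] + sigma * (p[j] - best[j]) for j in range(n)]
                    for p in simplex[1:]]
    simplex.sort(key=f)
    return simplex[0]
```

Note that each iteration re-evaluates f on sorted vertices for clarity; a production implementation would cache function values.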
https://en.wikipedia.org/wiki/Nelder%E2%80%93Mead_method
TheGauss–Newton algorithmis used to solvenon-linear least squaresproblems, which is equivalent to minimizing a sum of squared function values. It is an extension ofNewton's methodfor finding aminimumof a non-linearfunction. Since a sum of squares must be nonnegative, the algorithm can be viewed as using Newton's method to iteratively approximatezeroesof the components of the sum, and thus minimizing the sum. In this sense, the algorithm is also an effective method forsolving overdetermined systems of equations. It has the advantage that second derivatives, which can be challenging to compute, are not required.[1]
Non-linear least squares problems arise, for instance, innon-linear regression, where parameters in a model are sought such that the model is in good agreement with available observations.
The method is named after the mathematiciansCarl Friedrich GaussandIsaac Newton, and first appeared in Gauss's 1809 workTheoria motus corporum coelestium in sectionibus conicis solem ambientum.[2]
Givenm{\displaystyle m}functionsr=(r1,…,rm){\displaystyle {\textbf {r}}=(r_{1},\ldots ,r_{m})}(often called residuals) ofn{\displaystyle n}variablesβ=(β1,…βn),{\displaystyle {\boldsymbol {\beta }}=(\beta _{1},\ldots \beta _{n}),}withm≥n,{\displaystyle m\geq n,}the Gauss–Newton algorithmiterativelyfinds the value ofβ{\displaystyle \beta }that minimizes the sum of squares[3]S(β)=∑i=1mri(β)2.{\displaystyle S({\boldsymbol {\beta }})=\sum _{i=1}^{m}r_{i}({\boldsymbol {\beta }})^{2}.}
Starting with an initial guessβ(0){\displaystyle {\boldsymbol {\beta }}^{(0)}}for the minimum, the method proceeds by the iterationsβ(s+1)=β(s)−(JrTJr)−1JrTr(β(s)),{\displaystyle {\boldsymbol {\beta }}^{(s+1)}={\boldsymbol {\beta }}^{(s)}-\left(\mathbf {J_{r}} ^{\operatorname {T} }\mathbf {J_{r}} \right)^{-1}\mathbf {J_{r}} ^{\operatorname {T} }\mathbf {r} \left({\boldsymbol {\beta }}^{(s)}\right),}
where, ifrandβarecolumn vectors, the entries of theJacobian matrixare(Jr)ij=∂ri(β(s))∂βj,{\displaystyle \left(\mathbf {J_{r}} \right)_{ij}={\frac {\partial r_{i}\left({\boldsymbol {\beta }}^{(s)}\right)}{\partial \beta _{j}}},}
and the symbolT{\displaystyle ^{\operatorname {T} }}denotes thematrix transpose.
At each iteration, the updateΔ=β(s+1)−β(s){\displaystyle \Delta ={\boldsymbol {\beta }}^{(s+1)}-{\boldsymbol {\beta }}^{(s)}}can be found by rearranging the previous equation in the following two steps:
With substitutionsA=JrTJr{\textstyle A=\mathbf {J_{r}} ^{\operatorname {T} }\mathbf {J_{r}} },b=−JrTr(β(s)){\displaystyle \mathbf {b} =-\mathbf {J_{r}} ^{\operatorname {T} }\mathbf {r} \left({\boldsymbol {\beta }}^{(s)}\right)}, andx=Δ{\displaystyle \mathbf {x} =\Delta }, this turns into the conventional matrix equation of formAx=b{\displaystyle A\mathbf {x} =\mathbf {b} }, which can then be solved by a variety of methods (seeNotes).
Ifm=n, the iteration simplifies to
β(s+1)=β(s)−(Jr)−1r(β(s)),{\displaystyle {\boldsymbol {\beta }}^{(s+1)}={\boldsymbol {\beta }}^{(s)}-\left(\mathbf {J_{r}} \right)^{-1}\mathbf {r} \left({\boldsymbol {\beta }}^{(s)}\right),}
which is a direct generalization ofNewton's methodin one dimension.
In data fitting, where the goal is to find the parametersβ{\displaystyle {\boldsymbol {\beta }}}such that a given model functionf(x,β){\displaystyle \mathbf {f} (\mathbf {x} ,{\boldsymbol {\beta }})}best fits some data points(xi,yi){\displaystyle (x_{i},y_{i})}, the functionsri{\displaystyle r_{i}}are theresiduals:ri(β)=yi−f(xi,β).{\displaystyle r_{i}({\boldsymbol {\beta }})=y_{i}-f\left(x_{i},{\boldsymbol {\beta }}\right).}
Then, the Gauss–Newton method can be expressed in terms of the JacobianJf=−Jr{\displaystyle \mathbf {J_{f}} =-\mathbf {J_{r}} }of the functionf{\displaystyle \mathbf {f} }asβ(s+1)=β(s)+(JfTJf)−1JfTr(β(s)).{\displaystyle {\boldsymbol {\beta }}^{(s+1)}={\boldsymbol {\beta }}^{(s)}+\left(\mathbf {J_{f}} ^{\operatorname {T} }\mathbf {J_{f}} \right)^{-1}\mathbf {J_{f}} ^{\operatorname {T} }\mathbf {r} \left({\boldsymbol {\beta }}^{(s)}\right).}
Note that(JfTJf)−1JfT{\displaystyle \left(\mathbf {J_{f}} ^{\operatorname {T} }\mathbf {J_{f}} \right)^{-1}\mathbf {J_{f}} ^{\operatorname {T} }}is the leftpseudoinverseofJf{\displaystyle \mathbf {J_{f}} }.
The assumptionm≥nin the algorithm statement is necessary, as otherwise the matrixJrTJr{\displaystyle \mathbf {J_{r}} ^{T}\mathbf {J_{r}} }is not invertible and the normal equations cannot be solved (at least uniquely).
The Gauss–Newton algorithm can be derived bylinearly approximatingthe vector of functionsri. UsingTaylor's theorem, we can write at every iteration:r(β)≈r(β(s))+Jr(β(s))Δ{\displaystyle \mathbf {r} ({\boldsymbol {\beta }})\approx \mathbf {r} \left({\boldsymbol {\beta }}^{(s)}\right)+\mathbf {J_{r}} \left({\boldsymbol {\beta }}^{(s)}\right)\Delta }
withΔ=β−β(s){\displaystyle \Delta ={\boldsymbol {\beta }}-{\boldsymbol {\beta }}^{(s)}}. The task of finding theΔ{\displaystyle \Delta }that minimizes the sum of squares of the right-hand side, i.e.,min‖r(β(s))+Jr(β(s))Δ‖22,{\displaystyle \min \left\|\mathbf {r} \left({\boldsymbol {\beta }}^{(s)}\right)+\mathbf {J_{r}} \left({\boldsymbol {\beta }}^{(s)}\right)\Delta \right\|_{2}^{2},}
is alinear least-squaresproblem, which can be solved explicitly, yielding the normal equations in the algorithm.
The normal equations arensimultaneous linear equations in the unknown incrementsΔ{\displaystyle \Delta }. They may be solved in one step, usingCholesky decomposition, or, better, theQR factorizationofJr{\displaystyle \mathbf {J_{r}} }. For large systems, aniterative method, such as theconjugate gradientmethod, may be more efficient. If there is a linear dependence between columns ofJr, the iterations will fail, asJrTJr{\displaystyle \mathbf {J_{r}} ^{T}\mathbf {J_{r}} }becomes singular.
Whenr{\displaystyle \mathbf {r} }is complex (r:Cn→C{\displaystyle \mathbf {r} :\mathbb {C} ^{n}\to \mathbb {C} }), the conjugate form should be used:(Jr¯TJr)−1Jr¯T{\displaystyle \left({\overline {\mathbf {J_{r}} }}^{\operatorname {T} }\mathbf {J_{r}} \right)^{-1}{\overline {\mathbf {J_{r}} }}^{\operatorname {T} }}.
In this example, the Gauss–Newton algorithm will be used to fit a model to some data by minimizing the sum of squares of errors between the data and model's predictions.
In a biology experiment studying the relation between substrate concentration[S]and reaction rate in an enzyme-mediated reaction, the data in the following table were obtained.
It is desired to find a curve (model function) of the formrate=Vmax⋅[S]KM+[S]{\displaystyle {\text{rate}}={\frac {V_{\text{max}}\cdot [S]}{K_{M}+[S]}}}
that fits best the data in the least-squares sense, with the parametersVmax{\displaystyle V_{\text{max}}}andKM{\displaystyle K_{M}}to be determined.
Denote byxi{\displaystyle x_{i}}andyi{\displaystyle y_{i}}the values of[S]andraterespectively, withi=1,…,7{\displaystyle i=1,\dots ,7}. Letβ1=Vmax{\displaystyle \beta _{1}=V_{\text{max}}}andβ2=KM{\displaystyle \beta _{2}=K_{M}}. We will findβ1{\displaystyle \beta _{1}}andβ2{\displaystyle \beta _{2}}such that the sum of squares of the residualsri=yi−β1xiβ2+xi,(i=1,…,7){\displaystyle r_{i}=y_{i}-{\frac {\beta _{1}x_{i}}{\beta _{2}+x_{i}}},\quad (i=1,\dots ,7)}
is minimized.
The JacobianJr{\displaystyle \mathbf {J_{r}} }of the vector of residualsri{\displaystyle r_{i}}with respect to the unknownsβj{\displaystyle \beta _{j}}is a7×2{\displaystyle 7\times 2}matrix with thei{\displaystyle i}-th row having the entries∂ri∂β1=−xiβ2+xi;∂ri∂β2=β1⋅xi(β2+xi)2.{\displaystyle {\frac {\partial r_{i}}{\partial \beta _{1}}}=-{\frac {x_{i}}{\beta _{2}+x_{i}}};\quad {\frac {\partial r_{i}}{\partial \beta _{2}}}={\frac {\beta _{1}\cdot x_{i}}{\left(\beta _{2}+x_{i}\right)^{2}}}.}
Starting with the initial estimates ofβ1=0.9{\displaystyle \beta _{1}=0.9}andβ2=0.2{\displaystyle \beta _{2}=0.2}, after five iterations of the Gauss–Newton algorithm, the optimal valuesβ^1=0.362{\displaystyle {\hat {\beta }}_{1}=0.362}andβ^2=0.556{\displaystyle {\hat {\beta }}_{2}=0.556}are obtained. The sum of squares of residuals decreased from the initial value of 1.445 to 0.00784 after the fifth iteration. The plot in the figure on the right shows the curve determined by the model for the optimal parameters with the observed data.
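A sketch of this fit in Python. Since the experimental table is not reproduced here, the data below are synthetic values generated from the model with Vmax = 0.36 and KM = 0.56 (an illustrative assumption, not the measured data), and the 2×2 normal equations are solved by Cramer's rule:

```python
def gauss_newton(residual, jacobian, beta0, iters=50):
    """Gauss-Newton for a 2-parameter least-squares problem,
    solving the normal equations J^T J delta = -J^T r each step."""
    b1, b2 = beta0
    for _ in range(iters):
        r = residual(b1, b2)          # residual vector
        J = jacobian(b1, b2)          # rows [dr_i/db1, dr_i/db2]
        a11 = sum(row[0] * row[0] for row in J)
        a12 = sum(row[0] * row[1] for row in J)
        a22 = sum(row[1] * row[1] for row in J)
        g1 = -sum(row[0] * ri for row, ri in zip(J, r))
        g2 = -sum(row[1] * ri for row, ri in zip(J, r))
        det = a11 * a22 - a12 * a12
        b1 += (a22 * g1 - a12 * g2) / det    # Cramer's rule, 2x2 solve
        b2 += (a11 * g2 - a12 * g1) / det
    return b1, b2

# Michaelis-Menten model with synthetic, noise-free data
xs = [0.04, 0.2, 0.4, 0.6, 1.2, 2.5, 3.7]
ys = [0.36 * x / (0.56 + x) for x in xs]

def residual(b1, b2):
    return [y - b1 * x / (b2 + x) for x, y in zip(xs, ys)]

def jacobian(b1, b2):
    return [[-x / (b2 + x), b1 * x / (b2 + x) ** 2] for x in xs]

v, k = gauss_newton(residual, jacobian, (0.9, 0.2))
```

With noise-free data the residuals vanish at the optimum, so the iteration recovers the generating parameters to machine precision.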
The Gauss–Newton iteration is guaranteed to converge toward a local minimum pointβ^{\displaystyle {\hat {\beta }}}under four conditions:[4]The functionsr1,…,rm{\displaystyle r_{1},\ldots ,r_{m}}are twice continuously differentiable in an open convex setD∋β^{\displaystyle D\ni {\hat {\beta }}}, the JacobianJr(β^){\displaystyle \mathbf {J} _{\mathbf {r} }({\hat {\beta }})}is of full column rank, the initial iterateβ(0){\displaystyle \beta ^{(0)}}is nearβ^{\displaystyle {\hat {\beta }}}, and the local minimum value|S(β^)|{\displaystyle |S({\hat {\beta }})|}is small. The convergence is quadratic if|S(β^)|=0{\displaystyle |S({\hat {\beta }})|=0}.
It can be shown[5]that the increment Δ is adescent directionforS, and, if the algorithm converges, then the limit is astationary pointofS. For large minimum value|S(β^)|{\displaystyle |S({\hat {\beta }})|}, however, convergence is not guaranteed, not evenlocal convergenceas inNewton's method, or convergence under the usual Wolfe conditions.[6]
The rate of convergence of the Gauss–Newton algorithm can approachquadratic.[7]The algorithm may converge slowly or not at all if the initial guess is far from the minimum or the matrixJrTJr{\displaystyle \mathbf {J_{r}^{\operatorname {T} }J_{r}} }isill-conditioned. For example, consider the problem withm=2{\displaystyle m=2}equations andn=1{\displaystyle n=1}variable, given byr1(β)=β+1,r2(β)=λβ2+β−1.{\displaystyle {\begin{aligned}r_{1}(\beta )&=\beta +1,\\r_{2}(\beta )&=\lambda \beta ^{2}+\beta -1.\end{aligned}}}
Forλ<1{\displaystyle \lambda <1},β=0{\displaystyle \beta =0}is a local optimum. Ifλ=0{\displaystyle \lambda =0}, then the problem is in fact linear and the method finds the optimum in one iteration. If |λ| < 1, then the method converges linearly and the error decreases asymptotically with a factor |λ| at every iteration. However, if |λ| > 1, then the method does not even converge locally.[8]
The Gauss-Newton iterationx(k+1)=x(k)−J(x(k))†f(x(k)),k=0,1,…{\displaystyle \mathbf {x} ^{(k+1)}=\mathbf {x} ^{(k)}-J(\mathbf {x} ^{(k)})^{\dagger }\mathbf {f} (\mathbf {x} ^{(k)})\,,\quad k=0,1,\ldots }is an effective method for solvingoverdetermined systemsof equations in the form off(x)=0{\displaystyle \mathbf {f} (\mathbf {x} )=\mathbf {0} }withf(x)=[f1(x1,…,xn)⋮fm(x1,…,xn)]{\displaystyle \mathbf {f} (\mathbf {x} )={\begin{bmatrix}f_{1}(x_{1},\ldots ,x_{n})\\\vdots \\f_{m}(x_{1},\ldots ,x_{n})\end{bmatrix}}}andm>n{\displaystyle m>n}whereJ(x)†{\displaystyle J(\mathbf {x} )^{\dagger }}is theMoore-Penrose inverse(also known aspseudoinverse) of theJacobian matrixJ(x){\displaystyle J(\mathbf {x} )}off(x){\displaystyle \mathbf {f} (\mathbf {x} )}.
It can be considered an extension ofNewton's methodand enjoys the same local quadratic convergence[4]toward isolated regular solutions.
If the solution doesn't exist but the initial iteratex(0){\displaystyle \mathbf {x} ^{(0)}}is near a pointx^=(x^1,…,x^n){\displaystyle {\hat {\mathbf {x} }}=({\hat {x}}_{1},\ldots ,{\hat {x}}_{n})}at which the sum of squares∑i=1m|fi(x1,…,xn)|2≡‖f(x)‖22{\textstyle \sum _{i=1}^{m}|f_{i}(x_{1},\ldots ,x_{n})|^{2}\equiv \|\mathbf {f} (\mathbf {x} )\|_{2}^{2}}reaches a small local minimum, the Gauss-Newton iteration linearly converges tox^{\displaystyle {\hat {\mathbf {x} }}}. The pointx^{\displaystyle {\hat {\mathbf {x} }}}is often called aleast squaressolution of the overdetermined system.
In what follows, the Gauss–Newton algorithm will be derived fromNewton's methodfor function optimization via an approximation. As a consequence, the rate of convergence of the Gauss–Newton algorithm can be quadratic under certain regularity conditions. In general (under weaker conditions), the convergence rate is linear.[9]
The recurrence relation for Newton's method for minimizing a functionSof parametersβ{\displaystyle {\boldsymbol {\beta }}}isβ(s+1)=β(s)−H−1g,{\displaystyle {\boldsymbol {\beta }}^{(s+1)}={\boldsymbol {\beta }}^{(s)}-\mathbf {H} ^{-1}\mathbf {g} ,}
wheregdenotes thegradient vectorofS, andHdenotes theHessian matrixofS.
SinceS=∑i=1mri2{\textstyle S=\sum _{i=1}^{m}r_{i}^{2}}, the gradient is given bygj=2∑i=1mri∂ri∂βj.{\displaystyle g_{j}=2\sum _{i=1}^{m}r_{i}{\frac {\partial r_{i}}{\partial \beta _{j}}}.}
Elements of the Hessian are calculated by differentiating the gradient elements,gj{\displaystyle g_{j}}, with respect toβk{\displaystyle \beta _{k}}:Hjk=2∑i=1m(∂ri∂βj∂ri∂βk+ri∂2ri∂βj∂βk).{\displaystyle H_{jk}=2\sum _{i=1}^{m}\left({\frac {\partial r_{i}}{\partial \beta _{j}}}{\frac {\partial r_{i}}{\partial \beta _{k}}}+r_{i}{\frac {\partial ^{2}r_{i}}{\partial \beta _{j}\partial \beta _{k}}}\right).}
The Gauss–Newton method is obtained by ignoring the second-order derivative terms (the second term in this expression). That is, the Hessian is approximated byHjk≈2∑i=1mJijJik,{\displaystyle H_{jk}\approx 2\sum _{i=1}^{m}J_{ij}J_{ik},}
whereJij=∂ri/∂βj{\textstyle J_{ij}={\partial r_{i}}/{\partial \beta _{j}}}are entries of the JacobianJr. Note that when the exact Hessian is evaluated near an exact fit we have near-zerori{\displaystyle r_{i}}, so the second term becomes near-zero as well, which justifies the approximation. The gradient and the approximate Hessian can be written in matrix notation asg=2JrTr,H≈2JrTJr.{\displaystyle \mathbf {g} =2{\mathbf {J} _{\mathbf {r} }}^{\operatorname {T} }\mathbf {r} ,\quad \mathbf {H} \approx 2{\mathbf {J} _{\mathbf {r} }}^{\operatorname {T} }\mathbf {J_{r}} .}
These expressions are substituted into the recurrence relation above to obtain the operational equationsβ(s+1)=β(s)+Δ;Δ=−(JrTJr)−1JrTr.{\displaystyle {\boldsymbol {\beta }}^{(s+1)}={\boldsymbol {\beta }}^{(s)}+\Delta ;\quad \Delta =-\left(\mathbf {J_{r}} ^{\operatorname {T} }\mathbf {J_{r}} \right)^{-1}\mathbf {J_{r}} ^{\operatorname {T} }\mathbf {r} .}
Convergence of the Gauss–Newton method is not guaranteed in all instances. The approximation|ri∂2ri∂βj∂βk|≪|∂ri∂βj∂ri∂βk|{\displaystyle \left|r_{i}{\frac {\partial ^{2}r_{i}}{\partial \beta _{j}\partial \beta _{k}}}\right|\ll \left|{\frac {\partial r_{i}}{\partial \beta _{j}}}{\frac {\partial r_{i}}{\partial \beta _{k}}}\right|}
that needs to hold to be able to ignore the second-order derivative terms may be valid in two cases, for which convergence is to be expected:[10]
With the Gauss–Newton method the sum of squares of the residualsSmay not decrease at every iteration. However, since Δ is a descent direction, unlessS(βs){\displaystyle S\left({\boldsymbol {\beta }}^{s}\right)}is a stationary point, it holds thatS(βs+αΔ)<S(βs){\displaystyle S\left({\boldsymbol {\beta }}^{s}+\alpha \Delta \right)<S\left({\boldsymbol {\beta }}^{s}\right)}for all sufficiently smallα>0{\displaystyle \alpha >0}. Thus, if divergence occurs, one solution is to employ a fractionα{\displaystyle \alpha }of the increment vector Δ in the updating formula:βs+1=βs+αΔ.{\displaystyle {\boldsymbol {\beta }}^{s+1}={\boldsymbol {\beta }}^{s}+\alpha \Delta .}
In other words, the increment vector is too long, but it still points "downhill", so going just a part of the way will decrease the objective functionS. An optimal value forα{\displaystyle \alpha }can be found by using aline searchalgorithm, that is, the magnitude ofα{\displaystyle \alpha }is determined by finding the value that minimizesS, usually using adirect search methodin the interval0<α<1{\displaystyle 0<\alpha <1}or abacktracking line searchsuch as anArmijo line search. Typically,α{\displaystyle \alpha }should be chosen such that it satisfies theWolfe conditionsor theGoldstein conditions.[11]
In cases where the direction of the shift vector is such that the optimal fraction α is close to zero, an alternative method for handling divergence is the use of theLevenberg–Marquardt algorithm, atrust regionmethod.[3]The normal equations are modified in such a way that the increment vector is rotated towards the direction ofsteepest descent,(JTJ+λD)Δ=−JTr,{\displaystyle \left(\mathbf {J^{\operatorname {T} }J+\lambda D} \right)\Delta =-\mathbf {J} ^{\operatorname {T} }\mathbf {r} ,}
whereDis a positive diagonal matrix. Note that whenDis the identity matrixIandλ→+∞{\displaystyle \lambda \to +\infty }, thenλΔ=λ(JTJ+λI)−1(−JTr)=(I−JTJ/λ+⋯)(−JTr)→−JTr{\displaystyle \lambda \Delta =\lambda \left(\mathbf {J^{\operatorname {T} }J} +\lambda \mathbf {I} \right)^{-1}\left(-\mathbf {J} ^{\operatorname {T} }\mathbf {r} \right)=\left(\mathbf {I} -\mathbf {J^{\operatorname {T} }J} /\lambda +\cdots \right)\left(-\mathbf {J} ^{\operatorname {T} }\mathbf {r} \right)\to -\mathbf {J} ^{\operatorname {T} }\mathbf {r} }, therefore thedirectionof Δ approaches the direction of the negative gradient−JTr{\displaystyle -\mathbf {J} ^{\operatorname {T} }\mathbf {r} }.
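A single modified step withD=Ican be sketched as follows; the 2×2 solve via Cramer's rule for a 2-parameter problem is an illustrative simplification:

```python
def lm_step(J, r, lam):
    """Solve (J^T J + lam*I) delta = -J^T r for a 2-parameter problem.
    lam = 0 gives the pure Gauss-Newton step; large lam rotates the
    step toward the steepest-descent direction -J^T r."""
    a11 = sum(row[0] * row[0] for row in J) + lam
    a12 = sum(row[0] * row[1] for row in J)
    a22 = sum(row[1] * row[1] for row in J) + lam
    g1 = -sum(row[0] * ri for row, ri in zip(J, r))
    g2 = -sum(row[1] * ri for row, ri in zip(J, r))
    det = a11 * a22 - a12 * a12
    return [(a22 * g1 - a12 * g2) / det, (a11 * g2 - a12 * g1) / det]
```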
The so-called Marquardt parameterλ{\displaystyle \lambda }may also be optimized by a line search, but this is inefficient, as the shift vector must be recalculated every timeλ{\displaystyle \lambda }is changed. A more efficient strategy is this: When divergence occurs, increase the Marquardt parameter until there is a decrease inS. Then retain the value from one iteration to the next, but decrease it if possible until a cut-off value is reached, when the Marquardt parameter can be set to zero; the minimization ofSthen becomes a standard Gauss–Newton minimization.
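The strategy just described can be sketched as follows (an illustrative simplification with D = I and factor-of-10 adjustments of λ; not a production Levenberg–Marquardt implementation):

```python
import numpy as np

def lm_step(J, r, lam, D=None):
    """Solve (J^T J + lam*D) delta = -J^T r; D defaults to the identity."""
    if D is None:
        D = np.eye(J.shape[1])
    return np.linalg.solve(J.T @ J + lam * D, -J.T @ r)

def lm_minimize(residual, jacobian, beta, lam=1e-3, tol=1e-9, max_iter=200):
    """On divergence, raise lam until S decreases; relax it after success."""
    r = residual(beta)
    S = float(r @ r)
    for _ in range(max_iter):
        J = jacobian(beta)
        if np.linalg.norm(J.T @ r) < tol:   # gradient small: done
            break
        for _ in range(50):                 # increase lam until S decreases
            delta = lm_step(J, r, lam)
            r_new = residual(beta + delta)
            S_new = float(r_new @ r_new)
            if S_new < S:
                break
            lam *= 10.0                     # rotate toward steepest descent
        else:
            break                           # no improving step found
        beta, r, S = beta + delta, r_new, S_new
        lam = max(lam / 10.0, 1e-12)        # relax back toward pure Gauss-Newton
    return beta
```

Note how λ is retained between iterations rather than re-optimized by a line search each time, matching the more efficient strategy described above.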
For large-scale optimization, the Gauss–Newton method is of special interest because it is often (though certainly not always) true that the matrixJr{\displaystyle \mathbf {J} _{\mathbf {r} }}is moresparsethan the approximate HessianJrTJr{\displaystyle \mathbf {J} _{\mathbf {r} }^{\operatorname {T} }\mathbf {J_{r}} }. In such cases, the step calculation itself will typically need to be done with an approximate iterative method appropriate for large and sparse problems, such as theconjugate gradient method.
In order to make this kind of approach work, one needs at least an efficient method for computing the productJrTJrp{\displaystyle {\mathbf {J} _{\mathbf {r} }}^{\operatorname {T} }\mathbf {J_{r}} \mathbf {p} }
for some vectorp. Withsparse matrixstorage, it is in general practical to store the rows ofJr{\displaystyle \mathbf {J} _{\mathbf {r} }}in a compressed form (e.g., without zero entries), making a direct computation of the above product tricky due to the transposition. However, if one definescias rowiof the matrixJr{\displaystyle \mathbf {J} _{\mathbf {r} }}, the following simple relation holds:JrTJrp=∑ici(ci⋅p),{\displaystyle {\mathbf {J} _{\mathbf {r} }}^{\operatorname {T} }\mathbf {J_{r}} \mathbf {p} =\sum _{i}\mathbf {c} _{i}\left(\mathbf {c} _{i}\cdot \mathbf {p} \right),}
so that every row contributes additively and independently to the product. In addition to respecting a practical sparse storage structure, this expression is well suited forparallel computations. Note that every rowciis the gradient of the corresponding residualri; with this in mind, the formula above emphasizes the fact that residuals contribute to the problem independently of each other.
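A sketch of this row-wise product (assuming each compressed row of the Jacobian is stored as an `(indices, values)` pair, a simple stand-in for a real sparse format):

```python
import numpy as np

def jtj_product(rows, p):
    """Compute J^T J p from compressed rows of J.

    Uses J^T J p = sum_i c_i (c_i . p), so each row contributes
    additively and independently, touching only its stored entries."""
    out = np.zeros_like(p, dtype=float)
    for idx, vals in rows:
        ci_dot_p = np.dot(vals, p[idx])          # c_i . p
        out[idx] += np.asarray(vals) * ci_dot_p  # accumulate c_i * (c_i . p)
    return out
```

Because each row's contribution is independent, the loop body parallelizes naturally, as the text notes.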
In a quasi-Newton method, such as that due toDavidon, Fletcher and Powellor Broyden–Fletcher–Goldfarb–Shanno (BFGS method), an estimate of the full Hessian∂2S∂βj∂βk{\textstyle {\frac {\partial ^{2}S}{\partial \beta _{j}\partial \beta _{k}}}}is built up numerically using only first derivatives∂ri∂βj{\textstyle {\frac {\partial r_{i}}{\partial \beta _{j}}}}, so that afternrefinement cycles the method closely approximates Newton's method in performance. Note that quasi-Newton methods can minimize general real-valued functions, whereas Gauss–Newton, Levenberg–Marquardt, etc. apply only to nonlinear least-squares problems.
Another method for solving minimization problems using only first derivatives isgradient descent. However, this method does not take into account the second derivatives even approximately. Consequently, it is highly inefficient for many functions, especially if the parameters have strong interactions.
The following implementation inJuliaprovides one method that uses a provided Jacobian and another that computes the Jacobian withautomatic differentiation.
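As a language-neutral illustration of the basic iteration (a Python sketch with a user-supplied Jacobian, not the Julia listing itself):

```python
import numpy as np

def gauss_newton(residual, jacobian, beta, tol=1e-10, max_iter=50):
    """Basic Gauss-Newton: repeatedly solve the linearized least-squares
    subproblem for delta and update beta until the step is negligible."""
    for _ in range(max_iter):
        r = residual(beta)
        J = jacobian(beta)
        # Solving min ||J delta + r|| via lstsq is more numerically stable
        # than forming the normal equations J^T J delta = -J^T r explicitly.
        delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
        beta = beta + delta
        if np.linalg.norm(delta) < tol:
            break
    return beta
```

For a linear model the method converges in a single step, since the linearization is then exact.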
|
https://en.wikipedia.org/wiki/Gauss%E2%80%93Newton_algorithm
|
Innumerical analysis,hill climbingis amathematical optimizationtechnique which belongs to the family oflocal search.
It is aniterative algorithmthat starts with an arbitrary solution to a problem, then attempts to find a better solution by making anincrementalchange to the solution. If the change produces a better solution, another incremental change is made to the new solution, and so on until no further improvements can be found.
For example, hill climbing can be applied to thetravelling salesman problem. It is easy to find an initial solution that visits all the cities but will likely be very poor compared to the optimal solution. The algorithm starts with such a solution and makes small improvements to it, such as switching the order in which two cities are visited. Eventually, a much shorter route is likely to be obtained.
Hill climbing finds optimal solutions forconvexproblems – for other problems it will find onlylocal optima(solutions that cannot be improved upon by any neighboring configurations), which are not necessarily the best possible solution (theglobal optimum) out of all possible solutions (thesearch space).
Examples of algorithms that solveconvex problemsby hill-climbing include thesimplex algorithmforlinear programmingandbinary search.[1]: 253
To attempt to avoid getting stuck in local optima, one could use restarts (i.e. repeated local search), or more complex schemes based on iterations (likeiterated local search), or on memory (like reactive search optimization andtabu search), or on memory-less stochastic modifications (likesimulated annealing).
The relative simplicity of the algorithm makes it a popular first choice amongst optimizing algorithms. It is used widely inartificial intelligence, for reaching a goal state from a starting node. Different choices for nextnodesand starting nodes are used in related algorithms. Although more advanced algorithms such assimulated annealingortabu searchmay give better results, in some situations hill climbing works just as well. Hill climbing can often produce a better result than other algorithms when the amount of time available to perform a search is limited, such as with real-time systems, so long as a small number of increments typically converges on a good solution (the optimal solution or a close approximation). At the other extreme,bubble sortcan be viewed as a hill climbing algorithm (every adjacent element exchange decreases the number of disordered element pairs), yet this approach is far from efficient for even modest N, as the number of exchanges required grows quadratically.
Hill climbing is ananytime algorithm: it can return a valid solution even if it's interrupted at any time before it ends.
Hill climbing attempts to maximize (or minimize) a targetfunctionf(x){\displaystyle f(\mathbf {x} )}, wherex{\displaystyle \mathbf {x} }is a vector of continuous and/or discrete values. At each iteration, hill climbing will adjust a single element inx{\displaystyle \mathbf {x} }and determine whether the change improves the value off(x){\displaystyle f(\mathbf {x} )}. (Note that this differs fromgradient descentmethods, which adjust all of the values inx{\displaystyle \mathbf {x} }at each iteration according to the gradient of the hill.) With hill climbing, any change that improvesf(x){\displaystyle f(\mathbf {x} )}is accepted, and the process continues until no change can be found to improve the value off(x){\displaystyle f(\mathbf {x} )}. Thenx{\displaystyle \mathbf {x} }is said to be "locally optimal".
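A minimal sketch of this loop in Python (the step size and stopping rule are illustrative assumptions; the routine maximizes f by single-coordinate moves, as described above):

```python
def hill_climb(f, x, step=1.0, max_iter=1000):
    """Simple hill climbing: adjust one element of x at a time, accept any
    change that improves f, and stop when no single-coordinate move helps."""
    x = list(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for delta in (+step, -step):
                cand = x[:]
                cand[i] += delta
                if f(cand) > f(x):   # accept any improving move
                    x, improved = cand, True
        if not improved:
            return x                 # locally optimal: no move improves f
    return x
```

Note that only one element changes per candidate move, in contrast to gradient descent, which adjusts all components of x at once.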
In discrete vector spaces, each possible value forx{\displaystyle \mathbf {x} }may be visualized as avertexin agraph. Hill climbing will follow the graph from vertex to vertex, always locally increasing (or decreasing) the value off(x){\displaystyle f(\mathbf {x} )}, until alocal maximum(orlocal minimum)xm{\displaystyle x_{m}}is reached.
Insimple hill climbing, the first closer node is chosen, whereas insteepest ascent hill climbingall successors are compared and the closest to the solution is chosen. Both forms fail if there is no closer node, which may happen if there are local maxima in the search space which are not solutions. Steepest ascent hill climbing is similar tobest-first search, which tries all possible extensions of the current path instead of only one.[2]
Stochastic hill climbingdoes not examine all neighbors before deciding how to move. Rather, it selects a neighbor at random, and decides (based on the amount of improvement in that neighbor) whether to move to that neighbor or to examine another.
Coordinate descentdoes aline searchalong one coordinate direction at the current point in each iteration. Some versions of coordinate descent randomly pick a different coordinate direction each iteration.
Random-restart hill climbingis ameta-algorithmbuilt on top of the hill climbing algorithm. It is also known asShotgun hill climbing. It iteratively does hill-climbing, each time with a random initial conditionx0{\displaystyle x_{0}}. The bestxm{\displaystyle x_{m}}is kept: if a new run of hill climbing produces a betterxm{\displaystyle x_{m}}than the stored state, it replaces the stored state.
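The meta-algorithm can be sketched as follows (assuming `climb` returns a `(solution, score)` pair and `random_x0` draws a random initial condition; both names are hypothetical):

```python
def random_restart(climb, random_x0, n_restarts=20):
    """Run hill climbing from several random initial conditions, keeping
    the best local optimum found across all runs."""
    best = None
    for _ in range(n_restarts):
        xm = climb(random_x0())          # one full hill-climbing run
        if best is None or xm[1] > best[1]:
            best = xm                    # replace stored state if better
    return best
```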
Random-restart hill climbing is a surprisingly effective algorithm in many cases. It turns out that it is often better to spend CPU time exploring the space, than carefully optimizing from an initial condition.[original research?]
Hill climbing will not necessarily find the global maximum, but may instead converge on alocal maximum. This problem does not occur if the heuristic is convex. However, as many functions are not convex, hill climbing may often fail to reach a global maximum. Other local search algorithms that try to overcome this problem includestochastic hill climbing,random walksandsimulated annealing.
Ridgesare a challenging problem for hill climbers that optimize in continuous spaces. Because hill climbers only adjust one element in the vector at a time, each step will move in an axis-aligned direction. If the target function creates a narrow ridge that ascends in a non-axis-aligned direction (or if the goal is to minimize, a narrow alley that descends in a non-axis-aligned direction), then the hill climber can only ascend the ridge (or descend the alley) by zig-zagging. If the sides of the ridge (or alley) are very steep, then the hill climber may be forced to take very tiny steps as it zig-zags toward a better position. Thus, it may take an unreasonable length of time for it to ascend the ridge (or descend the alley).
By contrast, gradient descent methods can move in any direction that the ridge or alley may ascend or descend. Hence, gradient descent or theconjugate gradient methodis generally preferred over hill climbing when the target function is differentiable. Hill climbers, however, have the advantage of not requiring the target function to be differentiable, so hill climbers may be preferred when the target function is complex.
Another problem that sometimes occurs with hill climbing is that of a plateau. A plateau is encountered when the search space is flat, or sufficiently flat that the value returned by the target function is indistinguishable from the value returned for nearby regions due to the precision used by the machine to represent its value. In such cases, the hill climber may not be able to determine in which direction it should step, and may wander in a direction that never leads to improvement.
Contrast genetic algorithm; random optimization.
|
https://en.wikipedia.org/wiki/Hill_climbing
|
Quantum annealing(QA) is an optimization process for finding theglobal minimumof a givenobjective functionover a given set of candidate solutions (candidate states), by a process usingquantum fluctuations. Quantum annealing is used mainly for problems where the search space is discrete (combinatorial optimizationproblems) with manylocal minima, such as finding[1]theground stateof aspin glassor solving thetraveling salesman problem. The term "quantum annealing" was first proposed in 1988 by B. Apolloni, N. Cesa Bianchi and D. De Falco as a quantum-inspired classical algorithm.[2][3]It was formulated in its present form by T. Kadowaki and H. Nishimori in 1998,[4]though an imaginary-time variant without quantum coherence had been discussed by A. B. Finnila, M. A. Gomez, C. Sebenik and J. D. Doll in 1994.[5]
Quantum annealing starts from a quantum-mechanical superposition of all possible states (candidate states) with equal weights. Then the system evolves following the time-dependentSchrödinger equation, a natural quantum-mechanical evolution of physical systems. The amplitudes of all candidate states keep changing, realizing a quantum parallelism, according to the time-dependent strength of the transverse field, which causesquantum tunnelingbetween states or essentially tunneling through peaks. If the rate of change of the transverse field is slow enough, the system stays close to the ground state of the instantaneousHamiltonian(also seeadiabatic quantum computation).[6]If the rate of change of the transverse field is accelerated, the system may leave the ground state temporarily but produce a higher likelihood of concluding in the ground state of the final problem Hamiltonian, i.e.,Diabaticquantum computation.[7][8]The transverse field is finally switched off, and the system is expected to have reached the ground state of the classicalIsing modelthat corresponds to the solution to the original optimization problem. An experimental demonstration of the success of quantum annealing for random magnets was reported immediately after the initial theoretical proposal.[9]Quantum annealing has also been proven to provide a fastGroveroracle for the square-root speedup in solving manyNP-complete problems.[10]
Quantum annealing can be compared tosimulated annealing, whose "temperature" parameter plays a similar role to quantum annealing's tunneling field strength. In simulated annealing, the temperature determines the probability of moving to a state of higher "energy" from a single current state. In quantum annealing, the strength of transverse field determines the quantum-mechanical probability to change the amplitudes of all states in parallel. Analytical[11]and numerical[12]evidence suggests that quantum annealing outperforms simulated annealing under certain conditions (see Heim et al[13]and see Yan and Sinitsyn[14]for a fully solvable model of quantum annealing to arbitrary target Hamiltonian and comparison of different computation approaches).
The tunneling field is basically a kinetic energy term that does not commute with the classical potential energy part of the original glass. The whole process can be simulated in a computer usingquantum Monte Carlo(or other stochastic technique), and thus obtain a heuristic algorithm for finding the ground state of the classical glass.
In the case of annealing a purely mathematicalobjective function, one may consider the variables in the problem to be classical degrees of freedom, and the cost functions to be the potential energy function (classical Hamiltonian). Then a suitable term consisting of non-commuting variable(s) (i.e. variables that have non-zero commutator with the variables of the original mathematical problem) has to be introduced artificially in the Hamiltonian to play the role of the tunneling field (kinetic part). Then one may carry out the simulation with the quantum Hamiltonian thus constructed (the original function + non-commuting part) just as described above. Here, there is a choice in selecting the non-commuting term and the efficiency of annealing may depend on that.
It has been demonstrated experimentally as well as theoretically, that quantum annealing can outperform thermal annealing (simulated annealing) in certain cases, especially where the potential energy (cost) landscape consists of very high but thin barriers surrounding shallow local minima.[15]Since thermal transition probabilities (proportional toe−ΔkBT{\displaystyle e^{-{\frac {\Delta }{k_{B}T}}}}, withT{\displaystyle T}the temperature andkB{\displaystyle k_{B}}theBoltzmann constant) depend only on the heightΔ{\displaystyle \Delta }of the barriers, for very high barriers, it is extremely difficult for thermal fluctuations to get the system out from such local minima. However, as argued earlier in 1989 by Ray, Chakrabarti & Chakrabarti,[1]the quantum tunneling probability through the same barrier (considered in isolation) depends not only on the heightΔ{\displaystyle \Delta }of the barrier, but also on its widthw{\displaystyle w}and is approximately given bye−ΔwΓ{\displaystyle e^{-{\frac {{\sqrt {\Delta }}w}{\Gamma }}}}, whereΓ{\displaystyle \Gamma }is the tunneling field.[16]This additional handle through the widthw{\displaystyle w}, in presence of quantum tunneling, can be of major help: If the barriers are thin enough (i.e.w≪Δ{\displaystyle w\ll {\sqrt {\Delta }}}), quantum fluctuations can surely bring the system out of the shallow local minima. For anN{\displaystyle N}-spin glass, the barrier heightΔ{\displaystyle \Delta }becomes of orderN{\displaystyle N}. For constant value ofw{\displaystyle w}one getsτ{\displaystyle \tau }proportional toeN{\displaystyle e^{\sqrt {N}}}for the annealing time (instead ofτ{\displaystyle \tau }proportional toeN{\displaystyle e^{N}}for thermal annealing), whileτ{\displaystyle \tau }can even becomeN{\displaystyle N}-independent for cases wherew{\displaystyle w}decreases as1/N{\displaystyle 1/{\sqrt {N}}}.[17][18]
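The contrast between the two escape probabilities above can be made concrete with assumed numbers for a high, thin barrier (the parameter values below are illustrative, not from any experiment):

```python
import math

# Illustrative parameters: a high but thin barrier.
delta = 100.0   # barrier height (Delta)
w = 0.5         # barrier width, with w << sqrt(delta)
kT = 1.0        # thermal energy k_B * T
gamma = 5.0     # tunneling field strength (Gamma)

p_thermal = math.exp(-delta / kT)                    # e^{-Delta/(k_B T)}
p_tunnel = math.exp(-math.sqrt(delta) * w / gamma)   # e^{-sqrt(Delta) w / Gamma}

# The thermal rate depends only on the height, while the tunneling rate also
# depends on the width, so for thin barriers tunneling dominates.
print(p_thermal, p_tunnel)
```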
It is speculated that in aquantum computer, such simulations would be much more efficient and exact than that done in a classical computer, because it can perform the tunneling directly, rather than needing to add it by hand. Moreover, it may be able to do this without the tight error controls needed to harness thequantum entanglementused in more traditional quantum algorithms. Some confirmation of this is found in exactly solvable models.[19][20]
Timeline of ideas related to quantum annealing in Ising spin glasses:
In 2011,D-Wave Systemsannounced the first commercial quantum annealer on the market by the name D-Wave One and published a paper in Nature on its performance.[22]The company claims this system uses a 128qubitprocessor chipset.[23]On May 25, 2011, D-Wave announced thatLockheed MartinCorporation entered into an agreement to purchase a D-Wave One system.[24]On October 28, 2011University of Southern California's (USC)Information Sciences Institutetook delivery of Lockheed's D-Wave One.
In May 2013, it was announced that a consortium ofGoogle,NASA Amesand the non-profitUniversities Space Research Associationpurchased an adiabatic quantum computer from D-Wave Systems with 512 qubits.[25][26]An extensive study of its performance as quantum annealer, compared to some classical annealing algorithms, is available.[27]
In June 2014, D-Wave announced a new quantum applications ecosystem with computational finance firm1QB Information Technologies(1QBit) and cancer research group DNA-SEQ to focus on solving real-world problems with quantum hardware.[28]As the first company dedicated to producing software applications for commercially available quantum computers, 1QBit's research and development arm has focused on D-Wave's quantum annealing processors and has demonstrated that these processors are suitable for solving real-world applications.[29]
With demonstrations ofentanglementpublished,[30]the question of whether or not the D-Wave machine can demonstratequantum speedupover all classical computers remains unanswered. A study published inSciencein June 2014, described as "likely the most thorough and precise study that has been done on the performance of the D-Wave machine"[31]and "the fairest comparison yet", attempted to define and measure quantum speedup. Several definitions were put forward as some may be unverifiable by empirical tests, while others, though falsified, would nonetheless allow for the existence of performance advantages. The study found that the D-Wave chip "produced no quantum speedup" and did not rule out the possibility in future tests.[32]The researchers, led by Matthias Troyer at theSwiss Federal Institute of Technology, found "no quantum speedup" across the entire range of their tests, and only inconclusive results when looking at subsets of the tests. Their work illustrated "the subtle nature of the quantum speedup question". Further work[33]has advanced understanding of these test metrics and their reliance on equilibrated systems, thereby missing any signatures of advantage due to quantum dynamics.
There are many open questions regarding quantum speedup. The ETH reference in the previous section is just for one class of benchmark problems. Potentially there may be other classes of problems where quantum speedup might occur. Researchers at Google, LANL, USC, Texas A&M, and D-Wave are working to find such problem classes.[34]
In December 2015, Google announced that theD-Wave 2Xoutperforms bothsimulated annealingandQuantum Monte Carloby up to a factor of 100,000,000 on a set of hard optimization problems.[35]
D-Wave's architecture differs from traditional quantum computers. It is not known to be polynomially equivalent to auniversal quantum computerand, in particular, cannot executeShor's algorithmbecause Shor's algorithm requires precise gate operations and quantum Fourier transforms which are currently unavailable in quantum annealing architectures.[36]Shor's algorithm requires a universal quantum computer. During the Qubits 2021 conference held by D-Wave, it was announced[37]that the company is developing their first universal quantum computers, capable of running Shor's algorithm in addition to other gate-model algorithms such asQAOAandVQE.
"A cross-disciplinary introduction to quantum annealing-based algorithms"[38]presents an introduction to combinatorial optimization (NP-hard) problems, the general structure of quantum annealing-based algorithms and two examples of this kind of algorithm for solving instances of the max-SAT (maximum satisfiability) and minimum multicut problems, together with an overview of the quantum annealing systems manufactured by D-Wave Systems. Hybrid quantum-classical algorithms for large-scale discrete-continuous optimization problems were reported to illustrate thequantum advantage.[39][40]
|
https://en.wikipedia.org/wiki/Quantum_annealing
|
Incomputational complexity theory, thecomplexity classTFNPis the class of total function problems which can be solved in nondeterministic polynomial time. That is, it is the class of function problems that are guaranteed to have an answer, and this answer can be checked in polynomial time, or equivalently it is the subset ofFNPwhere a solution is guaranteed to exist. The abbreviation TFNP stands for "Total Function Nondeterministic Polynomial".
TFNP contains many natural problems that are of interest to computer scientists. These problems includeinteger factorization, finding a Nash Equilibrium of a game, and searching for local optima. TFNP is widely conjectured to contain problems that are computationally intractable, and several such problems have been shown to be hard under cryptographic assumptions.[1][2]However, there are no known unconditional intractability results or results showing NP-hardness of TFNP problems. TFNP is not believed to have any complete problems.[3]
The class TFNP is formally defined as follows.
It was first defined by Megiddo and Papadimitriou in 1989,[4]although TFNP problems and subclasses of TFNP had been defined and studied earlier.[5]
Letxbe a mapping andya 2-tuple of items in its domain. The binary relation in question,P(x,y), has the meaning "the images of both entries ofyunderxare equal", which, since the mapping is polynomially computable, is polynomially decidable. Moreover, such a tupleymust exist for any such mapping because of thepigeonhole principle.
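This collision-search problem can be sketched directly (a brute-force illustration; in the complexity-theoretic setting the mapping is given as a circuit, so this linear scan takes time exponential in the input size):

```python
def find_collision(f, n):
    """Find y1 != y2 with f(y1) == f(y2), where f maps {0..n} into {0..n-1}.

    The pigeonhole principle guarantees a collision exists, so this search
    problem is total: an answer always exists and is easy to verify."""
    seen = {}
    for y in range(n + 1):
        image = f(y)
        if image in seen:
            return seen[image], y    # two inputs with equal images
        seen[image] = y
    raise ValueError("f does not map {0..n} into {0..n-1}")
```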
The complexity classF(NP∩coNP){\displaystyle {\mathsf {F}}({\mathsf {NP}}\cap {\mathsf {coNP}})}can be defined in two different ways, and those ways are not known to be equivalent. One way applies F to themachine modelforNP∩coNP{\displaystyle {\mathsf {NP}}\cap {\mathsf {coNP}}}. It is known that with this definition,F(NP∩coNP){\displaystyle {\mathsf {F}}({\mathsf {NP}}\cap {\mathsf {coNP}})}coincides with TFNP.[4]To see this, first notice that the inclusionTFNP⊆F(NP∩coNP){\displaystyle {\mathsf {TFNP}}\subseteq {\mathsf {F}}({\mathsf {NP}}\cap {\mathsf {coNP}})}follows easily from the definitions of the classes. All "yes" answers to problems in TFNP can be easily verified by definition, and since problems in TFNP are total, there are no "no" answers, so it is vacuously true that "no" answers can be easily verified. For the reverse inclusion, letRbe a binary relation inF(NP∩coNP){\displaystyle {\mathsf {F}}({\mathsf {NP}}\cap {\mathsf {coNP}})}. DecomposeRintoR1∪R2{\displaystyle R_{1}\cup R_{2}}such that(x,0y)∈R1{\displaystyle (x,0y)\in R_{1}}precisely when(x,y)∈R{\displaystyle (x,y)\in R}andyis a "yes" answer, and letR2be the set of(x,1y){\displaystyle (x,1y)}such that(x,y)∈R{\displaystyle (x,y)\in R}andyis a "no" answer. Then the binary relationR1∪R2{\displaystyle R_{1}\cup R_{2}}is in TFNP.
The other definition uses thatNP∩coNP{\displaystyle {\mathsf {NP}}\cap {\mathsf {coNP}}}is known to be a well-behavedclassofdecision problems, and applies F to that class. With this definition, ifNP∩coNP=P{\displaystyle {\mathsf {NP}}\cap {\mathsf {coNP}}={\mathsf {P}}}thenF(NP∩coNP)=FP{\displaystyle {\mathsf {F}}({\mathsf {NP}}\cap {\mathsf {coNP}})={\mathsf {\color {Blue}FP}}}.
NPis one of the most widely studied complexity classes. The conjecture that there are intractable problems in NP is widely accepted and often used as the most basic hardness assumption. Therefore, it is only natural to ask how TFNP is related to NP. It is not difficult to see that solutions to problems in NP can imply solutions to problems in TFNP. However, there are no TFNP problems which are known to beNP-hard. Intuition for this fact comes from the fact that problems in TFNP are total. For a problem to be NP-hard, there must exist a reduction from someNP-completeproblem to the problem of interest. A typical reduction from problemAto problemBis performed by creating and analyzing a map that sends "yes" instances ofAto "yes" instances ofBand "no" instances ofAto "no" instances ofB. However, TFNP problems are total, so there are no "no" instances for this type of reduction, causing common techniques to be difficult to apply. Beyond this rough intuition, there are several concrete results that suggest that it might be difficult or even impossible to prove NP-hardness for TFNP problems. For example, if any TFNP problem is NP-complete, then NP = coNP,[3]which is generally conjectured to be false, but is still a major open problem in complexity theory. This lack of connections with NP is a major motivation behind the study of TFNP as its own independent class.
The structure of TFNP is often studied through the study of its subclasses. These subclasses are defined by the mathematical theorem by which solutions to the problems are guaranteed. One appeal of studying subclasses of TFNP is that although TFNP is believed not to have any complete problems, these subclasses are defined by a certain complete problem, making them easier to reason about.
PLS(standing for "Polynomial Local Search") is a class of problems designed to model the process of searching for a local optimum for a function. In particular, it is the class of total function problems that are polynomial-time reducible to the following problem: given two Boolean circuitsS(producing a neighboring candidate solution) andV(assigning a value to each candidate), find a stringxsuch thatV(S(x)) ≤V(x), i.e. a candidate whose neighbor does not improve the value.
It contains the class CLS.
PPA(standing for "Polynomial time Parity Argument") is the class of problems whose solution is guaranteed by thehandshaking lemma:any undirected graph with an odd degree vertex must have another odd degree vertex. It contains the subclassPPAD.
PPP(standing for "Polynomial time Pigeonhole Principle") is the class of problems whose solution is guaranteed by thePigeonhole principle. More precisely, it is the class of problems that can be reduced in polynomial time to the Pigeon problem, defined as follows: given a Boolean circuitCwithninput bits andnoutput bits, find either anxsuch thatC(x) = 0n, or two distinct inputsx≠ysuch thatC(x) =C(y).
PPP contains the classesPPADand PWPP. Notable problems in this class include theshort integer solution problem.[6]
PPAD(standing for "Polynomial time Parity Argument, Directed") is a restriction of PPA to problems whose solutions are guaranteed by a directed version of thehandshake lemma. It is often defined as the set of problems that are polynomial-time reducible to End-of-a-Line: given two Boolean circuitsSandPonn-bit strings such thatP(0) = 0 ≠S(0), find a stringxsuch thatP(S(x)) ≠x, or a stringx≠ 0 such thatS(P(x)) ≠x.
PPAD is in the intersection of PPA and PPP, and it contains CLS.
Here, the circuitSin the definition sends each point of the line to its successor, or to itself if the point is a sink. LikewisePsends each point of the line to its predecessor, or to itself if the point is a source. Points outside of all lines are identified by being fixed under bothPandS(in other words, any isolated points are removed from the graph). Then the conditionP(S(x))≠x{\displaystyle P(S(x))\neq x}defines the end of a line, which is either a sink or is such thatS(x) =S(y) for some other pointy; similarly the conditionS(P(x))≠x{\displaystyle S(P(x))\neq x}defines the beginning of a line (since we assume that 0 is a source, we require the solution be nonzero in this case).
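Following successors from the known source 0 gives a straightforward, but in general exponential-time, search for an end of a line (a sketch; S and P would really be circuits on n-bit strings):

```python
def end_of_line(S, P):
    """Walk the line starting at the known source 0 until P(S(x)) != x,
    the condition identifying the end of a line. This naive walk can take
    a number of steps exponential in the bit-length of the points; no
    polynomial-time algorithm for End-of-a-Line is known."""
    x = 0
    while P(S(x)) == x:   # successor exists and points back: keep walking
        x = S(x)
    return x              # here P(S(x)) != x, so x ends its line
```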
Continuous local search (CLS)is a class of search problems designed to model the process of finding a local optimum of a continuous function over a continuous domain. It is defined as the class of problems that are polynomial-time reducible to the Continuous Localpoint problem:
This class was first defined by Daskalakis and Papadimitriou in 2011.[7]It is contained in the intersection of PPAD and PLS, and in 2020 it has been proven thatCLS=PPAD∩PLS{\displaystyle {\mathsf {CLS}}={\mathsf {PPAD}}\cap {\mathsf {PLS}}}.[8][9]It was designed to be a class of relatively simple optimization problems that still contains many interesting problems that are believed to be hard.
Complete problems for CLS are for example finding an ε-KKTpoint,[10]finding an ε-Banach fixed point[11]and the Meta-Metric-Contraction problem.[12]
EOPL and UEOPL (which stand for "end of potential line" and "unique end of potential line") were introduced in 2020.[10]
EOPL captures search problems that can be solved by local search, i.e. it is possible to jump from one candidate solution to the next one in polynomial time. A problem in EOPL can be interpreted as an exponentially large, directed, acyclic graph where each node is a candidate solution and has a cost (also called potential) which increases along the edges. The in- and out-degree of each node is at most one which means that the nodes form a collection of exponentially long lines. The end of each line is the node with highest cost on that line.
EOPL contains all problems that can be reduced in polynomial time to the search problem End-of-Potential-Line:
UEOPL is defined very similarly, but it is promised that there is only one line. Hence finding the second type of solution above would violate the promise ensuring that the first type of solution is unique. A fourth solution type is added to provide another way of detecting the presence of a second line:
A solution of this type either indicates thatxandyare on different lines, or indicates a violation of the condition that values on the same line are strictly increasing. The advantage of including this condition is that it may be easier to findxandyas required than to find the start of their lines, or an explicit violation of the increasing cost condition.
UEOPL contains, among others, the problem of solving theP-matrix-Linear complementarity problem,[10]finding the sink of aUnique sink orientationin cubes,[10]solving a simple stochastic game[10]and the α-Ham Sandwich problem.[13]Complete problems of UEOPL are Unique-End-of-Potential-Line, some variants of it with costs increasing exactly by 1 or an instance without thePcircuit, and One-Permutation-Discrete-Contraction.[10]
EOPL captures search problems like the ones in UEOPL, with the relaxation that multiple lines are allowed and the goal is to find the end of any line. There are currently no problems known to be in EOPL but not in UEOPL.
EOPL is a subclass of CLS, it is unknown whether they are equal or not. UEOPL is trivially contained in EOPL.
FP (complexity)(standing for "Function Polynomial") is the class of function problems that can be solved in deterministic polynomial time.FP⊆CLS{\displaystyle {\mathsf {FP}}\subseteq {\mathsf {CLS}}}, and it is conjectured that this inclusion is strict. This class represents the class of function problems that are believed to be computationally tractable (without randomization). If TFNP = FP, thenP=NP∩coNP{\displaystyle {\mathsf {P}}={\mathsf {NP}}\cap {\mathsf {coNP}}}, which should be intuitive given the fact thatTFNP=F(NP∩coNP){\displaystyle {\mathsf {TFNP}}={\mathsf {F}}({\mathsf {NP}}\cap {\mathsf {coNP}})}. However, it is generally conjectured thatP≠NP∩coNP{\displaystyle {\mathsf {P}}\neq {\mathsf {NP}}\cap {\mathsf {coNP}}}, and so TFNP ≠ FP.
|
https://en.wikipedia.org/wiki/TFNP#CLS
|
Neuroevolution, or neuro-evolution, is a form of artificial intelligence that uses evolutionary algorithms to generate artificial neural networks (ANNs), parameters, and rules.[1] It is most commonly applied in artificial life, general game playing[2] and evolutionary robotics. The main benefit is that neuroevolution can be applied more widely than supervised learning algorithms, which require a syllabus of correct input–output pairs. In contrast, neuroevolution requires only a measure of a network's performance at a task. For example, the outcome of a game (i.e., whether one player won or lost) can be easily measured without providing labeled examples of desired strategies. Neuroevolution is commonly used as part of the reinforcement learning paradigm, and it can be contrasted with conventional deep learning techniques that use backpropagation (gradient descent on a neural network) with a fixed topology.
Many neuroevolution algorithms have been defined. One common distinction is between algorithms that evolve only the strength of the connection weights for a fixed network topology (sometimes called conventional neuroevolution), and algorithms that evolve both the topology of the network and its weights (called TWEANNs, for Topology and Weight Evolving Artificial Neural Network algorithms).
A separate distinction can be made between methods that evolve the structure of ANNs in parallel with their parameters (those applying standard evolutionary algorithms) and those that develop them separately (through memetic algorithms).[3]
Most neural networks use gradient descent rather than neuroevolution. However, around 2017 researchers at Uber stated they had found that simple structural neuroevolution algorithms were competitive with sophisticated modern industry-standard gradient-descent deep learning algorithms, in part because neuroevolution was found to be less likely to get stuck in local minima. In Science, journalist Matthew Hutson speculated that part of the reason neuroevolution is succeeding where it had failed before is the increased computational power available in the 2010s.[4]
It can be shown that there is a correspondence between neuroevolution and gradient descent.[5]
Evolutionary algorithms operate on a population of genotypes (also referred to as genomes). In neuroevolution, a genotype is mapped to a neural network phenotype that is evaluated on some task to derive its fitness.
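The genotype-to-fitness loop above can be sketched with a minimal conventional-neuroevolution example: a fixed 2-2-1 sigmoid topology whose nine weights form the genome, evolved toward the XOR task. The task, network size, population settings, and mutation scale here are illustrative choices, not drawn from any particular published method; note that only overall task performance is measured, with no labeled gradients.

```python
import math
import random

random.seed(0)

# Toy task: evolve weights of a fixed 2-2-1 sigmoid network toward XOR.
# Genotype: a flat list of 9 weights; phenotype: the network function.
XOR_CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def phenotype(genome):
    """Map a 9-weight genome to a callable neural network."""
    w = genome
    def net(x1, x2):
        h1 = sigmoid(w[0] * x1 + w[1] * x2 + w[2])
        h2 = sigmoid(w[3] * x1 + w[4] * x2 + w[5])
        return sigmoid(w[6] * h1 + w[7] * h2 + w[8])
    return net

def fitness(genome):
    # Only a scalar performance measure is needed (higher is better).
    net = phenotype(genome)
    return -sum((net(*x) - y) ** 2 for x, y in XOR_CASES)

def evolve(pop_size=50, generations=300, sigma=0.5):
    pop = [[random.gauss(0, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 5]          # truncation selection (elitist)
        children = [
            [w + random.gauss(0, sigma) for w in random.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Because the parents are carried over unchanged (elitism), the best fitness in the population can only improve from generation to generation.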
In direct encoding schemes the genotype maps directly to the phenotype: every neuron and connection in the neural network is specified directly and explicitly in the genotype. In contrast, in indirect encoding schemes the genotype specifies indirectly how that network should be generated.[6]
Indirect encodings are often used to achieve several aims:[6][7][8][9][10]
Traditionally, indirect encodings that employ artificial embryogeny (also known as artificial development) have been categorised along the lines of a grammatical approach versus a cell chemistry approach.[11] The former evolves sets of rules in the form of grammatical rewrite systems. The latter attempts to mimic how physical structures emerge in biology through gene expression. Indirect encoding systems often use aspects of both approaches.
Stanley and Miikkulainen[11] propose a taxonomy for embryogenic systems that is intended to reflect their underlying properties. The taxonomy identifies five continuous dimensions, along which any embryogenic system can be placed:
Examples of neuroevolution methods (those with direct encodings are necessarily non-embryogenic):
|
https://en.wikipedia.org/wiki/Neuroevolution
|
In statistics, the bias of an estimator (or bias function) is the difference between this estimator's expected value and the true value of the parameter being estimated. An estimator or decision rule with zero bias is called unbiased. In statistics, "bias" is an objective property of an estimator. Bias is a distinct concept from consistency: consistent estimators converge in probability to the true value of the parameter, but may be biased or unbiased (see bias versus consistency for more).
All else being equal, an unbiased estimator is preferable to a biased estimator, although in practice biased estimators (with generally small bias) are frequently used. When a biased estimator is used, bounds on the bias are calculated. A biased estimator may be used for various reasons: because an unbiased estimator does not exist without further assumptions about a population; because an estimator is difficult to compute (as in unbiased estimation of standard deviation); because a biased estimator may be unbiased with respect to different measures of central tendency; because a biased estimator gives a lower value of some loss function (particularly mean squared error) compared with unbiased estimators (notably in shrinkage estimators); or because in some cases being unbiased is too strong a condition, and the only unbiased estimators are not useful.
Bias can also be measured with respect to the median, rather than the mean (expected value), in which case one distinguishes the median-unbiased property from the usual mean-unbiased property.
Mean-unbiasedness is not preserved under non-linear transformations, though median-unbiasedness is (see § Effect of transformations); for example, the sample variance is a biased estimator for the population variance. These are all illustrated below.
An unbiased estimator for a parameter need not always exist. For example, there is no unbiased estimator for the reciprocal of the parameter of a binomial random variable.[1]
Suppose we have a statistical model, parameterized by a real number θ, giving rise to a probability distribution for observed data, P_θ(x) = P(x ∣ θ), and a statistic θ̂ which serves as an estimator of θ based on any observed data x. That is, we assume that our data follows some unknown distribution P(x ∣ θ) (where θ is a fixed, unknown constant that is part of this distribution), and then we construct some estimator θ̂ that maps observed data to values that we hope are close to θ. The bias of θ̂ relative to θ is defined as[2]

{\displaystyle \operatorname {Bias} ({\hat {\theta }},\theta )=\operatorname {E} _{x\mid \theta }[{\hat {\theta }}\,]-\theta =\operatorname {E} _{x\mid \theta }[{\hat {\theta }}-\theta ],}
where E_{x∣θ} denotes expected value over the distribution P(x ∣ θ) (i.e., averaging over all possible observations x). The second equation follows since θ is measurable with respect to the conditional distribution P(x ∣ θ).
An estimator is said to be unbiased if its bias is zero for all values of the parameter θ, or equivalently, if the expected value of the estimator equals that of the parameter.[3] Unbiasedness is not guaranteed to carry over under transformations. For example, if θ̂ is an unbiased estimator for parameter θ, it is not guaranteed in general that g(θ̂) is an unbiased estimator for g(θ), unless g is a linear function.[4]
In a simulation experiment concerning the properties of an estimator, the bias of the estimator may be assessed using the mean signed difference.
The sample variance of a random variable demonstrates two aspects of estimator bias: firstly, the naive estimator is biased, which can be corrected by a scale factor; secondly, the unbiased estimator is not optimal in terms of mean squared error (MSE), which can be minimized by using a different scale factor, resulting in a biased estimator with lower MSE than the unbiased estimator. Concretely, the naive estimator sums the squared deviations and divides by n, which is biased. Dividing instead by n − 1 yields an unbiased estimator. Conversely, MSE can be minimized by dividing by a different number (depending on the distribution), but this results in a biased estimator. This number is always larger than n − 1, so this is known as a shrinkage estimator, as it "shrinks" the unbiased estimator towards zero; for the normal distribution the optimal value is n + 1.
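The divisor comparison can be checked numerically. The following sketch (sample size, trial count, and the unit-variance Gaussian population are arbitrary illustrative choices) averages each version of the estimator over many samples:

```python
import random

random.seed(42)

def scaled_sum_sq(xs, divisor):
    """Sum of squared deviations from the sample mean, over `divisor`."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / divisor

# Population: standard normal, so the true variance is 1.
n, trials = 5, 100_000
totals = {"n": 0.0, "n-1": 0.0, "n+1": 0.0}
for _ in range(trials):
    xs = [random.gauss(0, 1) for _ in range(n)]
    totals["n"] += scaled_sum_sq(xs, n)        # naive (biased)
    totals["n-1"] += scaled_sum_sq(xs, n - 1)  # Bessel-corrected (unbiased)
    totals["n+1"] += scaled_sum_sq(xs, n + 1)  # shrinkage (lowest MSE for normal)

means = {k: v / trials for k, v in totals.items()}
# Expected long-run averages: divisor n gives (n-1)/n = 0.8, divisor n-1
# gives 1.0 (unbiased), divisor n+1 gives (n-1)/(n+1) = 2/3 (shrunk low).
```

The n + 1 divisor gives the most strongly downward-biased average, yet, as discussed below, it attains the lowest mean squared error for normal data.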
Suppose X_1, ..., X_n are independent and identically distributed (i.i.d.) random variables with expectation μ and variance σ². If the sample mean and uncorrected sample variance are defined as

{\displaystyle {\overline {X}}={\frac {1}{n}}\sum _{i=1}^{n}X_{i},\qquad S^{2}={\frac {1}{n}}\sum _{i=1}^{n}{\big (}X_{i}-{\overline {X}}\,{\big )}^{2},}
then S² is a biased estimator of σ², because

{\displaystyle \operatorname {E} [S^{2}]=\sigma ^{2}-\operatorname {E} {\big [}({\overline {X}}-\mu )^{2}{\big ]}<\sigma ^{2}.}
To continue, we note that by subtracting μ from both sides of {\displaystyle {\overline {X}}={\frac {1}{n}}\sum _{i=1}^{n}X_{i}}, we get

{\displaystyle {\overline {X}}-\mu ={\frac {1}{n}}\sum _{i=1}^{n}(X_{i}-\mu ).}
Meaning (by cross-multiplication), {\displaystyle n\cdot ({\overline {X}}-\mu )=\sum _{i=1}^{n}(X_{i}-\mu )}. Then, the previous becomes:
This can be seen by noting the following formula, which follows from the Bienaymé formula, for the term in the inequality for the expectation of the uncorrected sample variance above: {\displaystyle \operatorname {E} {\big [}({\overline {X}}-\mu )^{2}{\big ]}={\frac {1}{n}}\sigma ^{2}}.
In other words, the expected value of the uncorrected sample variance does not equal the population variance σ², unless multiplied by a normalization factor. The sample mean, on the other hand, is an unbiased[5] estimator of the population mean μ.[3]
Note that the usual definition of sample variance is {\displaystyle S^{2}={\frac {1}{n-1}}\sum _{i=1}^{n}(X_{i}-{\overline {X}}\,)^{2}}, and this is an unbiased estimator of the population variance.
Algebraically speaking, S² (with the n − 1 divisor) is unbiased because:
where the transition to the second line uses the result derived above for the biased estimator. Thus {\displaystyle \operatorname {E} [S^{2}]=\sigma ^{2}}, and therefore {\displaystyle S^{2}={\frac {1}{n-1}}\sum _{i=1}^{n}(X_{i}-{\overline {X}}\,)^{2}} is an unbiased estimator of the population variance, σ². The ratio between the biased (uncorrected) and unbiased estimates of the variance is known as Bessel's correction.
The reason that an uncorrected sample variance, S², is biased stems from the fact that the sample mean is an ordinary least squares (OLS) estimator for μ: {\displaystyle {\overline {X}}} is the number that makes the sum {\displaystyle \sum _{i=1}^{n}(X_{i}-{\overline {X}})^{2}} as small as possible. That is, when any other number is plugged into this sum, the sum can only increase. In particular, the choice {\displaystyle \mu \neq {\overline {X}}} gives,
and then
The above discussion can be understood in geometric terms: the vector {\displaystyle {\vec {C}}=(X_{1}-\mu ,\ldots ,X_{n}-\mu )} can be decomposed into the "mean part" and "variance part" by projecting to the direction of {\displaystyle {\vec {u}}=(1,\ldots ,1)} and to that direction's orthogonal complement hyperplane. One gets {\displaystyle {\vec {A}}=({\overline {X}}-\mu ,\ldots ,{\overline {X}}-\mu )} for the part along {\displaystyle {\vec {u}}} and {\displaystyle {\vec {B}}=(X_{1}-{\overline {X}},\ldots ,X_{n}-{\overline {X}})} for the complementary part. Since this is an orthogonal decomposition, the Pythagorean theorem says {\displaystyle |{\vec {C}}|^{2}=|{\vec {A}}|^{2}+|{\vec {B}}|^{2}}, and taking expectations we get {\displaystyle n\sigma ^{2}=n\operatorname {E} \left[({\overline {X}}-\mu )^{2}\right]+n\operatorname {E} [S^{2}]}, as above (but times {\displaystyle n}).
If the distribution of {\displaystyle {\vec {C}}} is rotationally symmetric, as in the case when the {\displaystyle X_{i}} are sampled from a Gaussian, then on average the dimension along {\displaystyle {\vec {u}}} contributes to {\displaystyle |{\vec {C}}|^{2}} equally as the {\displaystyle n-1} directions perpendicular to {\displaystyle {\vec {u}}}, so that {\displaystyle \operatorname {E} \left[({\overline {X}}-\mu )^{2}\right]={\frac {\sigma ^{2}}{n}}} and {\displaystyle \operatorname {E} [S^{2}]={\frac {(n-1)\sigma ^{2}}{n}}}. This is in fact true in general, as explained above.
A far more extreme case of a biased estimator being better than any unbiased estimator arises from the Poisson distribution.[6][7] Suppose that X has a Poisson distribution with expectation λ. Suppose it is desired to estimate

{\displaystyle \operatorname {P} (X=0)^{2}=e^{-2\lambda }}
with a sample of size 1. (For example, when incoming calls at a telephone switchboard are modeled as a Poisson process, and λ is the average number of calls per minute, then e^{−2λ} (the estimand) is the probability that no calls arrive in the next two minutes.)
Since the expectation of an unbiased estimator δ(X) is equal to the estimand, i.e.

{\displaystyle \operatorname {E} [\delta (X)]=\sum _{x=0}^{\infty }\delta (x){\frac {\lambda ^{x}e^{-\lambda }}{x!}}=e^{-2\lambda },}
the only function of the data constituting an unbiased estimator is

{\displaystyle \delta (X)=(-1)^{X}.}
To see this, note that when decomposing e^{−λ} from the above expression for the expectation, the sum that is left is a Taylor series expansion of e^{−λ} as well, yielding e^{−λ} · e^{−λ} = e^{−2λ} (see Characterizations of the exponential function).
If the observed value of X is 100, then the estimate is 1, although the true value of the quantity being estimated is very likely to be near 0, which is the opposite extreme. And, if X is observed to be 101, then the estimate is even more absurd: it is −1, although the quantity being estimated must be positive.
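A small simulation makes the absurdity concrete: the unbiased estimator δ(X) = (−1)^X averages out to roughly the estimand, yet every individual estimate is +1 or −1, so its squared error is enormous compared with the biased maximum likelihood estimator e^{−2X}. The value of λ, the trial count, and the Poisson sampler (Knuth's multiplication method) are illustrative choices.

```python
import math
import random

random.seed(1)

lam = 2.0
target = math.exp(-2 * lam)          # the estimand e^{-2*lambda}
trials = 100_000

def poisson(lam):
    """Draw one Poisson variate (Knuth's multiplication method)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

unbiased_sum = unbiased_sq_err = mle_sq_err = 0.0
for _ in range(trials):
    x = poisson(lam)
    d = (-1) ** x                    # the only unbiased estimator
    m = math.exp(-2 * x)             # the biased maximum likelihood estimator
    unbiased_sum += d
    unbiased_sq_err += (d - target) ** 2
    mle_sq_err += (m - target) ** 2

unbiased_mean = unbiased_sum / trials
unbiased_mse = unbiased_sq_err / trials
mle_mse = mle_sq_err / trials
# unbiased_mean is close to the target on average, but each individual
# estimate is +1 or -1, so unbiased_mse dwarfs mle_mse.
```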
The (biased) maximum likelihood estimator

{\displaystyle e^{-2X}}
is far better than this unbiased estimator. Not only is its value always positive, but it is also more accurate in the sense that its mean squared error
is smaller; compare the unbiased estimator's MSE of

{\displaystyle 1-e^{-4\lambda }.}
The MSEs are functions of the true value λ. The bias of the maximum-likelihood estimator is:
The bias of maximum-likelihood estimators can be substantial. Consider a case where n tickets numbered from 1 to n are placed in a box and one is selected at random, giving a value X. If n is unknown, then the maximum-likelihood estimator of n is X, even though the expectation of X given n is only (n + 1)/2; we can be certain only that n is at least X and is probably more. In this case, the natural unbiased estimator is 2X − 1.
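The ticket example can be checked by simulation (the choice n = 10 and the trial count are arbitrary): the maximum-likelihood estimate averages to (n + 1)/2 rather than n, while 2X − 1 averages to n.

```python
import random

random.seed(7)

# n tickets numbered 1..n; one ticket X is drawn uniformly at random.
n, trials = 10, 100_000
mle_total = unbiased_total = 0
for _ in range(trials):
    x = random.randint(1, n)
    mle_total += x               # maximum-likelihood estimate of n: X itself
    unbiased_total += 2 * x - 1  # unbiased estimate: 2X - 1

mle_mean = mle_total / trials            # ~ (n + 1)/2 = 5.5, far below n
unbiased_mean = unbiased_total / trials  # ~ n = 10
```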
The theory of median-unbiased estimators was revived by George W. Brown in 1947:[8]
An estimate of a one-dimensional parameter θ will be said to be median-unbiased, if, for fixed θ, the median of the distribution of the estimate is at the value θ; i.e., the estimate underestimates just as often as it overestimates. This requirement seems for most purposes to accomplish as much as the mean-unbiased requirement and has the additional property that it is invariant under one-to-one transformation.
Further properties of median-unbiased estimators have been noted by Lehmann, Birnbaum, van der Vaart and Pfanzagl.[9] In particular, median-unbiased estimators exist in cases where mean-unbiased and maximum-likelihood estimators do not exist. They are invariant under one-to-one transformations.
There are methods of constructing median-unbiased estimators for probability distributions that have monotone likelihood functions, such as one-parameter exponential families, to ensure that they are optimal (in a sense analogous to the minimum-variance property considered for mean-unbiased estimators).[10][11] One such procedure is an analogue of the Rao–Blackwell procedure for mean-unbiased estimators: the procedure holds for a smaller class of probability distributions than does the Rao–Blackwell procedure for mean-unbiased estimation, but for a larger class of loss functions.[11]
Any minimum-variance mean-unbiased estimator minimizes the risk (expected loss) with respect to the squared-error loss function (among mean-unbiased estimators), as observed by Gauss.[12] A minimum-average-absolute-deviation median-unbiased estimator minimizes the risk with respect to the absolute loss function (among median-unbiased estimators), as observed by Laplace.[12][13] Other loss functions are used in statistics, particularly in robust statistics.[12][14]
For univariate parameters, median-unbiased estimators remain median-unbiased under transformations that preserve order (or reverse order).
Note that, when a transformation is applied to a mean-unbiased estimator, the result need not be a mean-unbiased estimator of its corresponding population statistic. By Jensen's inequality, a convex function as transformation will introduce positive bias, while a concave function will introduce negative bias, and a function of mixed convexity may introduce bias in either direction, depending on the specific function and distribution. That is, for a non-linear function f and a mean-unbiased estimator U of a parameter p, the composite estimator f(U) need not be a mean-unbiased estimator of f(p). For example, the square root of the unbiased estimator of the population variance is not a mean-unbiased estimator of the population standard deviation: the square root of the unbiased sample variance, the corrected sample standard deviation, is biased. The bias depends both on the sampling distribution of the estimator and on the transform, and can be quite involved to calculate – see unbiased estimation of standard deviation for a discussion in this case.
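The standard-deviation case can be illustrated numerically. In this sketch (the sample size and trial count are arbitrary choices), the square root of the unbiased sample variance systematically underestimates the true σ = 1, in the direction Jensen's inequality predicts for a concave transform:

```python
import math
import random

random.seed(3)

# Samples from N(0, 1), so the true standard deviation is 1.
n, trials = 5, 100_000
s_total = 0.0
for _ in range(trials):
    xs = [random.gauss(0, 1) for _ in range(n)]
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / (n - 1)  # unbiased for sigma^2
    s_total += math.sqrt(s2)   # sqrt is concave: Jensen predicts E[S] < 1

mean_s = s_total / trials
# For a normal sample of size 5, E[S] = c4 * sigma with the known
# constant c4 ~ 0.940, so mean_s settles noticeably below 1.
```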
While bias quantifies the average difference to be expected between an estimator and an underlying parameter, an estimator based on a finite sample can additionally be expected to differ from the parameter due to the randomness in the sample.
An estimator that minimises the bias will not necessarily minimise the mean square error.
One measure which is used to try to reflect both types of difference is the mean square error,[2]

{\displaystyle \operatorname {MSE} ({\hat {\theta }})=\operatorname {E} {\big [}({\hat {\theta }}-\theta )^{2}{\big ]}.}
This can be shown to be equal to the square of the bias, plus the variance:[2]

{\displaystyle \operatorname {MSE} ({\hat {\theta }})=\operatorname {Bias} ({\hat {\theta }},\theta )^{2}+\operatorname {Var} ({\hat {\theta }}).}
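The bias–variance decomposition can be verified empirically with a deliberately biased estimator; the shrinkage factor 0.8 and the other settings below are arbitrary illustrative choices. For a finite collection of estimates the decomposition is an exact algebraic identity.

```python
import random

random.seed(5)

# Estimate mu = 2.0 with the deliberately biased estimator 0.8 * sample mean.
mu, n, trials = 2.0, 10, 100_000
estimates = []
for _ in range(trials):
    xbar = sum(random.gauss(mu, 1) for _ in range(n)) / n
    estimates.append(0.8 * xbar)

mean_est = sum(estimates) / trials
mse = sum((e - mu) ** 2 for e in estimates) / trials
bias = mean_est - mu                                    # ~ 0.8*mu - mu = -0.4
variance = sum((e - mean_est) ** 2 for e in estimates) / trials
# Empirically, mse equals bias**2 + variance (up to floating-point error).
```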
When the parameter is a vector, an analogous decomposition applies:[15]

{\displaystyle \operatorname {MSE} ({\hat {\theta }})=\operatorname {trace} (\operatorname {Cov} ({\hat {\theta }}))+\left\Vert \operatorname {Bias} ({\hat {\theta }},\theta )\right\Vert ^{2},}
where {\displaystyle \operatorname {trace} (\operatorname {Cov} ({\hat {\theta }}))} is the trace (diagonal sum) of the covariance matrix of the estimator and {\displaystyle \left\Vert \operatorname {Bias} ({\hat {\theta }},\theta )\right\Vert ^{2}} is the squared vector norm.
For example,[16] suppose an estimator of the form

{\displaystyle T^{2}=c\sum _{i=1}^{n}(X_{i}-{\overline {X}}\,)^{2}=cnS^{2}}
is sought for the population variance as above, but this time to minimise the MSE:
If the variables X_1 ... X_n follow a normal distribution, then nS²/σ² has a chi-squared distribution with n − 1 degrees of freedom, giving:

{\displaystyle \operatorname {E} [nS^{2}]=(n-1)\sigma ^{2}\quad {\text{and}}\quad \operatorname {Var} (nS^{2})=2(n-1)\sigma ^{4},}
and so

{\displaystyle \operatorname {MSE} (T^{2})={\big (}c(n-1)-1{\big )}^{2}\sigma ^{4}+2c^{2}(n-1)\sigma ^{4}.}
With a little algebra it can be confirmed that it is c = 1/(n + 1) which minimises this combined loss function, rather than c = 1/(n − 1), which minimises just the square of the bias.
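Using the chi-squared moments of nS²/σ² (expectation n − 1, variance 2(n − 1)), the MSE as a function of c can be evaluated directly to confirm the minimiser; the value n = 10 and the grid resolution are arbitrary choices for illustration.

```python
# MSE of T^2 = c * sum((X_i - Xbar)^2) for sigma^2 under normality, in units
# of sigma^4, using E[n S^2] = (n-1) sigma^2 and Var(n S^2) = 2(n-1) sigma^4.
def mse(c, n):
    bias_sq = (c * (n - 1) - 1) ** 2   # squared bias of T^2
    variance = 2 * c ** 2 * (n - 1)    # variance of T^2
    return bias_sq + variance

n = 10
grid = [i / 100_000 for i in range(1, 50_000)]
best_c = min(grid, key=lambda c: mse(c, n))
# best_c lands at ~1/(n + 1), beating both 1/n and the unbiased 1/(n - 1).
```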
More generally it is only in restricted classes of problems that there will be an estimator that minimises the MSE independently of the parameter values.
However, it is very common that there may be perceived to be a bias–variance tradeoff, such that a small increase in bias can be traded for a larger decrease in variance, resulting in a more desirable estimator overall.
Most Bayesians are rather unconcerned about the unbiasedness (at least in the formal sampling-theory sense above) of their estimates. For example, Gelman and coauthors (1995) write: "From a Bayesian perspective, the principle of unbiasedness is reasonable in the limit of large samples, but otherwise it is potentially misleading."[17]
Fundamentally, the difference between the Bayesian approach and the sampling-theory approach above is that in the sampling-theory approach the parameter is taken as fixed, and then probability distributions of a statistic are considered, based on the predicted sampling distribution of the data. For a Bayesian, however, it is the data which are known and fixed, and it is the unknown parameter for which an attempt is made to construct a probability distribution, using Bayes' theorem:
Here the second term, the likelihood of the data given the unknown parameter value θ, depends just on the data obtained and the modelling of the data generation process. However, a Bayesian calculation also includes the first term, the prior probability for θ, which takes account of everything the analyst may know or suspect about θ before the data comes in. This information plays no part in the sampling-theory approach; indeed any attempt to include it would be considered "bias" away from what was pointed to purely by the data. To the extent that Bayesian calculations include prior information, it is therefore essentially inevitable that their results will not be "unbiased" in sampling-theory terms.
But the results of a Bayesian approach can differ from the sampling theory approach even if the Bayesian tries to adopt an "uninformative" prior.
For example, consider again the estimation of an unknown population variance σ² of a normal distribution with unknown mean, where it is desired to optimise c in the expected loss function
A standard choice of uninformative prior for this problem is the Jeffreys prior, {\displaystyle \scriptstyle {p(\sigma ^{2})\;\propto \;1/\sigma ^{2}}}, which is equivalent to adopting a rescaling-invariant flat prior for ln(σ²).
One consequence of adopting this prior is that S²/σ² remains a pivotal quantity, i.e. the probability distribution of S²/σ² depends only on S²/σ², independent of the value of S² or σ²:
However, while
in contrast
— when the expectation is taken over the probability distribution of σ² given S², as it is in the Bayesian case, rather than S² given σ², one can no longer take σ⁴ as a constant and factor it out. The consequence of this is that, compared to the sampling-theory calculation, the Bayesian calculation puts more weight on larger values of σ², properly taking into account (as the sampling-theory calculation cannot) that under this squared-loss function the consequence of underestimating large values of σ² is more costly in squared-loss terms than that of overestimating small values of σ².
The worked-out Bayesian calculation gives a scaled inverse chi-squared distribution with n − 1 degrees of freedom for the posterior probability distribution of σ². The expected loss is minimised when cnS² = ⟨σ²⟩ (the posterior expectation of σ²); this occurs when c = 1/(n − 3).
Even with an uninformative prior, therefore, a Bayesian calculation may not give the same expected-loss minimising result as the corresponding sampling-theory calculation.
|
https://en.wikipedia.org/wiki/Bias_of_an_estimator
|